Dataset columns (each row pairs one closed issue with one updated file):

- status: stringclasses (1 value)
- repo_name: stringclasses (31 values)
- repo_url: stringclasses (31 values)
- issue_id: int64 (1 to 104k)
- title: stringlengths (4 to 369)
- body: stringlengths (0 to 254k)
- issue_url: stringlengths (37 to 56)
- pull_url: stringlengths (37 to 54)
- before_fix_sha: stringlengths (40)
- after_fix_sha: stringlengths (40)
- report_datetime: timestamp[us, tz=UTC]
- language: stringclasses (5 values)
- commit_datetime: timestamp[us, tz=UTC]
- updated_file: stringlengths (4 to 188)
- file_content: stringlengths (0 to 5.12M)
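For orientation, a minimal sketch of loading and inspecting rows with this schema using the `datasets` library; the repository id `org/bug-fix-files` is a placeholder, not this dataset's real identifier.

```python
# Illustrative only: load the dataset (placeholder repo id) and peek at a few rows.
from datasets import load_dataset

ds = load_dataset("org/bug-fix-files", split="train")  # hypothetical repo id

for row in ds.select(range(3)):
    print(row["status"], row["repo_name"], row["issue_id"], row["updated_file"])
    print("  fix:", row["pull_url"], row["before_fix_sha"][:8], "->", row["after_fix_sha"][:8])
```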
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,855 |
ansible-test fails to mention which interpreter as soon as it picks one
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
ansible-test fails to mention which interpreter it is using as soon as it picks one, causing confusing errors where the user has no clue why pip install complains about a missing wheel when wheel is in fact installed.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.2
config file = /Users/ssbarnea/.ansible.cfg
configured module search path = ['/Users/ssbarnea/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/ssbarnea/.pyenv/versions/3.9.0/lib/python3.9/site-packages/ansible
executable location = /Users/ssbarnea/.pyenv/versions/3.9.0/bin/ansible
python version = 3.9.0 (default, Oct 10 2020, 09:43:04) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
any
##### STEPS TO REPRODUCE
* ensure that you have two Python interpreters available, let's say py27 and py36
* remove wheel from python2.7, `pip2.7 uninstall wheel`
* run `ansible-test units --requirements`
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
After the expected WARNING about skipping Python 2.6 because it is missing, I would expect an INFO message telling me that ansible-test started using the next interpreter in the list, "Python 2.7", so that if pip fails I would know which pip it was, as it was clearly not the one from the default interpreter.
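For illustration, a minimal standalone sketch of the behaviour being requested here; this is not the actual ansible-test code, just a self-contained approximation of announcing which interpreter was chosen before its pip is used.

```python
# Illustrative sketch only: pick the first available interpreter and say which one it is,
# so that a later pip failure can be attributed to the right Python.
import shutil
import subprocess
import sys


def pick_interpreter(candidates):
    """Return the first interpreter found on PATH, logging the choice."""
    for name in candidates:
        path = shutil.which(name)
        if path is None:
            print(f"WARNING: Skipping {name} due to missing interpreter.", file=sys.stderr)
            continue
        print(f"INFO: Using {name} at {path} to install requirements.", file=sys.stderr)
        return path
    raise RuntimeError("No usable Python interpreter found.")


if __name__ == "__main__":
    python = pick_interpreter(["python2.6", "python2.7", "python3.6"])
    subprocess.run([python, "-m", "pip", "--version"], check=False)
```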
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
WARNING: Skipping unit tests on Python 2.6 due to missing interpreter.
Could not build wheels for pip, since package 'wheel' is not installed.
Could not build wheels for setuptools, since package 'wheel' is not installed.
Could not build wheels for cryptography, since package 'wheel' is not installed.
Could not build wheels for ipaddress, since package 'wheel' is not installed.
Could not build wheels for six, since package 'wheel' is not installed.
Could not build wheels for cffi, since package 'wheel' is not installed.
....
```
|
https://github.com/ansible/ansible/issues/72855
|
https://github.com/ansible/ansible/pull/80022
|
e6cffce0eb58ba54c097f4ce7111bb97e6805051
|
5e3db6e44169aa88cd027f469eea96f1f17fea95
| 2020-12-04T13:07:21Z |
python
| 2023-02-21T01:55:04Z |
changelogs/fragments/ansible-test-requirements-message.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,855 |
ansible-test fails to mention which interpreter as soon as it picks one
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
ansible-test fails to mention which interpreter it is using as soon as it picks one, causing confusing errors where the user has no clue why pip install complains about a missing wheel when wheel is in fact installed.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.2
config file = /Users/ssbarnea/.ansible.cfg
configured module search path = ['/Users/ssbarnea/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/ssbarnea/.pyenv/versions/3.9.0/lib/python3.9/site-packages/ansible
executable location = /Users/ssbarnea/.pyenv/versions/3.9.0/bin/ansible
python version = 3.9.0 (default, Oct 10 2020, 09:43:04) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
any
##### STEPS TO REPRODUCE
* ensure that you have two Python interpreters available, let's say py27 and py36
* remove wheel from python2.7, `pip2.7 uninstall wheel`
* run `ansible-test units --requirements`
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
After the expected WARNING about skipping Python 2.6 because it is missing, I would expect an INFO message telling me that ansible-test started using the next interpreter in the list, "Python 2.7", so that if pip fails I would know which pip it was, as it was clearly not the one from the default interpreter.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
WARNING: Skipping unit tests on Python 2.6 due to missing interpreter.
Could not build wheels for pip, since package 'wheel' is not installed.
Could not build wheels for setuptools, since package 'wheel' is not installed.
Could not build wheels for cryptography, since package 'wheel' is not installed.
Could not build wheels for ipaddress, since package 'wheel' is not installed.
Could not build wheels for six, since package 'wheel' is not installed.
Could not build wheels for cffi, since package 'wheel' is not installed.
....
```
|
https://github.com/ansible/ansible/issues/72855
|
https://github.com/ansible/ansible/pull/80022
|
e6cffce0eb58ba54c097f4ce7111bb97e6805051
|
5e3db6e44169aa88cd027f469eea96f1f17fea95
| 2020-12-04T13:07:21Z |
python
| 2023-02-21T01:55:04Z |
test/lib/ansible_test/_internal/python_requirements.py
|
"""Python requirements management"""
from __future__ import annotations
import base64
import dataclasses
import json
import os
import re
import typing as t
from .encoding import (
to_text,
to_bytes,
)
from .io import (
read_text_file,
)
from .util import (
ANSIBLE_TEST_DATA_ROOT,
ANSIBLE_TEST_TARGET_ROOT,
ANSIBLE_TEST_TOOLS_ROOT,
ApplicationError,
SubprocessError,
display,
find_executable,
raw_command,
str_to_version,
version_to_str,
)
from .util_common import (
check_pyyaml,
create_result_directories,
)
from .config import (
EnvironmentConfig,
IntegrationConfig,
UnitsConfig,
)
from .data import (
data_context,
)
from .host_configs import (
PosixConfig,
PythonConfig,
)
from .connections import (
LocalConnection,
Connection,
)
from .coverage_util import (
get_coverage_version,
)
QUIET_PIP_SCRIPT_PATH = os.path.join(ANSIBLE_TEST_TARGET_ROOT, 'setup', 'quiet_pip.py')
REQUIREMENTS_SCRIPT_PATH = os.path.join(ANSIBLE_TEST_TARGET_ROOT, 'setup', 'requirements.py')
# IMPORTANT: Keep this in sync with the ansible-test.txt requirements file.
VIRTUALENV_VERSION = '16.7.12'
# Pip Abstraction
class PipUnavailableError(ApplicationError):
"""Exception raised when pip is not available."""
def __init__(self, python: PythonConfig) -> None:
super().__init__(f'Python {python.version} at "{python.path}" does not have pip available.')
@dataclasses.dataclass(frozen=True)
class PipCommand:
"""Base class for pip commands."""
def serialize(self) -> tuple[str, dict[str, t.Any]]:
"""Return a serialized representation of this command."""
name = type(self).__name__[3:].lower()
return name, self.__dict__
@dataclasses.dataclass(frozen=True)
class PipInstall(PipCommand):
"""Details required to perform a pip install."""
requirements: list[tuple[str, str]]
constraints: list[tuple[str, str]]
packages: list[str]
def has_package(self, name: str) -> bool:
"""Return True if the specified package will be installed, otherwise False."""
name = name.lower()
return (any(name in package.lower() for package in self.packages) or
any(name in contents.lower() for path, contents in self.requirements))
@dataclasses.dataclass(frozen=True)
class PipUninstall(PipCommand):
"""Details required to perform a pip uninstall."""
packages: list[str]
ignore_errors: bool
@dataclasses.dataclass(frozen=True)
class PipVersion(PipCommand):
"""Details required to get the pip version."""
@dataclasses.dataclass(frozen=True)
class PipBootstrap(PipCommand):
"""Details required to bootstrap pip."""
pip_version: str
packages: list[str]
# Entry Points
def install_requirements(
args: EnvironmentConfig,
python: PythonConfig,
ansible: bool = False,
command: bool = False,
coverage: bool = False,
virtualenv: bool = False,
controller: bool = True,
connection: t.Optional[Connection] = None,
) -> None:
"""Install requirements for the given Python using the specified arguments."""
create_result_directories(args)
if not requirements_allowed(args, controller):
return
if command and isinstance(args, (UnitsConfig, IntegrationConfig)) and args.coverage:
coverage = True
cryptography = False
if ansible:
try:
ansible_cache = install_requirements.ansible_cache # type: ignore[attr-defined]
except AttributeError:
ansible_cache = install_requirements.ansible_cache = {} # type: ignore[attr-defined]
ansible_installed = ansible_cache.get(python.path)
if ansible_installed:
ansible = False
else:
ansible_cache[python.path] = True
# Install the latest cryptography version that the current requirements can support if it is not already available.
# This avoids downgrading cryptography when OS packages provide a newer version than we are able to install using pip.
# If not installed here, later install commands may try to install a version of cryptography which cannot be installed.
cryptography = not is_cryptography_available(python.path)
commands = collect_requirements(
python=python,
controller=controller,
ansible=ansible,
cryptography=cryptography,
command=args.command if command else None,
coverage=coverage,
virtualenv=virtualenv,
minimize=False,
sanity=None,
)
if not commands:
return
run_pip(args, python, commands, connection)
# false positive: pylint: disable=no-member
if any(isinstance(command, PipInstall) and command.has_package('pyyaml') for command in commands):
check_pyyaml(python)
def collect_bootstrap(python: PythonConfig) -> list[PipCommand]:
"""Return the details necessary to bootstrap pip into an empty virtual environment."""
infrastructure_packages = get_venv_packages(python)
pip_version = infrastructure_packages['pip']
packages = [f'{name}=={version}' for name, version in infrastructure_packages.items()]
bootstrap = PipBootstrap(
pip_version=pip_version,
packages=packages,
)
return [bootstrap]
def collect_requirements(
python: PythonConfig,
controller: bool,
ansible: bool,
cryptography: bool,
coverage: bool,
virtualenv: bool,
minimize: bool,
command: t.Optional[str],
sanity: t.Optional[str],
) -> list[PipCommand]:
"""Collect requirements for the given Python using the specified arguments."""
commands: list[PipCommand] = []
if virtualenv:
# sanity tests on Python 2.x install virtualenv when it is too old or is not already installed and the `--requirements` option is given
# the last version of virtualenv with no dependencies is used to minimize the changes made outside a virtual environment
commands.extend(collect_package_install(packages=[f'virtualenv=={VIRTUALENV_VERSION}'], constraints=False))
if coverage:
commands.extend(collect_package_install(packages=[f'coverage=={get_coverage_version(python.version).coverage_version}'], constraints=False))
if cryptography:
commands.extend(collect_package_install(packages=get_cryptography_requirements(python)))
if ansible or command:
commands.extend(collect_general_install(command, ansible))
if sanity:
commands.extend(collect_sanity_install(sanity))
if command == 'units':
commands.extend(collect_units_install())
if command in ('integration', 'windows-integration', 'network-integration'):
commands.extend(collect_integration_install(command, controller))
if (sanity or minimize) and any(isinstance(command, PipInstall) for command in commands):
# bootstrap the managed virtual environment, which will have been created without any installed packages
# sanity tests which install no packages skip this step
commands = collect_bootstrap(python) + commands
# most infrastructure packages can be removed from sanity test virtual environments after they've been created
# removing them reduces the size of environments cached in containers
uninstall_packages = list(get_venv_packages(python))
if not minimize:
# installed packages may have run-time dependencies on setuptools
uninstall_packages.remove('setuptools')
commands.extend(collect_uninstall(packages=uninstall_packages))
return commands
def run_pip(
args: EnvironmentConfig,
python: PythonConfig,
commands: list[PipCommand],
connection: t.Optional[Connection],
) -> None:
"""Run the specified pip commands for the given Python, and optionally the specified host."""
connection = connection or LocalConnection(args)
script = prepare_pip_script(commands)
if not args.explain:
try:
connection.run([python.path], data=script, capture=False)
except SubprocessError:
script = prepare_pip_script([PipVersion()])
try:
connection.run([python.path], data=script, capture=True)
except SubprocessError as ex:
if 'pip is unavailable:' in ex.stdout + ex.stderr:
raise PipUnavailableError(python)
raise
# Collect
def collect_general_install(
command: t.Optional[str] = None,
ansible: bool = False,
) -> list[PipInstall]:
"""Return details necessary for the specified general-purpose pip install(s)."""
requirements_paths: list[tuple[str, str]] = []
constraints_paths: list[tuple[str, str]] = []
if ansible:
path = os.path.join(ANSIBLE_TEST_DATA_ROOT, 'requirements', 'ansible.txt')
requirements_paths.append((ANSIBLE_TEST_DATA_ROOT, path))
if command:
path = os.path.join(ANSIBLE_TEST_DATA_ROOT, 'requirements', f'{command}.txt')
requirements_paths.append((ANSIBLE_TEST_DATA_ROOT, path))
return collect_install(requirements_paths, constraints_paths)
def collect_package_install(packages: list[str], constraints: bool = True) -> list[PipInstall]:
"""Return the details necessary to install the specified packages."""
return collect_install([], [], packages, constraints=constraints)
def collect_sanity_install(sanity: str) -> list[PipInstall]:
"""Return the details necessary for the specified sanity pip install(s)."""
requirements_paths: list[tuple[str, str]] = []
constraints_paths: list[tuple[str, str]] = []
path = os.path.join(ANSIBLE_TEST_DATA_ROOT, 'requirements', f'sanity.{sanity}.txt')
requirements_paths.append((ANSIBLE_TEST_DATA_ROOT, path))
if data_context().content.is_ansible:
path = os.path.join(data_context().content.sanity_path, 'code-smell', f'{sanity}.requirements.txt')
requirements_paths.append((data_context().content.root, path))
return collect_install(requirements_paths, constraints_paths, constraints=False)
def collect_units_install() -> list[PipInstall]:
"""Return details necessary for the specified units pip install(s)."""
requirements_paths: list[tuple[str, str]] = []
constraints_paths: list[tuple[str, str]] = []
path = os.path.join(data_context().content.unit_path, 'requirements.txt')
requirements_paths.append((data_context().content.root, path))
path = os.path.join(data_context().content.unit_path, 'constraints.txt')
constraints_paths.append((data_context().content.root, path))
return collect_install(requirements_paths, constraints_paths)
def collect_integration_install(command: str, controller: bool) -> list[PipInstall]:
"""Return details necessary for the specified integration pip install(s)."""
requirements_paths: list[tuple[str, str]] = []
constraints_paths: list[tuple[str, str]] = []
# Support for prefixed files was added to ansible-test in ansible-core 2.12 when split controller/target testing was implemented.
# Previous versions of ansible-test only recognize non-prefixed files.
# If a prefixed file exists (even if empty), it takes precedence over the non-prefixed file.
prefixes = ('controller.' if controller else 'target.', '')
for prefix in prefixes:
path = os.path.join(data_context().content.integration_path, f'{prefix}requirements.txt')
if os.path.exists(path):
requirements_paths.append((data_context().content.root, path))
break
for prefix in prefixes:
path = os.path.join(data_context().content.integration_path, f'{command}.{prefix}requirements.txt')
if os.path.exists(path):
requirements_paths.append((data_context().content.root, path))
break
for prefix in prefixes:
path = os.path.join(data_context().content.integration_path, f'{prefix}constraints.txt')
if os.path.exists(path):
constraints_paths.append((data_context().content.root, path))
break
return collect_install(requirements_paths, constraints_paths)
def collect_install(
requirements_paths: list[tuple[str, str]],
constraints_paths: list[tuple[str, str]],
packages: t.Optional[list[str]] = None,
constraints: bool = True,
) -> list[PipInstall]:
"""Build a pip install list from the given requirements, constraints and packages."""
# listing content constraints first gives them priority over constraints provided by ansible-test
constraints_paths = list(constraints_paths)
if constraints:
constraints_paths.append((ANSIBLE_TEST_DATA_ROOT, os.path.join(ANSIBLE_TEST_DATA_ROOT, 'requirements', 'constraints.txt')))
requirements = [(os.path.relpath(path, root), read_text_file(path)) for root, path in requirements_paths if usable_pip_file(path)]
constraints = [(os.path.relpath(path, root), read_text_file(path)) for root, path in constraints_paths if usable_pip_file(path)]
packages = packages or []
if requirements or packages:
installs = [PipInstall(
requirements=requirements,
constraints=constraints,
packages=packages,
)]
else:
installs = []
return installs
def collect_uninstall(packages: list[str], ignore_errors: bool = False) -> list[PipUninstall]:
"""Return the details necessary for the specified pip uninstall."""
uninstall = PipUninstall(
packages=packages,
ignore_errors=ignore_errors,
)
return [uninstall]
# Support
def get_venv_packages(python: PythonConfig) -> dict[str, str]:
"""Return a dictionary of Python packages needed for a consistent virtual environment specific to the given Python version."""
# NOTE: This same information is needed for building the base-test-container image.
# See: https://github.com/ansible/base-test-container/blob/main/files/installer.py
default_packages = dict(
pip='21.3.1',
setuptools='60.8.2',
wheel='0.37.1',
)
override_packages = {
'2.7': dict(
pip='20.3.4', # 21.0 requires Python 3.6+
setuptools='44.1.1', # 45.0.0 requires Python 3.5+
wheel=None,
),
'3.5': dict(
pip='20.3.4', # 21.0 requires Python 3.6+
setuptools='50.3.2', # 51.0.0 requires Python 3.6+
wheel=None,
),
'3.6': dict(
pip='21.3.1', # 22.0 requires Python 3.7+
setuptools='59.6.0', # 59.7.0 requires Python 3.7+
wheel=None,
),
}
packages = {name: version or default_packages[name] for name, version in override_packages.get(python.version, default_packages).items()}
return packages
def requirements_allowed(args: EnvironmentConfig, controller: bool) -> bool:
"""
Return True if requirements can be installed, otherwise return False.
Requirements are only allowed if one of the following conditions is met:
The user specified --requirements manually.
The install will occur on the controller and the controller or controller Python is managed by ansible-test.
The install will occur on the target and the target or target Python is managed by ansible-test.
"""
if args.requirements:
return True
if controller:
return args.controller.is_managed or args.controller.python.is_managed
target = args.only_targets(PosixConfig)[0]
return target.is_managed or target.python.is_managed
def prepare_pip_script(commands: list[PipCommand]) -> str:
"""Generate a Python script to perform the requested pip commands."""
data = [command.serialize() for command in commands]
display.info(f'>>> Requirements Commands\n{json.dumps(data, indent=4)}', verbosity=3)
args = dict(
script=read_text_file(QUIET_PIP_SCRIPT_PATH),
verbosity=display.verbosity,
commands=data,
)
payload = to_text(base64.b64encode(to_bytes(json.dumps(args))))
path = REQUIREMENTS_SCRIPT_PATH
template = read_text_file(path)
script = template.format(payload=payload)
display.info(f'>>> Python Script from Template ({path})\n{script.strip()}', verbosity=4)
return script
def usable_pip_file(path: t.Optional[str]) -> bool:
"""Return True if the specified pip file is usable, otherwise False."""
return bool(path) and os.path.exists(path) and bool(os.path.getsize(path))
# Cryptography
def is_cryptography_available(python: str) -> bool:
"""Return True if cryptography is available for the given python."""
try:
raw_command([python, '-c', 'import cryptography'], capture=True)
except SubprocessError:
return False
return True
def get_cryptography_requirements(python: PythonConfig) -> list[str]:
"""
Return the correct cryptography and pyopenssl requirements for the given python version.
The version of cryptography installed depends on the python version and openssl version.
"""
openssl_version = get_openssl_version(python)
if openssl_version and openssl_version < (1, 1, 0):
# cryptography 3.2 requires openssl 1.1.x or later
# see https://cryptography.io/en/latest/changelog.html#v3-2
cryptography = 'cryptography < 3.2'
# pyopenssl 20.0.0 requires cryptography 3.2 or later
pyopenssl = 'pyopenssl < 20.0.0'
else:
# cryptography 3.4+ builds require a working rust toolchain
# systems bootstrapped using ansible-core-ci can access additional wheels through the spare-tire package index
cryptography = 'cryptography'
# any future installation of pyopenssl is free to use any compatible version of cryptography
pyopenssl = ''
requirements = [
cryptography,
pyopenssl,
]
requirements = [requirement for requirement in requirements if requirement]
return requirements
def get_openssl_version(python: PythonConfig) -> t.Optional[tuple[int, ...]]:
"""Return the openssl version."""
if not python.version.startswith('2.'):
# OpenSSL version checking only works on Python 3.x.
# This should be the most accurate, since it is the Python we will be using.
version = json.loads(raw_command([python.path, os.path.join(ANSIBLE_TEST_TOOLS_ROOT, 'sslcheck.py')], capture=True)[0])['version']
if version:
display.info(f'Detected OpenSSL version {version_to_str(version)} under Python {python.version}.', verbosity=1)
return tuple(version)
# Fall back to detecting the OpenSSL version from the CLI.
# This should provide an adequate solution on Python 2.x.
openssl_path = find_executable('openssl', required=False)
if openssl_path:
try:
result = raw_command([openssl_path, 'version'], capture=True)[0]
except SubprocessError:
result = ''
match = re.search(r'^OpenSSL (?P<version>[0-9]+\.[0-9]+\.[0-9]+)', result)
if match:
version = str_to_version(match.group('version'))
display.info(f'Detected OpenSSL version {version_to_str(version)} using the openssl CLI.', verbosity=1)
return version
display.info('Unable to detect OpenSSL version.', verbosity=1)
return None
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,838 |
Cannot use `name` as an option for a lookup plugin
|
### Summary
When trying to use an option called `name` for a lookup plugin, an error is thrown.
I believe this is because `Templar._query_lookup()` has its own parameter called `name` [here](https://github.com/ansible/ansible/blob/44dcfde9b84177e7dfede11ab287789c577b82b5/lib/ansible/template/__init__.py#L813).
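To make the collision concrete, here is a minimal stand-in (not the real `Templar` class) showing how a keyword argument called `name` is captured by the method's own `name` parameter instead of being forwarded to the lookup plugin:

```python
# Hypothetical stand-in: the real Templar._query_lookup() also takes the lookup's
# name as its first parameter, which is what the keyword argument collides with.
class Templar:
    def _query_lookup(self, name, *args, **kwargs):
        return f"running lookup {name!r} with options {kwargs}"


templar = Templar()
print(templar._query_lookup("list", wantlist=True))  # fine: 'name' is the plugin name

try:
    templar._query_lookup("list", name="test")  # 'name' is already bound to "list"
except TypeError as exc:
    print(exc)  # ... got multiple values for argument 'name'
```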
### Issue Type
Bug Report
### Component Name
ansible/lib/ansible/template/__init__.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.0 (main, Oct 24 2022, 00:00:00) [GCC 12.2.1 20220819 (Red Hat 12.2.1-2)] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Fedora 37
### Steps to Reproduce
Call any lookup plugin with `name` (even if it doesn't support it).
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- debug:
    msg: "{{ q('list', name='test') }}"
```
### Expected Results
A lookup plugin can have an option called `name`.
### Actual Results
```console
fatal: [localhost]: FAILED! => {
"msg": "Unexpected templating type error occurred on ({{ q('list', name='test') }}): Templar._query_lookup() got multiple values for argument 'name'. Templar._query_lookup() got multiple values for argument 'name'"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79838
|
https://github.com/ansible/ansible/pull/80065
|
9f02e505d94bf402cfcc268efb3ace42dc2de4b7
|
1108c0f33170db99635dca5bf8dc45fabd1dd974
| 2023-01-29T23:23:49Z |
python
| 2023-02-21T22:54:32Z |
changelogs/fragments/79839-lookup-option-name.yml
|
---
bugfixes:
- templates - Fixed `TypeError` when a lookup plugin has an option called `name`.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,936 |
jinja complex type transforms: dict(somelist | slice(2)) doesn't work as documented
|
### Summary
There is a bug in "Complex Type Transformations" (file complex_data_manipulation.rst):
> These example produces `{"a": "b", "c": "d"}`
> ```
> vars:
> single_list: [ 'a', 'b', 'c', 'd' ]
> mydict: "{{ dict(single_list | slice(2)) }}"
> ```
> Both end up being the same thing, with ``slice(2)`` transforming ``single_list`` to a ``list_of_pairs`` generator.
But (as also pointed out in #15237) the Jinja `slice()` filter does not use its argument as the size of the slices, but as the number of slices to create:
```sh
$ ansible localhost -m debug -a "msg={{ [ 'a', 'b', 'c', 'd' ] | slice(2) }}"
localhost | SUCCESS => {
"msg": [
[
"a",
"b"
],
[
"c",
"d"
]
]
}
$ ansible localhost -m debug -a "msg={{ [ 'a', 'b', 'c', 'd' , 'e', 'f'] | slice(2) }}"
localhost | SUCCESS => {
"msg": [
[
"a",
"b",
"c"
],
[
"d",
"e",
"f"
]
]
}
```
And thus the `dict()` invocation fails.
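To see the difference in isolation, a small sketch using the jinja2 package directly (assuming it is installed): `batch(2)` groups a list into pairs, whereas `slice(2)` splits it into two slices.

```python
# Contrast slice() (split into N slices) with batch() (group into chunks of N).
from jinja2 import Environment

env = Environment()
items = ["a", "b", "c", "d", "e", "f"]

print(env.from_string("{{ items | slice(2) | list }}").render(items=items))
# [['a', 'b', 'c'], ['d', 'e', 'f']]  -> not pairs, so dict() rejects it

print(env.from_string("{{ dict(items | batch(2)) }}").render(items=items))
# {'a': 'b', 'c': 'd', 'e': 'f'}      -> pairs, so dict() works
```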
### Issue Type
Documentation Report
### Component Name
docs/docsite/playbook_guide/complex_data_manipulation.rst
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Fedora 37
### Additional Information
I'm still looking for a way to create a dict
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79936
|
https://github.com/ansible/ansible/pull/80067
|
940fdf5dba268c65859d5c55ab554f735467474e
|
5ad77fc7bb529d9733a17c1ef5d24a84b98f50d3
| 2023-02-07T09:46:11Z |
python
| 2023-02-23T15:27:17Z |
docs/docsite/rst/playbook_guide/complex_data_manipulation.rst
|
.. _complex_data_manipulation:
Manipulating data
#################
In many cases, you need to do some complex operation with your variables, while Ansible is not recommended as a data processing/manipulation tool, you can use the existing Jinja2 templating in conjunction with the many added Ansible filters, lookups and tests to do some very complex transformations.
Let's start with a quick definition of each type of plugin:
- lookups: Mainly used to query 'external data', in Ansible these were the primary part of loops using the ``with_<lookup>`` construct, but they can be used independently to return data for processing. They normally return a list due to their primary function in loops as mentioned previously. Used with the ``lookup`` or ``query`` Jinja2 operators.
- filters: used to change/transform data, used with the ``|`` Jinja2 operator.
- tests: used to validate data, used with the ``is`` Jinja2 operator.
.. _note:
* Some tests and filters are provided directly by Jinja2, so their availability depends on the Jinja2 version, not Ansible.
.. _for_loops_or_list_comprehensions:
Loops and list comprehensions
=============================
Most programming languages have loops (``for``, ``while``, and so on) and list comprehensions to do transformations on lists including lists of objects. Jinja2 has a few filters that provide this functionality: ``map``, ``select``, ``reject``, ``selectattr``, ``rejectattr``.
- map: this is a basic for loop that just allows you to change every item in a list, using the 'attribute' keyword you can do the transformation based on attributes of the list elements.
- select/reject: this is a for loop with a condition, that allows you to create a subset of a list that matches (or not) based on the result of the condition.
- selectattr/rejectattr: very similar to the above but it uses a specific attribute of the list elements for the conditional statement.
.. _exponential_backoff:
Use a loop to create exponential backoff for retries/until.
.. code-block:: yaml
- name: retry ping 10 times with exponential backoff delay
ping:
retries: 10
delay: '{{item|int}}'
loop: '{{ range(1, 10)|map("pow", 2) }}'
.. _keys_from_dict_matching_list:
Extract keys from a dictionary matching elements from a list
------------------------------------------------------------
The Python equivalent code would be:
.. code-block:: python
chains = [1, 2]
for chain in chains:
for config in chains_config[chain]['configs']:
print(config['type'])
There are several ways to do it in Ansible, this is just one example:
.. code-block:: YAML+Jinja
:emphasize-lines: 4
:caption: Way to extract matching keys from a list of dictionaries
tasks:
- name: Show extracted list of keys from a list of dictionaries
ansible.builtin.debug:
msg: "{{ chains | map('extract', chains_config) | map(attribute='configs') | flatten | map(attribute='type') | flatten }}"
vars:
chains: [1, 2]
chains_config:
1:
foo: bar
configs:
- type: routed
version: 0.1
- type: bridged
version: 0.2
2:
foo: baz
configs:
- type: routed
version: 1.0
- type: bridged
version: 1.1
.. code-block:: ansible-output
:caption: Results of debug task, a list with the extracted keys
ok: [localhost] => {
"msg": [
"routed",
"bridged",
"routed",
"bridged"
]
}
.. code-block:: YAML+Jinja
:caption: Get the unique list of values of a variable that vary per host
vars:
unique_value_list: "{{ groups['all'] | map ('extract', hostvars, 'varname') | list | unique}}"
.. _find_mount_point:
Find mount point
----------------
In this case, we want to find the mount point for a given path across our machines, since we already collect mount facts, we can use the following:
.. code-block:: YAML+Jinja
:caption: Use selectattr to filter mounts into list I can then sort and select the last from
:emphasize-lines: 8
- hosts: all
gather_facts: True
vars:
path: /var/lib/cache
tasks:
- name: The mount point for {{path}}, found using the Ansible mount facts, [-1] is the same as the 'last' filter
ansible.builtin.debug:
msg: "{{(ansible_facts.mounts | selectattr('mount', 'in', path) | list | sort(attribute='mount'))[-1]['mount']}}"
.. _omit_elements_from_list:
Omit elements from a list
-------------------------
The special ``omit`` variable ONLY works with module options, but we can still use it in other ways as an identifier to tailor a list of elements:
.. code-block:: YAML+Jinja
:caption: Inline list filtering when feeding a module option
:emphasize-lines: 3, 6
- name: Enable a list of Windows features, by name
ansible.builtin.set_fact:
win_feature_list: "{{ namestuff | reject('equalto', omit) | list }}"
vars:
namestuff:
- "{{ (fs_installed_smb_v1 | default(False)) | ternary(omit, 'FS-SMB1') }}"
- "foo"
- "bar"
Another way is to avoid adding elements to the list in the first place, so you can just use it directly:
.. code-block:: YAML+Jinja
:caption: Using set_fact in a loop to increment a list conditionally
:emphasize-lines: 3, 4, 6
- name: Build unique list with some items conditionally omitted
ansible.builtin.set_fact:
namestuff: ' {{ (namestuff | default([])) | union([item]) }}'
when: item != omit
loop:
- "{{ (fs_installed_smb_v1 | default(False)) | ternary(omit, 'FS-SMB1') }}"
- "foo"
- "bar"
.. _combine_optional_values:
Combine values from same list of dicts
---------------------------------------
Combining positive and negative filters from examples above, you can get a 'value when it exists' and a 'fallback' when it doesn't.
.. code-block:: YAML+Jinja
:caption: Use selectattr and rejectattr to get the ansible_host or inventory_hostname as needed
- hosts: localhost
tasks:
- name: Check hosts in inventory that respond to ssh port
wait_for:
host: "{{ item }}"
port: 22
loop: '{{ has_ah + no_ah }}'
vars:
has_ah: '{{ hostvars|dictsort|selectattr("1.ansible_host", "defined")|map(attribute="1.ansible_host")|list }}'
no_ah: '{{ hostvars|dictsort|rejectattr("1.ansible_host", "defined")|map(attribute="0")|list }}'
.. _custom_fileglob_variable:
Custom Fileglob Based on a Variable
-----------------------------------
This example uses `Python argument list unpacking <https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists>`_ to create a custom list of fileglobs based on a variable.
.. code-block:: YAML+Jinja
:caption: Using fileglob with a list based on a variable.
- hosts: all
vars:
mygroups:
- prod
- web
tasks:
- name: Copy a glob of files based on a list of groups
copy:
src: "{{ item }}"
dest: "/tmp/{{ item }}"
loop: '{{ q("fileglob", *globlist) }}'
vars:
globlist: '{{ mygroups | map("regex_replace", "^(.*)$", "files/\1/*.conf") | list }}'
.. _complex_type_transformations:
Complex Type transformations
=============================
Jinja provides filters for simple data type transformations (``int``, ``bool``, and so on), but when you want to transform data structures things are not as easy.
You can use loops and list comprehensions as shown above to help, also other filters and lookups can be chained and used to achieve more complex transformations.
.. _create_dictionary_from_list:
Create dictionary from list
---------------------------
In most languages it is easy to create a dictionary (a.k.a. map/associative array/hash and so on) from a list of pairs, in Ansible there are a couple of ways to do it and the best one for you might depend on the source of your data.
These example produces ``{"a": "b", "c": "d"}``
.. code-block:: YAML+Jinja
:caption: Simple list to dict by assuming the list is [key, value , key, value, ...]
vars:
single_list: [ 'a', 'b', 'c', 'd' ]
mydict: "{{ dict(single_list | slice(2)) }}"
.. code-block:: YAML+Jinja
:caption: It is simpler when we have a list of pairs:
vars:
list_of_pairs: [ ['a', 'b'], ['c', 'd'] ]
mydict: "{{ dict(list_of_pairs) }}"
Both end up being the same thing, with ``slice(2)`` transforming ``single_list`` to a ``list_of_pairs`` generator.
A bit more complex, using ``set_fact`` and a ``loop`` to create/update a dictionary with key value pairs from 2 lists:
.. code-block:: YAML+Jinja
:caption: Using set_fact to create a dictionary from a set of lists
:emphasize-lines: 3, 4
- name: Uses 'combine' to update the dictionary and 'zip' to make pairs of both lists
ansible.builtin.set_fact:
mydict: "{{ mydict | default({}) | combine({item[0]: item[1]}) }}"
loop: "{{ (keys | zip(values)) | list }}"
vars:
keys:
- foo
- var
- bar
values:
- a
- b
- c
This results in ``{"foo": "a", "var": "b", "bar": "c"}``.
You can even combine these simple examples with other filters and lookups to create a dictionary dynamically by matching patterns to variable names:
.. code-block:: YAML+Jinja
:caption: Using 'vars' to define dictionary from a set of lists without needing a task
vars:
xyz_stuff: 1234
xyz_morestuff: 567
myvarnames: "{{ q('varnames', '^xyz_') }}"
mydict: "{{ dict(myvarnames|map('regex_replace', '^xyz_', '')|list | zip(q('vars', *myvarnames))) }}"
A quick explanation, since there is a lot to unpack from these two lines:
- The ``varnames`` lookup returns a list of variables that match "begin with ``xyz_``".
- Then feeding the list from the previous step into the ``vars`` lookup to get the list of values.
The ``*`` is used to 'dereference the list' (a pythonism that works in Jinja), otherwise it would take the list as a single argument.
- Both lists get passed to the ``zip`` filter to pair them off into a unified list (key, value, key2, value2, ...).
- The dict function then takes this 'list of pairs' to create the dictionary.
An example on how to use facts to find a host's data that meets condition X:
.. code-block:: YAML+Jinja
vars:
uptime_of_host_most_recently_rebooted: "{{ansible_play_hosts_all | map('extract', hostvars, 'ansible_uptime_seconds') | sort | first}}"
An example to show a host uptime in days/hours/minutes/seconds (assumes facts were gathered).
.. code-block:: YAML+Jinja
- name: Show the uptime in days/hours/minutes/seconds
ansible.builtin.debug:
msg: Uptime {{ now().replace(microsecond=0) - now().fromtimestamp(now(fmt='%s') | int - ansible_uptime_seconds) }}
.. seealso::
:ref:`playbooks_filters`
Jinja2 filters included with Ansible
:ref:`playbooks_tests`
Jinja2 tests included with Ansible
`Jinja2 Docs <https://jinja.palletsprojects.com/>`_
Jinja2 documentation, includes lists for core filters and tests
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,661 |
package_facts requires python-rpm on SUSE systems in ansible 2.12.1
|
### Summary
This bug report is similar to #60707, which is about missing documentation for the package_facts module, but concerns the undocumented python-rpm requirement on SUSE distributions.
### Issue Type
Bug Report
### Component Name
package_facts
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.10]
```
### Configuration
```console
Not applicable
```
### OS / Environment
SUSE Enterprise 12
SUSE Enterprise 15
OpenSUSE Tumbleweed 15.3
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```
zypper rm python3-rpm
```
Then run the following playbook on an ansible controller node to test locally.
```
- name: Playbook to test package_facts module locally on Suse based systems.
  hosts: localhost
  become: yes
  connection: local
  gather_facts: yes
  tasks:
    - name: Gather package facts on Suse based systems
      package_facts:
        manager: auto
```
### Expected Results
```
$ ansible-playbook package.yml
PLAY [Playbook to test package_facts module locally on Suse based systems.] *******
TASK [Gathering Facts] *****************************************************************
ok: [localhost]
TASK [Gather package facts on Suse based systems] ************************************
ok: [localhost]
PLAY RECAP *****************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
ACTUAL RESULTS
$ ansible-playbook package.yml
PLAY [Playbook to test package_facts module locally on Suse based system.] *******
TASK [Gathering Facts] *****************************************************************
ok: [localhost]
TASK [Gather package facts on Suse based systems] ************************************
[WARNING]: Found "rpm" but Failed to import the required Python library (rpm) on localhost's Python /usr/bin/python3.6. Please read the module
documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter,
please consult the documentation on ansible_python_interpreter
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Could not detect a supported package manager from the following list: ['portage', 'rpm', 'apk', 'pkg', 'pacman', 'apt'], or the required Python library is not installed. Check warnings for details."}
```
Temporary workaround:
Install the python-rpm package on any SUSE based system where the package_facts module is going to be used.
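As a quick illustration (not part of the module itself), this standalone check shows whether the interpreter Ansible uses can import the rpm bindings that package_facts needs:

```python
# Illustrative check: does this interpreter have the "rpm" Python bindings?
import importlib.util
import sys

if importlib.util.find_spec("rpm") is None:
    print(f'{sys.executable} cannot import "rpm"; on SUSE, install python3-rpm (zypper in python3-rpm).')
else:
    print(f'{sys.executable} can import "rpm"; package_facts should be able to use the rpm manager.')
```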
### Actual Results
```console
$ ansible-playbook package.yml
PLAY [Playbook to test package_facts module locally on Suse based system.] *******
TASK [Gathering Facts] *****************************************************************
ok: [localhost]
TASK [Gather package facts on Suse based systems] ************************************
[WARNING]: Found "rpm" but Failed to import the required Python library (rpm) on lab-224's Python /usr/bin/python3.6. Please read the module
documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter,
please consult the documentation on ansible_python_interpreter
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Could not detect a supported package manager from the following list: ['portage', 'rpm', 'apk', 'pkg', 'pacman', 'apt'], or the required Python library is not installed. Check warnings for details."}
PLAY RECAP *****************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79661
|
https://github.com/ansible/ansible/pull/80041
|
5ad77fc7bb529d9733a17c1ef5d24a84b98f50d3
|
43aa47c2afb8292fa8ad257353dc3500dda347b9
| 2023-01-04T13:08:06Z |
python
| 2023-02-23T15:28:52Z |
lib/ansible/modules/package_facts.py
|
# (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# most of it copied from AWX's scan_packages module
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
module: package_facts
short_description: Package information as facts
description:
- Return information about installed packages as facts.
options:
manager:
description:
- The package manager used by the system so we can query the package information.
- Since 2.8 this is a list and can support multiple package managers per system.
- The 'portage' and 'pkg' options were added in version 2.8.
- The 'apk' option was added in version 2.11.
- The 'pkg_info' option was added in version 2.13.
default: ['auto']
choices: ['auto', 'rpm', 'apt', 'portage', 'pkg', 'pacman', 'apk', 'pkg_info']
type: list
elements: str
strategy:
description:
- This option controls how the module queries the package managers on the system.
C(first) means it will return only information for the first supported package manager available.
C(all) will return information for all supported and available package managers on the system.
choices: ['first', 'all']
default: 'first'
type: str
version_added: "2.8"
version_added: "2.5"
requirements:
- For 'portage' support it requires the C(qlist) utility, which is part of 'app-portage/portage-utils'.
- For Debian-based systems C(python-apt) package must be installed on targeted hosts.
author:
- Matthew Jones (@matburt)
- Brian Coca (@bcoca)
- Adam Miller (@maxamillion)
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.facts
attributes:
check_mode:
support: full
diff_mode:
support: none
facts:
support: full
platform:
platforms: posix
'''
EXAMPLES = '''
- name: Gather the package facts
ansible.builtin.package_facts:
manager: auto
- name: Print the package facts
ansible.builtin.debug:
var: ansible_facts.packages
- name: Check whether a package called foobar is installed
ansible.builtin.debug:
msg: "{{ ansible_facts.packages['foobar'] | length }} versions of foobar are installed!"
when: "'foobar' in ansible_facts.packages"
'''
RETURN = '''
ansible_facts:
description: Facts to add to ansible_facts.
returned: always
type: complex
contains:
packages:
description:
- Maps the package name to a non-empty list of dicts with package information.
- Every dict in the list corresponds to one installed version of the package.
- The fields described below are present for all package managers. Depending on the
package manager, there might be more fields for a package.
returned: when operating system level package manager is specified or auto detected manager
type: dict
contains:
name:
description: The package's name.
returned: always
type: str
version:
description: The package's version.
returned: always
type: str
source:
description: Where information on the package came from.
returned: always
type: str
sample: |-
{
"packages": {
"kernel": [
{
"name": "kernel",
"source": "rpm",
"version": "3.10.0",
...
},
{
"name": "kernel",
"source": "rpm",
"version": "3.10.0",
...
},
...
],
"kernel-tools": [
{
"name": "kernel-tools",
"source": "rpm",
"version": "3.10.0",
...
}
],
...
}
}
# Sample rpm
{
"packages": {
"kernel": [
{
"arch": "x86_64",
"epoch": null,
"name": "kernel",
"release": "514.26.2.el7",
"source": "rpm",
"version": "3.10.0"
},
{
"arch": "x86_64",
"epoch": null,
"name": "kernel",
"release": "514.16.1.el7",
"source": "rpm",
"version": "3.10.0"
},
{
"arch": "x86_64",
"epoch": null,
"name": "kernel",
"release": "514.10.2.el7",
"source": "rpm",
"version": "3.10.0"
},
{
"arch": "x86_64",
"epoch": null,
"name": "kernel",
"release": "514.21.1.el7",
"source": "rpm",
"version": "3.10.0"
},
{
"arch": "x86_64",
"epoch": null,
"name": "kernel",
"release": "693.2.2.el7",
"source": "rpm",
"version": "3.10.0"
}
],
"kernel-tools": [
{
"arch": "x86_64",
"epoch": null,
"name": "kernel-tools",
"release": "693.2.2.el7",
"source": "rpm",
"version": "3.10.0"
}
],
"kernel-tools-libs": [
{
"arch": "x86_64",
"epoch": null,
"name": "kernel-tools-libs",
"release": "693.2.2.el7",
"source": "rpm",
"version": "3.10.0"
}
],
}
}
# Sample deb
{
"packages": {
"libbz2-1.0": [
{
"version": "1.0.6-5",
"source": "apt",
"arch": "amd64",
"name": "libbz2-1.0"
}
],
"patch": [
{
"version": "2.7.1-4ubuntu1",
"source": "apt",
"arch": "amd64",
"name": "patch"
}
],
}
}
# Sample pkg_info
{
"packages": {
"curl": [
{
"name": "curl",
"source": "pkg_info",
"version": "7.79.0"
}
],
"intel-firmware": [
{
"name": "intel-firmware",
"source": "pkg_info",
"version": "20210608v0"
}
],
}
}
'''
import re
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
from ansible.module_utils.facts.packages import LibMgr, CLIMgr, get_all_pkg_managers
class RPM(LibMgr):
LIB = 'rpm'
def list_installed(self):
return self._lib.TransactionSet().dbMatch()
def get_package_details(self, package):
return dict(name=package[self._lib.RPMTAG_NAME],
version=package[self._lib.RPMTAG_VERSION],
release=package[self._lib.RPMTAG_RELEASE],
epoch=package[self._lib.RPMTAG_EPOCH],
arch=package[self._lib.RPMTAG_ARCH],)
def is_available(self):
''' we expect the python bindings installed, but this gives warning if they are missing and we have rpm cli'''
we_have_lib = super(RPM, self).is_available()
try:
get_bin_path('rpm')
if not we_have_lib and not has_respawned():
# try to locate an interpreter with the necessary lib
interpreters = ['/usr/libexec/platform-python',
'/usr/bin/python3',
'/usr/bin/python2']
interpreter_path = probe_interpreters_for_module(interpreters, self.LIB)
if interpreter_path:
respawn_module(interpreter_path)
# end of the line for this process; this module will exit when the respawned copy completes
if not we_have_lib:
module.warn('Found "rpm" but %s' % (missing_required_lib(self.LIB)))
except ValueError:
pass
return we_have_lib
class APT(LibMgr):
LIB = 'apt'
def __init__(self):
self._cache = None
super(APT, self).__init__()
@property
def pkg_cache(self):
if self._cache is not None:
return self._cache
self._cache = self._lib.Cache()
return self._cache
def is_available(self):
''' we expect the python bindings installed, but if there is apt/apt-get give warning about missing bindings'''
we_have_lib = super(APT, self).is_available()
if not we_have_lib:
for exe in ('apt', 'apt-get', 'aptitude'):
try:
get_bin_path(exe)
except ValueError:
continue
else:
if not has_respawned():
# try to locate an interpreter with the necessary lib
interpreters = ['/usr/bin/python3',
'/usr/bin/python2']
interpreter_path = probe_interpreters_for_module(interpreters, self.LIB)
if interpreter_path:
respawn_module(interpreter_path)
# end of the line for this process; this module will exit here when respawned copy completes
module.warn('Found "%s" but %s' % (exe, missing_required_lib('apt')))
break
return we_have_lib
def list_installed(self):
# Store the cache to avoid running pkg_cache() for each item in the comprehension, which is very slow
cache = self.pkg_cache
return [pk for pk in cache.keys() if cache[pk].is_installed]
def get_package_details(self, package):
ac_pkg = self.pkg_cache[package].installed
return dict(name=package, version=ac_pkg.version, arch=ac_pkg.architecture, category=ac_pkg.section, origin=ac_pkg.origins[0].origin)
class PACMAN(CLIMgr):
CLI = 'pacman'
def list_installed(self):
locale = get_best_parsable_locale(module)
rc, out, err = module.run_command([self._cli, '-Qi'], environ_update=dict(LC_ALL=locale))
if rc != 0 or err:
raise Exception("Unable to list packages rc=%s : %s" % (rc, err))
return out.split("\n\n")[:-1]
def get_package_details(self, package):
# parse values of details that might extend over several lines
raw_pkg_details = {}
last_detail = None
for line in package.splitlines():
m = re.match(r"([\w ]*[\w]) +: (.*)", line)
if m:
last_detail = m.group(1)
raw_pkg_details[last_detail] = m.group(2)
else:
# append value to previous detail
raw_pkg_details[last_detail] = raw_pkg_details[last_detail] + " " + line.lstrip()
provides = None
if raw_pkg_details['Provides'] != 'None':
provides = [
p.split('=')[0]
for p in raw_pkg_details['Provides'].split(' ')
]
return {
'name': raw_pkg_details['Name'],
'version': raw_pkg_details['Version'],
'arch': raw_pkg_details['Architecture'],
'provides': provides,
}
class PKG(CLIMgr):
CLI = 'pkg'
atoms = ['name', 'version', 'origin', 'installed', 'automatic', 'arch', 'category', 'prefix', 'vital']
def list_installed(self):
rc, out, err = module.run_command([self._cli, 'query', "%%%s" % '\t%'.join(['n', 'v', 'R', 't', 'a', 'q', 'o', 'p', 'V'])])
if rc != 0 or err:
raise Exception("Unable to list packages rc=%s : %s" % (rc, err))
return out.splitlines()
def get_package_details(self, package):
pkg = dict(zip(self.atoms, package.split('\t')))
if 'arch' in pkg:
try:
pkg['arch'] = pkg['arch'].split(':')[2]
except IndexError:
pass
if 'automatic' in pkg:
pkg['automatic'] = bool(int(pkg['automatic']))
if 'category' in pkg:
pkg['category'] = pkg['category'].split('/', 1)[0]
if 'version' in pkg:
if ',' in pkg['version']:
pkg['version'], pkg['port_epoch'] = pkg['version'].split(',', 1)
else:
pkg['port_epoch'] = 0
if '_' in pkg['version']:
pkg['version'], pkg['revision'] = pkg['version'].split('_', 1)
else:
pkg['revision'] = '0'
if 'vital' in pkg:
pkg['vital'] = bool(int(pkg['vital']))
return pkg
class PORTAGE(CLIMgr):
CLI = 'qlist'
atoms = ['category', 'name', 'version', 'ebuild_revision', 'slots', 'prefixes', 'sufixes']
def list_installed(self):
rc, out, err = module.run_command(' '.join([self._cli, '-Iv', '|', 'xargs', '-n', '1024', 'qatom']), use_unsafe_shell=True)
if rc != 0:
raise RuntimeError("Unable to list packages rc=%s : %s" % (rc, to_native(err)))
return out.splitlines()
def get_package_details(self, package):
return dict(zip(self.atoms, package.split()))
class APK(CLIMgr):
CLI = 'apk'
def list_installed(self):
rc, out, err = module.run_command([self._cli, 'info', '-v'])
if rc != 0 or err:
raise Exception("Unable to list packages rc=%s : %s" % (rc, err))
return out.splitlines()
def get_package_details(self, package):
raw_pkg_details = {'name': package, 'version': '', 'release': ''}
nvr = package.rsplit('-', 2)
try:
return {
'name': nvr[0],
'version': nvr[1],
'release': nvr[2],
}
except IndexError:
return raw_pkg_details
class PKG_INFO(CLIMgr):
CLI = 'pkg_info'
def list_installed(self):
rc, out, err = module.run_command([self._cli, '-a'])
if rc != 0 or err:
raise Exception("Unable to list packages rc=%s : %s" % (rc, err))
return out.splitlines()
def get_package_details(self, package):
raw_pkg_details = {'name': package, 'version': ''}
details = package.split(maxsplit=1)[0].rsplit('-', maxsplit=1)
try:
return {
'name': details[0],
'version': details[1],
}
except IndexError:
return raw_pkg_details
def main():
# get supported pkg managers
PKG_MANAGERS = get_all_pkg_managers()
PKG_MANAGER_NAMES = [x.lower() for x in PKG_MANAGERS.keys()]
# start work
global module
module = AnsibleModule(argument_spec=dict(manager={'type': 'list', 'elements': 'str', 'default': ['auto']},
strategy={'choices': ['first', 'all'], 'default': 'first'}),
supports_check_mode=True)
packages = {}
results = {'ansible_facts': {}}
managers = [x.lower() for x in module.params['manager']]
strategy = module.params['strategy']
if 'auto' in managers:
# keep order from user, we do dedupe below
managers.extend(PKG_MANAGER_NAMES)
managers.remove('auto')
unsupported = set(managers).difference(PKG_MANAGER_NAMES)
if unsupported:
if 'auto' in module.params['manager']:
msg = 'Could not auto detect a usable package manager, check warnings for details.'
else:
msg = 'Unsupported package managers requested: %s' % (', '.join(unsupported))
module.fail_json(msg=msg)
found = 0
seen = set()
for pkgmgr in managers:
if found and strategy == 'first':
break
# dedupe as per above
if pkgmgr in seen:
continue
seen.add(pkgmgr)
try:
try:
# manager throws exception on init (calls self.test) if not usable.
manager = PKG_MANAGERS[pkgmgr]()
if manager.is_available():
found += 1
packages.update(manager.get_packages())
except Exception as e:
if pkgmgr in module.params['manager']:
module.warn('Requested package manager %s was not usable by this module: %s' % (pkgmgr, to_text(e)))
continue
except Exception as e:
if pkgmgr in module.params['manager']:
module.warn('Failed to retrieve packages with %s: %s' % (pkgmgr, to_text(e)))
if found == 0:
msg = ('Could not detect a supported package manager from the following list: %s, '
'or the required Python library is not installed. Check warnings for details.' % managers)
module.fail_json(msg=msg)
# Set the facts, this will override the facts in ansible_facts that might exist from previous runs
# when using operating system level or distribution package managers
results['ansible_facts']['packages'] = packages
module.exit_json(**results)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,612 |
The unarchive module fails with a relative path for `dest`
|
##### SUMMARY
This works on the host:
```
curl -L https://github.com/mozilla/geckodriver/releases/download/v0.26.0/geckodriver-v0.26.0-linux64.tar.gz | tar xz
```
Using the `unarchive` module to do the same thing does not work.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
m:unarchive
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = None
configured module search path = ['/Users/foo/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.8.5_1/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.4 (default, Sep 7 2019, 18:27:02) [Clang 10.0.1 (clang-1001.0.46.4)]
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Client: macOS 10.14.6
Host: Ubuntu 19.10
The `tar` binary has been installed using `sudo apt install tar` and `/usr/bin/tar` is present.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Get geckodriver
  hosts: all
  tasks:
    - name: Fetch and extract geckodriver
      unarchive:
        src: https://github.com/mozilla/geckodriver/releases/download/v0.26.0/geckodriver-v0.26.0-linux64.tar.gz
        # this directory exists on the host:
        dest: crawler-stuff
        remote_src: yes
        # Tried with and without the following:
        #extra_opts: xz
```
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
It should work.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [Get geckodriver] *****************************************************************************
TASK [Fetch and extract geckodriver] ****************************************************************************
fatal: [foo@hostname]: FAILED! => {"changed": false, "msg": "Failed to find handler for \"/root/.ansible/tmp/ansible-tmp-1573229459.674401-203357087796695/geckodriver-v0.26.0-linux64.tar0RChLg.gz\". Make sure the required command to extract the file is installed. Command \"/usr/bin/tar\" could not handle archive. Command \"/usr/bin/unzip\" could not handle archive."}
```
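For reference, a minimal sketch of how a relative `dest` could be resolved to an absolute path before any extraction handler sees it. This is an illustration only — the eventual change in the linked PR may differ — and `base_dir` here is simply an assumed anchor directory such as the remote user's home:
```python
import os

def normalize_dest(dest, base_dir):
    """Resolve a possibly-relative destination against base_dir (illustration only)."""
    if not os.path.isabs(dest):
        # A relative path would otherwise typically be interpreted against the
        # module's working directory (the remote tmp dir), not the login directory.
        dest = os.path.join(base_dir, dest)
    return os.path.normpath(dest)

# Example: normalize_dest('crawler-stuff', '/home/foo') -> '/home/foo/crawler-stuff'
```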
|
https://github.com/ansible/ansible/issues/64612
|
https://github.com/ansible/ansible/pull/75267
|
f47bc03599eedc48753d2cd5e1bea177f35e6133
|
a56428de11ead49bb172f78fb7d8c971deb8e0e5
| 2019-11-08T16:32:07Z |
python
| 2023-03-01T15:54:00Z |
changelogs/fragments/64612-unarchive-relative-path-dest.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,612 |
The unarchive module fails with a relative path for `dest`
|
##### SUMMARY
This works on the host:
```
curl -L https://github.com/mozilla/geckodriver/releases/download/v0.26.0/geckodriver-v0.26.0-linux64.tar.gz | tar xz
```
Using the `unarchive` module to do the same thing does not work.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
m:unarchive
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = None
configured module search path = ['/Users/foo/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.8.5_1/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.4 (default, Sep 7 2019, 18:27:02) [Clang 10.0.1 (clang-1001.0.46.4)]
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Client: macOS 10.14.6
Host: Ubuntu 19.10
The `tar` binary has been installed using `sudo apt install tar` and `/usr/bin/tar` is present.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Get geckodriver
hosts: all
tasks:
- name: Fetch and extract geckodriver
unarchive:
src: https://github.com/mozilla/geckodriver/releases/download/v0.26.0/geckodriver-v0.26.0-linux64.tar.gz
# this directory exists on the host:
dest: crawler-stuff
remote_src: yes
# Tried with and without the following:
#extra_opts: xz
```
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
It should work.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [Get geckodriver] *****************************************************************************
TASK [Fetch and extract geckodriver] ****************************************************************************
fatal: [foo@hostname]: FAILED! => {"changed": false, "msg": "Failed to find handler for \"/root/.ansible/tmp/ansible-tmp-1573229459.674401-203357087796695/geckodriver-v0.26.0-linux64.tar0RChLg.gz\". Make sure the required command to extract the file is installed. Command \"/usr/bin/tar\" could not handle archive. Command \"/usr/bin/unzip\" could not handle archive."}
```
|
https://github.com/ansible/ansible/issues/64612
|
https://github.com/ansible/ansible/pull/75267
|
f47bc03599eedc48753d2cd5e1bea177f35e6133
|
a56428de11ead49bb172f78fb7d8c971deb8e0e5
| 2019-11-08T16:32:07Z |
python
| 2023-03-01T15:54:00Z |
lib/ansible/modules/unarchive.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2013, Dylan Martin <[email protected]>
# Copyright: (c) 2015, Toshio Kuratomi <[email protected]>
# Copyright: (c) 2016, Dag Wieers <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: unarchive
version_added: '1.4'
short_description: Unpacks an archive after (optionally) copying it from the local machine
description:
- The C(unarchive) module unpacks an archive. It will not unpack a compressed file that does not contain an archive.
- By default, it will copy the source file from the local system to the target before unpacking.
- Set C(remote_src=yes) to unpack an archive which already exists on the target.
- If checksum validation is desired, use M(ansible.builtin.get_url) or M(ansible.builtin.uri) instead to fetch the file and set C(remote_src=yes).
- For Windows targets, use the M(community.windows.win_unzip) module instead.
options:
src:
description:
- If C(remote_src=no) (default), local path to archive file to copy to the target server; can be absolute or relative. If C(remote_src=yes), path on the
target server to existing archive file to unpack.
- If C(remote_src=yes) and C(src) contains C(://), the remote machine will download the file from the URL first (version_added 2.0). This is only for
simple cases, for full download support use the M(ansible.builtin.get_url) module.
type: path
required: true
dest:
description:
- Remote absolute path where the archive should be unpacked.
- The given path must exist. Base directory is not created by this module.
type: path
required: true
copy:
description:
- If true, the file is copied from the local controller to the managed (remote) node; otherwise, the plugin will look for the src archive on the managed machine.
- This option has been deprecated in favor of C(remote_src).
- This option is mutually exclusive with C(remote_src).
type: bool
default: yes
creates:
description:
- If the specified absolute path (file or directory) already exists, this step will B(not) be run.
- The specified absolute path (file or directory) must be below the base path given with C(dest:).
type: path
version_added: "1.6"
io_buffer_size:
description:
- Size of the volatile memory buffer that is used for extracting files from the archive in bytes.
type: int
default: 65536
version_added: "2.12"
list_files:
description:
- If set to True, return the list of files that are contained in the tarball.
type: bool
default: no
version_added: "2.0"
exclude:
description:
- List the directory and file entries that you would like to exclude from the unarchive action.
- Mutually exclusive with C(include).
type: list
default: []
elements: str
version_added: "2.1"
include:
description:
- List of directory and file entries that you would like to extract from the archive. If C(include)
is not empty, only files listed here will be extracted.
- Mutually exclusive with C(exclude).
type: list
default: []
elements: str
version_added: "2.11"
keep_newer:
description:
- Do not replace existing files that are newer than files from the archive.
type: bool
default: no
version_added: "2.1"
extra_opts:
description:
- Specify additional options by passing in an array.
- Each space-separated command-line option should be a new element of the array. See examples.
- Command-line options with multiple elements must use multiple lines in the array, one for each element.
type: list
elements: str
default: []
version_added: "2.1"
remote_src:
description:
- Set to C(true) to indicate the archived file is already on the remote system and not local to the Ansible controller.
- This option is mutually exclusive with C(copy).
type: bool
default: no
version_added: "2.2"
validate_certs:
description:
- This only applies if using a https URL as the source of the file.
- This should only be set to C(false) on personally controlled sites using a self-signed certificate.
- Prior to 2.2 the code worked as if this was set to C(true).
type: bool
default: yes
version_added: "2.2"
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.flow
- action_common_attributes.files
- decrypt
- files
attributes:
action:
support: full
async:
support: none
bypass_host_loop:
support: none
check_mode:
support: partial
details: Not supported for gzipped tar files.
diff_mode:
support: partial
details: Uses gtar's C(--diff) arg to calculate if changed or not. If this C(arg) is not supported, it will always unpack the archive.
platform:
platforms: posix
safe_file_operations:
support: none
vault:
support: full
todo:
- Re-implement tar support using native tarfile module.
- Re-implement zip support using native zipfile module.
notes:
- Requires C(zipinfo) and C(gtar)/C(unzip) command on target host.
- Requires C(zstd) command on target host to expand I(.tar.zst) files.
- Can handle I(.zip) files using C(unzip) as well as I(.tar), I(.tar.gz), I(.tar.bz2), I(.tar.xz), and I(.tar.zst) files using C(gtar).
- Does not handle I(.gz) files, I(.bz2) files, I(.xz), or I(.zst) files that do not contain a I(.tar) archive.
- Existing files/directories in the destination which are not in the archive
are not touched. This is the same behavior as a normal archive extraction.
- Existing files/directories in the destination which are not in the archive
are ignored for purposes of deciding if the archive should be unpacked or not.
seealso:
- module: community.general.archive
- module: community.general.iso_extract
- module: community.windows.win_unzip
author: Michael DeHaan
'''
EXAMPLES = r'''
- name: Extract foo.tgz into /var/lib/foo
ansible.builtin.unarchive:
src: foo.tgz
dest: /var/lib/foo
- name: Unarchive a file that is already on the remote machine
ansible.builtin.unarchive:
src: /tmp/foo.zip
dest: /usr/local/bin
remote_src: yes
- name: Unarchive a file that needs to be downloaded (added in 2.0)
ansible.builtin.unarchive:
src: https://example.com/example.zip
dest: /usr/local/bin
remote_src: yes
- name: Unarchive a file with extra options
ansible.builtin.unarchive:
src: /tmp/foo.zip
dest: /usr/local/bin
extra_opts:
- --transform
- s/^xxx/yyy/
'''
RETURN = r'''
dest:
description: Path to the destination directory.
returned: always
type: str
sample: /opt/software
files:
description: List of all the files in the archive.
returned: When I(list_files) is True
type: list
sample: '["file1", "file2"]'
gid:
description: Numerical ID of the group that owns the destination directory.
returned: always
type: int
sample: 1000
group:
description: Name of the group that owns the destination directory.
returned: always
type: str
sample: "librarians"
handler:
description: Archive software handler used to extract and decompress the archive.
returned: always
type: str
sample: "TgzArchive"
mode:
description: String that represents the octal permissions of the destination directory.
returned: always
type: str
sample: "0755"
owner:
description: Name of the user that owns the destination directory.
returned: always
type: str
sample: "paul"
size:
description: The size of destination directory in bytes. Does not include the size of files or subdirectories contained within.
returned: always
type: int
sample: 36
src:
description:
- The source archive's path.
- If I(src) was a remote web URL, or from the local ansible controller, this shows the temporary location where the download was stored.
returned: always
type: str
sample: "/home/paul/test.tar.gz"
state:
description: State of the destination. Effectively always "directory".
returned: always
type: str
sample: "directory"
uid:
description: Numerical ID of the user that owns the destination directory.
returned: always
type: int
sample: 1000
'''
import binascii
import codecs
import datetime
import fnmatch
import grp
import os
import platform
import pwd
import re
import stat
import time
import traceback
from functools import partial
from zipfile import ZipFile, BadZipfile
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.urls import fetch_file
try: # python 3.3+
from shlex import quote # type: ignore[attr-defined]
except ImportError: # older python
from pipes import quote
# String from tar that shows the tar contents are different from the
# filesystem
OWNER_DIFF_RE = re.compile(r': Uid differs$')
GROUP_DIFF_RE = re.compile(r': Gid differs$')
MODE_DIFF_RE = re.compile(r': Mode differs$')
MOD_TIME_DIFF_RE = re.compile(r': Mod time differs$')
# NEWER_DIFF_RE = re.compile(r' is newer or same age.$')
EMPTY_FILE_RE = re.compile(r': : Warning: Cannot stat: No such file or directory$')
MISSING_FILE_RE = re.compile(r': Warning: Cannot stat: No such file or directory$')
ZIP_FILE_MODE_RE = re.compile(r'([r-][w-][SsTtx-]){3}')
INVALID_OWNER_RE = re.compile(r': Invalid owner')
INVALID_GROUP_RE = re.compile(r': Invalid group')
def crc32(path, buffer_size):
''' Return a CRC32 checksum of a file '''
crc = binascii.crc32(b'')
with open(path, 'rb') as f:
for b_block in iter(partial(f.read, buffer_size), b''):
crc = binascii.crc32(b_block, crc)
return crc & 0xffffffff
def shell_escape(string):
''' Quote meta-characters in the args for the unix shell '''
return re.sub(r'([^A-Za-z0-9_])', r'\\\1', string)
class UnarchiveError(Exception):
pass
class ZipArchive(object):
def __init__(self, src, b_dest, file_args, module):
self.src = src
self.b_dest = b_dest
self.file_args = file_args
self.opts = module.params['extra_opts']
self.module = module
self.io_buffer_size = module.params["io_buffer_size"]
self.excludes = module.params['exclude']
self.includes = []
self.include_files = self.module.params['include']
self.cmd_path = None
self.zipinfo_cmd_path = None
self._files_in_archive = []
self._infodict = dict()
self.zipinfoflag = ''
self.binaries = (
('unzip', 'cmd_path'),
('zipinfo', 'zipinfo_cmd_path'),
)
def _permstr_to_octal(self, modestr, umask):
''' Convert a Unix permission string (rw-r--r--) into a mode (0644) '''
revstr = modestr[::-1]
mode = 0
for j in range(0, 3):
for i in range(0, 3):
if revstr[i + 3 * j] in ['r', 'w', 'x', 's', 't']:
mode += 2 ** (i + 3 * j)
# The unzip utility does not support setting the stST bits
# if revstr[i + 3 * j] in ['s', 't', 'S', 'T' ]:
# mode += 2 ** (9 + j)
return (mode & ~umask)
def _legacy_file_list(self):
rc, out, err = self.module.run_command([self.cmd_path, '-v', self.src])
if rc:
raise UnarchiveError('Neither python zipfile nor unzip can read %s' % self.src)
for line in out.splitlines()[3:-2]:
fields = line.split(None, 7)
self._files_in_archive.append(fields[7])
self._infodict[fields[7]] = int(fields[6])
def _crc32(self, path):
if self._infodict:
return self._infodict[path]
try:
archive = ZipFile(self.src)
except BadZipfile as e:
if e.args[0].lower().startswith('bad magic number'):
# Python2.4 can't handle zipfiles with > 64K files. Try using
# /usr/bin/unzip instead
self._legacy_file_list()
else:
raise
else:
try:
for item in archive.infolist():
self._infodict[item.filename] = int(item.CRC)
except Exception:
archive.close()
raise UnarchiveError('Unable to list files in the archive')
return self._infodict[path]
@property
def files_in_archive(self):
if self._files_in_archive:
return self._files_in_archive
self._files_in_archive = []
try:
archive = ZipFile(self.src)
except BadZipfile as e:
if e.args[0].lower().startswith('bad magic number'):
# Python2.4 can't handle zipfiles with > 64K files. Try using
# /usr/bin/unzip instead
self._legacy_file_list()
else:
raise
else:
try:
for member in archive.namelist():
if self.include_files:
for include in self.include_files:
if fnmatch.fnmatch(member, include):
self._files_in_archive.append(to_native(member))
else:
exclude_flag = False
if self.excludes:
for exclude in self.excludes:
if fnmatch.fnmatch(member, exclude):
exclude_flag = True
break
if not exclude_flag:
self._files_in_archive.append(to_native(member))
except Exception as e:
archive.close()
raise UnarchiveError('Unable to list files in the archive: %s' % to_native(e))
archive.close()
return self._files_in_archive
def is_unarchived(self):
# BSD unzip doesn't support zipinfo listings with timestamp.
if self.zipinfoflag:
cmd = [self.zipinfo_cmd_path, self.zipinfoflag, '-T', '-s', self.src]
else:
cmd = [self.zipinfo_cmd_path, '-T', '-s', self.src]
if self.excludes:
cmd.extend(['-x', ] + self.excludes)
if self.include_files:
cmd.extend(self.include_files)
rc, out, err = self.module.run_command(cmd)
old_out = out
diff = ''
out = ''
if rc == 0:
unarchived = True
else:
unarchived = False
# Get some information related to user/group ownership
umask = os.umask(0)
os.umask(umask)
systemtype = platform.system()
# Get current user and group information
groups = os.getgroups()
run_uid = os.getuid()
run_gid = os.getgid()
try:
run_owner = pwd.getpwuid(run_uid).pw_name
except (TypeError, KeyError):
run_owner = run_uid
try:
run_group = grp.getgrgid(run_gid).gr_name
except (KeyError, ValueError, OverflowError):
run_group = run_gid
# Get future user ownership
fut_owner = fut_uid = None
if self.file_args['owner']:
try:
tpw = pwd.getpwnam(self.file_args['owner'])
except KeyError:
try:
tpw = pwd.getpwuid(int(self.file_args['owner']))
except (TypeError, KeyError, ValueError):
tpw = pwd.getpwuid(run_uid)
fut_owner = tpw.pw_name
fut_uid = tpw.pw_uid
else:
try:
fut_owner = run_owner
except Exception:
pass
fut_uid = run_uid
# Get future group ownership
fut_group = fut_gid = None
if self.file_args['group']:
try:
tgr = grp.getgrnam(self.file_args['group'])
except (ValueError, KeyError):
try:
# no need to check isdigit() explicitly here, if we fail to
# parse, the ValueError will be caught.
tgr = grp.getgrgid(int(self.file_args['group']))
except (KeyError, ValueError, OverflowError):
tgr = grp.getgrgid(run_gid)
fut_group = tgr.gr_name
fut_gid = tgr.gr_gid
else:
try:
fut_group = run_group
except Exception:
pass
fut_gid = run_gid
for line in old_out.splitlines():
change = False
pcs = line.split(None, 7)
if len(pcs) != 8:
# Too few fields... probably a piece of the header or footer
continue
# Check first and seventh field in order to skip header/footer
if len(pcs[0]) != 7 and len(pcs[0]) != 10:
continue
if len(pcs[6]) != 15:
continue
# Possible entries:
# -rw-rws--- 1.9 unx 2802 t- defX 11-Aug-91 13:48 perms.2660
# -rw-a-- 1.0 hpf 5358 Tl i4:3 4-Dec-91 11:33 longfilename.hpfs
# -r--ahs 1.1 fat 4096 b- i4:2 14-Jul-91 12:58 EA DATA. SF
# --w------- 1.0 mac 17357 bx i8:2 4-May-92 04:02 unzip.macr
if pcs[0][0] not in 'dl-?' or not frozenset(pcs[0][1:]).issubset('rwxstah-'):
continue
ztype = pcs[0][0]
permstr = pcs[0][1:]
version = pcs[1]
ostype = pcs[2]
size = int(pcs[3])
path = to_text(pcs[7], errors='surrogate_or_strict')
# Skip excluded files
if path in self.excludes:
out += 'Path %s is excluded on request\n' % path
continue
# Itemized change requires L for symlink
if path[-1] == '/':
if ztype != 'd':
err += 'Path %s incorrectly tagged as "%s", but is a directory.\n' % (path, ztype)
ftype = 'd'
elif ztype == 'l':
ftype = 'L'
elif ztype == '-':
ftype = 'f'
elif ztype == '?':
ftype = 'f'
# Some files may be storing FAT permissions, not Unix permissions
# For FAT permissions, we will use a base permissions set of 777 if the item is a directory or has the execute bit set. Otherwise, 666.
# This permission will then be modified by the system UMask.
# BSD always applies the Umask, even to Unix permissions.
# For Unix style permissions on Linux or Mac, we want to use them directly.
# So we set the UMask for this file to zero. That permission set will then be unchanged when calling _permstr_to_octal
if len(permstr) == 6:
if path[-1] == '/':
permstr = 'rwxrwxrwx'
elif permstr == 'rwx---':
permstr = 'rwxrwxrwx'
else:
permstr = 'rw-rw-rw-'
file_umask = umask
elif 'bsd' in systemtype.lower():
file_umask = umask
else:
file_umask = 0
# Test string conformity
if len(permstr) != 9 or not ZIP_FILE_MODE_RE.match(permstr):
raise UnarchiveError('ZIP info perm format incorrect, %s' % permstr)
# DEBUG
# err += "%s%s %10d %s\n" % (ztype, permstr, size, path)
b_dest = os.path.join(self.b_dest, to_bytes(path, errors='surrogate_or_strict'))
try:
st = os.lstat(b_dest)
except Exception:
change = True
self.includes.append(path)
err += 'Path %s is missing\n' % path
diff += '>%s++++++.?? %s\n' % (ftype, path)
continue
# Compare file types
if ftype == 'd' and not stat.S_ISDIR(st.st_mode):
change = True
self.includes.append(path)
err += 'File %s already exists, but not as a directory\n' % path
diff += 'c%s++++++.?? %s\n' % (ftype, path)
continue
if ftype == 'f' and not stat.S_ISREG(st.st_mode):
change = True
unarchived = False
self.includes.append(path)
err += 'Directory %s already exists, but not as a regular file\n' % path
diff += 'c%s++++++.?? %s\n' % (ftype, path)
continue
if ftype == 'L' and not stat.S_ISLNK(st.st_mode):
change = True
self.includes.append(path)
err += 'Directory %s already exists, but not as a symlink\n' % path
diff += 'c%s++++++.?? %s\n' % (ftype, path)
continue
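# The itemized change string is rsync-style: individual positions are overwritten
# below as differences are found ([2]=checksum, [3]=size, [4]=mtime, [5]=permissions, [6]=owner/group).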
itemized = list('.%s.......??' % ftype)
# Note: this timestamp calculation has a rounding error
# somewhere... unzip and this timestamp can be one second off
# When that happens, we report a change and re-unzip the file
dt_object = datetime.datetime(*(time.strptime(pcs[6], '%Y%m%d.%H%M%S')[0:6]))
timestamp = time.mktime(dt_object.timetuple())
# Compare file timestamps
if stat.S_ISREG(st.st_mode):
if self.module.params['keep_newer']:
if timestamp > st.st_mtime:
change = True
self.includes.append(path)
err += 'File %s is older, replacing file\n' % path
itemized[4] = 't'
elif stat.S_ISREG(st.st_mode) and timestamp < st.st_mtime:
# Add to excluded files, ignore other changes
out += 'File %s is newer, excluding file\n' % path
self.excludes.append(path)
continue
else:
if timestamp != st.st_mtime:
change = True
self.includes.append(path)
err += 'File %s differs in mtime (%f vs %f)\n' % (path, timestamp, st.st_mtime)
itemized[4] = 't'
# Compare file sizes
if stat.S_ISREG(st.st_mode) and size != st.st_size:
change = True
err += 'File %s differs in size (%d vs %d)\n' % (path, size, st.st_size)
itemized[3] = 's'
# Compare file checksums
if stat.S_ISREG(st.st_mode):
crc = crc32(b_dest, self.io_buffer_size)
if crc != self._crc32(path):
change = True
err += 'File %s differs in CRC32 checksum (0x%08x vs 0x%08x)\n' % (path, self._crc32(path), crc)
itemized[2] = 'c'
# Compare file permissions
# Do not handle permissions of symlinks
if ftype != 'L':
# Use the new mode provided with the action, if there is one
if self.file_args['mode']:
if isinstance(self.file_args['mode'], int):
mode = self.file_args['mode']
else:
try:
mode = int(self.file_args['mode'], 8)
except Exception as e:
try:
mode = AnsibleModule._symbolic_mode_to_octal(st, self.file_args['mode'])
except ValueError as e:
self.module.fail_json(path=path, msg="%s" % to_native(e), exception=traceback.format_exc())
# Only special files require no umask-handling
elif ztype == '?':
mode = self._permstr_to_octal(permstr, 0)
else:
mode = self._permstr_to_octal(permstr, file_umask)
if mode != stat.S_IMODE(st.st_mode):
change = True
itemized[5] = 'p'
err += 'Path %s differs in permissions (%o vs %o)\n' % (path, mode, stat.S_IMODE(st.st_mode))
# Compare file user ownership
owner = uid = None
try:
owner = pwd.getpwuid(st.st_uid).pw_name
except (TypeError, KeyError):
uid = st.st_uid
# If we are not root and requested owner is not our user, fail
if run_uid != 0 and (fut_owner != run_owner or fut_uid != run_uid):
raise UnarchiveError('Cannot change ownership of %s to %s, as user %s' % (path, fut_owner, run_owner))
if owner and owner != fut_owner:
change = True
err += 'Path %s is owned by user %s, not by user %s as expected\n' % (path, owner, fut_owner)
itemized[6] = 'o'
elif uid and uid != fut_uid:
change = True
err += 'Path %s is owned by uid %s, not by uid %s as expected\n' % (path, uid, fut_uid)
itemized[6] = 'o'
# Compare file group ownership
group = gid = None
try:
group = grp.getgrgid(st.st_gid).gr_name
except (KeyError, ValueError, OverflowError):
gid = st.st_gid
if run_uid != 0 and (fut_group != run_group or fut_gid != run_gid) and fut_gid not in groups:
raise UnarchiveError('Cannot change group ownership of %s to %s, as user %s' % (path, fut_group, run_owner))
if group and group != fut_group:
change = True
err += 'Path %s is owned by group %s, not by group %s as expected\n' % (path, group, fut_group)
itemized[6] = 'g'
elif gid and gid != fut_gid:
change = True
err += 'Path %s is owned by gid %s, not by gid %s as expected\n' % (path, gid, fut_gid)
itemized[6] = 'g'
# Register changed files and finalize diff output
if change:
if path not in self.includes:
self.includes.append(path)
diff += '%s %s\n' % (''.join(itemized), path)
if self.includes:
unarchived = False
# DEBUG
# out = old_out + out
return dict(unarchived=unarchived, rc=rc, out=out, err=err, cmd=cmd, diff=diff)
def unarchive(self):
cmd = [self.cmd_path, '-o']
if self.opts:
cmd.extend(self.opts)
cmd.append(self.src)
# NOTE: Including (changed) files as arguments is problematic (limits on command line/arguments)
# if self.includes:
# NOTE: Command unzip has this strange behaviour where it expects quoted filenames to also be escaped
# cmd.extend(map(shell_escape, self.includes))
if self.excludes:
cmd.extend(['-x'] + self.excludes)
if self.include_files:
cmd.extend(self.include_files)
cmd.extend(['-d', self.b_dest])
rc, out, err = self.module.run_command(cmd)
return dict(cmd=cmd, rc=rc, out=out, err=err)
def can_handle_archive(self):
missing = []
for b in self.binaries:
try:
setattr(self, b[1], get_bin_path(b[0]))
except ValueError:
missing.append(b[0])
if missing:
return False, "Unable to find required '{missing}' binary in the path.".format(missing="' or '".join(missing))
cmd = [self.cmd_path, '-l', self.src]
rc, out, err = self.module.run_command(cmd)
if rc == 0:
return True, None
return False, 'Command "%s" could not handle archive: %s' % (self.cmd_path, err)
class TgzArchive(object):
def __init__(self, src, b_dest, file_args, module):
self.src = src
self.b_dest = b_dest
self.file_args = file_args
self.opts = module.params['extra_opts']
self.module = module
if self.module.check_mode:
self.module.exit_json(skipped=True, msg="remote module (%s) does not support check mode when using gtar" % self.module._name)
self.excludes = [path.rstrip('/') for path in self.module.params['exclude']]
self.include_files = self.module.params['include']
self.cmd_path = None
self.tar_type = None
self.zipflag = '-z'
self._files_in_archive = []
def _get_tar_type(self):
cmd = [self.cmd_path, '--version']
(rc, out, err) = self.module.run_command(cmd)
tar_type = None
if out.startswith('bsdtar'):
tar_type = 'bsd'
elif out.startswith('tar') and 'GNU' in out:
tar_type = 'gnu'
return tar_type
@property
def files_in_archive(self):
if self._files_in_archive:
return self._files_in_archive
cmd = [self.cmd_path, '--list', '-C', self.b_dest]
if self.zipflag:
cmd.append(self.zipflag)
if self.opts:
cmd.extend(['--show-transformed-names'] + self.opts)
if self.excludes:
cmd.extend(['--exclude=' + f for f in self.excludes])
cmd.extend(['-f', self.src])
if self.include_files:
cmd.extend(self.include_files)
locale = get_best_parsable_locale(self.module)
rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LANGUAGE=locale))
if rc != 0:
raise UnarchiveError('Unable to list files in the archive: %s' % err)
for filename in out.splitlines():
# Compensate for locale-related problems in gtar output (octal unicode representation) #11348
# filename = filename.decode('string_escape')
filename = to_native(codecs.escape_decode(filename)[0])
# We don't allow absolute filenames. If the user wants to unarchive rooted in "/"
# they need to use "dest: '/'". This follows the defaults for gtar, pax, etc.
# Allowing absolute filenames here also causes bugs: https://github.com/ansible/ansible/issues/21397
if filename.startswith('/'):
filename = filename[1:]
exclude_flag = False
if self.excludes:
for exclude in self.excludes:
if fnmatch.fnmatch(filename, exclude):
exclude_flag = True
break
if not exclude_flag:
self._files_in_archive.append(to_native(filename))
return self._files_in_archive
def is_unarchived(self):
cmd = [self.cmd_path, '--diff', '-C', self.b_dest]
if self.zipflag:
cmd.append(self.zipflag)
if self.opts:
cmd.extend(['--show-transformed-names'] + self.opts)
if self.file_args['owner']:
cmd.append('--owner=' + quote(self.file_args['owner']))
if self.file_args['group']:
cmd.append('--group=' + quote(self.file_args['group']))
if self.module.params['keep_newer']:
cmd.append('--keep-newer-files')
if self.excludes:
cmd.extend(['--exclude=' + f for f in self.excludes])
cmd.extend(['-f', self.src])
if self.include_files:
cmd.extend(self.include_files)
locale = get_best_parsable_locale(self.module)
rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LANGUAGE=locale))
# Check whether the differences are in something that we're
# setting anyway
# What is different
unarchived = True
old_out = out
out = ''
run_uid = os.getuid()
# When unarchiving as a user, or when owner/group/mode is supplied --diff is insufficient
# Only way to be sure is to check request with what is on disk (as we do for zip)
# Leave this up to set_fs_attributes_if_different() instead of inducing a (false) change
for line in old_out.splitlines() + err.splitlines():
# FIXME: Remove the bogus lines from error-output as well !
# Ignore bogus errors on empty filenames (when using --strip-components)
if EMPTY_FILE_RE.search(line):
continue
if run_uid == 0 and not self.file_args['owner'] and OWNER_DIFF_RE.search(line):
out += line + '\n'
if run_uid == 0 and not self.file_args['group'] and GROUP_DIFF_RE.search(line):
out += line + '\n'
if not self.file_args['mode'] and MODE_DIFF_RE.search(line):
out += line + '\n'
if MOD_TIME_DIFF_RE.search(line):
out += line + '\n'
if MISSING_FILE_RE.search(line):
out += line + '\n'
if INVALID_OWNER_RE.search(line):
out += line + '\n'
if INVALID_GROUP_RE.search(line):
out += line + '\n'
if out:
unarchived = False
return dict(unarchived=unarchived, rc=rc, out=out, err=err, cmd=cmd)
def unarchive(self):
cmd = [self.cmd_path, '--extract', '-C', self.b_dest]
if self.zipflag:
cmd.append(self.zipflag)
if self.opts:
cmd.extend(['--show-transformed-names'] + self.opts)
if self.file_args['owner']:
cmd.append('--owner=' + quote(self.file_args['owner']))
if self.file_args['group']:
cmd.append('--group=' + quote(self.file_args['group']))
if self.module.params['keep_newer']:
cmd.append('--keep-newer-files')
if self.excludes:
cmd.extend(['--exclude=' + f for f in self.excludes])
cmd.extend(['-f', self.src])
if self.include_files:
cmd.extend(self.include_files)
locale = get_best_parsable_locale(self.module)
rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LANGUAGE=locale))
return dict(cmd=cmd, rc=rc, out=out, err=err)
def can_handle_archive(self):
# Prefer gtar (GNU tar) as it supports the compression options -z, -j and -J
try:
self.cmd_path = get_bin_path('gtar')
except ValueError:
# Fallback to tar
try:
self.cmd_path = get_bin_path('tar')
except ValueError:
return False, "Unable to find required 'gtar' or 'tar' binary in the path"
self.tar_type = self._get_tar_type()
if self.tar_type != 'gnu':
return False, 'Command "%s" detected as tar type %s. GNU tar required.' % (self.cmd_path, self.tar_type)
try:
if self.files_in_archive:
return True, None
except UnarchiveError as e:
return False, 'Command "%s" could not handle archive: %s' % (self.cmd_path, to_native(e))
# If we hit errors, or the archive contains no files, assume we were not
# able to properly unarchive it
return False, 'Command "%s" found no files in archive. Empty archive files are not supported.' % self.cmd_path
# Class to handle tar files that aren't compressed
class TarArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarArchive, self).__init__(src, b_dest, file_args, module)
# argument to tar
self.zipflag = ''
# Class to handle bzip2 compressed tar files
class TarBzipArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarBzipArchive, self).__init__(src, b_dest, file_args, module)
self.zipflag = '-j'
# Class to handle xz compressed tar files
class TarXzArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarXzArchive, self).__init__(src, b_dest, file_args, module)
self.zipflag = '-J'
# Class to handle zstd compressed tar files
class TarZstdArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarZstdArchive, self).__init__(src, b_dest, file_args, module)
# GNU Tar supports the --use-compress-program option to
# specify which executable to use for
# compression/decompression.
#
# Note: some flavors of BSD tar support --zstd (e.g., FreeBSD
# 12.2), but the TgzArchive class only supports GNU Tar.
self.zipflag = '--use-compress-program=zstd'
class ZipZArchive(ZipArchive):
def __init__(self, src, b_dest, file_args, module):
super(ZipZArchive, self).__init__(src, b_dest, file_args, module)
self.zipinfoflag = '-Z'
self.binaries = (
('unzip', 'cmd_path'),
('unzip', 'zipinfo_cmd_path'),
)
def can_handle_archive(self):
unzip_available, error_msg = super(ZipZArchive, self).can_handle_archive()
if not unzip_available:
return unzip_available, error_msg
# Ensure unzip -Z is available before we use it in is_unarchive
cmd = [self.zipinfo_cmd_path, self.zipinfoflag]
rc, out, err = self.module.run_command(cmd)
if 'zipinfo' in out.lower():
return True, None
return False, 'Command "unzip -Z" could not handle archive: %s' % err
# try handlers in order and return the one that works or bail if none work
def pick_handler(src, dest, file_args, module):
handlers = [ZipArchive, ZipZArchive, TgzArchive, TarArchive, TarBzipArchive, TarXzArchive, TarZstdArchive]
reasons = set()
for handler in handlers:
obj = handler(src, dest, file_args, module)
(can_handle, reason) = obj.can_handle_archive()
if can_handle:
return obj
reasons.add(reason)
reason_msg = '\n'.join(reasons)
module.fail_json(msg='Failed to find handler for "%s". Make sure the required command to extract the file is installed.\n%s' % (src, reason_msg))
def main():
module = AnsibleModule(
# not checking because of daisy chain to file module
argument_spec=dict(
src=dict(type='path', required=True),
dest=dict(type='path', required=True),
remote_src=dict(type='bool', default=False),
creates=dict(type='path'),
list_files=dict(type='bool', default=False),
keep_newer=dict(type='bool', default=False),
exclude=dict(type='list', elements='str', default=[]),
include=dict(type='list', elements='str', default=[]),
extra_opts=dict(type='list', elements='str', default=[]),
validate_certs=dict(type='bool', default=True),
io_buffer_size=dict(type='int', default=64 * 1024),
# Options that are for the action plugin, but ignored by the module itself.
# We have them here so that the sanity tests pass without ignores, which
# reduces the likelihood of further bugs being added.
copy=dict(type='bool', default=True),
decrypt=dict(type='bool', default=True),
),
add_file_common_args=True,
# check-mode only works for zip files, we cover that later
supports_check_mode=True,
mutually_exclusive=[('include', 'exclude')],
)
src = module.params['src']
dest = module.params['dest']
b_dest = to_bytes(dest, errors='surrogate_or_strict')
remote_src = module.params['remote_src']
file_args = module.load_file_common_arguments(module.params)
# did tar file arrive?
if not os.path.exists(src):
if not remote_src:
module.fail_json(msg="Source '%s' failed to transfer" % src)
# If remote_src=true, and src= contains ://, try and download the file to a temp directory.
elif '://' in src:
src = fetch_file(module, src)
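# fetch_file downloads the URL to a temporary file under the module's remote tmpdir
# and returns that local path; the handlers below operate on this temporary copy.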
else:
module.fail_json(msg="Source '%s' does not exist" % src)
if not os.access(src, os.R_OK):
module.fail_json(msg="Source '%s' not readable" % src)
# skip working with 0 size archives
try:
if os.path.getsize(src) == 0:
module.fail_json(msg="Invalid archive '%s', the file is 0 bytes" % src)
except Exception as e:
module.fail_json(msg="Source '%s' not readable, %s" % (src, to_native(e)))
# is dest OK to receive tar file?
if not os.path.isdir(b_dest):
module.fail_json(msg="Destination '%s' is not a directory" % dest)
handler = pick_handler(src, b_dest, file_args, module)
res_args = dict(handler=handler.__class__.__name__, dest=dest, src=src)
# do we need to do unpack?
check_results = handler.is_unarchived()
# DEBUG
# res_args['check_results'] = check_results
if module.check_mode:
res_args['changed'] = not check_results['unarchived']
elif check_results['unarchived']:
res_args['changed'] = False
else:
# do the unpack
try:
res_args['extract_results'] = handler.unarchive()
if res_args['extract_results']['rc'] != 0:
module.fail_json(msg="failed to unpack %s to %s" % (src, dest), **res_args)
except IOError:
module.fail_json(msg="failed to unpack %s to %s" % (src, dest), **res_args)
else:
res_args['changed'] = True
# Get diff if required
if check_results.get('diff', False):
res_args['diff'] = {'prepared': check_results['diff']}
# Run only if we found differences (idempotence) or diff was missing
if res_args.get('diff', True) and not module.check_mode:
# do we need to change perms?
top_folders = []
for filename in handler.files_in_archive:
file_args['path'] = os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict'))
try:
res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'], expand=False)
except (IOError, OSError) as e:
module.fail_json(msg="Unexpected error when accessing exploded file: %s" % to_native(e), **res_args)
if '/' in filename:
top_folder_path = filename.split('/')[0]
if top_folder_path not in top_folders:
top_folders.append(top_folder_path)
# make sure top folders have the right permissions
# https://github.com/ansible/ansible/issues/35426
if top_folders:
for f in top_folders:
file_args['path'] = "%s/%s" % (dest, f)
try:
res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'], expand=False)
except (IOError, OSError) as e:
module.fail_json(msg="Unexpected error when accessing exploded file: %s" % to_native(e), **res_args)
if module.params['list_files']:
res_args['files'] = handler.files_in_archive
module.exit_json(**res_args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,612 |
The unarchive module fails with a relative path for `dest`
|
##### SUMMARY
This works on the host:
```
curl -L https://github.com/mozilla/geckodriver/releases/download/v0.26.0/geckodriver-v0.26.0-linux64.tar.gz | tar xz
```
Using the `unarchive` module to do the same thing does not work.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
m:unarchive
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = None
configured module search path = ['/Users/foo/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.8.5_1/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.4 (default, Sep 7 2019, 18:27:02) [Clang 10.0.1 (clang-1001.0.46.4)]
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Client: macOS 10.14.6
Host: Ubuntu 19.10
The `tar` binary has been installed using `sudo apt install tar` and `/usr/bin/tar` is present.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Get geckodriver
hosts: all
tasks:
- name: Fetch and extract geckodriver
unarchive:
src: https://github.com/mozilla/geckodriver/releases/download/v0.26.0/geckodriver-v0.26.0-linux64.tar.gz
# this directory exists on the host:
dest: crawler-stuff
remote_src: yes
# Tried with and without the following:
#extra_opts: xz
```
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
It should work.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [Get geckodriver] *****************************************************************************
TASK [Fetch and extract geckodriver] ****************************************************************************
fatal: [foo@hostname]: FAILED! => {"changed": false, "msg": "Failed to find handler for \"/root/.ansible/tmp/ansible-tmp-1573229459.674401-203357087796695/geckodriver-v0.26.0-linux64.tar0RChLg.gz\". Make sure the required command to extract the file is installed. Command \"/usr/bin/tar\" could not handle archive. Command \"/usr/bin/unzip\" could not handle archive."}
```
|
https://github.com/ansible/ansible/issues/64612
|
https://github.com/ansible/ansible/pull/75267
|
f47bc03599eedc48753d2cd5e1bea177f35e6133
|
a56428de11ead49bb172f78fb7d8c971deb8e0e5
| 2019-11-08T16:32:07Z |
python
| 2023-03-01T15:54:00Z |
test/integration/targets/unarchive/tasks/main.yml
|
- import_tasks: prepare_tests.yml
- import_tasks: test_missing_binaries.yml
- import_tasks: test_tar.yml
- import_tasks: test_tar_gz.yml
- import_tasks: test_tar_gz_creates.yml
- import_tasks: test_tar_gz_owner_group.yml
- import_tasks: test_tar_gz_keep_newer.yml
- import_tasks: test_tar_zst.yml
- import_tasks: test_zip.yml
- import_tasks: test_exclude.yml
- import_tasks: test_include.yml
- import_tasks: test_parent_not_writeable.yml
- import_tasks: test_mode.yml
- import_tasks: test_quotable_characters.yml
- import_tasks: test_non_ascii_filename.yml
- import_tasks: test_missing_files.yml
- import_tasks: test_symlink.yml
- import_tasks: test_download.yml
- import_tasks: test_unprivileged_user.yml
- import_tasks: test_different_language_var.yml
- import_tasks: test_invalid_options.yml
- import_tasks: test_ownership_top_folder.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,612 |
The unarchive module fails with a relative path for `dest`
|
##### SUMMARY
This works on the host:
```
curl -L https://github.com/mozilla/geckodriver/releases/download/v0.26.0/geckodriver-v0.26.0-linux64.tar.gz | tar xz
```
Using the `unarchive` module to do the same thing does not work.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
m:unarchive
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = None
configured module search path = ['/Users/foo/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.8.5_1/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.4 (default, Sep 7 2019, 18:27:02) [Clang 10.0.1 (clang-1001.0.46.4)]
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Client: macOS 10.14.6
Host: Ubuntu 19.10
The `tar` binary has been installed using `sudo apt install tar` and `/usr/bin/tar` is present.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Get geckodriver
hosts: all
tasks:
- name: Fetch and extract geckodriver
unarchive:
src: https://github.com/mozilla/geckodriver/releases/download/v0.26.0/geckodriver-v0.26.0-linux64.tar.gz
# this directory exists on the host:
dest: crawler-stuff
remote_src: yes
# Tried with and without the following:
#extra_opts: xz
```
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
It should work.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [Get geckodriver] *****************************************************************************
TASK [Fetch and extract geckodriver] ****************************************************************************
fatal: [foo@hostname]: FAILED! => {"changed": false, "msg": "Failed to find handler for \"/root/.ansible/tmp/ansible-tmp-1573229459.674401-203357087796695/geckodriver-v0.26.0-linux64.tar0RChLg.gz\". Make sure the required command to extract the file is installed. Command \"/usr/bin/tar\" could not handle archive. Command \"/usr/bin/unzip\" could not handle archive."}
```
|
https://github.com/ansible/ansible/issues/64612
|
https://github.com/ansible/ansible/pull/75267
|
f47bc03599eedc48753d2cd5e1bea177f35e6133
|
a56428de11ead49bb172f78fb7d8c971deb8e0e5
| 2019-11-08T16:32:07Z |
python
| 2023-03-01T15:54:00Z |
test/integration/targets/unarchive/tasks/test_relative_dest.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,825 |
Existing APT repositories keys are not detected correctly
|
### Summary
I'm behind a proxy so I have to use workarounds to be able to download gpg keys.
However, ansible.builtin.apt_repository does not correctly detect already existing GPG keys via apt-key, so it tries to download them again with apt-key --recv-keys, which fails because I'm behind a proxy.
For the output attached to the "actual results" section, I modified apt_repository.py to print the apt-key export rc, stdout and stderr.
### Issue Type
Bug Report
### Component Name
apt_repository
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.10]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0]
jinja version = 2.10.1
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
```
### OS / Environment
Ubuntu 20.04, provisioning using Vagrant and the ansible_local provisioner.
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
I'm behind a proxy so I have to use the following workaround to be able to download gpg keys:
```yaml
- name: Install wireshark-dev gpg key
ansible.builtin.apt_key:
data: "{{ lookup('url', 'https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xd875551314eca0f0', split_lines=False) }}"
state: present
register: pkg_result
until: pkg_result is success
- name: Adding wireshark-dev ppa repository
ansible.builtin.apt_repository:
repo: 'ppa:wireshark-dev/stable'
state: present
```
### Expected Results
ansible.builtin.apt_repository should not try to download the existing GPG key using apt-key.
However, it tries to download it and this fails because I'm behind a proxy.
I've found the following code to be the culprit in the apt_repository.py module:
```python
def _key_already_exists(self, key_fingerprint):
if self.apt_key_bin:
rc, out, err = self.module.run_command([self.apt_key_bin, 'export', key_fingerprint], check_rc=True)
found = len(err) == 0
```
The code checks for an empty stderr, but apt-key issues the following warning message on stderr:
> Warning: apt-key output should not be parsed (stdout is not a terminal)
When a key does not exist locally, here is the content of stderr:
> Warning: apt-key output should not be parsed (stdout is not a terminal)
> gpg: WARNING: nothing exported
In version 2.12.10, the easiest way to patch it is to change line 456 to the following:
```python
return ("nothing exported" not in err)
```
In the current master branch, the easiest way to patch it is to change line 475 to the following:
```python
found = ("nothing exported" not in err)
```
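Putting the snippets above together, a sketch of what the corrected helper could look like (an assumption for illustration; the fix that actually landed in the linked PR may differ, and the non-apt-key fallback branch is omitted here):
```python
def _key_already_exists(self, key_fingerprint):
    """Return True if apt already knows this key, judged via `apt-key export`."""
    if self.apt_key_bin:
        rc, out, err = self.module.run_command(
            [self.apt_key_bin, 'export', key_fingerprint], check_rc=True)
        # apt-key always warns on stderr ("output should not be parsed"), so an
        # empty stderr is the wrong success signal; gpg prints "nothing exported"
        # only when the key is genuinely absent.
        return "nothing exported" not in err
    # Fallback for systems without apt-key intentionally omitted in this sketch.
    return False
```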
### Actual Results
```console
36122 1674752689.12920: _low_level_execute_command() done: rc=1, stdout=rc=0, out=-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFUDPR4BEADzM3Hc8HzTOu9fypxMIOGtcUVTCGSSI0NgtdTp96HZGtuQlweB
inSEVSauBcPYvW3UTt0fbOotMt6rACFUiH0bs1y20rXuHgVFkQmSfTQT+qDaCYud
g2MV8RFObQ+/MQrjnSqpSiNoLagW4+x6whY9wIlT4gpHbZLZ74/4ZxMkKdDFFQyn
jonQHW1w4iZXoMUGeFUcR7koZo5UKK+3DmNaWP2oj9sHzlHeFpXoshiLY05uL0E0
Tzu3+NHTbjMl88BezfYYfMawgIJyaA9PIoDe+Z/72dwjWYWWjqCsKv2BqBMScp82
5Ewa2i1aeH2RQ/l8ipZzcKKqK5y5LfIb6AgwsDStFet+43Gt1F1GoaOckkxdOpal
Mg/sNGORVHVV3/kxfGzQyj2KakfbkaGCSGRnj8iarhLhxr9tR8QP2kdCOBWkmYXJ
esei7hfrpA5BbrkUEyQ4lSCnPW8tBXfjgdUjv0kcs/nTOTg7v4FbhzvK0+fKSnjL
Pe+4fH8NNQLC79/EAiQztKTHojKFVUQfydn5VQO1nJ/Fq0ulyuMk23ezddqbtvs4
5UXIrzJSro5nJnigKGRR9Was6M8wdnke8qhqWN5pCmO5txx1AIZdOkwh0UqaVnTF
Kt0kDW/PkDGzm7qYCYLqphXi9Z4Dg2y9zGOnjxpTt30T6M2kl/GV6RFoHQARAQAB
tBhMYXVuY2hwYWQgUFBBIGZvciBSRU1udXiJAjgEEwECACIFAlUDPR4CGwMGCwkI
BwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEL/0UBZ4jeEVSMwQAN7pAtv1DzaXwaRL
N7qLKYheLGe5Y0IK15laxqnsBDqZTy4omLzvRPJ7UP3/EMLn3VSRtCZ2oZUGRfeu
UfW6Ms9tPHzrchXnqKM3oJInA4rvYeQ9NUwntLSY1S46/6NIDmY5t5XnR7gB/L4v
B616fFDYeaygum9BBvIS9VgEP0LFItA6H3eaVR3aWYNAjCIU6K6m9idhAodeo8HC
qgSEIDspXc9lL2rE1hTzUPrv8hpeFCmV6d/JwpSoCs8I7uZ81TjCRae7ydcq8g32
/F9zzHYMHJmCCmv2/zbi/yCroaEWhK7ouz0QtJPsSnytnrSFoTiGvHEd7LWIFxKs
itdV++Oz9IrxqSQ5Ww/+38U/SKOhN5zGl2H2nP8qLpjPyfIa/kNAXGkjHRuitWNA
6saTAFRSYGaitgQZmKzHrXQ4ZjFraMm6jfpwwGhqmIgZDjMWjqPSVV+LxnaaB5RW
n0EZczPCvkkierIy7pgTKeAeKVEkbLysnfwEQVBWxfziShbUWCQLnwXjY5kYStWF
1popjpX75KwbPEQlNgTS83QC20W19g2L+Z0r3NGEyz5hPGhAqbwN89/2sB0hUORK
VCu/lZu4kUIcOBGcSvQsqsBWIIewMPbxm7so/ba8je3WKDoPLqWDyIi5iTGy3WV8
Pw+Ns1dA9rRc3SmwLi01W390osKB
=jXnP
-----END PGP PUBLIC KEY BLOCK-----
, err=Warning: apt-key output should not be parsed (stdout is not a terminal)
{"cmd": "apt-key adv --recv-keys --no-tty --keyserver hkp://keyserver.ubuntu.com:80 E90F33EEF615660D25A02D32BFF45016788DE115", "rc": 2, "stdout": "Executing: /tmp/apt-key-gpghome.t2vhexoMJ0/gpg.1.sh --recv-keys --no-tty -
-keyserver hkp://keyserver.ubuntu.com:80 E90F33EEF615660D25A02D32BFF45016788DE115\n", "stderr": "Warning: apt-key output should not be parsed (stdout is not a terminal)\ngpg: keyserver receive failed: No name\n", "failed"
: true, "msg": "Warning: apt-key output should not be parsed (stdout is not a terminal)\ngpg: keyserver receive failed: No name", "invocation": {"module_args": {"repo": "ppa:remnux/stable", "state": "present", "update_cac
he": true, "update_cache_retries": 5, "update_cache_retry_max_delay": 12, "install_python_apt": true, "validate_certs": true, "mode": null, "filename": null, "codename": null}}}
, stderr=
36122 1674752689.12943: done with _execute_module (ansible.builtin.apt_repository, {'repo': 'ppa:remnux/stable', 'state': 'present', '_ansible_check_mode': False, '_ansible_no_log': False, '_ansible_debug': True, '_ansible_diff': False, '_ansible_verbosity': 4, '_ansible_version': '2.12.10', '_ansible_module_name': 'ansible.builtin.apt_repository', '_ansible_syslog_facility': 'LOG_USER', '_ansible_selinux_special_fs': ['fuse', 'nfs', 'vboxsf', 'ramfs', '9p', 'vfat'], '_ansible_string_conversion_action': 'warn', '_ansible_socket': None, '_ansible_shell_executable': '/bin/sh', '_ansible_keep_remote_files': False, '_ansible_tmpdir': '/root/.ansible/tmp/ansible-tmp-1674752688.6440187-36122-146911848313704/', '_ansible_remote_tmp': '~/.ansible/tmp'})
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79825
|
https://github.com/ansible/ansible/pull/79827
|
ff3ee9c4bdac68909bcb769091a198a7c45e6350
|
ca604513dbd8f7db590399f031a12dec38cd90d3
| 2023-01-26T18:28:23Z |
python
| 2023-03-06T21:14:35Z |
changelogs/fragments/apt_repo_fix.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,825 |
Existing APT repositories keys are not detected correctly
|
### Summary
I'm behind a proxy so I have to use workarounds to be able to download gpg keys.
However, ansible.builtin.apt_repository does not correctly detect already existing GPG keys via apt-key, so it tries to download them again with apt-key --recv-keys, which fails because I'm behind a proxy.
For the output attached to the "actual results" section, I modified apt_repository.py to print the apt-key export rc, stdout and stderr.
### Issue Type
Bug Report
### Component Name
apt_repository
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.10]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0]
jinja version = 2.10.1
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
```
### OS / Environment
Ubuntu 20.04, provisioning using Vagrant and the ansible_local provisioner.
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
I'm behind a proxy so I have to use the following workaround to be able to download gpg keys:
```yaml
- name: Install wireshark-dev gpg key
ansible.builtin.apt_key:
data: "{{ lookup('url', 'https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xd875551314eca0f0', split_lines=False) }}"
state: present
register: pkg_result
until: pkg_result is success
- name: Adding wireshark-dev ppa repository
ansible.builtin.apt_repository:
repo: 'ppa:wireshark-dev/stable'
state: present
```
### Expected Results
ansible.builtin.apt_repository should not try to download the existing GPG key using apt-key.
However, it tries to download it and this fails because I'm behind a proxy.
I've found the following code to be the culprit in the apt_repository.py module:
```python
def _key_already_exists(self, key_fingerprint):
if self.apt_key_bin:
rc, out, err = self.module.run_command([self.apt_key_bin, 'export', key_fingerprint], check_rc=True)
found = len(err) == 0
```
The code checks for an empty stderr, but apt-key issues the following warning message on stderr:
> Warning: apt-key output should not be parsed (stdout is not a terminal)
When a key does not exist locally, here is the content of stderr:
> Warning: apt-key output should not be parsed (stdout is not a terminal)
> gpg: WARNING: nothing exported
In version 2.12.10, the easiest way to patch it is to change line 456 to the following:
```python
return ("nothing exported" not in err)
```
In the current master branch, the easiest way to patch it is to change line 475 to the following:
```python
found = ("nothing exported" not in err)
```
### Actual Results
```console
36122 1674752689.12920: _low_level_execute_command() done: rc=1, stdout=rc=0, out=-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFUDPR4BEADzM3Hc8HzTOu9fypxMIOGtcUVTCGSSI0NgtdTp96HZGtuQlweB
inSEVSauBcPYvW3UTt0fbOotMt6rACFUiH0bs1y20rXuHgVFkQmSfTQT+qDaCYud
g2MV8RFObQ+/MQrjnSqpSiNoLagW4+x6whY9wIlT4gpHbZLZ74/4ZxMkKdDFFQyn
jonQHW1w4iZXoMUGeFUcR7koZo5UKK+3DmNaWP2oj9sHzlHeFpXoshiLY05uL0E0
Tzu3+NHTbjMl88BezfYYfMawgIJyaA9PIoDe+Z/72dwjWYWWjqCsKv2BqBMScp82
5Ewa2i1aeH2RQ/l8ipZzcKKqK5y5LfIb6AgwsDStFet+43Gt1F1GoaOckkxdOpal
Mg/sNGORVHVV3/kxfGzQyj2KakfbkaGCSGRnj8iarhLhxr9tR8QP2kdCOBWkmYXJ
esei7hfrpA5BbrkUEyQ4lSCnPW8tBXfjgdUjv0kcs/nTOTg7v4FbhzvK0+fKSnjL
Pe+4fH8NNQLC79/EAiQztKTHojKFVUQfydn5VQO1nJ/Fq0ulyuMk23ezddqbtvs4
5UXIrzJSro5nJnigKGRR9Was6M8wdnke8qhqWN5pCmO5txx1AIZdOkwh0UqaVnTF
Kt0kDW/PkDGzm7qYCYLqphXi9Z4Dg2y9zGOnjxpTt30T6M2kl/GV6RFoHQARAQAB
tBhMYXVuY2hwYWQgUFBBIGZvciBSRU1udXiJAjgEEwECACIFAlUDPR4CGwMGCwkI
BwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEL/0UBZ4jeEVSMwQAN7pAtv1DzaXwaRL
N7qLKYheLGe5Y0IK15laxqnsBDqZTy4omLzvRPJ7UP3/EMLn3VSRtCZ2oZUGRfeu
UfW6Ms9tPHzrchXnqKM3oJInA4rvYeQ9NUwntLSY1S46/6NIDmY5t5XnR7gB/L4v
B616fFDYeaygum9BBvIS9VgEP0LFItA6H3eaVR3aWYNAjCIU6K6m9idhAodeo8HC
qgSEIDspXc9lL2rE1hTzUPrv8hpeFCmV6d/JwpSoCs8I7uZ81TjCRae7ydcq8g32
/F9zzHYMHJmCCmv2/zbi/yCroaEWhK7ouz0QtJPsSnytnrSFoTiGvHEd7LWIFxKs
itdV++Oz9IrxqSQ5Ww/+38U/SKOhN5zGl2H2nP8qLpjPyfIa/kNAXGkjHRuitWNA
6saTAFRSYGaitgQZmKzHrXQ4ZjFraMm6jfpwwGhqmIgZDjMWjqPSVV+LxnaaB5RW
n0EZczPCvkkierIy7pgTKeAeKVEkbLysnfwEQVBWxfziShbUWCQLnwXjY5kYStWF
1popjpX75KwbPEQlNgTS83QC20W19g2L+Z0r3NGEyz5hPGhAqbwN89/2sB0hUORK
VCu/lZu4kUIcOBGcSvQsqsBWIIewMPbxm7so/ba8je3WKDoPLqWDyIi5iTGy3WV8
Pw+Ns1dA9rRc3SmwLi01W390osKB
=jXnP
-----END PGP PUBLIC KEY BLOCK-----
, err=Warning: apt-key output should not be parsed (stdout is not a terminal)
{"cmd": "apt-key adv --recv-keys --no-tty --keyserver hkp://keyserver.ubuntu.com:80 E90F33EEF615660D25A02D32BFF45016788DE115", "rc": 2, "stdout": "Executing: /tmp/apt-key-gpghome.t2vhexoMJ0/gpg.1.sh --recv-keys --no-tty -
-keyserver hkp://keyserver.ubuntu.com:80 E90F33EEF615660D25A02D32BFF45016788DE115\n", "stderr": "Warning: apt-key output should not be parsed (stdout is not a terminal)\ngpg: keyserver receive failed: No name\n", "failed"
: true, "msg": "Warning: apt-key output should not be parsed (stdout is not a terminal)\ngpg: keyserver receive failed: No name", "invocation": {"module_args": {"repo": "ppa:remnux/stable", "state": "present", "update_cac
he": true, "update_cache_retries": 5, "update_cache_retry_max_delay": 12, "install_python_apt": true, "validate_certs": true, "mode": null, "filename": null, "codename": null}}}
, stderr=
36122 1674752689.12943: done with _execute_module (ansible.builtin.apt_repository, {'repo': 'ppa:remnux/stable', 'state': 'present', '_ansible_check_mode': False, '_ansible_no_log': False, '_ansible_debug': True, '_ansible_diff': False, '_ansible_verbosity': 4, '_ansible_version': '2.12.10', '_ansible_module_name': 'ansible.builtin.apt_repository', '_ansible_syslog_facility': 'LOG_USER', '_ansible_selinux_special_fs': ['fuse', 'nfs', 'vboxsf', 'ramfs', '9p', 'vfat'], '_ansible_string_conversion_action': 'warn', '_ansible_socket': None, '_ansible_shell_executable': '/bin/sh', '_ansible_keep_remote_files': False, '_ansible_tmpdir': '/root/.ansible/tmp/ansible-tmp-1674752688.6440187-36122-146911848313704/', '_ansible_remote_tmp': '~/.ansible/tmp'})
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79825
|
https://github.com/ansible/ansible/pull/79827
|
ff3ee9c4bdac68909bcb769091a198a7c45e6350
|
ca604513dbd8f7db590399f031a12dec38cd90d3
| 2023-01-26T18:28:23Z |
python
| 2023-03-06T21:14:35Z |
lib/ansible/modules/apt_repository.py
|
# encoding: utf-8
# Copyright: (c) 2012, Matt Wright <[email protected]>
# Copyright: (c) 2013, Alexander Saltanov <[email protected]>
# Copyright: (c) 2014, Rutger Spiertz <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: apt_repository
short_description: Add and remove APT repositories
description:
- Add or remove an APT repository in Ubuntu and Debian.
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: debian
notes:
- This module supports Debian Squeeze (version 6) as well as its successors and derivatives.
options:
repo:
description:
- A source string for the repository.
type: str
required: true
state:
description:
- A source string state.
type: str
choices: [ absent, present ]
default: "present"
mode:
description:
- The octal mode for newly created files in sources.list.d.
- Default is what system uses (probably 0644).
type: raw
version_added: "1.6"
update_cache:
description:
- Run the equivalent of C(apt-get update) when a change occurs. Cache updates are run after making changes.
type: bool
default: "yes"
aliases: [ update-cache ]
update_cache_retries:
description:
- Amount of retries if the cache update fails. Also see I(update_cache_retry_max_delay).
type: int
default: 5
version_added: '2.10'
update_cache_retry_max_delay:
description:
- Use an exponential backoff delay for each retry (see I(update_cache_retries)) up to this max delay in seconds.
type: int
default: 12
version_added: '2.10'
validate_certs:
description:
- If C(false), SSL certificates for the target repo will not be validated. This should only be used
on personally controlled sites using self-signed certificates.
type: bool
default: 'yes'
version_added: '1.8'
filename:
description:
- Sets the name of the source list file in sources.list.d.
Defaults to a file name based on the repository source url.
The .list extension will be automatically added.
type: str
version_added: '2.1'
codename:
description:
- Override the distribution codename to use for PPA repositories.
Should usually only be set when working with a PPA on
a non-Ubuntu target (for example, Debian or Mint).
type: str
version_added: '2.3'
install_python_apt:
description:
- Whether to automatically try to install the Python apt library or not, if it is not already installed.
Without this library, the module does not work.
- Runs C(apt-get install python-apt) for Python 2, and C(apt-get install python3-apt) for Python 3.
- Only works with the system Python 2 or Python 3. If you are using a Python on the remote that is not
the system Python, set I(install_python_apt=false) and ensure that the Python apt library
for your Python version is installed some other way.
type: bool
default: true
author:
- Alexander Saltanov (@sashka)
version_added: "0.7"
requirements:
- python-apt (python 2)
- python3-apt (python 3)
- apt-key or gpg
'''
EXAMPLES = '''
- name: Add specified repository into sources list
ansible.builtin.apt_repository:
repo: deb http://archive.canonical.com/ubuntu hardy partner
state: present
- name: Add specified repository into sources list using specified filename
ansible.builtin.apt_repository:
repo: deb http://dl.google.com/linux/chrome/deb/ stable main
state: present
filename: google-chrome
- name: Add source repository into sources list
ansible.builtin.apt_repository:
repo: deb-src http://archive.canonical.com/ubuntu hardy partner
state: present
- name: Remove specified repository from sources list
ansible.builtin.apt_repository:
repo: deb http://archive.canonical.com/ubuntu hardy partner
state: absent
- name: Add nginx stable repository from PPA and install its signing key on Ubuntu target
ansible.builtin.apt_repository:
repo: ppa:nginx/stable
- name: Add nginx stable repository from PPA and install its signing key on Debian target
ansible.builtin.apt_repository:
repo: 'ppa:nginx/stable'
codename: trusty
- name: One way to avoid apt_key once it is removed from your distro
block:
- name: somerepo |no apt key
ansible.builtin.get_url:
url: https://download.example.com/linux/ubuntu/gpg
dest: /etc/apt/trusted.gpg.d/somerepo.asc
- name: somerepo | apt source
ansible.builtin.apt_repository:
repo: "deb [arch=amd64 signed-by=/etc/apt/trusted.gpg.d/myrepo.asc] https://download.example.com/linux/ubuntu {{ ansible_distribution_release }} stable"
state: present
'''
RETURN = '''
repo:
description: A source string for the repository
returned: always
type: str
sample: "deb https://artifacts.elastic.co/packages/6.x/apt stable main"
sources_added:
description: List of sources added
returned: success, sources were added
type: list
sample: ["/etc/apt/sources.list.d/artifacts_elastic_co_packages_6_x_apt.list"]
version_added: "2.15"
sources_removed:
description: List of sources removed
returned: success, sources were removed
type: list
sample: ["/etc/apt/sources.list.d/artifacts_elastic_co_packages_6_x_apt.list"]
version_added: "2.15"
'''
import copy
import glob
import json
import os
import re
import sys
import tempfile
import random
import time
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
from ansible.module_utils._text import to_native
from ansible.module_utils.six import PY3
from ansible.module_utils.urls import fetch_url
try:
import apt
import apt_pkg
import aptsources.distro as aptsources_distro
distro = aptsources_distro.get_distro()
HAVE_PYTHON_APT = True
except ImportError:
apt = apt_pkg = aptsources_distro = distro = None
HAVE_PYTHON_APT = False
APT_KEY_DIRS = ['/etc/apt/keyrings', '/etc/apt/trusted.gpg.d', '/usr/share/keyrings']
DEFAULT_SOURCES_PERM = 0o0644
VALID_SOURCE_TYPES = ('deb', 'deb-src')
def install_python_apt(module, apt_pkg_name):
if not module.check_mode:
apt_get_path = module.get_bin_path('apt-get')
if apt_get_path:
rc, so, se = module.run_command([apt_get_path, 'update'])
if rc != 0:
module.fail_json(msg="Failed to auto-install %s. Error was: '%s'" % (apt_pkg_name, se.strip()))
rc, so, se = module.run_command([apt_get_path, 'install', apt_pkg_name, '-y', '-q'])
if rc != 0:
module.fail_json(msg="Failed to auto-install %s. Error was: '%s'" % (apt_pkg_name, se.strip()))
else:
module.fail_json(msg="%s must be installed to use check mode" % apt_pkg_name)
class InvalidSource(Exception):
pass
# Simple version of aptsources.sourceslist.SourcesList.
# No advanced logic and no backups inside.
class SourcesList(object):
def __init__(self, module):
self.module = module
self.files = {} # group sources by file
# Repositories that we're adding -- used to implement mode param
self.new_repos = set()
self.default_file = self._apt_cfg_file('Dir::Etc::sourcelist')
# read sources.list if it exists
if os.path.isfile(self.default_file):
self.load(self.default_file)
# read sources.list.d
for file in glob.iglob('%s/*.list' % self._apt_cfg_dir('Dir::Etc::sourceparts')):
self.load(file)
def __iter__(self):
'''Simple iterator to go over all sources. Empty, non-source, and other invalid lines will be skipped.'''
for file, sources in self.files.items():
for n, valid, enabled, source, comment in sources:
if valid:
yield file, n, enabled, source, comment
def _expand_path(self, filename):
if '/' in filename:
return filename
else:
return os.path.abspath(os.path.join(self._apt_cfg_dir('Dir::Etc::sourceparts'), filename))
def _suggest_filename(self, line):
def _cleanup_filename(s):
filename = self.module.params['filename']
if filename is not None:
return filename
return '_'.join(re.sub('[^a-zA-Z0-9]', ' ', s).split())
def _strip_username_password(s):
if '@' in s:
s = s.split('@', 1)
s = s[-1]
return s
# Drop options and protocols.
line = re.sub(r'\[[^\]]+\]', '', line)
line = re.sub(r'\w+://', '', line)
# split line into valid keywords
parts = [part for part in line.split() if part not in VALID_SOURCE_TYPES]
# Drop usernames and passwords
parts[0] = _strip_username_password(parts[0])
return '%s.list' % _cleanup_filename(' '.join(parts[:1]))
def _parse(self, line, raise_if_invalid_or_disabled=False):
valid = False
enabled = True
source = ''
comment = ''
line = line.strip()
if line.startswith('#'):
enabled = False
line = line[1:]
# Check for another "#" in the line and treat a part after it as a comment.
i = line.find('#')
if i > 0:
comment = line[i + 1:].strip()
line = line[:i]
# Split a source into substring to make sure that it is source spec.
# Duplicated whitespaces in a valid source spec will be removed.
source = line.strip()
if source:
chunks = source.split()
if chunks[0] in VALID_SOURCE_TYPES:
valid = True
source = ' '.join(chunks)
if raise_if_invalid_or_disabled and (not valid or not enabled):
raise InvalidSource(line)
return valid, enabled, source, comment
@staticmethod
def _apt_cfg_file(filespec):
'''
Wrapper for `apt_pkg` module for running with Python 2.5
'''
try:
result = apt_pkg.config.find_file(filespec)
except AttributeError:
result = apt_pkg.Config.FindFile(filespec)
return result
@staticmethod
def _apt_cfg_dir(dirspec):
'''
Wrapper for `apt_pkg` module for running with Python 2.5
'''
try:
result = apt_pkg.config.find_dir(dirspec)
except AttributeError:
result = apt_pkg.Config.FindDir(dirspec)
return result
def load(self, file):
group = []
f = open(file, 'r')
for n, line in enumerate(f):
valid, enabled, source, comment = self._parse(line)
group.append((n, valid, enabled, source, comment))
self.files[file] = group
def save(self):
for filename, sources in list(self.files.items()):
if sources:
d, fn = os.path.split(filename)
try:
os.makedirs(d)
except OSError as ex:
if not os.path.isdir(d):
self.module.fail_json("Failed to create directory %s: %s" % (d, to_native(ex)))
try:
fd, tmp_path = tempfile.mkstemp(prefix=".%s-" % fn, dir=d)
except (OSError, IOError) as e:
self.module.fail_json(msg='Unable to create temp file at "%s" for apt source: %s' % (d, to_native(e)))
f = os.fdopen(fd, 'w')
for n, valid, enabled, source, comment in sources:
chunks = []
if not enabled:
chunks.append('# ')
chunks.append(source)
if comment:
chunks.append(' # ')
chunks.append(comment)
chunks.append('\n')
line = ''.join(chunks)
try:
f.write(line)
except IOError as ex:
self.module.fail_json(msg="Failed to write to file %s: %s" % (tmp_path, to_native(ex)))
self.module.atomic_move(tmp_path, filename)
# allow the user to override the default mode
if filename in self.new_repos:
this_mode = self.module.params.get('mode', DEFAULT_SOURCES_PERM)
self.module.set_mode_if_different(filename, this_mode, False)
else:
del self.files[filename]
if os.path.exists(filename):
os.remove(filename)
def dump(self):
dumpstruct = {}
for filename, sources in self.files.items():
if sources:
lines = []
for n, valid, enabled, source, comment in sources:
chunks = []
if not enabled:
chunks.append('# ')
chunks.append(source)
if comment:
chunks.append(' # ')
chunks.append(comment)
chunks.append('\n')
lines.append(''.join(chunks))
dumpstruct[filename] = ''.join(lines)
return dumpstruct
def _choice(self, new, old):
if new is None:
return old
return new
def modify(self, file, n, enabled=None, source=None, comment=None):
'''
This function is to be used with the iterator, so we don't care about invalid sources.
If source, enabled, or comment is None, original value from line ``n`` will be preserved.
'''
valid, enabled_old, source_old, comment_old = self.files[file][n][1:]
self.files[file][n] = (n, valid, self._choice(enabled, enabled_old), self._choice(source, source_old), self._choice(comment, comment_old))
def _add_valid_source(self, source_new, comment_new, file):
# We'll try to reuse disabled source if we have it.
# If we have more than one entry, we will enable them all - no advanced logic, remember.
self.module.log('adding source file: %s | %s | %s' % (source_new, comment_new, file))
found = False
for filename, n, enabled, source, comment in self:
if source == source_new:
self.modify(filename, n, enabled=True)
found = True
if not found:
if file is None:
file = self.default_file
else:
file = self._expand_path(file)
if file not in self.files:
self.files[file] = []
files = self.files[file]
files.append((len(files), True, True, source_new, comment_new))
self.new_repos.add(file)
def add_source(self, line, comment='', file=None):
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
# Prefer separate files for new sources.
self._add_valid_source(source, comment, file=file or self._suggest_filename(source))
def _remove_valid_source(self, source):
# If we have more than one entry, we will remove them all (not comment, remove!)
for filename, n, enabled, src, comment in self:
if source == src and enabled:
self.files[filename].pop(n)
def remove_source(self, line):
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
self._remove_valid_source(source)
class UbuntuSourcesList(SourcesList):
LP_API = 'https://launchpad.net/api/1.0/~%s/+archive/%s'
def __init__(self, module):
self.module = module
self.codename = module.params['codename'] or distro.codename
super(UbuntuSourcesList, self).__init__(module)
self.apt_key_bin = self.module.get_bin_path('apt-key', required=False)
self.gpg_bin = self.module.get_bin_path('gpg', required=False)
if not self.apt_key_bin and not self.gpg_bin:
self.module.fail_json(msg='Either apt-key or gpg binary is required, but neither could be found')
def __deepcopy__(self, memo=None):
return UbuntuSourcesList(self.module)
def _get_ppa_info(self, owner_name, ppa_name):
lp_api = self.LP_API % (owner_name, ppa_name)
headers = dict(Accept='application/json')
response, info = fetch_url(self.module, lp_api, headers=headers)
if info['status'] != 200:
self.module.fail_json(msg="failed to fetch PPA information, error was: %s" % info['msg'])
return json.loads(to_native(response.read()))
def _expand_ppa(self, path):
ppa = path.split(':')[1]
ppa_owner = ppa.split('/')[0]
try:
ppa_name = ppa.split('/')[1]
except IndexError:
ppa_name = 'ppa'
line = 'deb http://ppa.launchpad.net/%s/%s/ubuntu %s main' % (ppa_owner, ppa_name, self.codename)
return line, ppa_owner, ppa_name
def _key_already_exists(self, key_fingerprint):
if self.apt_key_bin:
rc, out, err = self.module.run_command([self.apt_key_bin, 'export', key_fingerprint], check_rc=True)
found = len(err) == 0
else:
found = self._gpg_key_exists(key_fingerprint)
return found
def _gpg_key_exists(self, key_fingerprint):
found = False
keyfiles = ['/etc/apt/trusted.gpg'] # main gpg repo for apt
for other_dir in APT_KEY_DIRS:
# add other known sources of gpg sigs for apt, skip hidden files
keyfiles.extend([os.path.join(other_dir, x) for x in os.listdir(other_dir) if not x.startswith('.')])
for key_file in keyfiles:
if os.path.exists(key_file):
try:
rc, out, err = self.module.run_command([self.gpg_bin, '--list-packets', key_file])
except (IOError, OSError) as e:
self.debug("Could check key against file %s: %s" % (key_file, to_native(e)))
continue
if key_fingerprint in out:
found = True
break
return found
# https://www.linuxuprising.com/2021/01/apt-key-is-deprecated-how-to-add.html
def add_source(self, line, comment='', file=None):
if line.startswith('ppa:'):
source, ppa_owner, ppa_name = self._expand_ppa(line)
if source in self.repos_urls:
# repository already exists
return
info = self._get_ppa_info(ppa_owner, ppa_name)
# add gpg sig if needed
if not self._key_already_exists(info['signing_key_fingerprint']):
# TODO: report file that would have been added if not check_mode
keyfile = ''
if not self.module.check_mode:
if self.apt_key_bin:
command = [self.apt_key_bin, 'adv', '--recv-keys', '--no-tty', '--keyserver', 'hkp://keyserver.ubuntu.com:80',
info['signing_key_fingerprint']]
else:
# use first available key dir, in order of preference
for keydir in APT_KEY_DIRS:
if os.path.exists(keydir):
break
else:
self.module.fail_json("Unable to find any existing apt gpgp repo directories, tried the following: %s" % ', '.join(APT_KEY_DIRS))
keyfile = '%s/%s-%s-%s.gpg' % (keydir, os.path.basename(source).replace(' ', '-'), ppa_owner, ppa_name)
command = [self.gpg_bin, '--no-tty', '--keyserver', 'hkp://keyserver.ubuntu.com:80', '--export', info['signing_key_fingerprint']]
rc, stdout, stderr = self.module.run_command(command, check_rc=True, encoding=None)
if keyfile:
# using gpg we must write keyfile ourselves
if len(stdout) == 0:
self.module.fail_json(msg='Unable to get required signing key', rc=rc, stderr=stderr, command=command)
try:
with open(keyfile, 'wb') as f:
f.write(stdout)
self.module.log('Added repo key "%s" for apt to file "%s"' % (info['signing_key_fingerprint'], keyfile))
except (OSError, IOError) as e:
self.module.fail_json(msg='Unable to add required signing key', rc=rc, stderr=stderr, error=to_native(e))
# apt source file
file = file or self._suggest_filename('%s_%s' % (line, self.codename))
else:
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
file = file or self._suggest_filename(source)
self._add_valid_source(source, comment, file)
def remove_source(self, line):
if line.startswith('ppa:'):
source = self._expand_ppa(line)[0]
else:
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
self._remove_valid_source(source)
@property
def repos_urls(self):
_repositories = []
for parsed_repos in self.files.values():
for parsed_repo in parsed_repos:
valid = parsed_repo[1]
enabled = parsed_repo[2]
source_line = parsed_repo[3]
if not valid or not enabled:
continue
if source_line.startswith('ppa:'):
source, ppa_owner, ppa_name = self._expand_ppa(source_line)
_repositories.append(source)
else:
_repositories.append(source_line)
return _repositories
def revert_sources_list(sources_before, sources_after, sourceslist_before):
'''Revert the sourcelist files to their previous state.'''
# First remove any new files that were created:
for filename in set(sources_after.keys()).difference(sources_before.keys()):
if os.path.exists(filename):
os.remove(filename)
# Now revert the existing files to their former state:
sourceslist_before.save()
def main():
module = AnsibleModule(
argument_spec=dict(
repo=dict(type='str', required=True),
state=dict(type='str', default='present', choices=['absent', 'present']),
mode=dict(type='raw'),
update_cache=dict(type='bool', default=True, aliases=['update-cache']),
update_cache_retries=dict(type='int', default=5),
update_cache_retry_max_delay=dict(type='int', default=12),
filename=dict(type='str'),
# This should not be needed, but exists as a failsafe
install_python_apt=dict(type='bool', default=True),
validate_certs=dict(type='bool', default=True),
codename=dict(type='str'),
),
supports_check_mode=True,
)
params = module.params
repo = module.params['repo']
state = module.params['state']
update_cache = module.params['update_cache']
# Note: mode is referenced in SourcesList class via the passed in module (self here)
sourceslist = None
if not HAVE_PYTHON_APT:
# This interpreter can't see the apt Python library- we'll do the following to try and fix that:
# 1) look in common locations for system-owned interpreters that can see it; if we find one, respawn under it
# 2) finding none, try to install a matching python-apt package for the current interpreter version;
# we limit to the current interpreter version to try and avoid installing a whole other Python just
# for apt support
# 3) if we installed a support package, try to respawn under what we think is the right interpreter (could be
# the current interpreter again, but we'll let it respawn anyway for simplicity)
# 4) if still not working, return an error and give up (some corner cases not covered, but this shouldn't be
# made any more complex than it already is to try and cover more, eg, custom interpreters taking over
# system locations)
apt_pkg_name = 'python3-apt' if PY3 else 'python-apt'
if has_respawned():
# this shouldn't be possible; short-circuit early if it happens...
module.fail_json(msg="{0} must be installed and visible from {1}.".format(apt_pkg_name, sys.executable))
interpreters = ['/usr/bin/python3', '/usr/bin/python2', '/usr/bin/python']
interpreter = probe_interpreters_for_module(interpreters, 'apt')
if interpreter:
# found the Python bindings; respawn this module under the interpreter where we found them
respawn_module(interpreter)
# this is the end of the line for this process, it will exit here once the respawned module has completed
# don't make changes if we're in check_mode
if module.check_mode:
module.fail_json(msg="%s must be installed to use check mode. "
"If run normally this module can auto-install it." % apt_pkg_name)
if params['install_python_apt']:
install_python_apt(module, apt_pkg_name)
else:
module.fail_json(msg='%s is not installed, and install_python_apt is False' % apt_pkg_name)
# try again to find the bindings in common places
interpreter = probe_interpreters_for_module(interpreters, 'apt')
if interpreter:
# found the Python bindings; respawn this module under the interpreter where we found them
# NB: respawn is somewhat wasteful if it's this interpreter, but simplifies the code
respawn_module(interpreter)
# this is the end of the line for this process, it will exit here once the respawned module has completed
else:
# we've done all we can do; just tell the user it's busted and get out
module.fail_json(msg="{0} must be installed and visible from {1}.".format(apt_pkg_name, sys.executable))
if not repo:
module.fail_json(msg='Please set argument \'repo\' to a non-empty value')
if isinstance(distro, aptsources_distro.Distribution):
sourceslist = UbuntuSourcesList(module)
else:
module.fail_json(msg='Module apt_repository is not supported on target.')
sourceslist_before = copy.deepcopy(sourceslist)
sources_before = sourceslist.dump()
try:
if state == 'present':
sourceslist.add_source(repo)
elif state == 'absent':
sourceslist.remove_source(repo)
except InvalidSource as ex:
module.fail_json(msg='Invalid repository string: %s' % to_native(ex))
sources_after = sourceslist.dump()
changed = sources_before != sources_after
diff = []
sources_added = set()
sources_removed = set()
if changed:
sources_added = set(sources_after.keys()).difference(sources_before.keys())
sources_removed = set(sources_before.keys()).difference(sources_after.keys())
if module._diff:
for filename in set(sources_added.union(sources_removed)):
diff.append({'before': sources_before.get(filename, ''),
'after': sources_after.get(filename, ''),
'before_header': (filename, '/dev/null')[filename not in sources_before],
'after_header': (filename, '/dev/null')[filename not in sources_after]})
if changed and not module.check_mode:
try:
sourceslist.save()
if update_cache:
err = ''
update_cache_retries = module.params.get('update_cache_retries')
update_cache_retry_max_delay = module.params.get('update_cache_retry_max_delay')
randomize = random.randint(0, 1000) / 1000.0
for retry in range(update_cache_retries):
try:
cache = apt.Cache()
cache.update()
break
except apt.cache.FetchFailedException as e:
err = to_native(e)
# Use exponential backoff with a max fail count, plus a little bit of randomness
delay = 2 ** retry + randomize
if delay > update_cache_retry_max_delay:
delay = update_cache_retry_max_delay + randomize
time.sleep(delay)
else:
revert_sources_list(sources_before, sources_after, sourceslist_before)
module.fail_json(msg='Failed to update apt cache: %s' % (err if err else 'unknown reason'))
except (OSError, IOError) as ex:
revert_sources_list(sources_before, sources_after, sourceslist_before)
module.fail_json(msg=to_native(ex))
module.exit_json(changed=changed, repo=repo, sources_added=sources_added, sources_removed=sources_removed, state=state, diff=diff)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,638 |
inventory variable replace failed
|
### Summary
Variables in the inventory can no longer be replaced with their actual values when a task is executed, starting with version 2.13.x, because of this MR: https://github.com/ansible/ansible/pull/76590
config:
primaryIp: 'aa.aa.aa.aa'
standbyIp: 'bb.bb.bb.bb'
user_name: xx
inventory:
[xx]
primary ansible_ssh_user={{user_name}} ansible_ssh_host={{primaryIp}}
standby ansible_ssh_user={{user_name}} ansible_ssh_host={{standbyIp}}
expected result:

Actual Results:

### Issue Type
Bug Report
### Component Name
lib/ansible/executor/task_executor.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.6]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/xxx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib64/python3.9/site-packages/ansible
ansible collection location = /home/xxx/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible
python version = 3.9.9 (main, Sep 21 2022, 09:00:34) [GCC 10.3.1]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CACHE_PLUGIN(/etc/ansible/ansible.cfg) = memory
COMMAND_WARNINGS(/etc/ansible/ansible.cfg) = False
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 20
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/hosts']
DEFAULT_MANAGED_STR(/etc/ansible/ansible.cfg) = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
DEFAULT_NO_TARGET_SYSLOG(/etc/ansible/ansible.cfg) = True
DEFAULT_POLL_INTERVAL(/etc/ansible/ansible.cfg) = 15
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 60
DEFAULT_TRANSPORT(/etc/ansible/ansible.cfg) = paramiko
DEPRECATION_WARNINGS(/etc/ansible/ansible.cfg) = False
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = auto_legacy_silent
PARAMIKO_HOST_KEY_AUTO_ADD(/etc/ansible/ansible.cfg) = True
PARAMIKO_LOOK_FOR_KEYS(/etc/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = True
SYSTEM_WARNINGS(/etc/ansible/ansible.cfg) = False
CONNECTION:
==========
paramiko_ssh:
____________
host_key_auto_add(/etc/ansible/ansible.cfg) = True
host_key_checking(/etc/ansible/ansible.cfg) = False
look_for_keys(/etc/ansible/ansible.cfg) = False
ssh_args(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=1800s
ssh:
___
host_key_checking(/etc/ansible/ansible.cfg) = False
pipelining(/etc/ansible/ansible.cfg) = True
scp_if_ssh(/etc/ansible/ansible.cfg) = smart
ssh_args(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=1800s
timeout(/etc/ansible/ansible.cfg) = 60
SHELL:
=====
sh:
__
remote_tmp(/etc/ansible/ansible.cfg) = ~/.ansible/tmp
```
### OS / Environment
EulerOS
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
any
```
config:
primaryIp: 'aa.aa.aa.aa'
standbyIp: 'bb.bb.bb.bb'
user_name: xx
inventory:
[xx]
primary ansible_ssh_user={{user_name}} ansible_ssh_host={{primaryIp}}
standby ansible_ssh_user={{user_name}} ansible_ssh_host={{standbyIp}}
### Expected Results
ssh normal:

### Actual Results
```console
version after 2.13.x:
ansible_ssh_host cannot be replaced with the actual IP: ssh {{primaryIp}} or {{standbyip}}, so it failed with:
<{{standbyIp}}> ESTABLISH PARAMIKO SSH CONNECTION FOR USER: {{user_name}} on PORT 22 TO {{standbyIp}}
fatal: [primary]: UNREACHABLE! => {
"changed": false,
"msg": "[Errno -2] Name or service not known",
"unreachable": true
}
Traceback (most recent call last):
File "/opt/ansible/lib64/python3.9/site-packages/ansible/plugins/connection/paramiko_ssh.py", line 345, in _connect_uncached
ssh.connect(
File "/opt/ansible/lib64/python3.9/site-packages/paramiko/client.py", line 340, in connect
to_try = list(self._families_and_addresses(hostname, port))
File "/opt/ansible/lib64/python3.9/site-packages/paramiko/client.py", line 203, in _families_and_addresses
addrinfos = socket.getaddrinfo(
File "/usr/lib64/python3.9/socket.py", line 954, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
fatal: [standby]: UNREACHABLE! => {
"changed": false,
"msg": "[Errno -2] Name or service not known",
"unreachable": true
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79638
|
https://github.com/ansible/ansible/pull/79704
|
694f12d01b17e4aba50bda55546edada6e79b5a8
|
a1bff416edf9b9c8bd5c3b002277eed5b5323953
| 2022-12-30T03:18:46Z |
python
| 2023-03-07T16:09:14Z |
changelogs/fragments/paramiko_config.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,638 |
inventory variable replace failed
|
### Summary
Variables in the inventory can no longer be replaced with their actual values when a task is executed, starting with version 2.13.x, because of this MR: https://github.com/ansible/ansible/pull/76590
config:
primaryIp: 'aa.aa.aa.aa'
standbyIp: 'bb.bb.bb.bb'
user_name: xx
inventory:
[xx]
primary ansible_ssh_user={{user_name}} ansible_ssh_host={{primaryIp}}
standby ansible_ssh_user={{user_name}} ansible_ssh_host={{standbyIp}}
expected result:

Actual Results:

### Issue Type
Bug Report
### Component Name
lib/ansible/executor/task_executor.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.6]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/xxx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib64/python3.9/site-packages/ansible
ansible collection location = /home/xxx/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible
python version = 3.9.9 (main, Sep 21 2022, 09:00:34) [GCC 10.3.1]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CACHE_PLUGIN(/etc/ansible/ansible.cfg) = memory
COMMAND_WARNINGS(/etc/ansible/ansible.cfg) = False
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 20
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/hosts']
DEFAULT_MANAGED_STR(/etc/ansible/ansible.cfg) = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
DEFAULT_NO_TARGET_SYSLOG(/etc/ansible/ansible.cfg) = True
DEFAULT_POLL_INTERVAL(/etc/ansible/ansible.cfg) = 15
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 60
DEFAULT_TRANSPORT(/etc/ansible/ansible.cfg) = paramiko
DEPRECATION_WARNINGS(/etc/ansible/ansible.cfg) = False
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = auto_legacy_silent
PARAMIKO_HOST_KEY_AUTO_ADD(/etc/ansible/ansible.cfg) = True
PARAMIKO_LOOK_FOR_KEYS(/etc/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = True
SYSTEM_WARNINGS(/etc/ansible/ansible.cfg) = False
CONNECTION:
==========
paramiko_ssh:
____________
host_key_auto_add(/etc/ansible/ansible.cfg) = True
host_key_checking(/etc/ansible/ansible.cfg) = False
look_for_keys(/etc/ansible/ansible.cfg) = False
ssh_args(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=1800s
ssh:
___
host_key_checking(/etc/ansible/ansible.cfg) = False
pipelining(/etc/ansible/ansible.cfg) = True
scp_if_ssh(/etc/ansible/ansible.cfg) = smart
ssh_args(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=1800s
timeout(/etc/ansible/ansible.cfg) = 60
SHELL:
=====
sh:
__
remote_tmp(/etc/ansible/ansible.cfg) = ~/.ansible/tmp
```
### OS / Environment
EulerOS
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
any
```
config:
primaryIp: 'aa.aa.aa.aa'
standbyIp: 'bb.bb.bb.bb'
user_name: xx
inventory:
[xx]
primary ansible_ssh_user={{user_name}} ansible_ssh_host={{primaryIp}}
standby ansible_ssh_user={{user_name}} ansible_ssh_host={{standbyIp}}
### Expected Results
ssh normal:

### Actual Results
```console
version after 2.13.x:
ansible_ssh_host cannot be replaced with the actual IP: ssh {{primaryIp}} or {{standbyip}}, so it failed with:
<{{standbyIp}}> ESTABLISH PARAMIKO SSH CONNECTION FOR USER: {{user_name}} on PORT 22 TO {{standbyIp}}
fatal: [primary]: UNREACHABLE! => {
"changed": false,
"msg": "[Errno -2] Name or service not known",
"unreachable": true
}
Traceback (most recent call last):
File "/opt/ansible/lib64/python3.9/site-packages/ansible/plugins/connection/paramiko_ssh.py", line 345, in _connect_uncached
ssh.connect(
File "/opt/ansible/lib64/python3.9/site-packages/paramiko/client.py", line 340, in connect
to_try = list(self._families_and_addresses(hostname, port))
File "/opt/ansible/lib64/python3.9/site-packages/paramiko/client.py", line 203, in _families_and_addresses
addrinfos = socket.getaddrinfo(
File "/usr/lib64/python3.9/socket.py", line 954, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
fatal: [standby]: UNREACHABLE! => {
"changed": false,
"msg": "[Errno -2] Name or service not known",
"unreachable": true
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79638
|
https://github.com/ansible/ansible/pull/79704
|
694f12d01b17e4aba50bda55546edada6e79b5a8
|
a1bff416edf9b9c8bd5c3b002277eed5b5323953
| 2022-12-30T03:18:46Z |
python
| 2023-03-07T16:09:14Z |
lib/ansible/plugins/connection/paramiko_ssh.py
|
# (c) 2012, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
author: Ansible Core Team
name: paramiko
short_description: Run tasks via python ssh (paramiko)
description:
- Use the python ssh implementation (Paramiko) to connect to targets
- The paramiko transport is provided because many distributions, in particular EL6 and before do not support ControlPersist
in their SSH implementations.
- This is needed on the Ansible control machine to be reasonably efficient with connections.
Thus paramiko is faster for most users on these platforms.
Users with ControlPersist capability can consider using -c ssh or configuring the transport in the configuration file.
- This plugin also borrows a lot of settings from the ssh plugin as they both cover the same protocol.
version_added: "0.1"
options:
remote_addr:
description:
- Address of the remote target
default: inventory_hostname
vars:
- name: inventory_hostname
- name: ansible_host
- name: ansible_ssh_host
- name: ansible_paramiko_host
remote_user:
description:
- User to login/authenticate as
- Can be set from the CLI via the C(--user) or C(-u) options.
vars:
- name: ansible_user
- name: ansible_ssh_user
- name: ansible_paramiko_user
env:
- name: ANSIBLE_REMOTE_USER
- name: ANSIBLE_PARAMIKO_REMOTE_USER
version_added: '2.5'
ini:
- section: defaults
key: remote_user
- section: paramiko_connection
key: remote_user
version_added: '2.5'
keyword:
- name: remote_user
password:
description:
- Secret used to either login the ssh server or as a passphrase for ssh keys that require it
- Can be set from the CLI via the C(--ask-pass) option.
vars:
- name: ansible_password
- name: ansible_ssh_pass
- name: ansible_ssh_password
- name: ansible_paramiko_pass
- name: ansible_paramiko_password
version_added: '2.5'
use_rsa_sha2_algorithms:
description:
- Whether or not to enable RSA SHA2 algorithms for pubkeys and hostkeys
- On paramiko versions older than 2.9, this only affects hostkeys
- For behavior matching paramiko<2.9 set this to C(False)
vars:
- name: ansible_paramiko_use_rsa_sha2_algorithms
ini:
- {key: use_rsa_sha2_algorithms, section: paramiko_connection}
env:
- {name: ANSIBLE_PARAMIKO_USE_RSA_SHA2_ALGORITHMS}
default: True
type: boolean
version_added: '2.14'
host_key_auto_add:
description: 'Automatically add host keys'
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
look_for_keys:
default: True
description: 'False to disable searching for private key files in ~/.ssh/'
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
proxy_command:
default: ''
description:
- Proxy information for running the connection via a jumphost
- Also this plugin will scan 'ssh_args', 'ssh_extra_args' and 'ssh_common_args' from the 'ssh' plugin settings for proxy information if set.
env: [{name: ANSIBLE_PARAMIKO_PROXY_COMMAND}]
ini:
- {key: proxy_command, section: paramiko_connection}
vars:
- name: ansible_paramiko_proxy_command
version_added: '2.15'
ssh_args:
description: Only used in parsing ProxyCommand for use in this plugin.
default: ''
ini:
- section: 'ssh_connection'
key: 'ssh_args'
env:
- name: ANSIBLE_SSH_ARGS
vars:
- name: ansible_ssh_args
version_added: '2.7'
deprecated:
why: In favor of the "proxy_command" option.
version: "2.18"
alternatives: proxy_command
ssh_common_args:
description: Only used in parsing ProxyCommand for use in this plugin.
ini:
- section: 'ssh_connection'
key: 'ssh_common_args'
version_added: '2.7'
env:
- name: ANSIBLE_SSH_COMMON_ARGS
version_added: '2.7'
vars:
- name: ansible_ssh_common_args
cli:
- name: ssh_common_args
default: ''
deprecated:
why: In favor of the "proxy_command" option.
version: "2.18"
alternatives: proxy_command
ssh_extra_args:
description: Only used in parsing ProxyCommand for use in this plugin.
vars:
- name: ansible_ssh_extra_args
env:
- name: ANSIBLE_SSH_EXTRA_ARGS
version_added: '2.7'
ini:
- key: ssh_extra_args
section: ssh_connection
version_added: '2.7'
cli:
- name: ssh_extra_args
default: ''
deprecated:
why: In favor of the "proxy_command" option.
version: "2.18"
alternatives: proxy_command
pty:
default: True
description: 'SUDO usually requires a PTY, True to give a PTY and False to not give a PTY.'
env:
- name: ANSIBLE_PARAMIKO_PTY
ini:
- section: paramiko_connection
key: pty
type: boolean
record_host_keys:
default: True
description: 'Save the host keys to a file'
env: [{name: ANSIBLE_PARAMIKO_RECORD_HOST_KEYS}]
ini:
- section: paramiko_connection
key: record_host_keys
type: boolean
host_key_checking:
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
type: boolean
default: True
env:
- name: ANSIBLE_HOST_KEY_CHECKING
- name: ANSIBLE_SSH_HOST_KEY_CHECKING
version_added: '2.5'
- name: ANSIBLE_PARAMIKO_HOST_KEY_CHECKING
version_added: '2.5'
ini:
- section: defaults
key: host_key_checking
- section: paramiko_connection
key: host_key_checking
version_added: '2.5'
vars:
- name: ansible_host_key_checking
version_added: '2.5'
- name: ansible_ssh_host_key_checking
version_added: '2.5'
- name: ansible_paramiko_host_key_checking
version_added: '2.5'
use_persistent_connections:
description: 'Toggles the use of persistence for connections'
type: boolean
default: False
env:
- name: ANSIBLE_USE_PERSISTENT_CONNECTIONS
ini:
- section: defaults
key: use_persistent_connections
banner_timeout:
type: float
default: 30
version_added: '2.14'
description:
- Configures, in seconds, the amount of time to wait for the SSH
banner to be presented. This option is supported by paramiko
version 1.15.0 or newer.
ini:
- section: paramiko_connection
key: banner_timeout
env:
- name: ANSIBLE_PARAMIKO_BANNER_TIMEOUT
# TODO:
#timeout=self._play_context.timeout,
"""
import os
import socket
import tempfile
import traceback
import fcntl
import re
from ansible.module_utils.compat.version import LooseVersion
from binascii import hexlify
from ansible.errors import (
AnsibleAuthenticationFailure,
AnsibleConnectionFailure,
AnsibleError,
AnsibleFileNotFound,
)
from ansible.module_utils.compat.paramiko import PARAMIKO_IMPORT_ERR, paramiko
from ansible.plugins.connection import ConnectionBase
from ansible.utils.display import Display
from ansible.utils.path import makedirs_safe
from ansible.module_utils._text import to_bytes, to_native, to_text
display = Display()
AUTHENTICITY_MSG = """
paramiko: The authenticity of host '%s' can't be established.
The %s key fingerprint is %s.
Are you sure you want to continue connecting (yes/no)?
"""
# SSH Options Regex
SETTINGS_REGEX = re.compile(r'(\w+)(?:\s*=\s*|\s+)(.+)')
class MyAddPolicy(object):
"""
Based on AutoAddPolicy in paramiko so we can determine when keys are added
and also prompt for input.
Policy for automatically adding the hostname and new host key to the
local L{HostKeys} object, and saving it. This is used by L{SSHClient}.
"""
def __init__(self, connection):
self.connection = connection
self._options = connection._options
def missing_host_key(self, client, hostname, key):
if all((self._options['host_key_checking'], not self._options['host_key_auto_add'])):
fingerprint = hexlify(key.get_fingerprint())
ktype = key.get_name()
if self.connection.get_option('use_persistent_connections') or self.connection.force_persistence:
# don't print the prompt string since the user cannot respond
# to the question anyway
raise AnsibleError(AUTHENTICITY_MSG[1:92] % (hostname, ktype, fingerprint))
inp = to_text(
display.prompt_until(AUTHENTICITY_MSG % (hostname, ktype, fingerprint), private=False),
errors='surrogate_or_strict'
)
if inp not in ['yes', 'y', '']:
raise AnsibleError("host connection rejected by user")
key._added_by_ansible_this_time = True
# existing implementation below:
client._host_keys.add(hostname, key.get_name(), key)
# host keys are actually saved in close() function below
# in order to control ordering.
# keep connection objects on a per host basis to avoid repeated attempts to reconnect
SSH_CONNECTION_CACHE = {} # type: dict[str, paramiko.client.SSHClient]
SFTP_CONNECTION_CACHE = {} # type: dict[str, paramiko.sftp_client.SFTPClient]
class Connection(ConnectionBase):
''' SSH based connections with Paramiko '''
transport = 'paramiko'
_log_channel = None
def _cache_key(self):
return "%s__%s__" % (self._play_context.remote_addr, self._play_context.remote_user)
def _connect(self):
cache_key = self._cache_key()
if cache_key in SSH_CONNECTION_CACHE:
self.ssh = SSH_CONNECTION_CACHE[cache_key]
else:
self.ssh = SSH_CONNECTION_CACHE[cache_key] = self._connect_uncached()
self._connected = True
return self
def _set_log_channel(self, name):
'''Mimic paramiko.SSHClient.set_log_channel'''
self._log_channel = name
def _parse_proxy_command(self, port=22):
proxy_command = None
# Parse ansible_ssh_common_args, specifically looking for ProxyCommand
ssh_args = [
self.get_option('ssh_extra_args'),
self.get_option('ssh_common_args'),
self.get_option('ssh_args', ''),
]
args = self._split_ssh_args(' '.join(ssh_args))
for i, arg in enumerate(args):
if arg.lower() == 'proxycommand':
# _split_ssh_args split ProxyCommand from the command itself
proxy_command = args[i + 1]
else:
# ProxyCommand and the command itself are a single string
match = SETTINGS_REGEX.match(arg)
if match:
if match.group(1).lower() == 'proxycommand':
proxy_command = match.group(2)
if proxy_command:
break
proxy_command = self.get_option('proxy_command') or proxy_command
sock_kwarg = {}
if proxy_command:
replacers = {
'%h': self._play_context.remote_addr,
'%p': port,
'%r': self._play_context.remote_user
}
for find, replace in replacers.items():
proxy_command = proxy_command.replace(find, str(replace))
try:
sock_kwarg = {'sock': paramiko.ProxyCommand(proxy_command)}
display.vvv("CONFIGURE PROXY COMMAND FOR CONNECTION: %s" % proxy_command, host=self._play_context.remote_addr)
except AttributeError:
display.warning('Paramiko ProxyCommand support unavailable. '
'Please upgrade to Paramiko 1.9.0 or newer. '
'Not using configured ProxyCommand')
return sock_kwarg
def _connect_uncached(self):
''' activates the connection object '''
if paramiko is None:
raise AnsibleError("paramiko is not installed: %s" % to_native(PARAMIKO_IMPORT_ERR))
port = self._play_context.port or 22
display.vvv("ESTABLISH PARAMIKO SSH CONNECTION FOR USER: %s on PORT %s TO %s" % (self._play_context.remote_user, port, self._play_context.remote_addr),
host=self._play_context.remote_addr)
ssh = paramiko.SSHClient()
# Set pubkey and hostkey algorithms to disable, the only manipulation allowed currently
# is keeping or omitting rsa-sha2 algorithms
paramiko_preferred_pubkeys = getattr(paramiko.Transport, '_preferred_pubkeys', ())
paramiko_preferred_hostkeys = getattr(paramiko.Transport, '_preferred_keys', ())
use_rsa_sha2_algorithms = self.get_option('use_rsa_sha2_algorithms')
disabled_algorithms = {}
if not use_rsa_sha2_algorithms:
if paramiko_preferred_pubkeys:
disabled_algorithms['pubkeys'] = tuple(a for a in paramiko_preferred_pubkeys if 'rsa-sha2' in a)
if paramiko_preferred_hostkeys:
disabled_algorithms['keys'] = tuple(a for a in paramiko_preferred_hostkeys if 'rsa-sha2' in a)
# override paramiko's default logger name
if self._log_channel is not None:
ssh.set_log_channel(self._log_channel)
self.keyfile = os.path.expanduser("~/.ssh/known_hosts")
if self.get_option('host_key_checking'):
for ssh_known_hosts in ("/etc/ssh/ssh_known_hosts", "/etc/openssh/ssh_known_hosts"):
try:
# TODO: check if we need to look at several possible locations, possible for loop
ssh.load_system_host_keys(ssh_known_hosts)
break
except IOError:
pass # file was not found, but not required to function
ssh.load_system_host_keys()
ssh_connect_kwargs = self._parse_proxy_command(port)
ssh.set_missing_host_key_policy(MyAddPolicy(self))
conn_password = self.get_option('password') or self._play_context.password
allow_agent = True
if conn_password is not None:
allow_agent = False
try:
key_filename = None
if self._play_context.private_key_file:
key_filename = os.path.expanduser(self._play_context.private_key_file)
# paramiko 2.2 introduced auth_timeout parameter
if LooseVersion(paramiko.__version__) >= LooseVersion('2.2.0'):
ssh_connect_kwargs['auth_timeout'] = self._play_context.timeout
# paramiko 1.15 introduced banner timeout parameter
if LooseVersion(paramiko.__version__) >= LooseVersion('1.15.0'):
ssh_connect_kwargs['banner_timeout'] = self.get_option('banner_timeout')
ssh.connect(
self._play_context.remote_addr.lower(),
username=self._play_context.remote_user,
allow_agent=allow_agent,
look_for_keys=self.get_option('look_for_keys'),
key_filename=key_filename,
password=conn_password,
timeout=self._play_context.timeout,
port=port,
disabled_algorithms=disabled_algorithms,
**ssh_connect_kwargs,
)
except paramiko.ssh_exception.BadHostKeyException as e:
raise AnsibleConnectionFailure('host key mismatch for %s' % e.hostname)
except paramiko.ssh_exception.AuthenticationException as e:
msg = 'Failed to authenticate: {0}'.format(to_text(e))
raise AnsibleAuthenticationFailure(msg)
except Exception as e:
msg = to_text(e)
if u"PID check failed" in msg:
raise AnsibleError("paramiko version issue, please upgrade paramiko on the machine running ansible")
elif u"Private key file is encrypted" in msg:
msg = 'ssh %s@%s:%s : %s\nTo connect as a different user, use -u <username>.' % (
self._play_context.remote_user, self._play_context.remote_addr, port, msg)
raise AnsibleConnectionFailure(msg)
else:
raise AnsibleConnectionFailure(msg)
return ssh
def exec_command(self, cmd, in_data=None, sudoable=True):
''' run a command on the remote host '''
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
if in_data:
raise AnsibleError("Internal Error: this module does not support optimized module pipelining")
bufsize = 4096
try:
self.ssh.get_transport().set_keepalive(5)
chan = self.ssh.get_transport().open_session()
except Exception as e:
text_e = to_text(e)
msg = u"Failed to open session"
if text_e:
msg += u": %s" % text_e
raise AnsibleConnectionFailure(to_native(msg))
# sudo usually requires a PTY (cf. requiretty option), therefore
# we give it one by default (pty=True in ansible.cfg), and we try
# to initialise from the calling environment when sudoable is enabled
if self.get_option('pty') and sudoable:
chan.get_pty(term=os.getenv('TERM', 'vt100'), width=int(os.getenv('COLUMNS', 0)), height=int(os.getenv('LINES', 0)))
display.vvv("EXEC %s" % cmd, host=self._play_context.remote_addr)
cmd = to_bytes(cmd, errors='surrogate_or_strict')
no_prompt_out = b''
no_prompt_err = b''
become_output = b''
try:
chan.exec_command(cmd)
if self.become and self.become.expect_prompt():
passprompt = False
become_success = False
while not (become_success or passprompt):
display.debug('Waiting for Privilege Escalation input')
chunk = chan.recv(bufsize)
display.debug("chunk is: %s" % chunk)
if not chunk:
if b'unknown user' in become_output:
n_become_user = to_native(self.become.get_option('become_user',
playcontext=self._play_context))
raise AnsibleError('user %s does not exist' % n_become_user)
else:
break
# raise AnsibleError('ssh connection closed waiting for password prompt')
become_output += chunk
# need to check every line because we might get lectured
# and we might get the middle of a line in a chunk
for l in become_output.splitlines(True):
if self.become.check_success(l):
become_success = True
break
elif self.become.check_password_prompt(l):
passprompt = True
break
if passprompt:
if self.become:
become_pass = self.become.get_option('become_pass', playcontext=self._play_context)
chan.sendall(to_bytes(become_pass, errors='surrogate_or_strict') + b'\n')
else:
raise AnsibleError("A password is required but none was supplied")
else:
no_prompt_out += become_output
no_prompt_err += become_output
except socket.timeout:
raise AnsibleError('ssh timed out waiting for privilege escalation.\n' + become_output)
stdout = b''.join(chan.makefile('rb', bufsize))
stderr = b''.join(chan.makefile_stderr('rb', bufsize))
return (chan.recv_exit_status(), no_prompt_out + stdout, no_prompt_err + stderr)
def put_file(self, in_path, out_path):
''' transfer a file from local to remote '''
super(Connection, self).put_file(in_path, out_path)
display.vvv("PUT %s TO %s" % (in_path, out_path), host=self._play_context.remote_addr)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound("file or module does not exist: %s" % in_path)
try:
self.sftp = self.ssh.open_sftp()
except Exception as e:
raise AnsibleError("failed to open a SFTP connection (%s)" % e)
try:
self.sftp.put(to_bytes(in_path, errors='surrogate_or_strict'), to_bytes(out_path, errors='surrogate_or_strict'))
except IOError:
raise AnsibleError("failed to transfer file to %s" % out_path)
def _connect_sftp(self):
cache_key = "%s__%s__" % (self._play_context.remote_addr, self._play_context.remote_user)
if cache_key in SFTP_CONNECTION_CACHE:
return SFTP_CONNECTION_CACHE[cache_key]
else:
result = SFTP_CONNECTION_CACHE[cache_key] = self._connect().ssh.open_sftp()
return result
def fetch_file(self, in_path, out_path):
''' save a remote file to the specified path '''
super(Connection, self).fetch_file(in_path, out_path)
display.vvv("FETCH %s TO %s" % (in_path, out_path), host=self._play_context.remote_addr)
try:
self.sftp = self._connect_sftp()
except Exception as e:
raise AnsibleError("failed to open a SFTP connection (%s)" % to_native(e))
try:
self.sftp.get(to_bytes(in_path, errors='surrogate_or_strict'), to_bytes(out_path, errors='surrogate_or_strict'))
except IOError:
raise AnsibleError("failed to transfer file from %s" % in_path)
def _any_keys_added(self):
for hostname, keys in self.ssh._host_keys.items():
for keytype, key in keys.items():
added_this_time = getattr(key, '_added_by_ansible_this_time', False)
if added_this_time:
return True
return False
def _save_ssh_host_keys(self, filename):
'''
not using the paramiko save_ssh_host_keys function as we want to add new SSH keys at the bottom so folks
don't complain about it :)
'''
if not self._any_keys_added():
return False
path = os.path.expanduser("~/.ssh")
makedirs_safe(path)
with open(filename, 'w') as f:
for hostname, keys in self.ssh._host_keys.items():
for keytype, key in keys.items():
# was f.write
added_this_time = getattr(key, '_added_by_ansible_this_time', False)
if not added_this_time:
f.write("%s %s %s\n" % (hostname, keytype, key.get_base64()))
for hostname, keys in self.ssh._host_keys.items():
for keytype, key in keys.items():
added_this_time = getattr(key, '_added_by_ansible_this_time', False)
if added_this_time:
f.write("%s %s %s\n" % (hostname, keytype, key.get_base64()))
def reset(self):
if not self._connected:
return
self.close()
self._connect()
def close(self):
''' terminate the connection '''
cache_key = self._cache_key()
SSH_CONNECTION_CACHE.pop(cache_key, None)
SFTP_CONNECTION_CACHE.pop(cache_key, None)
if hasattr(self, 'sftp'):
if self.sftp is not None:
self.sftp.close()
if self.get_option('host_key_checking') and self.get_option('record_host_keys') and self._any_keys_added():
# add any new SSH host keys -- warning -- this could be slow
# (This doesn't acquire the connection lock because it needs
# to exclude only other known_hosts writers, not connections
# that are starting up.)
lockfile = self.keyfile.replace("known_hosts", ".known_hosts.lock")
dirname = os.path.dirname(self.keyfile)
makedirs_safe(dirname)
KEY_LOCK = open(lockfile, 'w')
fcntl.lockf(KEY_LOCK, fcntl.LOCK_EX)
try:
# just in case any were added recently
self.ssh.load_system_host_keys()
self.ssh._host_keys.update(self.ssh._system_host_keys)
# gather information about the current key file, so
# we can ensure the new file has the correct mode/owner
key_dir = os.path.dirname(self.keyfile)
if os.path.exists(self.keyfile):
key_stat = os.stat(self.keyfile)
mode = key_stat.st_mode
uid = key_stat.st_uid
gid = key_stat.st_gid
else:
mode = 33188
uid = os.getuid()
gid = os.getgid()
# Save the new keys to a temporary file and move it into place
# rather than rewriting the file. We set delete=False because
# the file will be moved into place rather than cleaned up.
tmp_keyfile = tempfile.NamedTemporaryFile(dir=key_dir, delete=False)
os.chmod(tmp_keyfile.name, mode & 0o7777)
os.chown(tmp_keyfile.name, uid, gid)
self._save_ssh_host_keys(tmp_keyfile.name)
tmp_keyfile.close()
os.rename(tmp_keyfile.name, self.keyfile)
except Exception:
# unable to save keys, including scenario when key was invalid
# and caught earlier
traceback.print_exc()
fcntl.lockf(KEY_LOCK, fcntl.LOCK_UN)
self.ssh.close()
self._connected = False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,638 |
inventory variable replacement failed
|
### Summary
Since version 2.13.x, variables used in the inventory are no longer replaced with their actual values when a task executes, because of this MR: https://github.com/ansible/ansible/pull/76590. A minimal playbook that exercises these templated variables is sketched after the inventory below.
config:
primaryIp: 'aa.aa.aa.aa'
standbyIp: 'bb.bb.bb.bb'
user_name: xx
inventory:
[xx]
primary ansible_ssh_user={{user_name}} ansible_ssh_host={{primaryIp}}
standby ansible_ssh_user={{user_name}} ansible_ssh_host={{standbyIp}}
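A minimal playbook of the kind that exercises these templated connection variables might look like the following sketch. The play body is illustrative and not part of the original report; only the group name `xx` comes from the inventory above, and the `primaryIp`/`standbyIp`/`user_name` values are assumed to be supplied as group variables or extra vars:
```yaml
- hosts: xx
  gather_facts: no
  tasks:
    - name: Any task that opens an SSH connection shows whether the inventory vars were resolved
      ansible.builtin.ping:
```
On 2.12 and earlier the connection targets aa.aa.aa.aa and bb.bb.bb.bb; on the affected 2.13+ releases the literal {{primaryIp}} / {{standbyIp}} strings reach the connection plugin instead.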
expected result: (screenshot not included)
Actual Results: (screenshot not included)
### Issue Type
Bug Report
### Component Name
lib/ansible/executor/task_executor.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.6]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/xxx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib64/python3.9/site-packages/ansible
ansible collection location = /home/xxx/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible
python version = 3.9.9 (main, Sep 21 2022, 09:00:34) [GCC 10.3.1]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CACHE_PLUGIN(/etc/ansible/ansible.cfg) = memory
COMMAND_WARNINGS(/etc/ansible/ansible.cfg) = False
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 20
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = ['/etc/ansible/hosts']
DEFAULT_MANAGED_STR(/etc/ansible/ansible.cfg) = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
DEFAULT_NO_TARGET_SYSLOG(/etc/ansible/ansible.cfg) = True
DEFAULT_POLL_INTERVAL(/etc/ansible/ansible.cfg) = 15
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 60
DEFAULT_TRANSPORT(/etc/ansible/ansible.cfg) = paramiko
DEPRECATION_WARNINGS(/etc/ansible/ansible.cfg) = False
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = auto_legacy_silent
PARAMIKO_HOST_KEY_AUTO_ADD(/etc/ansible/ansible.cfg) = True
PARAMIKO_LOOK_FOR_KEYS(/etc/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = True
SYSTEM_WARNINGS(/etc/ansible/ansible.cfg) = False
CONNECTION:
==========
paramiko_ssh:
____________
host_key_auto_add(/etc/ansible/ansible.cfg) = True
host_key_checking(/etc/ansible/ansible.cfg) = False
look_for_keys(/etc/ansible/ansible.cfg) = False
ssh_args(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=1800s
ssh:
___
host_key_checking(/etc/ansible/ansible.cfg) = False
pipelining(/etc/ansible/ansible.cfg) = True
scp_if_ssh(/etc/ansible/ansible.cfg) = smart
ssh_args(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=1800s
timeout(/etc/ansible/ansible.cfg) = 60
SHELL:
=====
sh:
__
remote_tmp(/etc/ansible/ansible.cfg) = ~/.ansible/tmp
```
### OS / Environment
EulerOS
### Steps to Reproduce
```yaml
any
```
config:
primaryIp: 'aa.aa.aa.aa'
standbyIp: 'bb.bb.bb.bb'
user_name: xx
inventory:
[xx]
primary ansible_ssh_user={{user_name}} ansible_ssh_host={{primaryIp}}
standby ansible_ssh_user={{user_name}} ansible_ssh_host={{standbyIp}}
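With this config and inventory in place, any connection attempt reproduces the problem, for example an ad-hoc ping. The exact command is not given in the report; this one is illustrative and assumes the inventory file is named `hosts`:
```console
# the inventory file name ("hosts") is assumed; group "xx" comes from the report
$ ansible xx -i hosts -m ping -c paramiko -vvv
```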
### Expected Results
The ssh connection is established normally (screenshot not included).
### Actual Results
```console
versions after 2.13.x:
ansible_ssh_host cannot be replaced with the actual IP: ssh is attempted against the literal {{primaryIp}} or {{standbyIp}}, so it fails with:
<{{standbyIp}}> ESTABLISH PARAMIKO SSH CONNECTION FOR USER: {{user_name}} on PORT 22 TO {{standbyIp}}
fatal: [primary]: UNREACHABLE! => {
"changed": false,
"msg": "[Errno -2] Name or service not known",
"unreachable": true
}
Traceback (most recent call last):
File "/opt/ansible/lib64/python3.9/site-packages/ansible/plugins/connection/paramiko_ssh.py", line 345, in _connect_uncached
ssh.connect(
File "/opt/ansible/lib64/python3.9/site-packages/paramiko/client.py", line 340, in connect
to_try = list(self._families_and_addresses(hostname, port))
File "/opt/ansible/lib64/python3.9/site-packages/paramiko/client.py", line 203, in _families_and_addresses
addrinfos = socket.getaddrinfo(
File "/usr/lib64/python3.9/socket.py", line 954, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
fatal: [standby]: UNREACHABLE! => {
"changed": false,
"msg": "[Errno -2] Name or service not known",
"unreachable": true
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79638
|
https://github.com/ansible/ansible/pull/79704
|
694f12d01b17e4aba50bda55546edada6e79b5a8
|
a1bff416edf9b9c8bd5c3b002277eed5b5323953
| 2022-12-30T03:18:46Z |
python
| 2023-03-07T16:09:14Z |
test/integration/targets/connection_paramiko_ssh/test_connection.inventory
|
[paramiko_ssh]
paramiko_ssh-pipelining ansible_ssh_pipelining=true
paramiko_ssh-no-pipelining ansible_ssh_pipelining=false
[paramiko_ssh:vars]
ansible_host=localhost
ansible_connection=paramiko_ssh
ansible_python_interpreter="{{ ansible_playbook_python }}"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,509 |
Jinja expressions are not evaluated in credential variables if paramiko plugin is used
|
### Summary
I use the paramiko ssh plugin and try to set `ansible_ssh_user`, `ansible_ssh_pass`, etc. from a Jinja expression. This worked correctly in version 2.10, where the expressions were evaluated, but in ansible-core 2.13 they are not:
`ESTABLISH PARAMIKO SSH CONNECTION FOR USER: {{ some_variable | default('cirros') }} on PORT 22 TO 10.xx.xx.xx`
### Issue Type
Bug Report
### Component Name
ansible-playbook,paramiko
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/zkrakko/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/zkrakko/ansible-venv/lib/python3.8/site-packages/ansible
ansible collection location = /home/zkrakko/.ansible/collections:/usr/share/ansible/collections
executable location = /home/zkrakko/ansible-venv/bin/ansible
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Linux Mint 20.3
### Steps to Reproduce
test.yaml file:
```yaml
---
- hosts: server
gather_facts: no
vars:
ansible_ssh_user: "{{ some_variable | default('cirros') }}"
ansible_ss_pass: gocubsgo
tasks:
- raw: echo "{{ ansible_ssh_user }}"
```
hosts file:
```
[server]
10.xx.xx.xx
```
command:
`ansible-playbook -vvv -i hosts -c paramiko test.yaml`
### Expected Results
I expected the `cirros` username to be used for the connection (as in ansible-core 2.10):
```
PLAYBOOK: test.yaml ****************************************************************
1 plays in test.yaml
PLAY [server] **********************************************************************
META: ran handlers
TASK [raw] *************************************************************************
task path: /home/zkrakko/ansible-venv/test.yaml:8
<10.xx.xx.xx> ESTABLISH PARAMIKO SSH CONNECTION FOR USER: cirros on PORT 22 TO 10.xx.xx.xx
```
### Actual Results
```console
Instead, the Jinja expression in the username was not evaluated:
PLAYBOOK: test.yaml ****************************************************************
1 plays in test.yaml
PLAY [server] **********************************************************************
META: ran handlers
TASK [raw] *************************************************************************
task path: /home/zkrakko/ansible-venv/test.yaml:8
<10.xx.xx.xx> ESTABLISH PARAMIKO SSH CONNECTION FOR USER: {{ some_variable | default('cirros') }} on PORT 22 TO 10.xx.xx.xx
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78509
|
https://github.com/ansible/ansible/pull/79704
|
694f12d01b17e4aba50bda55546edada6e79b5a8
|
a1bff416edf9b9c8bd5c3b002277eed5b5323953
| 2022-08-11T07:21:52Z |
python
| 2023-03-07T16:09:14Z |
changelogs/fragments/paramiko_config.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,509 |
Jinja expressions are not evaluated in credential variables if paramiko plugin is used
|
### Summary
I use the paramiko ssh plugin and try to set `ansible_ssh_user`, `ansible_ssh_pass`, etc. from a Jinja expression. This worked correctly in version 2.10, where the expressions were evaluated, but in ansible-core 2.13 they are not:
`ESTABLISH PARAMIKO SSH CONNECTION FOR USER: {{ some_variable | default('cirros') }} on PORT 22 TO 10.xx.xx.xx`
### Issue Type
Bug Report
### Component Name
ansible-playbook,paramiko
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/zkrakko/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/zkrakko/ansible-venv/lib/python3.8/site-packages/ansible
ansible collection location = /home/zkrakko/.ansible/collections:/usr/share/ansible/collections
executable location = /home/zkrakko/ansible-venv/bin/ansible
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Linux Mint 20.3
### Steps to Reproduce
test.yaml file:
```yaml
---
- hosts: server
gather_facts: no
vars:
ansible_ssh_user: "{{ some_variable | default('cirros') }}"
ansible_ss_pass: gocubsgo
tasks:
- raw: echo "{{ ansible_ssh_user }}"
```
hosts file:
```
[server]
10.xx.xx.xx
```
command:
`ansible-playbook -vvv -i hosts -c paramiko test.yaml`
### Expected Results
I expected the `cirros` username to be used for the connection (as in ansible-core 2.10):
```
PLAYBOOK: test.yaml ****************************************************************
1 plays in test.yaml
PLAY [server] **********************************************************************
META: ran handlers
TASK [raw] *************************************************************************
task path: /home/zkrakko/ansible-venv/test.yaml:8
<10.xx.xx.xx> ESTABLISH PARAMIKO SSH CONNECTION FOR USER: cirros on PORT 22 TO 10.xx.xx.xx
```
### Actual Results
```console
Instead, the Jinja expression in the username was not evaluated:
PLAYBOOK: test.yaml ****************************************************************
1 plays in test.yaml
PLAY [server] **********************************************************************
META: ran handlers
TASK [raw] *************************************************************************
task path: /home/zkrakko/ansible-venv/test.yaml:8
<10.xx.xx.xx> ESTABLISH PARAMIKO SSH CONNECTION FOR USER: {{ some_variable | default('cirros') }} on PORT 22 TO 10.xx.xx.xx
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78509
|
https://github.com/ansible/ansible/pull/79704
|
694f12d01b17e4aba50bda55546edada6e79b5a8
|
a1bff416edf9b9c8bd5c3b002277eed5b5323953
| 2022-08-11T07:21:52Z |
python
| 2023-03-07T16:09:14Z |
lib/ansible/plugins/connection/paramiko_ssh.py
|
# (c) 2012, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
author: Ansible Core Team
name: paramiko
short_description: Run tasks via python ssh (paramiko)
description:
- Use the python ssh implementation (Paramiko) to connect to targets
- The paramiko transport is provided because many distributions, in particular EL6 and before do not support ControlPersist
in their SSH implementations.
- This is needed on the Ansible control machine to be reasonably efficient with connections.
Thus paramiko is faster for most users on these platforms.
Users with ControlPersist capability can consider using -c ssh or configuring the transport in the configuration file.
- This plugin also borrows a lot of settings from the ssh plugin as they both cover the same protocol.
version_added: "0.1"
options:
remote_addr:
description:
- Address of the remote target
default: inventory_hostname
vars:
- name: inventory_hostname
- name: ansible_host
- name: ansible_ssh_host
- name: ansible_paramiko_host
remote_user:
description:
- User to login/authenticate as
- Can be set from the CLI via the C(--user) or C(-u) options.
vars:
- name: ansible_user
- name: ansible_ssh_user
- name: ansible_paramiko_user
env:
- name: ANSIBLE_REMOTE_USER
- name: ANSIBLE_PARAMIKO_REMOTE_USER
version_added: '2.5'
ini:
- section: defaults
key: remote_user
- section: paramiko_connection
key: remote_user
version_added: '2.5'
keyword:
- name: remote_user
password:
description:
- Secret used to either login the ssh server or as a passphrase for ssh keys that require it
- Can be set from the CLI via the C(--ask-pass) option.
vars:
- name: ansible_password
- name: ansible_ssh_pass
- name: ansible_ssh_password
- name: ansible_paramiko_pass
- name: ansible_paramiko_password
version_added: '2.5'
use_rsa_sha2_algorithms:
description:
- Whether or not to enable RSA SHA2 algorithms for pubkeys and hostkeys
- On paramiko versions older than 2.9, this only affects hostkeys
- For behavior matching paramiko<2.9 set this to C(False)
vars:
- name: ansible_paramiko_use_rsa_sha2_algorithms
ini:
- {key: use_rsa_sha2_algorithms, section: paramiko_connection}
env:
- {name: ANSIBLE_PARAMIKO_USE_RSA_SHA2_ALGORITHMS}
default: True
type: boolean
version_added: '2.14'
host_key_auto_add:
description: 'Automatically add host keys'
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
look_for_keys:
default: True
description: 'False to disable searching for private key files in ~/.ssh/'
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
proxy_command:
default: ''
description:
- Proxy information for running the connection via a jumphost
- Also this plugin will scan 'ssh_args', 'ssh_extra_args' and 'ssh_common_args' from the 'ssh' plugin settings for proxy information if set.
env: [{name: ANSIBLE_PARAMIKO_PROXY_COMMAND}]
ini:
- {key: proxy_command, section: paramiko_connection}
vars:
- name: ansible_paramiko_proxy_command
version_added: '2.15'
ssh_args:
description: Only used in parsing ProxyCommand for use in this plugin.
default: ''
ini:
- section: 'ssh_connection'
key: 'ssh_args'
env:
- name: ANSIBLE_SSH_ARGS
vars:
- name: ansible_ssh_args
version_added: '2.7'
deprecated:
why: In favor of the "proxy_command" option.
version: "2.18"
alternatives: proxy_command
ssh_common_args:
description: Only used in parsing ProxyCommand for use in this plugin.
ini:
- section: 'ssh_connection'
key: 'ssh_common_args'
version_added: '2.7'
env:
- name: ANSIBLE_SSH_COMMON_ARGS
version_added: '2.7'
vars:
- name: ansible_ssh_common_args
cli:
- name: ssh_common_args
default: ''
deprecated:
why: In favor of the "proxy_command" option.
version: "2.18"
alternatives: proxy_command
ssh_extra_args:
description: Only used in parsing ProxyCommand for use in this plugin.
vars:
- name: ansible_ssh_extra_args
env:
- name: ANSIBLE_SSH_EXTRA_ARGS
version_added: '2.7'
ini:
- key: ssh_extra_args
section: ssh_connection
version_added: '2.7'
cli:
- name: ssh_extra_args
default: ''
deprecated:
why: In favor of the "proxy_command" option.
version: "2.18"
alternatives: proxy_command
pty:
default: True
description: 'SUDO usually requires a PTY, True to give a PTY and False to not give a PTY.'
env:
- name: ANSIBLE_PARAMIKO_PTY
ini:
- section: paramiko_connection
key: pty
type: boolean
record_host_keys:
default: True
description: 'Save the host keys to a file'
env: [{name: ANSIBLE_PARAMIKO_RECORD_HOST_KEYS}]
ini:
- section: paramiko_connection
key: record_host_keys
type: boolean
host_key_checking:
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
type: boolean
default: True
env:
- name: ANSIBLE_HOST_KEY_CHECKING
- name: ANSIBLE_SSH_HOST_KEY_CHECKING
version_added: '2.5'
- name: ANSIBLE_PARAMIKO_HOST_KEY_CHECKING
version_added: '2.5'
ini:
- section: defaults
key: host_key_checking
- section: paramiko_connection
key: host_key_checking
version_added: '2.5'
vars:
- name: ansible_host_key_checking
version_added: '2.5'
- name: ansible_ssh_host_key_checking
version_added: '2.5'
- name: ansible_paramiko_host_key_checking
version_added: '2.5'
use_persistent_connections:
description: 'Toggles the use of persistence for connections'
type: boolean
default: False
env:
- name: ANSIBLE_USE_PERSISTENT_CONNECTIONS
ini:
- section: defaults
key: use_persistent_connections
banner_timeout:
type: float
default: 30
version_added: '2.14'
description:
- Configures, in seconds, the amount of time to wait for the SSH
banner to be presented. This option is supported by paramiko
version 1.15.0 or newer.
ini:
- section: paramiko_connection
key: banner_timeout
env:
- name: ANSIBLE_PARAMIKO_BANNER_TIMEOUT
# TODO:
#timeout=self._play_context.timeout,
"""
import os
import socket
import tempfile
import traceback
import fcntl
import re
from ansible.module_utils.compat.version import LooseVersion
from binascii import hexlify
from ansible.errors import (
AnsibleAuthenticationFailure,
AnsibleConnectionFailure,
AnsibleError,
AnsibleFileNotFound,
)
from ansible.module_utils.compat.paramiko import PARAMIKO_IMPORT_ERR, paramiko
from ansible.plugins.connection import ConnectionBase
from ansible.utils.display import Display
from ansible.utils.path import makedirs_safe
from ansible.module_utils._text import to_bytes, to_native, to_text
display = Display()
AUTHENTICITY_MSG = """
paramiko: The authenticity of host '%s' can't be established.
The %s key fingerprint is %s.
Are you sure you want to continue connecting (yes/no)?
"""
# SSH Options Regex
SETTINGS_REGEX = re.compile(r'(\w+)(?:\s*=\s*|\s+)(.+)')
class MyAddPolicy(object):
"""
Based on AutoAddPolicy in paramiko so we can determine when keys are added
and also prompt for input.
Policy for automatically adding the hostname and new host key to the
local L{HostKeys} object, and saving it. This is used by L{SSHClient}.
"""
def __init__(self, connection):
self.connection = connection
self._options = connection._options
def missing_host_key(self, client, hostname, key):
if all((self._options['host_key_checking'], not self._options['host_key_auto_add'])):
fingerprint = hexlify(key.get_fingerprint())
ktype = key.get_name()
if self.connection.get_option('use_persistent_connections') or self.connection.force_persistence:
# don't print the prompt string since the user cannot respond
# to the question anyway
raise AnsibleError(AUTHENTICITY_MSG[1:92] % (hostname, ktype, fingerprint))
inp = to_text(
display.prompt_until(AUTHENTICITY_MSG % (hostname, ktype, fingerprint), private=False),
errors='surrogate_or_strict'
)
if inp not in ['yes', 'y', '']:
raise AnsibleError("host connection rejected by user")
key._added_by_ansible_this_time = True
# existing implementation below:
client._host_keys.add(hostname, key.get_name(), key)
# host keys are actually saved in close() function below
# in order to control ordering.
# keep connection objects on a per host basis to avoid repeated attempts to reconnect
SSH_CONNECTION_CACHE = {} # type: dict[str, paramiko.client.SSHClient]
SFTP_CONNECTION_CACHE = {} # type: dict[str, paramiko.sftp_client.SFTPClient]
class Connection(ConnectionBase):
''' SSH based connections with Paramiko '''
transport = 'paramiko'
_log_channel = None
def _cache_key(self):
return "%s__%s__" % (self._play_context.remote_addr, self._play_context.remote_user)
def _connect(self):
cache_key = self._cache_key()
if cache_key in SSH_CONNECTION_CACHE:
self.ssh = SSH_CONNECTION_CACHE[cache_key]
else:
self.ssh = SSH_CONNECTION_CACHE[cache_key] = self._connect_uncached()
self._connected = True
return self
def _set_log_channel(self, name):
'''Mimic paramiko.SSHClient.set_log_channel'''
self._log_channel = name
def _parse_proxy_command(self, port=22):
proxy_command = None
# Parse ansible_ssh_common_args, specifically looking for ProxyCommand
ssh_args = [
self.get_option('ssh_extra_args'),
self.get_option('ssh_common_args'),
self.get_option('ssh_args', ''),
]
args = self._split_ssh_args(' '.join(ssh_args))
for i, arg in enumerate(args):
if arg.lower() == 'proxycommand':
# _split_ssh_args split ProxyCommand from the command itself
proxy_command = args[i + 1]
else:
# ProxyCommand and the command itself are a single string
match = SETTINGS_REGEX.match(arg)
if match:
if match.group(1).lower() == 'proxycommand':
proxy_command = match.group(2)
if proxy_command:
break
proxy_command = self.get_option('proxy_command') or proxy_command
sock_kwarg = {}
if proxy_command:
replacers = {
'%h': self._play_context.remote_addr,
'%p': port,
'%r': self._play_context.remote_user
}
for find, replace in replacers.items():
proxy_command = proxy_command.replace(find, str(replace))
try:
sock_kwarg = {'sock': paramiko.ProxyCommand(proxy_command)}
display.vvv("CONFIGURE PROXY COMMAND FOR CONNECTION: %s" % proxy_command, host=self._play_context.remote_addr)
except AttributeError:
display.warning('Paramiko ProxyCommand support unavailable. '
'Please upgrade to Paramiko 1.9.0 or newer. '
'Not using configured ProxyCommand')
return sock_kwarg
def _connect_uncached(self):
''' activates the connection object '''
if paramiko is None:
raise AnsibleError("paramiko is not installed: %s" % to_native(PARAMIKO_IMPORT_ERR))
port = self._play_context.port or 22
display.vvv("ESTABLISH PARAMIKO SSH CONNECTION FOR USER: %s on PORT %s TO %s" % (self._play_context.remote_user, port, self._play_context.remote_addr),
host=self._play_context.remote_addr)
ssh = paramiko.SSHClient()
# Set pubkey and hostkey algorithms to disable, the only manipulation allowed currently
# is keeping or omitting rsa-sha2 algorithms
paramiko_preferred_pubkeys = getattr(paramiko.Transport, '_preferred_pubkeys', ())
paramiko_preferred_hostkeys = getattr(paramiko.Transport, '_preferred_keys', ())
use_rsa_sha2_algorithms = self.get_option('use_rsa_sha2_algorithms')
disabled_algorithms = {}
if not use_rsa_sha2_algorithms:
if paramiko_preferred_pubkeys:
disabled_algorithms['pubkeys'] = tuple(a for a in paramiko_preferred_pubkeys if 'rsa-sha2' in a)
if paramiko_preferred_hostkeys:
disabled_algorithms['keys'] = tuple(a for a in paramiko_preferred_hostkeys if 'rsa-sha2' in a)
# override paramiko's default logger name
if self._log_channel is not None:
ssh.set_log_channel(self._log_channel)
self.keyfile = os.path.expanduser("~/.ssh/known_hosts")
if self.get_option('host_key_checking'):
for ssh_known_hosts in ("/etc/ssh/ssh_known_hosts", "/etc/openssh/ssh_known_hosts"):
try:
# TODO: check if we need to look at several possible locations, possible for loop
ssh.load_system_host_keys(ssh_known_hosts)
break
except IOError:
pass # file was not found, but not required to function
ssh.load_system_host_keys()
ssh_connect_kwargs = self._parse_proxy_command(port)
ssh.set_missing_host_key_policy(MyAddPolicy(self))
conn_password = self.get_option('password') or self._play_context.password
allow_agent = True
if conn_password is not None:
allow_agent = False
try:
key_filename = None
if self._play_context.private_key_file:
key_filename = os.path.expanduser(self._play_context.private_key_file)
# paramiko 2.2 introduced auth_timeout parameter
if LooseVersion(paramiko.__version__) >= LooseVersion('2.2.0'):
ssh_connect_kwargs['auth_timeout'] = self._play_context.timeout
# paramiko 1.15 introduced banner timeout parameter
if LooseVersion(paramiko.__version__) >= LooseVersion('1.15.0'):
ssh_connect_kwargs['banner_timeout'] = self.get_option('banner_timeout')
ssh.connect(
self._play_context.remote_addr.lower(),
username=self._play_context.remote_user,
allow_agent=allow_agent,
look_for_keys=self.get_option('look_for_keys'),
key_filename=key_filename,
password=conn_password,
timeout=self._play_context.timeout,
port=port,
disabled_algorithms=disabled_algorithms,
**ssh_connect_kwargs,
)
except paramiko.ssh_exception.BadHostKeyException as e:
raise AnsibleConnectionFailure('host key mismatch for %s' % e.hostname)
except paramiko.ssh_exception.AuthenticationException as e:
msg = 'Failed to authenticate: {0}'.format(to_text(e))
raise AnsibleAuthenticationFailure(msg)
except Exception as e:
msg = to_text(e)
if u"PID check failed" in msg:
raise AnsibleError("paramiko version issue, please upgrade paramiko on the machine running ansible")
elif u"Private key file is encrypted" in msg:
msg = 'ssh %s@%s:%s : %s\nTo connect as a different user, use -u <username>.' % (
self._play_context.remote_user, self._play_context.remote_addr, port, msg)
raise AnsibleConnectionFailure(msg)
else:
raise AnsibleConnectionFailure(msg)
return ssh
def exec_command(self, cmd, in_data=None, sudoable=True):
''' run a command on the remote host '''
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
if in_data:
raise AnsibleError("Internal Error: this module does not support optimized module pipelining")
bufsize = 4096
try:
self.ssh.get_transport().set_keepalive(5)
chan = self.ssh.get_transport().open_session()
except Exception as e:
text_e = to_text(e)
msg = u"Failed to open session"
if text_e:
msg += u": %s" % text_e
raise AnsibleConnectionFailure(to_native(msg))
# sudo usually requires a PTY (cf. requiretty option), therefore
# we give it one by default (pty=True in ansible.cfg), and we try
# to initialise from the calling environment when sudoable is enabled
if self.get_option('pty') and sudoable:
chan.get_pty(term=os.getenv('TERM', 'vt100'), width=int(os.getenv('COLUMNS', 0)), height=int(os.getenv('LINES', 0)))
display.vvv("EXEC %s" % cmd, host=self._play_context.remote_addr)
cmd = to_bytes(cmd, errors='surrogate_or_strict')
no_prompt_out = b''
no_prompt_err = b''
become_output = b''
try:
chan.exec_command(cmd)
if self.become and self.become.expect_prompt():
passprompt = False
become_sucess = False
while not (become_sucess or passprompt):
display.debug('Waiting for Privilege Escalation input')
chunk = chan.recv(bufsize)
display.debug("chunk is: %s" % chunk)
if not chunk:
if b'unknown user' in become_output:
n_become_user = to_native(self.become.get_option('become_user',
playcontext=self._play_context))
raise AnsibleError('user %s does not exist' % n_become_user)
else:
break
# raise AnsibleError('ssh connection closed waiting for password prompt')
become_output += chunk
# need to check every line because we might get lectured
# and we might get the middle of a line in a chunk
for l in become_output.splitlines(True):
if self.become.check_success(l):
become_sucess = True
break
elif self.become.check_password_prompt(l):
passprompt = True
break
if passprompt:
if self.become:
become_pass = self.become.get_option('become_pass', playcontext=self._play_context)
chan.sendall(to_bytes(become_pass, errors='surrogate_or_strict') + b'\n')
else:
raise AnsibleError("A password is required but none was supplied")
else:
no_prompt_out += become_output
no_prompt_err += become_output
except socket.timeout:
raise AnsibleError('ssh timed out waiting for privilege escalation.\n' + become_output)
stdout = b''.join(chan.makefile('rb', bufsize))
stderr = b''.join(chan.makefile_stderr('rb', bufsize))
return (chan.recv_exit_status(), no_prompt_out + stdout, no_prompt_out + stderr)
def put_file(self, in_path, out_path):
''' transfer a file from local to remote '''
super(Connection, self).put_file(in_path, out_path)
display.vvv("PUT %s TO %s" % (in_path, out_path), host=self._play_context.remote_addr)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound("file or module does not exist: %s" % in_path)
try:
self.sftp = self.ssh.open_sftp()
except Exception as e:
raise AnsibleError("failed to open a SFTP connection (%s)" % e)
try:
self.sftp.put(to_bytes(in_path, errors='surrogate_or_strict'), to_bytes(out_path, errors='surrogate_or_strict'))
except IOError:
raise AnsibleError("failed to transfer file to %s" % out_path)
def _connect_sftp(self):
cache_key = "%s__%s__" % (self._play_context.remote_addr, self._play_context.remote_user)
if cache_key in SFTP_CONNECTION_CACHE:
return SFTP_CONNECTION_CACHE[cache_key]
else:
result = SFTP_CONNECTION_CACHE[cache_key] = self._connect().ssh.open_sftp()
return result
def fetch_file(self, in_path, out_path):
''' save a remote file to the specified path '''
super(Connection, self).fetch_file(in_path, out_path)
display.vvv("FETCH %s TO %s" % (in_path, out_path), host=self._play_context.remote_addr)
try:
self.sftp = self._connect_sftp()
except Exception as e:
raise AnsibleError("failed to open a SFTP connection (%s)" % to_native(e))
try:
self.sftp.get(to_bytes(in_path, errors='surrogate_or_strict'), to_bytes(out_path, errors='surrogate_or_strict'))
except IOError:
raise AnsibleError("failed to transfer file from %s" % in_path)
def _any_keys_added(self):
for hostname, keys in self.ssh._host_keys.items():
for keytype, key in keys.items():
added_this_time = getattr(key, '_added_by_ansible_this_time', False)
if added_this_time:
return True
return False
def _save_ssh_host_keys(self, filename):
'''
not using the paramiko save_ssh_host_keys function as we want to add new SSH keys at the bottom so folks
don't complain about it :)
'''
if not self._any_keys_added():
return False
path = os.path.expanduser("~/.ssh")
makedirs_safe(path)
with open(filename, 'w') as f:
for hostname, keys in self.ssh._host_keys.items():
for keytype, key in keys.items():
# was f.write
added_this_time = getattr(key, '_added_by_ansible_this_time', False)
if not added_this_time:
f.write("%s %s %s\n" % (hostname, keytype, key.get_base64()))
for hostname, keys in self.ssh._host_keys.items():
for keytype, key in keys.items():
added_this_time = getattr(key, '_added_by_ansible_this_time', False)
if added_this_time:
f.write("%s %s %s\n" % (hostname, keytype, key.get_base64()))
def reset(self):
if not self._connected:
return
self.close()
self._connect()
def close(self):
''' terminate the connection '''
cache_key = self._cache_key()
SSH_CONNECTION_CACHE.pop(cache_key, None)
SFTP_CONNECTION_CACHE.pop(cache_key, None)
if hasattr(self, 'sftp'):
if self.sftp is not None:
self.sftp.close()
if self.get_option('host_key_checking') and self.get_option('record_host_keys') and self._any_keys_added():
# add any new SSH host keys -- warning -- this could be slow
# (This doesn't acquire the connection lock because it needs
# to exclude only other known_hosts writers, not connections
# that are starting up.)
lockfile = self.keyfile.replace("known_hosts", ".known_hosts.lock")
dirname = os.path.dirname(self.keyfile)
makedirs_safe(dirname)
KEY_LOCK = open(lockfile, 'w')
fcntl.lockf(KEY_LOCK, fcntl.LOCK_EX)
try:
# just in case any were added recently
self.ssh.load_system_host_keys()
self.ssh._host_keys.update(self.ssh._system_host_keys)
# gather information about the current key file, so
# we can ensure the new file has the correct mode/owner
key_dir = os.path.dirname(self.keyfile)
if os.path.exists(self.keyfile):
key_stat = os.stat(self.keyfile)
mode = key_stat.st_mode
uid = key_stat.st_uid
gid = key_stat.st_gid
else:
mode = 33188
uid = os.getuid()
gid = os.getgid()
# Save the new keys to a temporary file and move it into place
# rather than rewriting the file. We set delete=False because
# the file will be moved into place rather than cleaned up.
tmp_keyfile = tempfile.NamedTemporaryFile(dir=key_dir, delete=False)
os.chmod(tmp_keyfile.name, mode & 0o7777)
os.chown(tmp_keyfile.name, uid, gid)
self._save_ssh_host_keys(tmp_keyfile.name)
tmp_keyfile.close()
os.rename(tmp_keyfile.name, self.keyfile)
except Exception:
# unable to save keys, including scenario when key was invalid
# and caught earlier
traceback.print_exc()
fcntl.lockf(KEY_LOCK, fcntl.LOCK_UN)
self.ssh.close()
self._connected = False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,509 |
Jinja expressions are not evaluated in credential variables if paramiko plugin is used
|
### Summary
I use the paramiko ssh plugin and try to set `ansible_ssh_user`, `ansible_ssh_pass`, etc. from a Jinja expression. This worked correctly in version 2.10, where the expressions were evaluated, but in ansible-core 2.13 they are not:
`ESTABLISH PARAMIKO SSH CONNECTION FOR USER: {{ some_variable | default('cirros') }} on PORT 22 TO 10.xx.xx.xx`
### Issue Type
Bug Report
### Component Name
ansible-playbook,paramiko
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.1]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/zkrakko/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/zkrakko/ansible-venv/lib/python3.8/site-packages/ansible
ansible collection location = /home/zkrakko/.ansible/collections:/usr/share/ansible/collections
executable location = /home/zkrakko/ansible-venv/bin/ansible
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Linux Mint 20.3
### Steps to Reproduce
test.yaml file:
```yaml
---
- hosts: server
gather_facts: no
vars:
ansible_ssh_user: "{{ some_variable | default('cirros') }}"
ansible_ss_pass: gocubsgo
tasks:
- raw: echo "{{ ansible_ssh_user }}"
```
hosts file:
```
[server]
10.xx.xx.xx
```
command:
`ansible-playbook -vvv -i hosts -c paramiko test.yaml`
### Expected Results
I expected the `cirros` username to be used for the connection (as in ansible-core 2.10):
```
PLAYBOOK: test.yaml ****************************************************************
1 plays in test.yaml
PLAY [server] **********************************************************************
META: ran handlers
TASK [raw] *************************************************************************
task path: /home/zkrakko/ansible-venv/test.yaml:8
<10.xx.xx.xx> ESTABLISH PARAMIKO SSH CONNECTION FOR USER: cirros on PORT 22 TO 10.xx.xx.xx
```
### Actual Results
```console
Instead, the Jinja expression in the username was not evaluated:
PLAYBOOK: test.yaml ****************************************************************
1 plays in test.yaml
PLAY [server] **********************************************************************
META: ran handlers
TASK [raw] *************************************************************************
task path: /home/zkrakko/ansible-venv/test.yaml:8
<10.xx.xx.xx> ESTABLISH PARAMIKO SSH CONNECTION FOR USER: {{ some_variable | default('cirros') }} on PORT 22 TO 10.xx.xx.xx
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78509
|
https://github.com/ansible/ansible/pull/79704
|
694f12d01b17e4aba50bda55546edada6e79b5a8
|
a1bff416edf9b9c8bd5c3b002277eed5b5323953
| 2022-08-11T07:21:52Z |
python
| 2023-03-07T16:09:14Z |
test/integration/targets/connection_paramiko_ssh/test_connection.inventory
|
[paramiko_ssh]
paramiko_ssh-pipelining ansible_ssh_pipelining=true
paramiko_ssh-no-pipelining ansible_ssh_pipelining=false
[paramiko_ssh:vars]
ansible_host=localhost
ansible_connection=paramiko_ssh
ansible_python_interpreter="{{ ansible_playbook_python }}"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,611 |
ansible check mode - module copy creates dest directory if it ends with a / and remote_src is true
|
### Summary
In check mode, when I use the ansible builtin module **ansible.builtin.copy** to copy a remote file or directory to a destination that ends with a `/`, it creates the destination directory on the remote host even though check mode is enabled.
### Issue Type
Bug Report
### Component Name
ansible.builtin.copy
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/user/.ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
DEFAULT_BECOME_METHOD(/home/user/.ansible.cfg) = sudo
DEFAULT_ROLES_PATH(/home/user/.ansible.cfg) = ['/home/user/data/git/ansible/roles']
DEFAULT_SCP_IF_SSH(/home/user/.ansible.cfg) = True
```
### OS / Environment
Ansible controller OS is Debian bullseye.
### Steps to Reproduce
On the ansible controller:
1. Create directory **/tmp/ansible-test**
`$ mkdir /tmp/ansible-test`
2. Create empty file **/tmp/ansible-test/empty.txt**
`$ touch /tmp/ansible-test/empty.txt`
3. Create playbook **/tmp/ansible-test/playbook.yml** with the following content
```yaml
---
- hosts: localhost
tasks:
- name: copy file empty.txt into /tmp/ansible-test/not-existing-subdir/
ansible.builtin.copy:
remote_src: yes
src: /tmp/ansible-test/empty.txt
dest: /tmp/ansible-test/not-existing-subdir/
```
4. Run playbook
`$ ansible-playbook --check /tmp/ansible-test/playbook.yml`
### Expected Results
Directory **/tmp/ansible-test/not-existing-subdir/** is not created after running the playbook (cf. step 3 in section **Steps to Reproduce**) in check mode.
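One way to verify this expectation after the `--check` run is a small follow-up play; it is shown only as an illustration and is not part of the original report:
```yaml
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Stat the directory that check mode should not have created
      ansible.builtin.stat:
        path: /tmp/ansible-test/not-existing-subdir
      register: subdir

    - name: Fail if the check-mode run created it anyway
      ansible.builtin.assert:
        that:
          - not subdir.stat.exists
```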
### Actual Results
```console
# Directory **/tmp/ansible-test/not-existing-subdir/** has been created by running the playbook (cf. step 3 in section **Steps to Reproduce**) in check mode (cf. execution output below)
user@debian:/tmp/ansible-test$ ls -Al
total 4
-rw-r--r-- 1 user user 0 22 août 14:29 empty.txt
-rw-r--r-- 1 user user 239 22 août 15:16 playbook.yml
user@debian:/tmp/ansible-test$ ansible-playbook --check --diff /tmp/ansible-test/playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] **************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [localhost]
TASK [copy file empty.txt into /tmp/ansible-test/not-existing-subdir/] ********************************************************************************************************************************************
changed: [localhost]
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
user@debian:/tmp/ansible-test$ ls -Al
total 8
-rw-r--r-- 1 user user 0 22 août 14:29 empty.txt
drwxr-xr-x 2 user user 4096 22 août 15:19 not-existing-subdir
-rw-r--r-- 1 user user 239 22 août 15:16 playbook.yml
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78611
|
https://github.com/ansible/ansible/pull/78624
|
c564c6e21e4538b475df2ae4b3f66b73decff160
|
b7a0e0d79278906c57c6dfc637d0e0b09b45db34
| 2022-08-22T13:28:32Z |
python
| 2023-03-08T20:40:01Z |
changelogs/fragments/78624-copy-remote-src-check-mode.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,611 |
ansible check mode - module copy creates dest directory if it ends with a / and remote_src is true
|
### Summary
In check mode, when I use the ansible builtin module **ansible.builtin.copy** to copy a remote file or directory to a destination that ends with a `/`, it creates the destination directory on the remote host even though check mode is enabled.
### Issue Type
Bug Report
### Component Name
ansible.builtin.copy
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/user/.ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
DEFAULT_BECOME_METHOD(/home/user/.ansible.cfg) = sudo
DEFAULT_ROLES_PATH(/home/user/.ansible.cfg) = ['/home/user/data/git/ansible/roles']
DEFAULT_SCP_IF_SSH(/home/user/.ansible.cfg) = True
```
### OS / Environment
Ansible controller OS is Debian bullseye.
### Steps to Reproduce
On the ansible controller:
1. Create directory **/tmp/ansible-test**
`$ mkdir /tmp/ansible-test`
2. Create empty file **/tmp/ansible-test/empty.txt**
`$ touch /tmp/ansible-test/empty.txt`
3. Create playbook **/tmp/ansible-test/playbook.yml** with the following content
```yaml
---
- hosts: localhost
tasks:
- name: copy file empty.txt into /tmp/ansible-test/not-existing-subdir/
ansible.builtin.copy:
remote_src: yes
src: /tmp/ansible-test/empty.txt
dest: /tmp/ansible-test/not-existing-subdir/
```
4. Run playbook
`$ ansible-playbook --check /tmp/ansible-test/playbook.yml`
### Expected Results
Directory **/tmp/ansible-test/not-existing-subdir/** is not created after running the playbook (cf. step 3 in section **Steps to Reproduce**) in check mode.
### Actual Results
```console
# Directory **/tmp/ansible-test/not-existing-subdir/** has been created by running the playbook (cf. step 3 in section **Steps to Reproduce**) in check mode (cf. execution output below)
user@debian:/tmp/ansible-test$ ls -Al
total 4
-rw-r--r-- 1 user user 0 22 août 14:29 empty.txt
-rw-r--r-- 1 user user 239 22 août 15:16 playbook.yml
user@debian:/tmp/ansible-test$ ansible-playbook --check --diff /tmp/ansible-test/playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] **************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [localhost]
TASK [copy file empty.txt into /tmp/ansible-test/not-existing-subdir/] ********************************************************************************************************************************************
changed: [localhost]
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
user@debian:/tmp/ansible-test$ ls -Al
total 8
-rw-r--r-- 1 user user 0 22 août 14:29 empty.txt
drwxr-xr-x 2 user user 4096 22 août 15:19 not-existing-subdir
-rw-r--r-- 1 user user 239 22 août 15:16 playbook.yml
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78611
|
https://github.com/ansible/ansible/pull/78624
|
c564c6e21e4538b475df2ae4b3f66b73decff160
|
b7a0e0d79278906c57c6dfc637d0e0b09b45db34
| 2022-08-22T13:28:32Z |
python
| 2023-03-08T20:40:01Z |
lib/ansible/modules/copy.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: copy
version_added: historical
short_description: Copy files to remote locations
description:
- The C(copy) module copies a file from the local or remote machine to a location on the remote machine.
- Use the M(ansible.builtin.fetch) module to copy files from remote locations to the local box.
- If you need variable interpolation in copied files, use the M(ansible.builtin.template) module.
Using a variable in the C(content) field will result in unpredictable output.
- For Windows targets, use the M(ansible.windows.win_copy) module instead.
options:
src:
description:
- Local path to a file to copy to the remote server.
- This can be absolute or relative.
- If path is a directory, it is copied recursively. In this case, if path ends
with "/", only inside contents of that directory are copied to destination.
Otherwise, if it does not end with "/", the directory itself with all contents
is copied. This behavior is similar to the C(rsync) command line tool.
type: path
content:
description:
- When used instead of C(src), sets the contents of a file directly to the specified value.
- Works only when C(dest) is a file. Creates the file if it does not exist.
- For advanced formatting or if C(content) contains a variable, use the
M(ansible.builtin.template) module.
type: str
version_added: '1.1'
dest:
description:
- Remote absolute path where the file should be copied to.
- If C(src) is a directory, this must be a directory too.
- If C(dest) is a non-existent path and if either C(dest) ends with "/" or C(src) is a directory, C(dest) is created.
- If I(dest) is a relative path, the starting directory is determined by the remote host.
- If C(src) and C(dest) are files, the parent directory of C(dest) is not created and the task fails if it does not already exist.
type: path
required: yes
backup:
description:
- Create a backup file including the timestamp information so you can get the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
version_added: '0.7'
force:
description:
- Influence whether the remote file must always be replaced.
- If C(true), the remote file will be replaced when contents are different than the source.
- If C(false), the file will only be transferred if the destination does not exist.
type: bool
default: yes
version_added: '1.1'
mode:
description:
- The permissions of the destination file or directory.
- For those used to C(/usr/bin/chmod) remember that modes are actually octal numbers.
You must either add a leading zero so that Ansible's YAML parser knows it is an octal number
(like C(0644) or C(01777)) or quote it (like C('644') or C('1777')) so Ansible receives a string
and can do its own conversion from string into number. Giving Ansible a number without following
one of these rules will end up with a decimal number which will have unexpected results.
- As of Ansible 1.8, the mode may be specified as a symbolic mode (for example, C(u+rwx) or C(u=rw,g=r,o=r)).
- As of Ansible 2.3, the mode may also be the special string C(preserve).
- C(preserve) means that the file will be given the same permissions as the source file.
- When doing a recursive copy, see also C(directory_mode).
- If C(mode) is not specified and the destination file B(does not) exist, the default C(umask) on the system will be used
when setting the mode for the newly created file.
- If C(mode) is not specified and the destination file B(does) exist, the mode of the existing file will be used.
- Specifying C(mode) is the best way to ensure files are created with the correct permissions.
See CVE-2020-1736 for further details.
directory_mode:
description:
- When doing a recursive copy set the mode for the directories.
- If this is not set we will use the system defaults.
- The mode is only set on directories which are newly created, and will not affect those that already existed.
type: raw
version_added: '1.5'
remote_src:
description:
- Influence whether C(src) needs to be transferred or already is present remotely.
- If C(false), it will search for C(src) on the controller node.
- If C(true) it will search for C(src) on the managed (remote) node.
- C(remote_src) supports recursive copying as of version 2.8.
- C(remote_src) only works with C(mode=preserve) as of version 2.6.
- Autodecryption of files does not work when C(remote_src=yes).
type: bool
default: no
version_added: '2.0'
follow:
description:
- This flag indicates that filesystem links in the destination, if they exist, should be followed.
type: bool
default: no
version_added: '1.8'
local_follow:
description:
- This flag indicates that filesystem links in the source tree, if they exist, should be followed.
type: bool
default: yes
version_added: '2.4'
checksum:
description:
- SHA1 checksum of the file being transferred.
- Used to validate that the copy of the file was successful.
- If this is not provided, ansible will use the local calculated checksum of the src file.
type: str
version_added: '2.5'
extends_documentation_fragment:
- decrypt
- files
- validate
- action_common_attributes
- action_common_attributes.files
- action_common_attributes.flow
notes:
- The M(ansible.builtin.copy) module recursively copy facility does not scale to lots (>hundreds) of files.
seealso:
- module: ansible.builtin.assemble
- module: ansible.builtin.fetch
- module: ansible.builtin.file
- module: ansible.builtin.template
- module: ansible.posix.synchronize
- module: ansible.windows.win_copy
author:
- Ansible Core Team
- Michael DeHaan
attributes:
action:
support: full
async:
support: none
bypass_host_loop:
support: none
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: posix
safe_file_operations:
support: full
vault:
support: full
version_added: '2.2'
'''
EXAMPLES = r'''
- name: Copy file with owner and permissions
ansible.builtin.copy:
src: /srv/myfiles/foo.conf
dest: /etc/foo.conf
owner: foo
group: foo
mode: '0644'
- name: Copy file with owner and permission, using symbolic representation
ansible.builtin.copy:
src: /srv/myfiles/foo.conf
dest: /etc/foo.conf
owner: foo
group: foo
mode: u=rw,g=r,o=r
- name: Another symbolic mode example, adding some permissions and removing others
ansible.builtin.copy:
src: /srv/myfiles/foo.conf
dest: /etc/foo.conf
owner: foo
group: foo
mode: u+rw,g-wx,o-rwx
- name: Copy a new "ntp.conf" file into place, backing up the original if it differs from the copied version
ansible.builtin.copy:
src: /mine/ntp.conf
dest: /etc/ntp.conf
owner: root
group: root
mode: '0644'
backup: yes
- name: Copy a new "sudoers" file into place, after passing validation with visudo
ansible.builtin.copy:
src: /mine/sudoers
dest: /etc/sudoers
validate: /usr/sbin/visudo -csf %s
- name: Copy a "sudoers" file on the remote machine for editing
ansible.builtin.copy:
src: /etc/sudoers
dest: /etc/sudoers.edit
remote_src: yes
validate: /usr/sbin/visudo -csf %s
- name: Copy using inline content
ansible.builtin.copy:
content: '# This file was moved to /etc/other.conf'
dest: /etc/mine.conf
- name: If follow=yes, /path/to/file will be overwritten by contents of foo.conf
ansible.builtin.copy:
src: /etc/foo.conf
dest: /path/to/link # link to /path/to/file
follow: yes
- name: If follow=no, /path/to/link will become a file and be overwritten by contents of foo.conf
ansible.builtin.copy:
src: /etc/foo.conf
dest: /path/to/link # link to /path/to/file
follow: no
'''
RETURN = r'''
dest:
description: Destination file/path.
returned: success
type: str
sample: /path/to/file.txt
src:
description: Source file used for the copy on the target machine.
returned: changed
type: str
sample: /home/httpd/.ansible/tmp/ansible-tmp-1423796390.97-147729857856000/source
md5sum:
description: MD5 checksum of the file after running copy.
returned: when supported
type: str
sample: 2a5aeecc61dc98c4d780b14b330e3282
checksum:
description: SHA1 checksum of the file after running copy.
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
backup_file:
description: Name of backup file created.
returned: changed and if backup=yes
type: str
sample: /path/to/file.txt.2015-02-12@22:09~
gid:
description: Group id of the file, after execution.
returned: success
type: int
sample: 100
group:
description: Group of the file, after execution.
returned: success
type: str
sample: httpd
owner:
description: Owner of the file, after execution.
returned: success
type: str
sample: httpd
uid:
description: Owner id of the file, after execution.
returned: success
type: int
sample: 100
mode:
description: Permissions of the target, after execution.
returned: success
type: str
sample: "0644"
size:
description: Size of the target, after execution.
returned: success
type: int
sample: 1220
state:
description: State of the target, after execution.
returned: success
type: str
sample: file
'''
import errno
import filecmp
import grp
import os
import os.path
import platform
import pwd
import shutil
import stat
import tempfile
import traceback
from ansible.module_utils._text import to_bytes, to_native
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.six import PY3
# The AnsibleModule object
module = None
class AnsibleModuleError(Exception):
def __init__(self, results):
self.results = results
# Once we get run_command moved into common, we can move this into a common/files module. We can't
# until then because of the module.run_command() method. We may need to move it into
# basic::AnsibleModule() until then but if so, make it a private function so that we don't have to
# keep it for backwards compatibility later.
def clear_facls(path):
setfacl = get_bin_path('setfacl')
# FIXME "setfacl -b" is available on Linux and FreeBSD. There is "setfacl -D e" on z/OS. Others?
acl_command = [setfacl, '-b', path]
b_acl_command = [to_bytes(x) for x in acl_command]
locale = get_best_parsable_locale(module)
rc, out, err = module.run_command(b_acl_command, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale))
if rc != 0:
raise RuntimeError('Error running "{0}": stdout: "{1}"; stderr: "{2}"'.format(' '.join(b_acl_command), out, err))
def split_pre_existing_dir(dirname):
'''
Return the first pre-existing directory and a list of the new directories that will be created.
'''
head, tail = os.path.split(dirname)
b_head = to_bytes(head, errors='surrogate_or_strict')
if head == '':
return ('.', [tail])
if not os.path.exists(b_head):
if head == '/':
raise AnsibleModuleError(results={'msg': "The '/' directory doesn't exist on this machine."})
(pre_existing_dir, new_directory_list) = split_pre_existing_dir(head)
else:
return (head, [tail])
new_directory_list.append(tail)
return (pre_existing_dir, new_directory_list)
def adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed):
'''
Walk the new directories list and make sure that permissions are as we would expect
'''
if new_directory_list:
working_dir = os.path.join(pre_existing_dir, new_directory_list.pop(0))
directory_args['path'] = working_dir
changed = module.set_fs_attributes_if_different(directory_args, changed)
changed = adjust_recursive_directory_permissions(working_dir, new_directory_list, module, directory_args, changed)
return changed
def chown_recursive(path, module):
changed = False
owner = module.params['owner']
group = module.params['group']
if owner is not None:
if not module.check_mode:
for dirpath, dirnames, filenames in os.walk(path):
owner_changed = module.set_owner_if_different(dirpath, owner, False)
if owner_changed is True:
changed = owner_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
owner_changed = module.set_owner_if_different(dir, owner, False)
if owner_changed is True:
changed = owner_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
owner_changed = module.set_owner_if_different(file, owner, False)
if owner_changed is True:
changed = owner_changed
else:
uid = pwd.getpwnam(owner).pw_uid
for dirpath, dirnames, filenames in os.walk(path):
owner_changed = (os.stat(dirpath).st_uid != uid)
if owner_changed is True:
changed = owner_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
owner_changed = (os.stat(dir).st_uid != uid)
if owner_changed is True:
changed = owner_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
owner_changed = (os.stat(file).st_uid != uid)
if owner_changed is True:
changed = owner_changed
if group is not None:
if not module.check_mode:
for dirpath, dirnames, filenames in os.walk(path):
group_changed = module.set_group_if_different(dirpath, group, False)
if group_changed is True:
changed = group_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
group_changed = module.set_group_if_different(dir, group, False)
if group_changed is True:
changed = group_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
group_changed = module.set_group_if_different(file, group, False)
if group_changed is True:
changed = group_changed
else:
gid = grp.getgrnam(group).gr_gid
for dirpath, dirnames, filenames in os.walk(path):
group_changed = (os.stat(dirpath).st_gid != gid)
if group_changed is True:
changed = group_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
group_changed = (os.stat(dir).st_gid != gid)
if group_changed is True:
changed = group_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
group_changed = (os.stat(file).st_gid != gid)
if group_changed is True:
changed = group_changed
return changed
def copy_diff_files(src, dest, module):
"""Copy files that are different between `src` directory and `dest` directory."""
changed = False
owner = module.params['owner']
group = module.params['group']
local_follow = module.params['local_follow']
diff_files = filecmp.dircmp(src, dest).diff_files
if len(diff_files):
changed = True
if not module.check_mode:
for item in diff_files:
src_item_path = os.path.join(src, item)
dest_item_path = os.path.join(dest, item)
b_src_item_path = to_bytes(src_item_path, errors='surrogate_or_strict')
b_dest_item_path = to_bytes(dest_item_path, errors='surrogate_or_strict')
if os.path.islink(b_src_item_path) and local_follow is False:
linkto = os.readlink(b_src_item_path)
os.symlink(linkto, b_dest_item_path)
else:
shutil.copyfile(b_src_item_path, b_dest_item_path)
shutil.copymode(b_src_item_path, b_dest_item_path)
if owner is not None:
module.set_owner_if_different(b_dest_item_path, owner, False)
if group is not None:
module.set_group_if_different(b_dest_item_path, group, False)
changed = True
return changed
def copy_left_only(src, dest, module):
"""Copy files that exist in `src` directory only to the `dest` directory."""
changed = False
owner = module.params['owner']
group = module.params['group']
local_follow = module.params['local_follow']
left_only = filecmp.dircmp(src, dest).left_only
if len(left_only):
changed = True
if not module.check_mode:
for item in left_only:
src_item_path = os.path.join(src, item)
dest_item_path = os.path.join(dest, item)
b_src_item_path = to_bytes(src_item_path, errors='surrogate_or_strict')
b_dest_item_path = to_bytes(dest_item_path, errors='surrogate_or_strict')
if os.path.islink(b_src_item_path) and os.path.isdir(b_src_item_path) and local_follow is True:
shutil.copytree(b_src_item_path, b_dest_item_path, symlinks=not local_follow)
chown_recursive(b_dest_item_path, module)
if os.path.islink(b_src_item_path) and os.path.isdir(b_src_item_path) and local_follow is False:
linkto = os.readlink(b_src_item_path)
os.symlink(linkto, b_dest_item_path)
if os.path.islink(b_src_item_path) and os.path.isfile(b_src_item_path) and local_follow is True:
shutil.copyfile(b_src_item_path, b_dest_item_path)
if owner is not None:
module.set_owner_if_different(b_dest_item_path, owner, False)
if group is not None:
module.set_group_if_different(b_dest_item_path, group, False)
if os.path.islink(b_src_item_path) and os.path.isfile(b_src_item_path) and local_follow is False:
linkto = os.readlink(b_src_item_path)
os.symlink(linkto, b_dest_item_path)
if not os.path.islink(b_src_item_path) and os.path.isfile(b_src_item_path):
shutil.copyfile(b_src_item_path, b_dest_item_path)
shutil.copymode(b_src_item_path, b_dest_item_path)
if owner is not None:
module.set_owner_if_different(b_dest_item_path, owner, False)
if group is not None:
module.set_group_if_different(b_dest_item_path, group, False)
if not os.path.islink(b_src_item_path) and os.path.isdir(b_src_item_path):
shutil.copytree(b_src_item_path, b_dest_item_path, symlinks=not local_follow)
chown_recursive(b_dest_item_path, module)
changed = True
return changed
def copy_common_dirs(src, dest, module):
changed = False
common_dirs = filecmp.dircmp(src, dest).common_dirs
for item in common_dirs:
src_item_path = os.path.join(src, item)
dest_item_path = os.path.join(dest, item)
b_src_item_path = to_bytes(src_item_path, errors='surrogate_or_strict')
b_dest_item_path = to_bytes(dest_item_path, errors='surrogate_or_strict')
diff_files_changed = copy_diff_files(b_src_item_path, b_dest_item_path, module)
left_only_changed = copy_left_only(b_src_item_path, b_dest_item_path, module)
if diff_files_changed or left_only_changed:
changed = True
# recurse into subdirectory
changed = copy_common_dirs(os.path.join(src, item), os.path.join(dest, item), module) or changed
return changed
def main():
global module
module = AnsibleModule(
# not checking because of daisy chain to file module
argument_spec=dict(
src=dict(type='path'),
_original_basename=dict(type='str'), # used to handle 'dest is a directory' via template, a slight hack
content=dict(type='str', no_log=True),
dest=dict(type='path', required=True),
backup=dict(type='bool', default=False),
force=dict(type='bool', default=True),
validate=dict(type='str'),
directory_mode=dict(type='raw'),
remote_src=dict(type='bool'),
local_follow=dict(type='bool'),
checksum=dict(type='str'),
follow=dict(type='bool', default=False),
),
add_file_common_args=True,
supports_check_mode=True,
)
src = module.params['src']
b_src = to_bytes(src, errors='surrogate_or_strict')
dest = module.params['dest']
# Make sure we always have a directory component for later processing
if os.path.sep not in dest:
dest = '.{0}{1}'.format(os.path.sep, dest)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
backup = module.params['backup']
force = module.params['force']
_original_basename = module.params.get('_original_basename', None)
validate = module.params.get('validate', None)
follow = module.params['follow']
local_follow = module.params['local_follow']
mode = module.params['mode']
owner = module.params['owner']
group = module.params['group']
remote_src = module.params['remote_src']
checksum = module.params['checksum']
if not os.path.exists(b_src):
module.fail_json(msg="Source %s not found" % (src))
if not os.access(b_src, os.R_OK):
module.fail_json(msg="Source %s not readable" % (src))
# Preserve is usually handled in the action plugin but mode + remote_src has to be done on the
# remote host
if module.params['mode'] == 'preserve':
module.params['mode'] = '0%03o' % stat.S_IMODE(os.stat(b_src).st_mode)
mode = module.params['mode']
changed = False
checksum_dest = None
checksum_src = None
md5sum_src = None
if os.path.isfile(src):
try:
checksum_src = module.sha1(src)
except (OSError, IOError) as e:
module.warn("Unable to calculate src checksum, assuming change: %s" % to_native(e))
try:
# Backwards compat only. This will be None in FIPS mode
md5sum_src = module.md5(src)
except ValueError:
pass
elif remote_src and not os.path.isdir(src):
module.fail_json("Cannot copy invalid source '%s': not a file" % to_native(src))
if checksum and checksum_src != checksum:
module.fail_json(
msg='Copied file does not match the expected checksum. Transfer failed.',
checksum=checksum_src,
expected_checksum=checksum
)
# Special handling for recursive copy - create intermediate dirs
if dest.endswith(os.sep):
if _original_basename:
dest = os.path.join(dest, _original_basename)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
dirname = os.path.dirname(dest)
b_dirname = to_bytes(dirname, errors='surrogate_or_strict')
if not os.path.exists(b_dirname):
try:
(pre_existing_dir, new_directory_list) = split_pre_existing_dir(dirname)
except AnsibleModuleError as e:
e.results['msg'] += ' Could not copy to {0}'.format(dest)
module.fail_json(**e.results)
os.makedirs(b_dirname)
changed = True
directory_args = module.load_file_common_arguments(module.params)
directory_mode = module.params["directory_mode"]
if directory_mode is not None:
directory_args['mode'] = directory_mode
else:
directory_args['mode'] = None
adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed)
if os.path.isdir(b_dest):
basename = os.path.basename(src)
if _original_basename:
basename = _original_basename
dest = os.path.join(dest, basename)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
if os.path.islink(b_dest) and follow:
b_dest = os.path.realpath(b_dest)
dest = to_native(b_dest, errors='surrogate_or_strict')
if not force:
module.exit_json(msg="file already exists", src=src, dest=dest, changed=False)
if os.access(b_dest, os.R_OK) and os.path.isfile(b_dest):
checksum_dest = module.sha1(dest)
else:
if not os.path.exists(os.path.dirname(b_dest)):
try:
# os.path.exists() can return false in some
# circumstances where the directory does not have
# the execute bit for the current user set, in
# which case the stat() call will raise an OSError
os.stat(os.path.dirname(b_dest))
except OSError as e:
if "permission denied" in to_native(e).lower():
module.fail_json(msg="Destination directory %s is not accessible" % (os.path.dirname(dest)))
module.fail_json(msg="Destination directory %s does not exist" % (os.path.dirname(dest)))
if not os.access(os.path.dirname(b_dest), os.W_OK) and not module.params['unsafe_writes']:
module.fail_json(msg="Destination %s not writable" % (os.path.dirname(dest)))
backup_file = None
if checksum_src != checksum_dest or os.path.islink(b_dest):
if not module.check_mode:
try:
if backup:
if os.path.exists(b_dest):
backup_file = module.backup_local(dest)
# allow for conversion from symlink.
if os.path.islink(b_dest):
os.unlink(b_dest)
open(b_dest, 'w').close()
if validate:
# if we have a mode, make sure we set it on the temporary
# file source as some validations may require it
if mode is not None:
module.set_mode_if_different(src, mode, False)
if owner is not None:
module.set_owner_if_different(src, owner, False)
if group is not None:
module.set_group_if_different(src, group, False)
if "%s" not in validate:
module.fail_json(msg="validate must contain %%s: %s" % (validate))
(rc, out, err) = module.run_command(validate % src)
if rc != 0:
module.fail_json(msg="failed to validate", exit_status=rc, stdout=out, stderr=err)
b_mysrc = b_src
if remote_src and os.path.isfile(b_src):
_, b_mysrc = tempfile.mkstemp(dir=os.path.dirname(b_dest))
shutil.copyfile(b_src, b_mysrc)
try:
shutil.copystat(b_src, b_mysrc)
except OSError as err:
if err.errno == errno.ENOSYS and mode == "preserve":
module.warn("Unable to copy stats {0}".format(to_native(b_src)))
else:
raise
# might be needed below
if PY3 and hasattr(os, 'listxattr'):
try:
src_has_acls = 'system.posix_acl_access' in os.listxattr(src)
except Exception as e:
# assume unwanted ACLs by default
src_has_acls = True
# at this point we should always have tmp file
module.atomic_move(b_mysrc, dest, unsafe_writes=module.params['unsafe_writes'])
if PY3 and hasattr(os, 'listxattr') and platform.system() == 'Linux' and not remote_src:
# atomic_move used above to copy src into dest might, in some cases,
# use shutil.copy2 which in turn uses shutil.copystat.
# Since Python 3.3, shutil.copystat copies file extended attributes:
# https://docs.python.org/3/library/shutil.html#shutil.copystat
# os.listxattr (along with others) was added to handle the operation.
# This means that on Python 3 we are copying the extended attributes which includes
# the ACLs on some systems - further limited to Linux as the documentation above claims
# that the extended attributes are copied only on Linux. Also, os.listxattr is only
# available on Linux.
# If not remote_src, then the file was copied from the controller. In that
# case, any filesystem ACLs are artifacts of the copy rather than preservation
# of existing attributes. Get rid of them:
if src_has_acls:
# FIXME If dest has any default ACLs, they are not applied to src now because
# they were overridden by copystat. Should/can we do anything about this?
# 'system.posix_acl_default' in os.listxattr(os.path.dirname(b_dest))
try:
clear_facls(dest)
except ValueError as e:
if 'setfacl' in to_native(e):
# No setfacl so we're okay. The controller couldn't have set a facl
# without the setfacl command
pass
else:
raise
except RuntimeError as e:
# setfacl failed.
if 'Operation not supported' in to_native(e):
# The file system does not support ACLs.
pass
else:
raise
except (IOError, OSError):
module.fail_json(msg="failed to copy: %s to %s" % (src, dest), traceback=traceback.format_exc())
changed = True
# If neither have checksums, both src and dest are directories.
if checksum_src is None and checksum_dest is None:
if remote_src and os.path.isdir(module.params['src']):
b_src = to_bytes(module.params['src'], errors='surrogate_or_strict')
b_dest = to_bytes(module.params['dest'], errors='surrogate_or_strict')
if src.endswith(os.path.sep) and os.path.isdir(module.params['dest']):
diff_files_changed = copy_diff_files(b_src, b_dest, module)
left_only_changed = copy_left_only(b_src, b_dest, module)
common_dirs_changed = copy_common_dirs(b_src, b_dest, module)
owner_group_changed = chown_recursive(b_dest, module)
if diff_files_changed or left_only_changed or common_dirs_changed or owner_group_changed:
changed = True
if src.endswith(os.path.sep) and not os.path.exists(module.params['dest']):
b_basename = to_bytes(os.path.basename(src), errors='surrogate_or_strict')
b_dest = to_bytes(os.path.join(b_dest, b_basename), errors='surrogate_or_strict')
b_src = to_bytes(os.path.join(module.params['src'], ""), errors='surrogate_or_strict')
if not module.check_mode:
shutil.copytree(b_src, b_dest, symlinks=not local_follow)
chown_recursive(dest, module)
changed = True
if not src.endswith(os.path.sep) and os.path.isdir(module.params['dest']):
b_basename = to_bytes(os.path.basename(src), errors='surrogate_or_strict')
b_dest = to_bytes(os.path.join(b_dest, b_basename), errors='surrogate_or_strict')
b_src = to_bytes(os.path.join(module.params['src'], ""), errors='surrogate_or_strict')
if not module.check_mode and not os.path.exists(b_dest):
shutil.copytree(b_src, b_dest, symlinks=not local_follow)
changed = True
chown_recursive(dest, module)
if module.check_mode and not os.path.exists(b_dest):
changed = True
if os.path.exists(b_dest):
diff_files_changed = copy_diff_files(b_src, b_dest, module)
left_only_changed = copy_left_only(b_src, b_dest, module)
common_dirs_changed = copy_common_dirs(b_src, b_dest, module)
owner_group_changed = chown_recursive(b_dest, module)
if diff_files_changed or left_only_changed or common_dirs_changed or owner_group_changed:
changed = True
if not src.endswith(os.path.sep) and not os.path.exists(module.params['dest']):
b_basename = to_bytes(os.path.basename(module.params['src']), errors='surrogate_or_strict')
b_dest = to_bytes(os.path.join(b_dest, b_basename), errors='surrogate_or_strict')
if not module.check_mode and not os.path.exists(b_dest):
os.makedirs(b_dest)
changed = True
b_src = to_bytes(os.path.join(module.params['src'], ""), errors='surrogate_or_strict')
diff_files_changed = copy_diff_files(b_src, b_dest, module)
left_only_changed = copy_left_only(b_src, b_dest, module)
common_dirs_changed = copy_common_dirs(b_src, b_dest, module)
owner_group_changed = chown_recursive(b_dest, module)
if module.check_mode and not os.path.exists(b_dest):
changed = True
res_args = dict(
dest=dest, src=src, md5sum=md5sum_src, checksum=checksum_src, changed=changed
)
if backup_file:
res_args['backup_file'] = backup_file
if not module.check_mode:
file_args = module.load_file_common_arguments(module.params, path=dest)
res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'])
module.exit_json(**res_args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,611 |
ansible check mode - module copy creates dest directory if it ends with a / and remote_src is true
|
### Summary
In check mode, when I use the Ansible builtin module **ansible.builtin.copy** to copy a remote file or directory to a destination that ends with a `/`, it creates the destination directory on the remote host even though check mode is enabled.
### Issue Type
Bug Report
### Component Name
ansible.builtin.copy
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/user/.ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
DEFAULT_BECOME_METHOD(/home/user/.ansible.cfg) = sudo
DEFAULT_ROLES_PATH(/home/user/.ansible.cfg) = ['/home/user/data/git/ansible/roles']
DEFAULT_SCP_IF_SSH(/home/user/.ansible.cfg) = True
```
### OS / Environment
Ansible controller OS is Debian bullseye.
### Steps to Reproduce
On ansible controller :
1. Create directory **/tmp/ansible-test**
`$ mkdir /tmp/ansible-test`
2. Create empty file **/tmp/ansible-test/empty.txt**
`$ touch /tmp/ansible-test/empty.txt`
3. Create playbook **/tmp/ansible-test/playbook.yml** with the following content
```yaml
---
- hosts: localhost
tasks:
- name: copy file empty.txt into /tmp/ansible-test/not-existing-subdir/
ansible.builtin.copy:
remote_src: yes
src: /tmp/ansible-test/empty.txt
dest: /tmp/ansible-test/not-existing-subdir/
```
4. Run playbook
`$ ansible-playbook --check /tmp/ansible-test/playbook.yml`
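For reference, the reproduction can also be scripted end-to-end. The helper below is my own (it is not part of Ansible or of this issue) and assumes `ansible-playbook` is on PATH and that the files from steps 1-3 exist:
```python
# Hypothetical reproduction helper (not part of Ansible): run the playbook in
# check mode and assert that the destination directory was not created.
import os
import subprocess

PLAYBOOK = "/tmp/ansible-test/playbook.yml"
DEST = "/tmp/ansible-test/not-existing-subdir"

subprocess.run(["ansible-playbook", "--check", PLAYBOOK], check=True)

# With the bug present this assertion fails, because the copy module creates
# the directory on the target even though check mode is enabled.
assert not os.path.isdir(DEST), "check mode created %s" % DEST
print("OK: %s was not created in check mode" % DEST)
```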
### Expected Results
Directory **/tmp/ansible-test/not-existing-subdir/** is not created after running the playbook (cf. step 3 in section **Steps to Reproduce**) in check mode
### Actual Results
```console
# Directory **/tmp/ansible-test/not-existing-subdir/** has been created by running the playbook (cf. step 3 in section **Steps to Reproduce**) in check mode (cf. execution output below)
user@debian:/tmp/ansible-test$ ls -Al
total 4
-rw-r--r-- 1 user user 0 22 août 14:29 empty.txt
-rw-r--r-- 1 user user 239 22 août 15:16 playbook.yml
user@debian:/tmp/ansible-test$ ansible-playbook --check --diff /tmp/ansible-test/playbook.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] **************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [localhost]
TASK [copy file empty.txt into /tmp/ansible-test/not-existing-subdir/] ********************************************************************************************************************************************
changed: [localhost]
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
user@debian:/tmp/ansible-test$ ls -Al
total 8
-rw-r--r-- 1 user user 0 22 août 14:29 empty.txt
drwxr-xr-x 2 user user 4096 22 août 15:19 not-existing-subdir
-rw-r--r-- 1 user user 239 22 août 15:16 playbook.yml
```
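My reading (not verified against the eventual fix) is that the directory creation comes from the recursive-copy branch in `lib/ansible/modules/copy.py`: when `dest` ends with `/`, missing parent directories are created with `os.makedirs()` without consulting `module.check_mode`. A minimal sketch of the kind of guard that branch would need — simplified, and not the actual patch:
```python
# Simplified sketch only: mirror of the dest-ends-with-"/" branch in copy.py,
# with the check-mode guard it currently lacks. Not the actual patch.
import os

def ensure_parent_dirs(module, b_dirname, changed):
    if not os.path.exists(b_dirname):
        changed = True  # report the change either way
        if not module.check_mode:
            os.makedirs(b_dirname)  # only touch the filesystem on real runs
    return changed
```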
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78611
|
https://github.com/ansible/ansible/pull/78624
|
c564c6e21e4538b475df2ae4b3f66b73decff160
|
b7a0e0d79278906c57c6dfc637d0e0b09b45db34
| 2022-08-22T13:28:32Z |
python
| 2023-03-08T20:40:01Z |
test/integration/targets/copy/tasks/check_mode.yml
|
- block:
- name: check_mode - Create another clean copy of 'subdir' not messed with by previous tests (check_mode)
copy:
src: subdir
dest: 'checkmode_subdir/'
directory_mode: 0700
local_follow: False
check_mode: true
register: check_mode_subdir_first
- name: check_mode - Stat the new dir to make sure it really doesn't exist
stat:
path: 'checkmode_subdir/'
register: check_mode_subdir_first_stat
- name: check_mode - Actually do it
copy:
src: subdir
dest: 'checkmode_subdir/'
directory_mode: 0700
local_follow: False
register: check_mode_subdir_real
- name: check_mode - Stat the new dir to make sure it really exists
stat:
path: 'checkmode_subdir/'
register: check_mode_subdir_real_stat
# Quick sanity before we move on
- assert:
that:
- check_mode_subdir_first is changed
- not check_mode_subdir_first_stat.stat.exists
- check_mode_subdir_real is changed
- check_mode_subdir_real_stat.stat.exists
# Do some finagling here. First, use check_mode to ensure it never gets
# created. Then actually create it, and use check_mode to ensure that doing
# the same copy gets marked as no change.
#
# This same pattern repeats for several other src/dest combinations.
- name: check_mode - Ensure dest with trailing / never gets created but would be without check_mode
copy:
remote_src: true
src: 'checkmode_subdir/'
dest: 'destdir_should_never_exist_because_of_check_mode/'
follow: true
check_mode: true
register: check_mode_trailing_slash_first
- name: check_mode - Stat the new dir to make sure it really doesn't exist
stat:
path: 'destdir_should_never_exist_because_of_check_mode/'
register: check_mode_trailing_slash_first_stat
- name: check_mode - Create the above copy for real now (without check_mode)
copy:
remote_src: true
src: 'checkmode_subdir/'
dest: 'destdir_should_never_exist_because_of_check_mode/'
register: check_mode_trailing_slash_real
- name: check_mode - Stat the new dir to make sure it really exists
stat:
path: 'destdir_should_never_exist_because_of_check_mode/'
register: check_mode_trailing_slash_real_stat
- name: check_mode - Do the same copy yet again (with check_mode this time) to ensure it's marked unchanged
copy:
remote_src: true
src: 'checkmode_subdir/'
dest: 'destdir_should_never_exist_because_of_check_mode/'
check_mode: true
register: check_mode_trailing_slash_second
# Repeat the same basic pattern here.
- name: check_mode - Do another basic copy (with check_mode)
copy:
src: foo.txt
dest: "{{ remote_dir }}/foo-check_mode.txt"
mode: 0444
check_mode: true
register: check_mode_foo_first
- name: check_mode - Stat the new file to make sure it really doesn't exist
stat:
path: "{{ remote_dir }}/foo-check_mode.txt"
register: check_mode_foo_first_stat
- name: check_mode - Do the same basic copy (without check_mode)
copy:
src: foo.txt
dest: "{{ remote_dir }}/foo-check_mode.txt"
mode: 0444
register: check_mode_foo_real
- name: check_mode - Stat the new file to make sure it really exists
stat:
path: "{{ remote_dir }}/foo-check_mode.txt"
register: check_mode_foo_real_stat
- name: check_mode - And again (with check_mode)
copy:
src: foo.txt
dest: "{{ remote_dir }}/foo-check_mode.txt"
mode: 0444
register: check_mode_foo_second
- assert:
that:
- check_mode_subdir_first is changed
- check_mode_trailing_slash_first is changed
# TODO: This is a legitimate bug
#- not check_mode_trailing_slash_first_stat.stat.exists
- check_mode_trailing_slash_real is changed
- check_mode_trailing_slash_real_stat.stat.exists
- check_mode_trailing_slash_second is not changed
- check_mode_foo_first is changed
- not check_mode_foo_first_stat.stat.exists
- check_mode_foo_real is changed
- check_mode_foo_real_stat.stat.exists
- check_mode_foo_second is not changed
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,957 |
copy module does not reflect 'changed' for mode in check mode with remote_src: yes
|
### Summary
I use the `ansible.builtin.copy` module. Unfortunately, there is a combination of parameters where check mode prints `ok`, even though an actual change is made when the playbook runs outside check mode.
The unexpected behavior is reproducible; tested on `2.9.27`, `2.12.6` and `2.13.0`
### Issue Type
Bug Report
### Component Name
copy
### Ansible Version
```console
$ ansible --version
ansible 2.9.27
config file = None
configured module search path = ['/home/phoffmann/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/phoffmann/.local/share/virtualenvs/ansible-2.9-TCZdLugh/lib/python3.8/site-packages/ansible
executable location = /home/phoffmann/.local/share/virtualenvs/ansible-2.9-TCZdLugh/bin/ansible
python version = 3.8.12 (default, Apr 8 2022, 11:41:59) [GCC 9.4.0]
$ ansible --version
ansible [core 2.12.6]
config file = None
configured module search path = ['/home/phoffmann/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/phoffmann/.pyenv/versions/3.8.12/lib/python3.8/site-packages/ansible
ansible collection location = /home/phoffmann/.ansible/collections:/usr/share/ansible/collections
executable location = /home/phoffmann/.pyenv/versions/3.8.12/bin/ansible
python version = 3.8.12 (default, Apr 8 2022, 11:41:59) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
$ ansible --version
ansible [core 2.13.0]
config file = None
configured module search path = ['/home/phoffmann/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/phoffmann/.local/share/virtualenvs/ansible-2.13-v8S06Uvz/lib/python3.8/site-packages/ansible
ansible collection location = /home/phoffmann/.ansible/collections:/usr/share/ansible/collections
executable location = /home/phoffmann/.local/share/virtualenvs/ansible-2.13-v8S06Uvz/bin/ansible
python version = 3.8.12 (default, Apr 8 2022, 11:41:59) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Ubuntu 20.04
### Steps to Reproduce
```yaml
---
- hosts: localhost
tasks:
- name: create file
file:
path: /tmp/ansible_foo
state: touch
owner: '{{ ansible_env.USER }}'
group: '{{ ansible_env.USER }}'
mode: 0600
- name: Copy file with permissions
copy:
src: /tmp/ansible_foo
dest: /tmp/ansible_foo2
mode: 0644
remote_src: yes
- name: create file
file:
path: /tmp/ansible_foo2
owner: '{{ ansible_env.USER }}'
group: '{{ ansible_env.USER }}'
mode: 0600
```
### Expected Results
I expect task `Copy file with permissions` to print `changed`
Expected Result:
```
TASK [Copy file with permissions] ********************************************************************************************************************
--- before
+++ after
@@ -1,4 +1,4 @@
{
- "mode": "0600",
+ "mode": "0644",
"path": "/tmp/ansible_foo2"
}
```
It prints the expected `changed` as soon as `remote_src: yes` is removed.
### Actual Results
```console
TASK [Copy file with permissions] ********************************************************************************************************************
ok: [localhost]
```
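For context, my understanding (which may be incomplete) is that with `remote_src: yes` the attribute comparison is left entirely to the module, and the final `set_fs_attributes_if_different()` pass at the end of `copy.py`'s `main()` is skipped in check mode, so a mode-only difference never flips `changed`. Roughly:
```python
# Rough illustration of the current behavior (simplified, not the real code):
# the attribute pass that would detect the 0600 -> 0644 mode change only runs
# outside check mode, so check mode reports 'ok'.
def final_attribute_pass(module, res_args, dest):
    file_args = module.load_file_common_arguments(module.params, path=dest)
    if not module.check_mode:
        res_args['changed'] = module.set_fs_attributes_if_different(
            file_args, res_args['changed'])
    # In check mode nothing updates res_args['changed'] here.
    return res_args
```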
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77957
|
https://github.com/ansible/ansible/pull/78624
|
c564c6e21e4538b475df2ae4b3f66b73decff160
|
b7a0e0d79278906c57c6dfc637d0e0b09b45db34
| 2022-06-02T15:25:14Z |
python
| 2023-03-08T20:40:01Z |
changelogs/fragments/78624-copy-remote-src-check-mode.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,957 |
copy module does not reflect 'changed' for mode in check mode with remote_src: yes
|
### Summary
I use the `ansible.builtin.copy` module. Unfortunately, there is a combination of parameters where check mode prints `ok`, even though an actual change is made when the playbook runs outside check mode.
The unexpected behavior is reproducible; tested on `2.9.27`, `2.12.6` and `2.13.0`
### Issue Type
Bug Report
### Component Name
copy
### Ansible Version
```console
$ ansible --version
ansible 2.9.27
config file = None
configured module search path = ['/home/phoffmann/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/phoffmann/.local/share/virtualenvs/ansible-2.9-TCZdLugh/lib/python3.8/site-packages/ansible
executable location = /home/phoffmann/.local/share/virtualenvs/ansible-2.9-TCZdLugh/bin/ansible
python version = 3.8.12 (default, Apr 8 2022, 11:41:59) [GCC 9.4.0]
$ ansible --version
ansible [core 2.12.6]
config file = None
configured module search path = ['/home/phoffmann/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/phoffmann/.pyenv/versions/3.8.12/lib/python3.8/site-packages/ansible
ansible collection location = /home/phoffmann/.ansible/collections:/usr/share/ansible/collections
executable location = /home/phoffmann/.pyenv/versions/3.8.12/bin/ansible
python version = 3.8.12 (default, Apr 8 2022, 11:41:59) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
$ ansible --version
ansible [core 2.13.0]
config file = None
configured module search path = ['/home/phoffmann/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/phoffmann/.local/share/virtualenvs/ansible-2.13-v8S06Uvz/lib/python3.8/site-packages/ansible
ansible collection location = /home/phoffmann/.ansible/collections:/usr/share/ansible/collections
executable location = /home/phoffmann/.local/share/virtualenvs/ansible-2.13-v8S06Uvz/bin/ansible
python version = 3.8.12 (default, Apr 8 2022, 11:41:59) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Ubuntu 20.04
### Steps to Reproduce
```yaml
---
- hosts: localhost
tasks:
- name: create file
file:
path: /tmp/ansible_foo
state: touch
owner: '{{ ansible_env.USER }}'
group: '{{ ansible_env.USER }}'
mode: 0600
- name: Copy file with permissions
copy:
src: /tmp/ansible_foo
dest: /tmp/ansible_foo2
mode: 0644
remote_src: yes
- name: create file
file:
path: /tmp/ansible_foo2
owner: '{{ ansible_env.USER }}'
group: '{{ ansible_env.USER }}'
mode: 0600
```
### Expected Results
I expect task `Copy file with permissions` to print `changed`
Expected Result:
```
TASK [Copy file with permissions] ********************************************************************************************************************
--- before
+++ after
@@ -1,4 +1,4 @@
{
- "mode": "0600",
+ "mode": "0644",
"path": "/tmp/ansible_foo2"
}
```
It prints the expected `changed` as soon as `remote_src: yes` is removed.
### Actual Results
```console
TASK [Copy file with permissions] ********************************************************************************************************************
ok: [localhost]
```
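One direction a fix could take (my reading only, not the actual patch): `AnsibleModule`'s `set_*_if_different()` helpers already short-circuit in check mode while still reporting a change, so the final attribute pass could run unconditionally and let those helpers decide. A minimal sketch:
```python
# Minimal sketch (not the actual patch): run the final attribute pass even in
# check mode; set_fs_attributes_if_different() does not modify the filesystem
# when module.check_mode is set, but it still reports changed=True.
def final_attribute_pass(module, res_args, dest):
    file_args = module.load_file_common_arguments(module.params, path=dest)
    res_args['changed'] = module.set_fs_attributes_if_different(
        file_args, res_args['changed'])
    return res_args
```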
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77957
|
https://github.com/ansible/ansible/pull/78624
|
c564c6e21e4538b475df2ae4b3f66b73decff160
|
b7a0e0d79278906c57c6dfc637d0e0b09b45db34
| 2022-06-02T15:25:14Z |
python
| 2023-03-08T20:40:01Z |
lib/ansible/modules/copy.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: copy
version_added: historical
short_description: Copy files to remote locations
description:
- The C(copy) module copies a file from the local or remote machine to a location on the remote machine.
- Use the M(ansible.builtin.fetch) module to copy files from remote locations to the local box.
- If you need variable interpolation in copied files, use the M(ansible.builtin.template) module.
Using a variable in the C(content) field will result in unpredictable output.
- For Windows targets, use the M(ansible.windows.win_copy) module instead.
options:
src:
description:
- Local path to a file to copy to the remote server.
- This can be absolute or relative.
- If path is a directory, it is copied recursively. In this case, if path ends
with "/", only inside contents of that directory are copied to destination.
Otherwise, if it does not end with "/", the directory itself with all contents
is copied. This behavior is similar to the C(rsync) command line tool.
type: path
content:
description:
- When used instead of C(src), sets the contents of a file directly to the specified value.
- Works only when C(dest) is a file. Creates the file if it does not exist.
- For advanced formatting or if C(content) contains a variable, use the
M(ansible.builtin.template) module.
type: str
version_added: '1.1'
dest:
description:
- Remote absolute path where the file should be copied to.
- If C(src) is a directory, this must be a directory too.
- If C(dest) is a non-existent path and if either C(dest) ends with "/" or C(src) is a directory, C(dest) is created.
- If I(dest) is a relative path, the starting directory is determined by the remote host.
- If C(src) and C(dest) are files, the parent directory of C(dest) is not created and the task fails if it does not already exist.
type: path
required: yes
backup:
description:
- Create a backup file including the timestamp information so you can get the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
version_added: '0.7'
force:
description:
- Influence whether the remote file must always be replaced.
- If C(true), the remote file will be replaced when contents are different than the source.
- If C(false), the file will only be transferred if the destination does not exist.
type: bool
default: yes
version_added: '1.1'
mode:
description:
- The permissions of the destination file or directory.
- For those used to C(/usr/bin/chmod) remember that modes are actually octal numbers.
You must either add a leading zero so that Ansible's YAML parser knows it is an octal number
(like C(0644) or C(01777)) or quote it (like C('644') or C('1777')) so Ansible receives a string
and can do its own conversion from string into number. Giving Ansible a number without following
one of these rules will end up with a decimal number which will have unexpected results.
- As of Ansible 1.8, the mode may be specified as a symbolic mode (for example, C(u+rwx) or C(u=rw,g=r,o=r)).
- As of Ansible 2.3, the mode may also be the special string C(preserve).
- C(preserve) means that the file will be given the same permissions as the source file.
- When doing a recursive copy, see also C(directory_mode).
- If C(mode) is not specified and the destination file B(does not) exist, the default C(umask) on the system will be used
when setting the mode for the newly created file.
- If C(mode) is not specified and the destination file B(does) exist, the mode of the existing file will be used.
- Specifying C(mode) is the best way to ensure files are created with the correct permissions.
See CVE-2020-1736 for further details.
directory_mode:
description:
- When doing a recursive copy set the mode for the directories.
- If this is not set we will use the system defaults.
- The mode is only set on directories which are newly created, and will not affect those that already existed.
type: raw
version_added: '1.5'
remote_src:
description:
- Influence whether C(src) needs to be transferred or already is present remotely.
- If C(false), it will search for C(src) on the controller node.
- If C(true) it will search for C(src) on the managed (remote) node.
- C(remote_src) supports recursive copying as of version 2.8.
- C(remote_src) only works with C(mode=preserve) as of version 2.6.
- Autodecryption of files does not work when C(remote_src=yes).
type: bool
default: no
version_added: '2.0'
follow:
description:
- This flag indicates that filesystem links in the destination, if they exist, should be followed.
type: bool
default: no
version_added: '1.8'
local_follow:
description:
- This flag indicates that filesystem links in the source tree, if they exist, should be followed.
type: bool
default: yes
version_added: '2.4'
checksum:
description:
- SHA1 checksum of the file being transferred.
- Used to validate that the copy of the file was successful.
- If this is not provided, ansible will use the local calculated checksum of the src file.
type: str
version_added: '2.5'
extends_documentation_fragment:
- decrypt
- files
- validate
- action_common_attributes
- action_common_attributes.files
- action_common_attributes.flow
notes:
- The M(ansible.builtin.copy) module recursively copy facility does not scale to lots (>hundreds) of files.
seealso:
- module: ansible.builtin.assemble
- module: ansible.builtin.fetch
- module: ansible.builtin.file
- module: ansible.builtin.template
- module: ansible.posix.synchronize
- module: ansible.windows.win_copy
author:
- Ansible Core Team
- Michael DeHaan
attributes:
action:
support: full
async:
support: none
bypass_host_loop:
support: none
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: posix
safe_file_operations:
support: full
vault:
support: full
version_added: '2.2'
'''
EXAMPLES = r'''
- name: Copy file with owner and permissions
ansible.builtin.copy:
src: /srv/myfiles/foo.conf
dest: /etc/foo.conf
owner: foo
group: foo
mode: '0644'
- name: Copy file with owner and permission, using symbolic representation
ansible.builtin.copy:
src: /srv/myfiles/foo.conf
dest: /etc/foo.conf
owner: foo
group: foo
mode: u=rw,g=r,o=r
- name: Another symbolic mode example, adding some permissions and removing others
ansible.builtin.copy:
src: /srv/myfiles/foo.conf
dest: /etc/foo.conf
owner: foo
group: foo
mode: u+rw,g-wx,o-rwx
- name: Copy a new "ntp.conf" file into place, backing up the original if it differs from the copied version
ansible.builtin.copy:
src: /mine/ntp.conf
dest: /etc/ntp.conf
owner: root
group: root
mode: '0644'
backup: yes
- name: Copy a new "sudoers" file into place, after passing validation with visudo
ansible.builtin.copy:
src: /mine/sudoers
dest: /etc/sudoers
validate: /usr/sbin/visudo -csf %s
- name: Copy a "sudoers" file on the remote machine for editing
ansible.builtin.copy:
src: /etc/sudoers
dest: /etc/sudoers.edit
remote_src: yes
validate: /usr/sbin/visudo -csf %s
- name: Copy using inline content
ansible.builtin.copy:
content: '# This file was moved to /etc/other.conf'
dest: /etc/mine.conf
- name: If follow=yes, /path/to/file will be overwritten by contents of foo.conf
ansible.builtin.copy:
src: /etc/foo.conf
dest: /path/to/link # link to /path/to/file
follow: yes
- name: If follow=no, /path/to/link will become a file and be overwritten by contents of foo.conf
ansible.builtin.copy:
src: /etc/foo.conf
dest: /path/to/link # link to /path/to/file
follow: no
'''
RETURN = r'''
dest:
description: Destination file/path.
returned: success
type: str
sample: /path/to/file.txt
src:
description: Source file used for the copy on the target machine.
returned: changed
type: str
sample: /home/httpd/.ansible/tmp/ansible-tmp-1423796390.97-147729857856000/source
md5sum:
description: MD5 checksum of the file after running copy.
returned: when supported
type: str
sample: 2a5aeecc61dc98c4d780b14b330e3282
checksum:
description: SHA1 checksum of the file after running copy.
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
backup_file:
description: Name of backup file created.
returned: changed and if backup=yes
type: str
sample: /path/to/file.txt.2015-02-12@22:09~
gid:
description: Group id of the file, after execution.
returned: success
type: int
sample: 100
group:
description: Group of the file, after execution.
returned: success
type: str
sample: httpd
owner:
description: Owner of the file, after execution.
returned: success
type: str
sample: httpd
uid:
description: Owner id of the file, after execution.
returned: success
type: int
sample: 100
mode:
description: Permissions of the target, after execution.
returned: success
type: str
sample: "0644"
size:
description: Size of the target, after execution.
returned: success
type: int
sample: 1220
state:
description: State of the target, after execution.
returned: success
type: str
sample: file
'''
import errno
import filecmp
import grp
import os
import os.path
import platform
import pwd
import shutil
import stat
import tempfile
import traceback
from ansible.module_utils._text import to_bytes, to_native
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.six import PY3
# The AnsibleModule object
module = None
class AnsibleModuleError(Exception):
def __init__(self, results):
self.results = results
# Once we get run_command moved into common, we can move this into a common/files module. We can't
# until then because of the module.run_command() method. We may need to move it into
# basic::AnsibleModule() until then but if so, make it a private function so that we don't have to
# keep it for backwards compatibility later.
def clear_facls(path):
setfacl = get_bin_path('setfacl')
# FIXME "setfacl -b" is available on Linux and FreeBSD. There is "setfacl -D e" on z/OS. Others?
acl_command = [setfacl, '-b', path]
b_acl_command = [to_bytes(x) for x in acl_command]
locale = get_best_parsable_locale(module)
rc, out, err = module.run_command(b_acl_command, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale))
if rc != 0:
raise RuntimeError('Error running "{0}": stdout: "{1}"; stderr: "{2}"'.format(' '.join(b_acl_command), out, err))
def split_pre_existing_dir(dirname):
'''
Return the first pre-existing directory and a list of the new directories that will be created.
'''
head, tail = os.path.split(dirname)
b_head = to_bytes(head, errors='surrogate_or_strict')
if head == '':
return ('.', [tail])
if not os.path.exists(b_head):
if head == '/':
raise AnsibleModuleError(results={'msg': "The '/' directory doesn't exist on this machine."})
(pre_existing_dir, new_directory_list) = split_pre_existing_dir(head)
else:
return (head, [tail])
new_directory_list.append(tail)
return (pre_existing_dir, new_directory_list)
def adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed):
'''
Walk the new directories list and make sure that permissions are as we would expect
'''
if new_directory_list:
working_dir = os.path.join(pre_existing_dir, new_directory_list.pop(0))
directory_args['path'] = working_dir
changed = module.set_fs_attributes_if_different(directory_args, changed)
changed = adjust_recursive_directory_permissions(working_dir, new_directory_list, module, directory_args, changed)
return changed
def chown_recursive(path, module):
changed = False
owner = module.params['owner']
group = module.params['group']
if owner is not None:
if not module.check_mode:
for dirpath, dirnames, filenames in os.walk(path):
owner_changed = module.set_owner_if_different(dirpath, owner, False)
if owner_changed is True:
changed = owner_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
owner_changed = module.set_owner_if_different(dir, owner, False)
if owner_changed is True:
changed = owner_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
owner_changed = module.set_owner_if_different(file, owner, False)
if owner_changed is True:
changed = owner_changed
else:
uid = pwd.getpwnam(owner).pw_uid
for dirpath, dirnames, filenames in os.walk(path):
owner_changed = (os.stat(dirpath).st_uid != uid)
if owner_changed is True:
changed = owner_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
owner_changed = (os.stat(dir).st_uid != uid)
if owner_changed is True:
changed = owner_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
owner_changed = (os.stat(file).st_uid != uid)
if owner_changed is True:
changed = owner_changed
if group is not None:
if not module.check_mode:
for dirpath, dirnames, filenames in os.walk(path):
group_changed = module.set_group_if_different(dirpath, group, False)
if group_changed is True:
changed = group_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
group_changed = module.set_group_if_different(dir, group, False)
if group_changed is True:
changed = group_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
group_changed = module.set_group_if_different(file, group, False)
if group_changed is True:
changed = group_changed
else:
gid = grp.getgrnam(group).gr_gid
for dirpath, dirnames, filenames in os.walk(path):
group_changed = (os.stat(dirpath).st_gid != gid)
if group_changed is True:
changed = group_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
group_changed = (os.stat(dir).st_gid != gid)
if group_changed is True:
changed = group_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
group_changed = (os.stat(file).st_gid != gid)
if group_changed is True:
changed = group_changed
return changed
def copy_diff_files(src, dest, module):
"""Copy files that are different between `src` directory and `dest` directory."""
changed = False
owner = module.params['owner']
group = module.params['group']
local_follow = module.params['local_follow']
diff_files = filecmp.dircmp(src, dest).diff_files
if len(diff_files):
changed = True
if not module.check_mode:
for item in diff_files:
src_item_path = os.path.join(src, item)
dest_item_path = os.path.join(dest, item)
b_src_item_path = to_bytes(src_item_path, errors='surrogate_or_strict')
b_dest_item_path = to_bytes(dest_item_path, errors='surrogate_or_strict')
if os.path.islink(b_src_item_path) and local_follow is False:
linkto = os.readlink(b_src_item_path)
os.symlink(linkto, b_dest_item_path)
else:
shutil.copyfile(b_src_item_path, b_dest_item_path)
shutil.copymode(b_src_item_path, b_dest_item_path)
if owner is not None:
module.set_owner_if_different(b_dest_item_path, owner, False)
if group is not None:
module.set_group_if_different(b_dest_item_path, group, False)
changed = True
return changed
def copy_left_only(src, dest, module):
"""Copy files that exist in `src` directory only to the `dest` directory."""
changed = False
owner = module.params['owner']
group = module.params['group']
local_follow = module.params['local_follow']
left_only = filecmp.dircmp(src, dest).left_only
if len(left_only):
changed = True
if not module.check_mode:
for item in left_only:
src_item_path = os.path.join(src, item)
dest_item_path = os.path.join(dest, item)
b_src_item_path = to_bytes(src_item_path, errors='surrogate_or_strict')
b_dest_item_path = to_bytes(dest_item_path, errors='surrogate_or_strict')
if os.path.islink(b_src_item_path) and os.path.isdir(b_src_item_path) and local_follow is True:
shutil.copytree(b_src_item_path, b_dest_item_path, symlinks=not local_follow)
chown_recursive(b_dest_item_path, module)
if os.path.islink(b_src_item_path) and os.path.isdir(b_src_item_path) and local_follow is False:
linkto = os.readlink(b_src_item_path)
os.symlink(linkto, b_dest_item_path)
if os.path.islink(b_src_item_path) and os.path.isfile(b_src_item_path) and local_follow is True:
shutil.copyfile(b_src_item_path, b_dest_item_path)
if owner is not None:
module.set_owner_if_different(b_dest_item_path, owner, False)
if group is not None:
module.set_group_if_different(b_dest_item_path, group, False)
if os.path.islink(b_src_item_path) and os.path.isfile(b_src_item_path) and local_follow is False:
linkto = os.readlink(b_src_item_path)
os.symlink(linkto, b_dest_item_path)
if not os.path.islink(b_src_item_path) and os.path.isfile(b_src_item_path):
shutil.copyfile(b_src_item_path, b_dest_item_path)
shutil.copymode(b_src_item_path, b_dest_item_path)
if owner is not None:
module.set_owner_if_different(b_dest_item_path, owner, False)
if group is not None:
module.set_group_if_different(b_dest_item_path, group, False)
if not os.path.islink(b_src_item_path) and os.path.isdir(b_src_item_path):
shutil.copytree(b_src_item_path, b_dest_item_path, symlinks=not local_follow)
chown_recursive(b_dest_item_path, module)
changed = True
return changed
def copy_common_dirs(src, dest, module):
changed = False
common_dirs = filecmp.dircmp(src, dest).common_dirs
for item in common_dirs:
src_item_path = os.path.join(src, item)
dest_item_path = os.path.join(dest, item)
b_src_item_path = to_bytes(src_item_path, errors='surrogate_or_strict')
b_dest_item_path = to_bytes(dest_item_path, errors='surrogate_or_strict')
diff_files_changed = copy_diff_files(b_src_item_path, b_dest_item_path, module)
left_only_changed = copy_left_only(b_src_item_path, b_dest_item_path, module)
if diff_files_changed or left_only_changed:
changed = True
# recurse into subdirectory
changed = copy_common_dirs(os.path.join(src, item), os.path.join(dest, item), module) or changed
return changed
def main():
global module
module = AnsibleModule(
# not checking because of daisy chain to file module
argument_spec=dict(
src=dict(type='path'),
_original_basename=dict(type='str'), # used to handle 'dest is a directory' via template, a slight hack
content=dict(type='str', no_log=True),
dest=dict(type='path', required=True),
backup=dict(type='bool', default=False),
force=dict(type='bool', default=True),
validate=dict(type='str'),
directory_mode=dict(type='raw'),
remote_src=dict(type='bool'),
local_follow=dict(type='bool'),
checksum=dict(type='str'),
follow=dict(type='bool', default=False),
),
add_file_common_args=True,
supports_check_mode=True,
)
src = module.params['src']
b_src = to_bytes(src, errors='surrogate_or_strict')
dest = module.params['dest']
# Make sure we always have a directory component for later processing
if os.path.sep not in dest:
dest = '.{0}{1}'.format(os.path.sep, dest)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
backup = module.params['backup']
force = module.params['force']
_original_basename = module.params.get('_original_basename', None)
validate = module.params.get('validate', None)
follow = module.params['follow']
local_follow = module.params['local_follow']
mode = module.params['mode']
owner = module.params['owner']
group = module.params['group']
remote_src = module.params['remote_src']
checksum = module.params['checksum']
if not os.path.exists(b_src):
module.fail_json(msg="Source %s not found" % (src))
if not os.access(b_src, os.R_OK):
module.fail_json(msg="Source %s not readable" % (src))
# Preserve is usually handled in the action plugin but mode + remote_src has to be done on the
# remote host
if module.params['mode'] == 'preserve':
module.params['mode'] = '0%03o' % stat.S_IMODE(os.stat(b_src).st_mode)
mode = module.params['mode']
changed = False
checksum_dest = None
checksum_src = None
md5sum_src = None
if os.path.isfile(src):
try:
checksum_src = module.sha1(src)
except (OSError, IOError) as e:
module.warn("Unable to calculate src checksum, assuming change: %s" % to_native(e))
try:
# Backwards compat only. This will be None in FIPS mode
md5sum_src = module.md5(src)
except ValueError:
pass
elif remote_src and not os.path.isdir(src):
module.fail_json("Cannot copy invalid source '%s': not a file" % to_native(src))
if checksum and checksum_src != checksum:
module.fail_json(
msg='Copied file does not match the expected checksum. Transfer failed.',
checksum=checksum_src,
expected_checksum=checksum
)
# Special handling for recursive copy - create intermediate dirs
if dest.endswith(os.sep):
if _original_basename:
dest = os.path.join(dest, _original_basename)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
dirname = os.path.dirname(dest)
b_dirname = to_bytes(dirname, errors='surrogate_or_strict')
if not os.path.exists(b_dirname):
try:
(pre_existing_dir, new_directory_list) = split_pre_existing_dir(dirname)
except AnsibleModuleError as e:
e.results['msg'] += ' Could not copy to {0}'.format(dest)
module.fail_json(**e.results)
os.makedirs(b_dirname)
changed = True
directory_args = module.load_file_common_arguments(module.params)
directory_mode = module.params["directory_mode"]
if directory_mode is not None:
directory_args['mode'] = directory_mode
else:
directory_args['mode'] = None
adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed)
if os.path.isdir(b_dest):
basename = os.path.basename(src)
if _original_basename:
basename = _original_basename
dest = os.path.join(dest, basename)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
if os.path.islink(b_dest) and follow:
b_dest = os.path.realpath(b_dest)
dest = to_native(b_dest, errors='surrogate_or_strict')
if not force:
module.exit_json(msg="file already exists", src=src, dest=dest, changed=False)
if os.access(b_dest, os.R_OK) and os.path.isfile(b_dest):
checksum_dest = module.sha1(dest)
else:
if not os.path.exists(os.path.dirname(b_dest)):
try:
# os.path.exists() can return false in some
# circumstances where the directory does not have
# the execute bit for the current user set, in
# which case the stat() call will raise an OSError
os.stat(os.path.dirname(b_dest))
except OSError as e:
if "permission denied" in to_native(e).lower():
module.fail_json(msg="Destination directory %s is not accessible" % (os.path.dirname(dest)))
module.fail_json(msg="Destination directory %s does not exist" % (os.path.dirname(dest)))
if not os.access(os.path.dirname(b_dest), os.W_OK) and not module.params['unsafe_writes']:
module.fail_json(msg="Destination %s not writable" % (os.path.dirname(dest)))
backup_file = None
if checksum_src != checksum_dest or os.path.islink(b_dest):
if not module.check_mode:
try:
if backup:
if os.path.exists(b_dest):
backup_file = module.backup_local(dest)
# allow for conversion from symlink.
if os.path.islink(b_dest):
os.unlink(b_dest)
open(b_dest, 'w').close()
if validate:
# if we have a mode, make sure we set it on the temporary
# file source as some validations may require it
if mode is not None:
module.set_mode_if_different(src, mode, False)
if owner is not None:
module.set_owner_if_different(src, owner, False)
if group is not None:
module.set_group_if_different(src, group, False)
if "%s" not in validate:
module.fail_json(msg="validate must contain %%s: %s" % (validate))
(rc, out, err) = module.run_command(validate % src)
if rc != 0:
module.fail_json(msg="failed to validate", exit_status=rc, stdout=out, stderr=err)
b_mysrc = b_src
if remote_src and os.path.isfile(b_src):
_, b_mysrc = tempfile.mkstemp(dir=os.path.dirname(b_dest))
shutil.copyfile(b_src, b_mysrc)
try:
shutil.copystat(b_src, b_mysrc)
except OSError as err:
if err.errno == errno.ENOSYS and mode == "preserve":
module.warn("Unable to copy stats {0}".format(to_native(b_src)))
else:
raise
# might be needed below
if PY3 and hasattr(os, 'listxattr'):
try:
src_has_acls = 'system.posix_acl_access' in os.listxattr(src)
except Exception as e:
# assume unwanted ACLs by default
src_has_acls = True
# at this point we should always have tmp file
module.atomic_move(b_mysrc, dest, unsafe_writes=module.params['unsafe_writes'])
if PY3 and hasattr(os, 'listxattr') and platform.system() == 'Linux' and not remote_src:
# atomic_move used above to copy src into dest might, in some cases,
# use shutil.copy2 which in turn uses shutil.copystat.
# Since Python 3.3, shutil.copystat copies file extended attributes:
# https://docs.python.org/3/library/shutil.html#shutil.copystat
# os.listxattr (along with others) was added to handle the operation.
# This means that on Python 3 we are copying the extended attributes which includes
# the ACLs on some systems - further limited to Linux as the documentation above claims
# that the extended attributes are copied only on Linux. Also, os.listxattr is only
# available on Linux.
# If not remote_src, then the file was copied from the controller. In that
# case, any filesystem ACLs are artifacts of the copy rather than preservation
# of existing attributes. Get rid of them:
if src_has_acls:
# FIXME If dest has any default ACLs, they are not applied to src now because
# they were overridden by copystat. Should/can we do anything about this?
# 'system.posix_acl_default' in os.listxattr(os.path.dirname(b_dest))
try:
clear_facls(dest)
except ValueError as e:
if 'setfacl' in to_native(e):
# No setfacl so we're okay. The controller couldn't have set a facl
# without the setfacl command
pass
else:
raise
except RuntimeError as e:
# setfacl failed.
if 'Operation not supported' in to_native(e):
# The file system does not support ACLs.
pass
else:
raise
except (IOError, OSError):
module.fail_json(msg="failed to copy: %s to %s" % (src, dest), traceback=traceback.format_exc())
changed = True
# If neither have checksums, both src and dest are directories.
if checksum_src is None and checksum_dest is None:
if remote_src and os.path.isdir(module.params['src']):
b_src = to_bytes(module.params['src'], errors='surrogate_or_strict')
b_dest = to_bytes(module.params['dest'], errors='surrogate_or_strict')
if src.endswith(os.path.sep) and os.path.isdir(module.params['dest']):
diff_files_changed = copy_diff_files(b_src, b_dest, module)
left_only_changed = copy_left_only(b_src, b_dest, module)
common_dirs_changed = copy_common_dirs(b_src, b_dest, module)
owner_group_changed = chown_recursive(b_dest, module)
if diff_files_changed or left_only_changed or common_dirs_changed or owner_group_changed:
changed = True
if src.endswith(os.path.sep) and not os.path.exists(module.params['dest']):
b_basename = to_bytes(os.path.basename(src), errors='surrogate_or_strict')
b_dest = to_bytes(os.path.join(b_dest, b_basename), errors='surrogate_or_strict')
b_src = to_bytes(os.path.join(module.params['src'], ""), errors='surrogate_or_strict')
if not module.check_mode:
shutil.copytree(b_src, b_dest, symlinks=not local_follow)
chown_recursive(dest, module)
changed = True
if not src.endswith(os.path.sep) and os.path.isdir(module.params['dest']):
b_basename = to_bytes(os.path.basename(src), errors='surrogate_or_strict')
b_dest = to_bytes(os.path.join(b_dest, b_basename), errors='surrogate_or_strict')
b_src = to_bytes(os.path.join(module.params['src'], ""), errors='surrogate_or_strict')
if not module.check_mode and not os.path.exists(b_dest):
shutil.copytree(b_src, b_dest, symlinks=not local_follow)
changed = True
chown_recursive(dest, module)
if module.check_mode and not os.path.exists(b_dest):
changed = True
if os.path.exists(b_dest):
diff_files_changed = copy_diff_files(b_src, b_dest, module)
left_only_changed = copy_left_only(b_src, b_dest, module)
common_dirs_changed = copy_common_dirs(b_src, b_dest, module)
owner_group_changed = chown_recursive(b_dest, module)
if diff_files_changed or left_only_changed or common_dirs_changed or owner_group_changed:
changed = True
if not src.endswith(os.path.sep) and not os.path.exists(module.params['dest']):
b_basename = to_bytes(os.path.basename(module.params['src']), errors='surrogate_or_strict')
b_dest = to_bytes(os.path.join(b_dest, b_basename), errors='surrogate_or_strict')
if not module.check_mode and not os.path.exists(b_dest):
os.makedirs(b_dest)
changed = True
b_src = to_bytes(os.path.join(module.params['src'], ""), errors='surrogate_or_strict')
diff_files_changed = copy_diff_files(b_src, b_dest, module)
left_only_changed = copy_left_only(b_src, b_dest, module)
common_dirs_changed = copy_common_dirs(b_src, b_dest, module)
owner_group_changed = chown_recursive(b_dest, module)
if module.check_mode and not os.path.exists(b_dest):
changed = True
res_args = dict(
dest=dest, src=src, md5sum=md5sum_src, checksum=checksum_src, changed=changed
)
if backup_file:
res_args['backup_file'] = backup_file
if not module.check_mode:
file_args = module.load_file_common_arguments(module.params, path=dest)
res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'])
module.exit_json(**res_args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,957 |
copy module does not reflect 'changed' for mode in check mode with remote_src: yes
|
### Summary
I use the `ansible.builtin.copy` module. Unfortunately, there is a combination of parameters where check mode prints `ok`, even though an actual change is made when the task runs for real.
The unexpected behavior is reproducible; tested on `2.9.27`, `2.12.6`, and `2.13.0`.
### Issue Type
Bug Report
### Component Name
copy
### Ansible Version
```console
$ ansible --version
ansible 2.9.27
config file = None
configured module search path = ['/home/phoffmann/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/phoffmann/.local/share/virtualenvs/ansible-2.9-TCZdLugh/lib/python3.8/site-packages/ansible
executable location = /home/phoffmann/.local/share/virtualenvs/ansible-2.9-TCZdLugh/bin/ansible
python version = 3.8.12 (default, Apr 8 2022, 11:41:59) [GCC 9.4.0]
$ ansible --version
ansible [core 2.12.6]
config file = None
configured module search path = ['/home/phoffmann/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/phoffmann/.pyenv/versions/3.8.12/lib/python3.8/site-packages/ansible
ansible collection location = /home/phoffmann/.ansible/collections:/usr/share/ansible/collections
executable location = /home/phoffmann/.pyenv/versions/3.8.12/bin/ansible
python version = 3.8.12 (default, Apr 8 2022, 11:41:59) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
$ ansible --version
ansible [core 2.13.0]
config file = None
configured module search path = ['/home/phoffmann/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/phoffmann/.local/share/virtualenvs/ansible-2.13-v8S06Uvz/lib/python3.8/site-packages/ansible
ansible collection location = /home/phoffmann/.ansible/collections:/usr/share/ansible/collections
executable location = /home/phoffmann/.local/share/virtualenvs/ansible-2.13-v8S06Uvz/bin/ansible
python version = 3.8.12 (default, Apr 8 2022, 11:41:59) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Ubuntu 20.04
### Steps to Reproduce
```yaml
---
- hosts: localhost
tasks:
- name: create file
file:
path: /tmp/ansible_foo
state: touch
owner: '{{ ansible_env.USER }}'
group: '{{ ansible_env.USER }}'
mode: 0600
- name: Copy file with permissions
copy:
src: /tmp/ansible_foo
dest: /tmp/ansible_foo2
mode: 0644
remote_src: yes
- name: create file
file:
path: /tmp/ansible_foo2
owner: '{{ ansible_env.USER }}'
group: '{{ ansible_env.USER }}'
mode: 0600
```
### Expected Results
I expect task `Copy file with permissions` to print `changed`
Expected Result:
```
TASK [Copy file with permissions] ********************************************************************************************************************
--- before
+++ after
@@ -1,4 +1,4 @@
{
- "mode": "0600",
+ "mode": "0644",
"path": "/tmp/ansible_foo2"
}
```
It prints the expected `changed` as soon as `remote_src: yes` is removed.
### Actual Results
```console
TASK [Copy file with permissions] ********************************************************************************************************************
ok: [localhost]
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77957
|
https://github.com/ansible/ansible/pull/78624
|
c564c6e21e4538b475df2ae4b3f66b73decff160
|
b7a0e0d79278906c57c6dfc637d0e0b09b45db34
| 2022-06-02T15:25:14Z |
python
| 2023-03-08T20:40:01Z |
test/integration/targets/copy/tasks/check_mode.yml
|
- block:
- name: check_mode - Create another clean copy of 'subdir' not messed with by previous tests (check_mode)
copy:
src: subdir
dest: 'checkmode_subdir/'
directory_mode: 0700
local_follow: False
check_mode: true
register: check_mode_subdir_first
- name: check_mode - Stat the new dir to make sure it really doesn't exist
stat:
path: 'checkmode_subdir/'
register: check_mode_subdir_first_stat
- name: check_mode - Actually do it
copy:
src: subdir
dest: 'checkmode_subdir/'
directory_mode: 0700
local_follow: False
register: check_mode_subdir_real
- name: check_mode - Stat the new dir to make sure it really exists
stat:
path: 'checkmode_subdir/'
register: check_mode_subdir_real_stat
# Quick sanity before we move on
- assert:
that:
- check_mode_subdir_first is changed
- not check_mode_subdir_first_stat.stat.exists
- check_mode_subdir_real is changed
- check_mode_subdir_real_stat.stat.exists
# Do some finagling here. First, use check_mode to ensure it never gets
# created. Then actually create it, and use check_mode to ensure that doing
# the same copy gets marked as no change.
#
# This same pattern repeats for several other src/dest combinations.
- name: check_mode - Ensure dest with trailing / never gets created but would be without check_mode
copy:
remote_src: true
src: 'checkmode_subdir/'
dest: 'destdir_should_never_exist_because_of_check_mode/'
follow: true
check_mode: true
register: check_mode_trailing_slash_first
- name: check_mode - Stat the new dir to make sure it really doesn't exist
stat:
path: 'destdir_should_never_exist_because_of_check_mode/'
register: check_mode_trailing_slash_first_stat
- name: check_mode - Create the above copy for real now (without check_mode)
copy:
remote_src: true
src: 'checkmode_subdir/'
dest: 'destdir_should_never_exist_because_of_check_mode/'
register: check_mode_trailing_slash_real
- name: check_mode - Stat the new dir to make sure it really exists
stat:
path: 'destdir_should_never_exist_because_of_check_mode/'
register: check_mode_trailing_slash_real_stat
- name: check_mode - Do the same copy yet again (with check_mode this time) to ensure it's marked unchanged
copy:
remote_src: true
src: 'checkmode_subdir/'
dest: 'destdir_should_never_exist_because_of_check_mode/'
check_mode: true
register: check_mode_trailing_slash_second
# Repeat the same basic pattern here.
- name: check_mode - Do another basic copy (with check_mode)
copy:
src: foo.txt
dest: "{{ remote_dir }}/foo-check_mode.txt"
mode: 0444
check_mode: true
register: check_mode_foo_first
- name: check_mode - Stat the new file to make sure it really doesn't exist
stat:
path: "{{ remote_dir }}/foo-check_mode.txt"
register: check_mode_foo_first_stat
- name: check_mode - Do the same basic copy (without check_mode)
copy:
src: foo.txt
dest: "{{ remote_dir }}/foo-check_mode.txt"
mode: 0444
register: check_mode_foo_real
- name: check_mode - Stat the new file to make sure it really exists
stat:
path: "{{ remote_dir }}/foo-check_mode.txt"
register: check_mode_foo_real_stat
- name: check_mode - And again (with check_mode)
copy:
src: foo.txt
dest: "{{ remote_dir }}/foo-check_mode.txt"
mode: 0444
register: check_mode_foo_second
- assert:
that:
- check_mode_subdir_first is changed
- check_mode_trailing_slash_first is changed
# TODO: This is a legitimate bug
#- not check_mode_trailing_slash_first_stat.stat.exists
- check_mode_trailing_slash_real is changed
- check_mode_trailing_slash_real_stat.stat.exists
- check_mode_trailing_slash_second is not changed
- check_mode_foo_first is changed
- not check_mode_foo_first_stat.stat.exists
- check_mode_foo_real is changed
- check_mode_foo_real_stat.stat.exists
- check_mode_foo_second is not changed
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,807 |
Cleaning up Old Links and References in Ansible Documentation
|
### Summary
Dear Team,
I would like to start some cleanup of the Ansible documentation (https://docs.ansible.com/), where many links point to unmaintained versions/pages of the documentation.
Eg:
Page: https://docs.ansible.com/ansible/latest/plugins/connection.html
"paramiko SSH " link is pointing to an unmaintained version link (https://docs.ansible.com/ansible/2.8/plugins/connection/paramiko_ssh.html#paramiko-ssh-connection), instead of the latest collection information.
1. Could anyone guide me on whether it is okay to add an entry there with the latest link?
2. Could you share some guidelines for linking to a different page instead of the simple :ref:`paramiko SSH<paramiko_ssh_connection>`?
Thank you.
### Issue Type
Documentation Report
### Component Name
ansible/docs/docsite/rst/plugins/connection.rst
### Ansible Version
```console
$ ansible --version
2.9
```
### Configuration
```console
$ ansible-config dump --only-changed
NA
```
### OS / Environment
All OS
### Additional Information
It would be great to link to the latest docs even from old pages. Otherwise, users may click through and keep browsing old page references.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74807
|
https://github.com/ansible/ansible/pull/80194
|
507fd1bd60f529681c96fa526ffa7835e122ddcb
|
0937cc486219663b6b6e6a178ef40798217864fa
| 2021-05-24T09:55:28Z |
python
| 2023-03-16T20:13:01Z |
docs/docsite/rst/plugins/connection.rst
|
.. _connection_plugins:
Connection plugins
==================
.. contents::
:local:
:depth: 2
Connection plugins allow Ansible to connect to the target hosts so it can execute tasks on them. Ansible ships with many connection plugins, but only one can be used per host at a time.
By default, Ansible ships with several connection plugins. The most commonly used are the :ref:`paramiko SSH<paramiko_ssh_connection>`, native ssh (just called :ref:`ssh<ssh_connection>`), and :ref:`local<local_connection>` connection types. All of these can be used in playbooks and with :command:`/usr/bin/ansible` to decide how you want to talk to remote machines. If necessary, you can :ref:`create custom connection plugins <developing_connection_plugins>`.
The basics of these connection types are covered in the :ref:`getting started<intro_getting_started>` section.
.. _ssh_plugins:
``ssh`` plugins
---------------
Because ssh is the default protocol used in system administration and the protocol most used in Ansible, ssh options are included in the command line tools. See :ref:`ansible-playbook` for more details.
.. _enabling_connection:
Adding connection plugins
-------------------------
You can extend Ansible to support other transports (such as SNMP or message bus) by dropping a custom plugin
into the ``connection_plugins`` directory.
.. _using_connection:
Using connection plugins
------------------------
You can set the connection plugin globally via :ref:`configuration<ansible_configuration_settings>`, at the command line (``-c``, ``--connection``), as a :ref:`keyword <playbook_keywords>` in your play, or by setting a :ref:`variable<behavioral_parameters>`, most often in your inventory.
For example, for Windows machines you might want to set the :ref:`winrm <winrm_connection>` plugin as an inventory variable.
Most connection plugins can operate with minimal configuration. By default, they use the :ref:`inventory hostname<inventory_hostnames_lookup>` and default settings to find the target host.
Plugins are self-documenting. Each plugin should document its configuration options. The following are connection variables common to most connection plugins:
:ref:`ansible_host<magic_variables_and_hostvars>`
The name of the host to connect to, if different from the :ref:`inventory <intro_inventory>` hostname.
:ref:`ansible_port<faq_setting_users_and_ports>`
The ssh port number, for :ref:`ssh <ssh_connection>` and :ref:`paramiko_ssh <paramiko_ssh_connection>` it defaults to 22.
:ref:`ansible_user<faq_setting_users_and_ports>`
The default user name to use for log in. Most plugins default to the 'current user running Ansible'.
Each plugin might also have a specific version of a variable that overrides the general version. For example, ``ansible_ssh_host`` for the :ref:`ssh <ssh_connection>` plugin.
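For example, a small YAML inventory could set several of these variables per host. The host names, addresses, and user below are purely illustrative:
.. code:: yaml
   all:
     hosts:
       web1.example.com:
         ansible_host: 192.0.2.10    # connect to this address instead of the inventory name
         ansible_port: 2222          # non-default ssh port
         ansible_user: deploy        # log in as this user
       win1.example.com:
         ansible_connection: winrm   # use the winrm connection plugin for this host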
.. _connection_plugin_list:
Plugin list
-----------
You can use ``ansible-doc -t connection -l`` to see the list of available plugins.
Use ``ansible-doc -t connection <plugin name>`` to see detailed documentation and examples.
.. seealso::
:ref:`Working with Playbooks<working_with_playbooks>`
An introduction to playbooks
:ref:`callback_plugins`
Callback plugins
:ref:`filter_plugins`
Filter plugins
:ref:`test_plugins`
Test plugins
:ref:`lookup_plugins`
Lookup plugins
:ref:`vars_plugins`
Vars plugins
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,209 |
Allow linking to a specific variable on the "Special Variables" page
|
### Summary
On the _[Special Variables](https://docs.ansible.com/ansible/latest/reference_appendices/special_variables.html)_ page, it is currently not possible to link to a specific variable.
This feature is, however, available on the _[Glossary](https://docs.ansible.com/ansible/latest/reference_appendices/glossary.html)_ page, via the use of a `.. glossary::` block.
This generates direct links to terms, e.g.: https://docs.ansible.com/ansible/latest/reference_appendices/glossary.html#term-Idempotency
This issue is to propose the same usage, in order to have links like https://docs.ansible.com/ansible/latest/reference_appendices/special_variables.html#term-group_names
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/reference_appendices/special_variables.rst
### Ansible Version
```console
Latest version of the documentation
```
### Configuration
```console
N/A
```
### OS / Environment
N/A
### Additional Information
N/A
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80209
|
https://github.com/ansible/ansible/pull/80210
|
fc8203168e964b26478a0f28b0e34d9b34331fde
|
1491ec8019b064374145dace41b1320e04fb494b
| 2023-03-14T11:39:56Z |
python
| 2023-03-17T13:55:04Z |
docs/docsite/rst/reference_appendices/special_variables.rst
|
.. _special_variables:
Special Variables
=================
Magic variables
---------------
These variables cannot be set directly by the user; Ansible will always override them to reflect internal state.
ansible_check_mode
Boolean that indicates if we are in check mode or not
ansible_config_file
The full path of used Ansible configuration file
ansible_dependent_role_names
The names of the roles currently imported into the current play as dependencies of other plays
ansible_diff_mode
Boolean that indicates if we are in diff mode or not
ansible_forks
Integer reflecting the number of maximum forks available to this run
ansible_inventory_sources
List of sources used as inventory
ansible_limit
Contents of the ``--limit`` CLI option for the current execution of Ansible
ansible_loop
A dictionary/map containing extended loop information when enabled through ``loop_control.extended``
ansible_loop_var
The name of the value provided to ``loop_control.loop_var``. Added in ``2.8``
ansible_index_var
The name of the value provided to ``loop_control.index_var``. Added in ``2.9``
ansible_parent_role_names
When the current role is being executed by means of an :ref:`include_role <include_role_module>` or :ref:`import_role <import_role_module>` action, this variable contains a list of all parent roles, with the most recent role (in other words, the role that included/imported this role) being the first item in the list.
When multiple inclusions occur, this list lists the *last* role (in other words, the role that included this role) as the *first* item in the list. It is also possible that a specific role exists more than once in this list.
For example: When role **A** includes role **B**, inside role B, ``ansible_parent_role_names`` will be equal to ``['A']``. If role **B** then includes role **C**, the list becomes ``['B', 'A']``.
ansible_parent_role_paths
When the current role is being executed by means of an :ref:`include_role <include_role_module>` or :ref:`import_role <import_role_module>` action, this variable contains a list of all parent roles paths, with the most recent role (in other words, the role that included/imported this role) being the first item in the list.
Please refer to ``ansible_parent_role_names`` for the order of items in this list.
ansible_play_batch
List of active hosts in the current play run limited by the serial, aka 'batch'. Failed/Unreachable hosts are not considered 'active'.
ansible_play_hosts
List of hosts in the current play run, not limited by the serial. Failed/Unreachable hosts are excluded from this list.
ansible_play_hosts_all
List of all the hosts that were targeted by the play
ansible_play_role_names
The names of the roles currently imported into the current play. This list does **not** contain the role names that are
implicitly included through dependencies.
ansible_playbook_python
The path to the python interpreter being used by Ansible on the controller
ansible_role_names
The names of the roles currently imported into the current play, or roles referenced as dependencies of the roles
imported into the current play.
ansible_role_name
The fully qualified collection role name, in the format of ``namespace.collection.role_name``
ansible_collection_name
The name of the collection the task that is executing is a part of. In the format of ``namespace.collection``
ansible_run_tags
Contents of the ``--tags`` CLI option, which specifies which tags will be included for the current run. Note that if ``--tags`` is not passed, this variable will default to ``["all"]``.
ansible_search_path
Current search path for action plugins and lookups, in other words, where we search for relative paths when you do ``template: src=myfile``
ansible_skip_tags
Contents of the ``--skip-tags`` CLI option, which specifies which tags will be skipped for the current run.
ansible_verbosity
Current verbosity setting for Ansible
ansible_version
Dictionary/map that contains information about the current running version of ansible; it has the following keys: full, major, minor, revision, and string.
group_names
List of groups the current host is part of
groups
A dictionary/map with all the groups in inventory and each group has the list of hosts that belong to it
hostvars
A dictionary/map with all the hosts in inventory and variables assigned to them
inventory_hostname
The inventory name for the 'current' host being iterated over in the play
inventory_hostname_short
The short version of `inventory_hostname`
inventory_dir
The directory of the inventory source in which the `inventory_hostname` was first defined
inventory_file
The file name of the inventory source in which the `inventory_hostname` was first defined
omit
Special variable that allows you to 'omit' an option in a task, for example ``- user: name=bob home={{ bobs_home|default(omit) }}``
play_hosts
Deprecated, the same as ansible_play_batch
ansible_play_name
The name of the currently executed play. Added in ``2.8``. (`name` attribute of the play, not file name of the playbook.)
playbook_dir
The path to the directory of the current playbook being executed. NOTE: This might be different than directory of the playbook passed to the ``ansible-playbook`` command line when a playbook contains a ``import_playbook`` statement.
role_name
The name of the role currently being executed.
role_names
Deprecated, the same as ansible_play_role_names
role_path
The path to the dir of the currently running role
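As an illustration only, a task can reference several of the variables above directly; the task below is a made-up sketch:
.. code:: yaml
   - name: Show inventory placement for the current host
     ansible.builtin.debug:
       msg: "{{ inventory_hostname }} is a member of {{ group_names }} and was defined in {{ inventory_file }}"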
Facts
-----
These are variables that contain information pertinent to the current host (`inventory_hostname`). They are only available if gathered first. See :ref:`vars_and_facts` for more information.
ansible_facts
Contains any facts gathered or cached for the `inventory_hostname`
Facts are normally gathered by the :ref:`setup <setup_module>` module automatically in a play, but any module can return facts.
ansible_local
Contains any 'local facts' gathered or cached for the `inventory_hostname`.
The keys available depend on the custom facts created.
See the :ref:`setup <setup_module>` module and :ref:`local_facts` for more details.
.. _connection_variables:
Connection variables
---------------------
Connection variables are normally used to set the specifics on how to execute actions on a target. Most of them correspond to connection plugins, but not all are specific to them; other plugins like shell, terminal and become are normally involved.
Only the common ones are described as each connection/become/shell/etc plugin can define its own overrides and specific variables.
See :ref:`general_precedence_rules` for how connection variables interact with :ref:`configuration settings<ansible_configuration_settings>`, :ref:`command-line options<command_line_tools>`, and :ref:`playbook keywords<playbook_keywords>`.
ansible_become_user
The user Ansible 'becomes' after using privilege escalation. This must be available to the 'login user'.
ansible_connection
The connection plugin actually used for the task on the target host.
ansible_host
The ip/name of the target host to use instead of `inventory_hostname`.
ansible_python_interpreter
The path to the Python executable Ansible should use on the target host.
ansible_user
The user Ansible 'logs in' as.
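For example, a hypothetical ``host_vars`` file could pin several of these for a single host; all values below are placeholders:
.. code:: yaml
   # host_vars/db1.example.com.yml (hypothetical)
   ansible_host: 192.0.2.30
   ansible_user: admin
   ansible_become_user: postgres
   ansible_python_interpreter: /usr/bin/python3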
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,882 |
Move ansible-collections requirements to docs.ansible.com/ansible
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The community maintains a list of requirements for collections at https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst. Incorporate this content into the documentation as part of the dev_guide.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
collections
##### ANSIBLE VERSION
2.10
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/71882
|
https://github.com/ansible/ansible/pull/80234
|
a2dc5fcc7da366e9d2c541863a7de2b0424ea773
|
cba395243454b0a959edea20425618fe7b9be775
| 2020-09-23T15:26:54Z |
python
| 2023-03-21T20:59:26Z |
docs/docsite/rst/community/collection_contributors/collection_requirements.rst
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,882 |
Move ansible-collections requirements to docs.ansible.com/ansible
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The community maintains a list of requirements for collections at https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst. Incorporate this content into the documentation as part of the dev_guide.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
collections
##### ANSIBLE VERSION
2.10
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/71882
|
https://github.com/ansible/ansible/pull/80234
|
a2dc5fcc7da366e9d2c541863a7de2b0424ea773
|
cba395243454b0a959edea20425618fe7b9be775
| 2020-09-23T15:26:54Z |
python
| 2023-03-21T20:59:26Z |
docs/docsite/rst/community/collection_contributors/collection_reviewing.rst
|
.. _review_checklist:
Review checklist for collection PRs
====================================
Use this section as a checklist reminder of items to review when you review a collection PR.
Reviewing bug reports
----------------------
When users report bugs, verify the behavior reported. Remember always to be kind with your feedback.
* Did the user make a mistake in the code they put in the Steps to Reproduce issue section? We often see user errors reported as bugs.
* Did the user assume an unexpected behavior? Ensure that the related documentation is clear. If not, the issue is useful to help us improve documentation.
* Is there a minimal reproducer? If not, ask the reporter to reduce the complexity to help pinpoint the issue.
* Is the issue a consequence of a misconfigured environment?
* If it seems to be a real bug, does the behaviour still exist in the most recent release or the development branch?
* Reproduce the bug, or if you do not have a suitable infrastructure, ask other contributors to reproduce the bug.
Reviewing suggested changes
---------------------------
When reviewing PRs, verify that the suggested changes do not:
* Unnecessarily break backward compatibility.
* Bring more harm than value.
* Introduce non-idempotent solutions.
* Duplicate already existing features (inside or outside the collection).
* Violate the :ref:`Ansible development conventions <module_conventions>`.
Other standards to check for in a PR include:
* A pull request MUST NOT contain a mix of bug fixes and new features that are not tightly related. If it does, ask the author to split the pull request into separate PRs.
* If the pull request is not a documentation fix, it must include a :ref:`changelog fragment <collection_changelog_fragments>` (a minimal example fragment is shown after this list). Check the format carefully as follows:
* New modules and plugins (that are not jinja2 filter and test plugins) do not need changelog fragments.
* For jinja2 filter and test plugins, check out the `special syntax for changelog fragments <https://github.com/ansible-community/antsibull-changelog/blob/main/docs/changelogs.rst#adding-new-roles-playbooks-test-and-filter-plugins>`_.
* The changelog content contains useful information for end users of the collection.
* If new files are added with the pull request, they follow the `licensing rules <https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst#licensing>`_.
* The changes follow the :ref:`Ansible documentation standards <developing_modules_documenting>` and the :ref:`style_guide`.
* The changes follow the :ref:`Development conventions <developing_modules_best_practices>`.
* If a new plugin is added, it is one of the `allowed plugin types <https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst#modules-plugins>`_.
* Documentation, examples, and return sections use FQCNs for the ``M(..)`` :ref:`format macros <module_documents_linking>` when referring to modules.
* Modules and plugins from ansible-core use ``ansible.builtin.`` as an FQCN prefix when mentioned.
* When a new option, module, plugin, or return value is added, the corresponding documentation or return sections use ``version_added:`` containing the *collection* version in which they will be first released.
* This is typically the next minor release, sometimes the next major release (for example, if 2.7.5 is the current release, the next minor release will be 2.8.0, and the next major release will be 3.0.0).
* FQCNs are used for ``extends_documentation_fragment:``, unless the author is referring to doc_fragments from ansible-core.
* New features have corresponding examples in the :ref:`examples_block`.
* Return values are documented in the :ref:`return_block`.
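For reference, a minimal changelog fragment for a bug fix typically looks like the sketch below; the file name, module name, and PR link are hypothetical:
.. code:: yaml
   # changelogs/fragments/123-my_module-error-message.yml (hypothetical)
   bugfixes:
     - my_module - clarify the error message shown when the target file does not exist
       (https://github.com/ansible-collections/community.mycollection/pull/123).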
Review tests in the PR
----------------------
Review the following if tests are applicable and possible to implement for the changes included in the PR:
* Where applicable, the pull request has :ref:`testing_integration` and :ref:`testing_units`.
* All changes are covered. For example, a bug case or a new option separately and in sensible combinations with other options.
* Integration tests cover ``check_mode`` if supported.
* Integration tests check the actual state of the system, not only what the module reports. For example, if the module actually changes a file, check that the file was changed by using the ``ansible.builtin.stat`` module (see the sketch after this list).
* Integration tests check return values, if applicable.
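A sketch of such a state check; the module choice, paths, and registered variable names are illustrative:
.. code:: yaml
   - name: Copy the file under test
     ansible.builtin.copy:
       src: foo.txt
       dest: /tmp/foo_under_test.txt
     register: copy_result
   - name: Check the real state on disk, not only the module's return value
     ansible.builtin.stat:
       path: /tmp/foo_under_test.txt
     register: stat_result
   - name: Assert both the reported and the observed results
     ansible.builtin.assert:
       that:
         - copy_result is changed
         - stat_result.stat.exists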
Review for merge commits and breaking changes
---------------------------------------------
* The pull request does not contain merge commits. See the GitHub warnings at the bottom of the pull request. If merge commits are present, ask the author to rebase the pull request branch.
* If the pull request contains breaking changes, ask the author and the collection maintainers if it really is needed, and if there is a way not to introduce breaking changes. If breaking changes are present, they MUST only appear in the next major release and MUST NOT appear in a minor or patch release. The only exception is breaking changes caused by security fixes that are absolutely necessary to fix the security issue.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,882 |
Move ansible-collections requirements to docs.ansible.com/ansible
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The community maintains a list of requirements for collections at https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst. Incorporate this content into the documentation as part of the dev_guide.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
collections
##### ANSIBLE VERSION
2.10
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/71882
|
https://github.com/ansible/ansible/pull/80234
|
a2dc5fcc7da366e9d2c541863a7de2b0424ea773
|
cba395243454b0a959edea20425618fe7b9be775
| 2020-09-23T15:26:54Z |
python
| 2023-03-21T20:59:26Z |
docs/docsite/rst/community/collection_contributors/collection_unit_tests.rst
|
.. _collection_unit_tests:
******************************
Add unit tests to a collection
******************************
This section describes all of the steps needed to add unit tests to a collection and how to run them locally using the ``ansible-test`` command.
See :ref:`testing_units_modules` for more details.
.. contents::
:local:
Understanding the purpose of unit tests
========================================
Unit tests ensure that a section of code (known as a ``unit``) meets its design requirements and behaves as intended. Some collections do not have unit tests, but that does not mean they are not needed.
A ``unit`` is a function or method of a class used in a module or plugin. Unit tests verify that a function with a certain input returns the expected output.
Unit tests should also verify when a function raises or handles exceptions.
Ansible uses `pytest <https://docs.pytest.org/en/latest/>`_ as a testing framework.
See :ref:`testing_units_modules` for complete details.
Inclusion in the Ansible package `requires integration and/or unit tests <https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst#requirements-for-collections-to-be-included-in-the-ansible-package>`_. You should have tests for your collection as well as for individual modules and plugins to make your code more reliable. To learn how to get started with integration tests, see :ref:`collection_integration_tests`.
See :ref:`collection_prepare_local` to prepare your environment.
.. _collection_unit_test_required:
Determine if unit tests exist
=============================
Ansible collection unit tests are located in the ``tests/unit`` directory.
The structure of the unit tests matches the structure of the code base, so the tests can reside in the ``tests/unit/plugins/modules/`` and ``tests/unit/plugins/module_utils`` directories. There can be sub-directories, if modules are organized by module groups.
If you are adding unit tests for ``my_module`` for example, check to see if the tests already exist in the collection source tree with the path ``tests/unit/plugins/modules/test_my_module.py``.
Example of unit tests
=====================
Let's assume that the following function is in ``my_module``:
.. code:: python
def convert_to_supported(val):
"""Convert unsupported types to appropriate."""
if isinstance(val, decimal.Decimal):
return float(val)
if isinstance(val, datetime.timedelta):
return str(val)
if val == 42:
raise ValueError("This number is just too cool for us ;)")
return val
Unit tests for this function should, at a minimum, check the following:
* If the function gets a ``Decimal`` argument, it returns a corresponding ``float`` value.
* If the function gets a ``timedelta`` argument, it returns a corresponding ``str`` value.
* If the function gets ``42`` as an argument, it raises a ``ValueError``.
* If the function gets an argument of any other type, it does nothing and returns the same value.
To write these unit tests in a collection called ``community.mycollection``:
1. If you already have your local environment :ref:`prepared <collection_prepare_local>`, go to the collection root directory.
.. code:: bash
cd ~/ansible_collection/community/mycollection
2. Create a test file for ``my_module``. If the path does not exist, create it.
.. code:: bash
touch tests/unit/plugins/modules/test_my_module.py
3. Add the following code to the file:
.. code:: python
# -*- coding: utf-8 -*-
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from datetime import timedelta
from decimal import Decimal
import pytest
from ansible_collections.community.mycollection.plugins.modules.my_module import (
convert_to_supported,
)
# We use the @pytest.mark.parametrize decorator to parametrize the function
# https://docs.pytest.org/en/latest/how-to/parametrize.html
# Simply put, the first element of each tuple will be passed to
# the test_convert_to_supported function as the test_input argument
# and the second element of each tuple will be passed as
# the expected argument.
# In the function's body, we use the assert statement to check
# if the convert_to_supported function given the test_input,
# returns what we expect.
@pytest.mark.parametrize('test_input, expected', [
(timedelta(0, 43200), '12:00:00'),
(Decimal('1.01'), 1.01),
('string', 'string'),
(None, None),
(1, 1),
])
def test_convert_to_supported(test_input, expected):
assert convert_to_supported(test_input) == expected
def test_convert_to_supported_exception():
with pytest.raises(ValueError, match=r"too cool"):
convert_to_supported(42)
See :ref:`testing_units_modules` for examples on how to mock ``AnsibleModule`` objects, monkeypatch methods (``module.fail_json``, ``module.exit_json``), emulate API responses, and more.
4. Run the tests using docker:
.. code:: bash
ansible-test units tests/unit/plugins/modules/test_my_module.py --docker
.. _collection_recommendation_unit:
Recommendations on coverage
===========================
Use the following tips to organize your code and test coverage:
* Make your functions simple. Small functions that do one thing with no or minimal side effects are easier to test.
* Test all possible behaviors of a function including exception related ones such as raising, catching and handling exceptions.
* When a function invokes the ``module.fail_json`` method, passed messages should also be checked.
.. seealso::
:ref:`testing_units_modules`
Unit testing Ansible modules
:ref:`developing_testing`
Ansible Testing Guide
:ref:`collection_integration_tests`
Integration testing for collections
:ref:`testing_integration`
Integration tests guide
:ref:`testing_collections`
Testing collections
:ref:`testing_resource_modules`
Resource module integration tests
:ref:`collection_pr_test`
How to test a pull request locally
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,882 |
Move ansible-collections requirements to docs.ansible.com/ansible
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The community maintains a list of requirements for collections at https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst. Incorporate this content into the as part of the dev_guide.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
collections
##### ANSIBLE VERSION
2.10
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/71882
|
https://github.com/ansible/ansible/pull/80234
|
a2dc5fcc7da366e9d2c541863a7de2b0424ea773
|
cba395243454b0a959edea20425618fe7b9be775
| 2020-09-23T15:26:54Z |
python
| 2023-03-21T20:59:26Z |
docs/docsite/rst/community/contributions_collections.rst
|
.. _collections_contributions:
*************************************
Ansible Collections Contributor Guide
*************************************
.. toctree::
:maxdepth: 2
collection_development_process
reporting_collections
create_pr_quick_start
collection_contributors/test_index
collection_contributors/collection_reviewing
maintainers
contributing_maintained_collections
steering/steering_index
documentation_contributions
other_tools_and_programs
If you have a specific Ansible interest or expertise (for example, VMware, Linode, and so on), consider joining a :ref:`working group <working_group_list>`.
Working with the Ansible collection repositories
=================================================
* How can I find :ref:`editors, linters, and other tools <other_tools_and_programs>` that will support my Ansible development efforts?
* Where can I find guidance on :ref:`coding in Ansible <developer_guide>`?
* How do I :ref:`create a collection <developing_modules_in_groups>`?
* How do I :ref:`rebase my PR <rebase_guide>`?
* How do I learn about Ansible's :ref:`testing (CI) process <developing_testing>`?
* How do I :ref:`deprecate a module <deprecating_modules>`?
* See `Collection developer tutorials <https://www.ansible.com/products/ansible-community-training>`_ for a quick introduction on how to develop and test your collection contributions.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,882 |
Move ansible-collections requirements to docs.ansible.com/ansible
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The community maintains a list of requirements for collections at https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst. Incorporate this content into the documentation as part of the dev_guide.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
collections
##### ANSIBLE VERSION
2.10
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/71882
|
https://github.com/ansible/ansible/pull/80234
|
a2dc5fcc7da366e9d2c541863a7de2b0424ea773
|
cba395243454b0a959edea20425618fe7b9be775
| 2020-09-23T15:26:54Z |
python
| 2023-03-21T20:59:26Z |
docs/docsite/rst/community/maintainers_guidelines.rst
|
.. _maintainer_requirements:
Maintainer responsibilities
===========================
.. contents::
:depth: 1
:local:
An Ansible collection maintainer is a contributor trusted by the community who makes significant and regular contributions to the project and who has shown themselves as a specialist in the related area.
Collection maintainers have :ref:`extended permissions<collection_maintainers>` in the collection scope.
Ansible collection maintainers provide feedback, responses, or actions on pull requests or issues to the collection(s) they maintain in a reasonably timely manner. They can also update the contributor guidelines for that collection, in collaboration with the Ansible community team and the other maintainers of that collection.
In general, collection maintainers:
- Act in accordance with the :ref:`code_of_conduct`.
- Subscribe to the collection repository they maintain (click :guilabel:`Watch > All activity` in GitHub).
- Keep README, development guidelines, and other general collections :ref:`maintainer_documentation` relevant.
- Review and commit changes made by other contributors.
- :ref:`Backport <Backporting>` changes to stable branches.
- Address or assign issues to appropriate contributors.
- :ref:`Release collections <Releasing>`.
- Ensure that collections adhere to the `Collection Requirements <https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst>`_.
- Track changes announced in `News for collection contributors and maintainers <https://github.com/ansible-collections/news-for-maintainers>`_ and update a collection in accordance with these changes.
- Subscribe and submit news to the `Bullhorn newsletter <https://github.com/ansible/community/wiki/News#the-bullhorn>`_.
- :ref:`Build a healthy community <expanding_community>` to increase the number of active contributors and maintainers around collections.
- Revise these guidelines to improve the maintainer experience for yourself and others.
Multiple maintainers can divide responsibilities among each other.
How to become a maintainer
--------------------------
A person interested in becoming a maintainer and satisfying the :ref:`requirements<maintainer_requirements>` may either self-nominate or be nominated by another maintainer.
To nominate a candidate, create a GitHub issue in the relevant collection repository. If there is no response, the repository is not actively maintained, or the current maintainers do not have permissions to add the candidate, please create the issue in the `ansible/community <https://github.com/ansible/community>`_ repository.
Communicating as a collection maintainer
-----------------------------------------
Maintainers MUST subscribe to the `"Changes impacting collection contributors and maintainers" GitHub repo <https://github.com/ansible-collections/news-for-maintainers>`_ and the `Bullhorn newsletter <https://github.com/ansible/community/wiki/News#the-bullhorn>`_. If you have something important to announce through the newsletter (for example, recent releases), see the `Bullhorn's wiki page <https://github.com/ansible/community/wiki/News#the-bullhorn>`_ to learn how.
Collection contributors and maintainers should also communicate through:
* :ref:`communication_irc` appropriate to their collection, or if none exists, the general community and developer chat channels
* Mailing lists such as `ansible-announce <https://groups.google.com/d/forum/ansible-announce>`_ and `ansible-devel <https://groups.google.com/d/forum/ansible-devel>`_
* Collection project boards, issues, and GitHub discussions in corresponding repositories
* Quarterly Contributor Summits.
* Ansiblefest and local meetups.
See :ref:`communication` for more details on these communication channels.
.. _wg_and_real_time_chat:
Establishing working group communication
----------------------------------------------------------------
Working groups depend on efficient, real-time communication.
Project maintainers can use the following techniques to establish communication for working groups:
* Find an existing :ref:`working_group_list` that is similar to your project and join the conversation.
* `Request <https://github.com/ansible/community/blob/main/WORKING-GROUPS.md>`_ a new working group for your project.
* `Create <https://hackmd.io/@ansible-community/community-matrix-faq#How-do-I-create-a-public-community-room>`_ a public chat for your working group or `ask <https://github.com/ansible/community/issues/new>`_ the community team.
* Provide working group details and links to chat rooms in the contributor section of your project ``README.md``.
* Encourage contributors to join the chats and add themselves to the working group.
See the :ref:`Communication guide <communication_irc>` to learn more about real-time chat.
Community Topics
----------------
The Community and the `Steering Committee <https://docs.ansible.com/ansible/devel/community/steering/community_steering_committee.html>`_ asynchronously discuss and vote on the `Community Topics <https://github.com/ansible-community/community-topics/issues>`_ which impact the whole project or its parts including collections and packaging.
Share your opinion and vote on the topics to help the community make the best decisions.
.. _expanding_community:
Contributor Summits
-------------------
The quarterly Ansible Contributor Summit is a global event that provides our contributors a great opportunity to meet each other, communicate, share ideas, and see that there are other real people behind the messages on Matrix or Libera Chat IRC, or GitHub. This gives a sense of community. Watch the `Bullhorn newsletter <https://github.com/ansible/community/wiki/News#the-bullhorn>`_ for information about when the next Contributor Summit takes place, invite contributors you know, and take part in the event together.
Weekly community Matrix/IRC meetings
------------------------------------
The Community and the Steering Committee come together at weekly meetings in the ``#ansible-community`` `Libera.Chat IRC <https://docs.ansible.com/ansible/devel/community/communication.html#ansible-community-on-irc>`_ channel or in the bridged `#community:ansible.com <https://matrix.to/#/#community:ansible.com>`_ room on `Matrix <https://docs.ansible.com/ansible/devel/community/communication.html#ansible-community-on-matrix>`_ to discuss important project questions. Join us! Here is our `schedule <https://github.com/ansible/community/blob/main/meetings/README.md#schedule>`_.
Expanding the collection community
===================================
.. note::
If you discover good ways to expand a community or make it more robust, edit this section with your ideas to share with other collection maintainers.
Here are some ways you can expand the community around your collection:
* Give :ref:`newcomers a positive first experience <collection_new_contributors>`.
* Invite contributors to join :ref:`real-time chats <wg_and_real_time_chat>` related to your project.
* Have :ref:`good documentation <maintainer_documentation>` with guidelines for new contributors.
* Make people feel welcome personally and individually.
* Use labels to show easy fixes and leave non-critical easy fixes to newcomers and offer to mentor them.
* Be responsive in issues, PRs and other communication.
* Conduct PR days regularly.
* Maintain a zero-tolerance policy towards behavior violating the :ref:`code_of_conduct`.
* Put information about how people can register code of conduct violations in your ``README`` and ``CONTRIBUTING`` files.
* Include quick ways contributors can help and other documentation in your ``README``.
* Add and keep updated the ``CONTRIBUTORS`` and ``MAINTAINERS`` files.
* Create a pinned issue to announce that the collection welcomes new maintainers and contributors.
* Look for new maintainers among active contributors.
* Announce that your collection welcomes new maintainers.
* Take part and congratulate new maintainers in Contributor Summits.
.. _collection_new_contributors:
Encouraging new contributors
-----------------------------
Easy-fix items are the best way to attract and mentor new contributors. You should triage incoming issues to mark them with labels such as ``easyfix``, ``waiting_on_contributor``, and ``docs``, where appropriate. Do not fix these trivial, non-critical bugs yourself. Instead, mentor a person who wants to contribute.
For some easy-fix issues, you could ask the issue reporter whether they want to fix the issue themselves providing the link to a quick start guide for creating PRs.
Conduct pull request days regularly. You could plan PR days, for example, on the last Friday of every month when you and other maintainers go through all open issues and pull requests focusing on old ones, asking people if you can help, and so on. If there are pull requests that look abandoned (for example, there is no response on your help offers since the previous PR day), announce that anyone else interested can complete the pull request.
Promote active contributors who satisfy the :ref:`requirements<maintainer_requirements>` to maintainers. Review contributors' activity regularly.
If your collection found new maintainers, announce that fact in the `Bullhorn newsletter <https://github.com/ansible/community/wiki/News#the-bullhorn>`_ and during the next Contributor Summit congratulating and thanking them for the work done. You can mention all the people promoted since the previous summit. Remember to invite the other maintainers to the Summit in advance.
Some other general guidelines to encourage contributors:
* Welcome the author and thank them for the issue or pull request.
* If there is a non-crucial easy-fix bug reported, politely ask the author to fix it themselves providing a link to :ref:`collection_quickstart`.
* When suggesting changes, try to use questions, not statements.
* When suggesting mandatory changes, do it as politely as possible providing documentation references.
* If your suggestion is optional or a matter of personal preference, please say it explicitly.
* When asking for adding tests or for complex code refactoring, say that the author is welcome to ask for clarifications and help if they need it.
* If somebody suggests a good idea, mention it or put a thumbs up.
* After merging, thank the author and reviewers for their time and effort.
See the :ref:`review_checklist` for a list of items to check before you merge a PR.
.. _maintainer_documentation:
Maintaining good collection documentation
==========================================
Maintainers look after the collection documentation to ensure it matches the :ref:`style_guide`. This includes keeping the following documents accurate and updated regularly:
* Collection module and plugin documentation that adheres to the :ref:`Ansible documentation format <module_documenting>`.
* Collection user guides that follow the :ref:`Collection documentation format <collections_doc_dir>`.
* Repository files that includes at least a ``README`` and ``CONTRIBUTING`` file.
A good ``README`` includes a description of the collection, a link to the :ref:`code_of_conduct`, and details on how to contribute or a pointer to the ``CONTRIBUTING`` file. If your collection is a part of Ansible (is shipped with Ansible package), highlight that fact at the top of the collection's ``README``.
The ``CONTRIBUTING`` file includes all the details or links to the details on how a new or continuing contributor can contribute to this collection. The ``CONTRIBUTING`` file should include:
* Information or links to new contributor guidelines, such as a quick start on opening PRs.
* Information or links to contributor requirements, such as unit and integration test requirements.
You can optionally include a ``CONTRIBUTORS`` and ``MAINTAINERS`` file to list the collection contributors and maintainers.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,882 |
Move ansible-collections requirements to docs.ansible.com/ansible
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The community maintains a list of requirements for collections at https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst. Incorporate this content into the documentation as part of the dev_guide.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
collections
##### ANSIBLE VERSION
2.10
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/71882
|
https://github.com/ansible/ansible/pull/80234
|
a2dc5fcc7da366e9d2c541863a7de2b0424ea773
|
cba395243454b0a959edea20425618fe7b9be775
| 2020-09-23T15:26:54Z |
python
| 2023-03-21T20:59:26Z |
docs/docsite/rst/community/maintainers_workflow.rst
|
.. _maintainers_workflow:
Backporting and Ansible inclusion
==================================
Each collection community can set its own rules and workflow for managing pull requests, bug reports, documentation issues, and feature requests, as well as adding and replacing maintainers. Maintainers review and merge pull requests following the:
* :ref:`code_of_conduct`
* :ref:`maintainer_requirements`
* :ref:`Committer guidelines <committer_general_rules>`
* :ref:`PR review checklist<review_checklist>`
There can be two kinds of maintainers: :ref:`collection_maintainers` and :ref:`module_maintainers`.
.. _collection_maintainers:
Collection maintainers
----------------------
Collection-scope maintainers are contributors who have the ``write`` or higher access level in a collection. They have commit rights and can merge pull requests, among other permissions.
When a collection maintainer considers a contribution to a file significant enough
(for example, fixing a complex bug, adding a feature, providing regular reviews, and so on),
they can invite the author to become a module maintainer.
.. _module_maintainers:
Module maintainers
------------------
Module-scope maintainers exist in collections that have the `collection bot <https://github.com/ansible-community/collection_bot>`_,
for example, `community.general <https://github.com/ansible-collections/community.general>`_
and `community.network <https://github.com/ansible-collections/community.network>`_.
Being a module maintainer is the stage prior to becoming a collection maintainer. Module maintainers are contributors who are listed in ``.github/BOTMETA.yml``. The scope can be any file (for example, a module or plugin), directory, or repository. Because in most cases the scope is a module or group of modules, we call these contributors module maintainers. The collection bot notifies module maintainers when issues/pull requests related to files they maintain are created.
Module maintainers have indirect commit rights implemented through the `collection bot <https://github.com/ansible-community/collection_bot>`_.
When two module maintainers comment with the keywords ``shipit``, ``LGTM``, or ``+1`` on a pull request
which changes a module they maintain, the collection bot merges the pull request automatically.
For more information about the collection bot and its interface,
see the `Collection bot overview <https://github.com/ansible-community/collection_bot/blob/main/ISSUE_HELP.md>`_.
Releasing a collection
----------------------
Collection maintainers are responsible for releasing new versions of a collection. Generally, releasing a collection consists of:
#. Planning and announcement.
#. Generating a changelog.
#. Creating a release git tag and pushing it.
#. Automatically publishing the release tarball on `Ansible Galaxy <https://galaxy.ansible.com/>`_ through the `Zuul dashboard <https://dashboard.zuul.ansible.com/t/ansible/builds?pipeline=release>`_.
#. Final announcement.
#. Optionally, `file a request to include a new collection into the Ansible package <https://github.com/ansible-collections/ansible-inclusion>`_.
See :ref:`releasing_collections` for details.
.. _Backporting:
Backporting
------------
Collection maintainers backport merged pull requests to stable branches
following the `semantic versioning <https://semver.org/>`_ and release policies of the collections.
The manual backport process is similar to the :ref:`ansible-core backporting guidelines <backport_process>`.
For convenience, backporting can be implemented automatically using GitHub bots (for example, with the `Patchback app <https://github.com/apps/patchback>`_) and labeling as it is done in `community.general <https://github.com/ansible-collections/community.general>`_ and `community.network <https://github.com/ansible-collections/community.network>`_.
.. _including_collection_ansible:
Including a collection in Ansible
-----------------------------------
If a collection is not included in Ansible (not shipped with the Ansible package), maintainers can submit the collection for inclusion by creating a discussion under the `ansible-collections/ansible-inclusion repository <https://github.com/ansible-collections/ansible-inclusion>`_. For more information, see the `repository's README <https://github.com/ansible-collections/ansible-inclusion/blob/main/README.md>`_, and the `Ansible community package collections requirements <https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst>`_.
Stepping down as a collection maintainer
===========================================
Times change, and so may your ability to continue as a collection maintainer. We ask that you do not step down silently.
If you feel you don't have time to maintain your collection anymore, you should:
- Inform other maintainers about it.
- If the collection is under the ``ansible-collections`` organization, also inform the relevant :ref:`communication_irc`, the ``community`` chat channels on IRC or Matrix, or send an email to ``[email protected]``.
- Look at active contributors in the collection to find new maintainers among them. Discuss the potential candidates with other maintainers or with the community team.
- If you cannot find a replacement, create a pinned issue in the collection announcing that the collection needs new maintainers.
- Make the same announcement through the `Bullhorn newsletter <https://github.com/ansible/community/wiki/News#the-bullhorn>`_.
- Please be around to discuss potential candidates found by other maintainers or by the community team.
Remember, this is a community, so you can come back at any time in the future.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,882 |
Move ansible-collections requirements to docs.ansible.com/ansible
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The community maintains a list of requirements for collections at https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst. Incorporate this content into the documentation as part of the dev_guide.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
collections
##### ANSIBLE VERSION
2.10
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/71882
|
https://github.com/ansible/ansible/pull/80234
|
a2dc5fcc7da366e9d2c541863a7de2b0424ea773
|
cba395243454b0a959edea20425618fe7b9be775
| 2020-09-23T15:26:54Z |
python
| 2023-03-21T20:59:26Z |
docs/docsite/rst/community/steering/community_steering_committee.rst
|
.. _steering_responsibilities:
Steering Committee mission and responsibilities
===============================================
The Steering Committee mission is to provide continuity, guidance, and suggestions to the Ansible community to ensure the delivery and high quality of the Ansible package. In addition, the committee helps decide the technical direction of the Ansible project. It is responsible for approving new proposals and policies in the community, package, and community collections world, new community collection-inclusion requests, and other technical aspects regarding inclusion and packaging.
The Committee should reflect the scope and breadth of the Ansible community.
Steering Committee responsibilities
------------------------------------
The Committee:
* Designs policies and procedures for the community collections world.
* Votes on approving changes to established policies and procedures.
* Reviews community collections for compliance with the policies.
* Helps create and define roadmaps for our deliverables such as the ``ansible`` package, major community collections, and documentation.
* Reviews community collections submitted for inclusion in the Ansible package and decides whether to include them or not.
* Reviews other proposals of importance that need the Committee's attention and provides feedback.
.. _steering_members:
Current Steering Committee members
-----------------------------------
The following table lists the current Steering Committee members. See :ref:`steering_past_members` for a list of past members.
.. table:: Current Steering committee members
+------------------+---------------+-------------+
| Name | GitHub | Start year |
+==================+===============+=============+
| Alexei Znamensky | russoz | 2022 |
+------------------+---------------+-------------+
| Alicia Cozine | acozine | 2021 |
+------------------+---------------+-------------+
| Andrew Klychkov | Andersson007 | 2021 |
+------------------+---------------+-------------+
| Brad Thornton | cidrblock | 2021 |
+------------------+---------------+-------------+
| Brian Scholer | briantist | 2022 |
+------------------+---------------+-------------+
| Dylan Silva | thaumos | 2021 |
+------------------+---------------+-------------+
| Felix Fontein | felixfontein | 2021 |
+------------------+---------------+-------------+
| James Cassell | jamescassell | 2021 |
+------------------+---------------+-------------+
| John Barker | gundalow | 2021 |
+------------------+---------------+-------------+
| Mario Lenz | mariolenz | 2022 |
+------------------+---------------+-------------+
| Markus Bergholz | markuman | 2022 |
+------------------+---------------+-------------+
| Maxwell G | gotmax23 | 2022 |
+------------------+---------------+-------------+
| Sorin Sbarnea | ssbarnea | 2021 |
+------------------+---------------+-------------+
John Barker (`gundalow <https://github.com/gundalow>`_) has been elected by the Committee as its :ref:`chairperson`.
Committee members are selected based on their active contribution to the Ansible Project and its community. See :ref:`community_steering_guidelines` to learn details.
Creating new policy proposals & inclusion requests
----------------------------------------------------
The Committee uses the `community-topics repository <https://github.com/ansible-community/community-topics/issues>`_ to asynchronously discuss with the Community and vote on Community topics in corresponding issues.
You can create a new issue in the `community-topics repository <https://github.com/ansible-community/community-topics/issues>`_ as a discussion topic if you want to discuss an idea that impacts any of the following:
* Ansible Community
* Community collection best practices and requirements
* Community collection inclusion policy
* The Community governance
* Other proposals of importance that need the Committee's or the overall Ansible community's attention
To request changes to the inclusion policy and collection requirements:
#. Submit a new pull request to the `ansible-collections/overview <https://github.com/ansible-collections/overview>`_ repository.
#. Create a corresponding issue containing the rationale behind these changes in the `community-topics repository <https://github.com/ansible-community/community-topics/issues>`_ repository.
To submit new collections for inclusion into the Ansible package:
* Submit the new collection inclusion requests through a new discussion in the `ansible-inclusion <https://github.com/ansible-collections/ansible-inclusion/discussions/new>`_ repository.
Depending on the topic you want to discuss with the Community and the Committee, as you prepare your proposal, please consider the requirements established by:
* :ref:`code_of_conduct`.
* `Ansible Collection Requirements <https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst>`_.
* `Ansible Collection Inclusion Checklist <https://github.com/ansible-collections/overview/blob/main/collection_checklist.md>`_.
Community topics workflow
^^^^^^^^^^^^^^^^^^^^^^^^^
The Committee uses the `Community-topics workflow <https://github.com/ansible-community/community-topics/blob/main/community_topics_workflow.md>`_ to asynchronously discuss and vote on the `community-topics <https://github.com/ansible-community/community-topics/issues>`_.
The quorum, the minimum number of Committee members who must vote on a topic in order for a decision to be officially made, is half of the whole number of the Committee members. If the quorum number contains a fractional part, it is rounded up to the next whole number. For example, if there are thirteen members currently in the committee, the quorum will be seven.
Votes must always have "no change" as an option.
In case of equal numbers of votes for and against a topic, the chairperson's vote will break the tie. For example, if there are six votes for and six votes against a topic, and the chairperson's vote is among those six which are for the topic, the final decision will be positive. If the chairperson has not voted yet, other members ask them to vote.
For votes with more than two options, one choice must have at least half of the votes. If two choices happen to both have half of the votes, the chairperson's vote will break the tie. If no choice has at least half of the votes, the vote choices have to be adjusted so that a majority can be found for a choice in a new vote.
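For illustration only, the quorum rule above is simple ceiling division; the sketch below is not part of any Ansible tooling and the function name is made up:

.. code-block:: python

   import math

   def quorum(committee_size: int) -> int:
       """Half of the committee members, rounded up to the next whole number."""
       return math.ceil(committee_size / 2)

   assert quorum(13) == 7  # matches the example given above
   assert quorum(12) == 6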
Community topics triage
^^^^^^^^^^^^^^^^^^^^^^^
The Committee conducts a triage of `community topics <https://github.com/ansible-community/community-topics/issues>`_ periodically (every three to six months).
The triage goals are:
* Sparking interest for forgotten topics.
* Identifying and closing irrelevant topics, for example, when the reason of the topic does not exist anymore or the topic is out of the Committee responsibilities scope.
* Identifying and closing topics that the Community is not interested in discussing. Indicators include an absence of comments, or no activity in comments, for at least the last six months.
* Identifying and closing topics that were solved and implemented but not closed (in this case, such a topic can be closed on the spot with a comment that it has been implemented).
* Identifying topics that have been in a pending state for a long time, for example, when a topic has been waiting for action from someone for several months or when it was solved but not implemented.
A person starting the triage:
#. Identifies the topics mentioned above.
#. Creates a special triage topic containing an enumerated list of the topics-candidates for closing.
#. Establishes a vote date, taking into account the number of topics, their complexity, and the size of their comment history, to give the Community sufficient time to go through and discuss them.
#. The Community and the Committee vote on each candidate topic listed in the triage topic to decide whether to close it or keep it open.
Collection inclusion requests workflow
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When reviewing community collection `inclusion requests <https://github.com/ansible-collections/ansible-inclusion/discussions>`_, the Committee members check if a collection adheres to the `Community collection requirements <https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst>`_.
#. A Committee member who conducts the inclusion review copies the `Ansible community collection checklist <https://github.com/ansible-collections/overview/blob/main/collection_checklist.md>`_ into a corresponding `discussion <https://github.com/ansible-collections/ansible-inclusion/discussions>`_.
#. In the course of the review, the Committee member marks items as completed or leaves a comment saying whether the reviewer expects an issue to be addressed or whether it is optional (for example, it could be **MUST FIX:** <what> or **SHOULD FIX:** <what> under an item).
#. For a collection to be included in the Ansible community package, the collection:
* MUST be reviewed and approved by at least two persons, where at least one person is a Steering Committee member.
* For a non-Steering Committee review to be counted for inclusion, it MUST be checked and approved by *another* Steering Committee member.
* Reviewers must not be involved significantly in development of the collection. They must declare any potential conflict of interest (for example, being friends/relatives/coworkers of the maintainers/authors, being users of the collection, or having contributed to that collection recently or in the past).
#. After the collection gets two or more Committee member approvals, a Committee member creates a `community topic <https://github.com/ansible-community/community-topics/issues>`_ linked to the corresponding inclusion request. The issue's description says that the collection has been approved by two or more Committee members and establishes a date (a week by default) when the inclusion decision will be considered made. This time period can be used to raise concerns.
#. If no objections are raised up to the established date, the inclusion request is considered successfully resolved. In this case, a Committee member:
#. Declares the decision in the topic and in the inclusion request.
#. Moves the request to the ``Resolved reviews`` category.
#. Adds the collection to the ``ansible.in`` file in a corresponding directory of the `ansible-build-data repository <https://github.com/ansible-community/ansible-build-data>`_.
#. Announces the inclusion through the `Bullhorn newsletter <https://github.com/ansible/community/wiki/News#the-bullhorn>`_.
#. Closes the topic.
Community Working Group meetings
---------------------------------
See the Community Working Group meeting `schedule <https://github.com/ansible/community/blob/main/meetings/README.md#wednesdays>`_. Meeting summaries are posted in the `Community Working Group Meeting Agenda <https://github.com/ansible/community/issues?q=is%3Aopen+label%3Ameeting_agenda+label%3Acommunity+>`_ issue.
.. note::
Participation in the Community Working Group meetings is optional for Committee members. Decisions on community topics are made asynchronously in the `community-topics <https://github.com/ansible-community/community-topics/issues>`_ repository.
The meeting minutes can be found at the `fedora meetbot site <https://meetbot.fedoraproject.org/sresults/?group_id=ansible-community&type=channel>`_ and the same is posted to `Ansible Devel Mailing List <https://groups.google.com/g/ansible-devel>`_ after every meeting.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,882 |
Move ansible-collections requirements to docs.ansible.com/ansible
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The community maintains a list of requirements for collections at https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst. Incorporate this content into the documentation as part of the dev_guide.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
collections
##### ANSIBLE VERSION
2.10
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/71882
|
https://github.com/ansible/ansible/pull/80234
|
a2dc5fcc7da366e9d2c541863a7de2b0424ea773
|
cba395243454b0a959edea20425618fe7b9be775
| 2020-09-23T15:26:54Z |
python
| 2023-03-21T20:59:26Z |
docs/docsite/sphinx_conf/core_conf.py
|
# -*- coding: utf-8 -*-
#
# documentation build configuration file, created by
# sphinx-quickstart on Sat Sep 27 13:23:22 2008-2009.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# The contents of this file are pickled, so don't put values in the namespace
# that aren't pickleable (module imports are okay, they're removed
# automatically).
#
# All configuration values have a default value; values that are commented out
# serve to show the default value.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import sys
import os
# If your extensions are in another directory, add it here. If the directory
# is relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
# sys.path.append(os.path.abspath('some/directory'))
#
sys.path.insert(0, os.path.join('ansible', 'lib'))
# We want sphinx to document the ansible modules contained in this repository,
# not those that may happen to be installed in the version
# of Python used to run sphinx. When sphinx loads in order to document,
# the repository version needs to be the one that is loaded:
sys.path.insert(0, os.path.abspath(os.path.join('..', '..', '..', 'lib')))
VERSION = 'devel'
AUTHOR = 'Ansible, Inc'
# General configuration
# ---------------------
# Add any Sphinx extension module names here, as strings.
# They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
# TEST: 'sphinxcontrib.fulltoc'
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
'notfound.extension',
'sphinx_antsibull_ext', # provides CSS for the plugin/module docs generated by antsibull
]
# Later on, add 'sphinx.ext.viewcode' to the list if you want to have
# colorized code generated too for references.
# Add any paths that contain templates here, relative to this directory.
templates_path = ['../.templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
root_doc = master_doc = 'index' # Sphinx 4+ / 3-
# General substitutions.
project = 'Ansible'
copyright = "Ansible project contributors"
# The default replacements for |version| and |release|, also used in various
# other places throughout the built documents.
#
# The short X.Y version.
version = VERSION
# The full version, including alpha/beta/rc tags.
release = VERSION
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
# unused_docs = []
# List of directories, relative to source directories, that shouldn't be
# searched for source files.
# exclude_dirs = []
# A list of glob-style patterns that should be excluded when looking
# for source files.
exclude_patterns = [
'2.10_index.rst',
'ansible_index.rst',
'core_index.rst',
'network',
'scenario_guides',
'community/collection_contributors/test_index.rst',
'community/collection_contributors/collection_integration_about.rst',
'community/collection_contributors/collection_integration_updating.rst',
'community/collection_contributors/collection_integration_add.rst',
'community/collection_contributors/collection_test_pr_locally.rst',
'community/collection_contributors/collection_integration_tests.rst',
'community/collection_contributors/collection_integration_running.rst',
'community/collection_contributors/collection_reviewing.rst',
'community/collection_contributors/collection_unit_tests.rst',
'community/maintainers.rst',
'community/contributions_collections.rst',
'community/create_pr_quick_start.rst',
'community/reporting_collections.rst',
'community/contributing_maintained_collections.rst',
'community/collection_development_process.rst',
'community/collection_contributors/collection_release_without_branches.rst',
'community/collection_contributors/collection_release_with_branches.rst',
'community/collection_contributors/collection_releasing.rst',
'community/maintainers_guidelines.rst',
'community/maintainers_workflow.rst',
'community/steering/community_steering_committee.rst',
'community/steering/steering_committee_membership.rst',
'community/steering/steering_committee_past_members.rst',
'community/steering/steering_index.rst',
'dev_guide/ansible_index.rst',
'dev_guide/core_index.rst',
'dev_guide/platforms/aws_guidelines.rst',
'dev_guide/platforms/openstack_guidelines.rst',
'dev_guide/platforms/ovirt_dev_guide.rst',
'dev_guide/platforms/vmware_guidelines.rst',
'dev_guide/platforms/vmware_rest_guidelines.rst',
'porting_guides/porting_guides.rst',
'porting_guides/porting_guide_[1-9]*',
'roadmap/index.rst',
'roadmap/ansible_roadmap_index.rst',
'roadmap/old_roadmap_index.rst',
'roadmap/ROADMAP_2_5.rst',
'roadmap/ROADMAP_2_6.rst',
'roadmap/ROADMAP_2_7.rst',
'roadmap/ROADMAP_2_8.rst',
'roadmap/ROADMAP_2_9.rst',
'roadmap/COLLECTIONS*'
]
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'ansible'
highlight_language = 'YAML+Jinja'
# Substitutions, variables, entities, & shortcuts for text which do not need to link to anything.
# For titles which should be a link, use the intersphinx anchors set at the index, chapter, and section levels, such as qi_start_:
# |br| is useful for formatting fields inside of tables
# |_| is a nonbreaking space; similarly useful inside of tables
rst_epilog = """
.. |br| raw:: html
<br>
.. |_| unicode:: 0xA0
:trim:
"""
# Options for HTML output
# -----------------------
html_theme_path = []
html_theme = 'sphinx_ansible_theme'
html_show_sphinx = False
html_theme_options = {
'canonical_url': "https://docs.ansible.com/ansible/latest/",
'hubspot_id': '330046',
'satellite_tracking': True,
'show_extranav': True,
'swift_id': 'yABGvz2N8PwcwBxyfzUc',
'tag_manager_id': 'GTM-PSB293',
'vcs_pageview_mode': 'edit'
}
html_context = {
'display_github': 'True',
'show_sphinx': False,
'is_eol': False,
'github_user': 'ansible',
'github_repo': 'ansible',
'github_version': 'devel/docs/docsite/rst/',
'github_module_version': 'devel/lib/ansible/modules/',
'github_root_dir': 'devel/lib/ansible',
'github_cli_version': 'devel/lib/ansible/cli/',
'current_version': version,
'latest_version': '2.14',
# list specifically out of order to make latest work
'available_versions': ('2.14', '2.13', '2.12', 'devel',),
}
# Add extra CSS styles to the resulting HTML pages
html_css_files = [
'css/core-color-scheme.css',
]
# The style sheet to use for HTML and HTML Help pages. A file of that name
# must exist either in Sphinx' static/ path, or in one of the custom paths
# given in html_static_path.
# html_style = 'solar.css'
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
html_title = 'Ansible Core Documentation'
# A shorter title for the navigation bar. Default is the same as html_title.
html_short_title = 'Documentation'
# The name of an image file (within the static path) to place at the top of
# the sidebar.
# html_logo =
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = 'favicon.ico'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['../_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_use_modindex = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, the reST sources are included in the HTML build as _sources/<name>.
html_copy_source = False
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = 'https://docs.ansible.com/ansible/latest'
# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = ''
# Output file base name for HTML help builder.
htmlhelp_basename = 'Poseidodoc'
# Configuration for sphinx-notfound-pages
# with no 'notfound_template' and no 'notfound_context' set,
# the extension builds 404.rst into a location-agnostic 404 page
#
# default is `en` - using this for the sub-site:
notfound_default_language = "ansible"
# default is `latest`:
# setting explicitly - docsite serves up /ansible/latest/404.html
# so keep this set to `latest` even on the `devel` branch
# then no maintenance is needed when we branch a new stable_x.x
notfound_default_version = "latest"
# makes default setting explicit:
notfound_no_urls_prefix = False
# Options for LaTeX output
# ------------------------
# The paper size ('letter' or 'a4').
# latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
# latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, document class
# [howto/manual]).
latex_documents = [
('index', 'ansible.tex', 'Ansible 2.2 Documentation', AUTHOR, 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# Additional stuff for the LaTeX preamble.
# latex_preamble = ''
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_use_modindex = True
autoclass_content = 'both'
# Note: Our strategy for intersphinx mappings is to have the upstream build location as the
# canonical source and then cached copies of the mapping stored locally in case someone is building
# when disconnected from the internet. We then have a script to update the cached copies.
#
# Because of that, each entry in this mapping should have this format:
# name: ('http://UPSTREAM_URL', (None, 'path/to/local/cache.inv'))
#
# The update script depends on this format so deviating from this (for instance, adding a third
# location for the mapping to live) will confuse it.
intersphinx_mapping = {'python': ('https://docs.python.org/2/', (None, '../python2.inv')),
'python3': ('https://docs.python.org/3/', (None, '../python3.inv')),
'jinja2': ('http://jinja.palletsprojects.com/', (None, '../jinja2.inv')),
'ansible_7': ('https://docs.ansible.com/ansible/7/', (None, '../ansible_7.inv')),
'ansible_6': ('https://docs.ansible.com/ansible/6/', (None, '../ansible_6.inv')),
'ansible_2_9': ('https://docs.ansible.com/ansible/2.9/', (None, '../ansible_2_9.inv')),
}
# linkchecker settings
linkcheck_ignore = [
]
linkcheck_workers = 25
# linkcheck_anchors = False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,882 |
Move ansible-collections requirements to docs.ansible.com/ansible
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The community maintains a list of requirements for collections at https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst. Incorporate this content into the documentation as part of the dev_guide.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
collections
##### ANSIBLE VERSION
2.10
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/71882
|
https://github.com/ansible/ansible/pull/80234
|
a2dc5fcc7da366e9d2c541863a7de2b0424ea773
|
cba395243454b0a959edea20425618fe7b9be775
| 2020-09-23T15:26:54Z |
python
| 2023-03-21T20:59:26Z |
docs/docsite/sphinx_conf/core_lang_conf.py
|
# -*- coding: utf-8 -*-
#
# documentation build configuration file, created by
# sphinx-quickstart on Sat Sep 27 13:23:22 2008-2009.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# The contents of this file are pickled, so don't put values in the namespace
# that aren't pickleable (module imports are okay, they're removed
# automatically).
#
# All configuration values have a default value; values that are commented out
# serve to show the default value.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import sys
import os
# If your extensions are in another directory, add it here. If the directory
# is relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
# sys.path.append(os.path.abspath('some/directory'))
#
sys.path.insert(0, os.path.join('ansible', 'lib'))
# We want sphinx to document the ansible modules contained in this repository,
# not those that may happen to be installed in the version
# of Python used to run sphinx. When sphinx loads in order to document,
# the repository version needs to be the one that is loaded:
sys.path.insert(0, os.path.abspath(os.path.join('..', '..', '..', 'lib')))
VERSION = 'devel'
AUTHOR = 'Ansible, Inc'
# General configuration
# ---------------------
# Add any Sphinx extension module names here, as strings.
# They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
# TEST: 'sphinxcontrib.fulltoc'
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
'notfound.extension',
'sphinx_antsibull_ext', # provides CSS for the plugin/module docs generated by antsibull
]
# Later on, add 'sphinx.ext.viewcode' to the list if you want to have
# colorized code generated too for references.
# Add any paths that contain templates here, relative to this directory.
templates_path = ['../.templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
root_doc = master_doc = 'index' # Sphinx 4+ / 3-
# General substitutions.
project = 'Ansible'
copyright = "Ansible project contributors"
# The default replacements for |version| and |release|, also used in various
# other places throughout the built documents.
#
# The short X.Y version.
version = VERSION
# The full version, including alpha/beta/rc tags.
release = VERSION
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
# unused_docs = []
# List of directories, relative to source directories, that shouldn't be
# searched for source files.
# exclude_dirs = []
# A list of glob-style patterns that should be excluded when looking
# for source files.
exclude_patterns = [
'2.10_index.rst',
'ansible_index.rst',
'core_index.rst',
'network',
'scenario_guides',
'community/collection_contributors/test_index.rst',
'community/collection_contributors/collection_integration_about.rst',
'community/collection_contributors/collection_integration_updating.rst',
'community/collection_contributors/collection_integration_add.rst',
'community/collection_contributors/collection_test_pr_locally.rst',
'community/collection_contributors/collection_integration_tests.rst',
'community/collection_contributors/collection_integration_running.rst',
'community/collection_contributors/collection_reviewing.rst',
'community/collection_contributors/collection_unit_tests.rst',
'community/maintainers.rst',
'community/contributions_collections.rst',
'community/create_pr_quick_start.rst',
'community/reporting_collections.rst',
'community/contributing_maintained_collections.rst',
'community/collection_development_process.rst',
'community/collection_contributors/collection_release_without_branches.rst',
'community/collection_contributors/collection_release_with_branches.rst',
'community/collection_contributors/collection_releasing.rst',
'community/maintainers_guidelines.rst',
'community/maintainers_workflow.rst',
'community/steering/community_steering_committee.rst',
'community/steering/steering_committee_membership.rst',
'community/steering/steering_committee_past_members.rst',
'community/steering/steering_index.rst',
'dev_guide/ansible_index.rst',
'dev_guide/core_index.rst',
'dev_guide/platforms/aws_guidelines.rst',
'dev_guide/platforms/openstack_guidelines.rst',
'dev_guide/platforms/ovirt_dev_guide.rst',
'dev_guide/platforms/vmware_guidelines.rst',
'dev_guide/platforms/vmware_rest_guidelines.rst',
'porting_guides/porting_guides.rst',
'porting_guides/porting_guide_[1-9]*',
'roadmap/index.rst',
'roadmap/ansible_roadmap_index.rst',
'roadmap/old_roadmap_index.rst',
'roadmap/ROADMAP_2_5.rst',
'roadmap/ROADMAP_2_6.rst',
'roadmap/ROADMAP_2_7.rst',
'roadmap/ROADMAP_2_8.rst',
'roadmap/ROADMAP_2_9.rst',
'roadmap/COLLECTIONS*'
]
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'ansible'
highlight_language = 'YAML+Jinja'
# Substitutions, variables, entities, & shortcuts for text which do not need to link to anything.
# For titles which should be a link, use the intersphinx anchors set at the index, chapter, and section levels, such as qi_start_:
# |br| is useful for formatting fields inside of tables
# |_| is a nonbreaking space; similarly useful inside of tables
rst_epilog = """
.. |br| raw:: html
<br>
.. |_| unicode:: 0xA0
:trim:
"""
# Options for HTML output
# -----------------------
html_theme_path = []
html_theme = 'sphinx_ansible_theme'
html_show_sphinx = False
html_theme_options = {
'canonical_url': "https://docs.ansible.com/ansible/latest/",
'hubspot_id': '330046',
'satellite_tracking': True,
'show_extranav': True,
'swift_id': 'yABGvz2N8PwcwBxyfzUc',
'tag_manager_id': 'GTM-PSB293',
'vcs_pageview_mode': 'edit'
}
html_context = {
'display_github': 'True',
'show_sphinx': False,
'is_eol': False,
'github_user': 'ansible',
'github_repo': 'ansible',
'github_version': 'devel/docs/docsite/rst/',
'github_module_version': 'devel/lib/ansible/modules/',
'github_root_dir': 'devel/lib/ansible',
'github_cli_version': 'devel/lib/ansible/cli/',
'current_version': version,
'latest_version': '2.14',
# list specifically out of order to make latest work
'available_versions': ('2.14_ja', '2.13_ja', '2.12_ja',),
}
# Add extra CSS styles to the resulting HTML pages
html_css_files = [
'css/core-color-scheme.css',
]
# The style sheet to use for HTML and HTML Help pages. A file of that name
# must exist either in Sphinx' static/ path, or in one of the custom paths
# given in html_static_path.
# html_style = 'solar.css'
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
html_title = 'Ansible Core Documentation'
# A shorter title for the navigation bar. Default is the same as html_title.
html_short_title = 'Documentation'
# The name of an image file (within the static path) to place at the top of
# the sidebar.
# html_logo =
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = 'favicon.ico'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['../_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_use_modindex = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, the reST sources are included in the HTML build as _sources/<name>.
html_copy_source = False
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = 'https://docs.ansible.com/ansible/latest'
# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = ''
# Output file base name for HTML help builder.
htmlhelp_basename = 'Poseidodoc'
# Configuration for sphinx-notfound-pages
# with no 'notfound_template' and no 'notfound_context' set,
# the extension builds 404.rst into a location-agnostic 404 page
#
# default is `en` - using this for the sub-site:
notfound_default_language = "ansible"
# default is `latest`:
# setting explicitly - docsite serves up /ansible/latest/404.html
# so keep this set to `latest` even on the `devel` branch
# then no maintenance is needed when we branch a new stable_x.x
notfound_default_version = "latest"
# makes default setting explicit:
notfound_no_urls_prefix = False
# Options for LaTeX output
# ------------------------
# The paper size ('letter' or 'a4').
# latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
# latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, document class
# [howto/manual]).
latex_documents = [
('index', 'ansible.tex', 'Ansible Documentation', AUTHOR, 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# Additional stuff for the LaTeX preamble.
# latex_preamble = ''
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_use_modindex = True
autoclass_content = 'both'
# Note: Our strategy for intersphinx mappings is to have the upstream build location as the
# canonical source and then cached copies of the mapping stored locally in case someone is building
# when disconnected from the internet. We then have a script to update the cached copies.
#
# Because of that, each entry in this mapping should have this format:
# name: ('http://UPSTREAM_URL', (None, 'path/to/local/cache.inv'))
#
# The update script depends on this format so deviating from this (for instance, adding a third
# location for the mapping to live) will confuse it.
intersphinx_mapping = {'python': ('https://docs.python.org/2/', (None, '../python2.inv')),
'python3': ('https://docs.python.org/3/', (None, '../python3.inv')),
'jinja2': ('http://jinja.palletsprojects.com/', (None, '../jinja2.inv')),
'ansible_7': ('https://docs.ansible.com/ansible/7/', (None, '../ansible_7.inv')),
'ansible_6': ('https://docs.ansible.com/ansible/6/', (None, '../ansible_6.inv')),
'ansible_2_9': ('https://docs.ansible.com/ansible/2.9/', (None, '../ansible_2_9.inv')),
}
# linkchecker settings
linkcheck_ignore = [
]
linkcheck_workers = 25
# linkcheck_anchors = False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,174 |
Use of `retry_with_delays_and_condition` and `generate_jittered_backoff` may lead to no retries
|
### Summary
The way in which we are using `retry_with_delays_and_condition` along with `generate_jittered_backoff` may prevent subsequent failures from retrying if the generator was consumed by previous calls to `_call_galaxy`.
It appears as though the `backoff_iterator` in this case is global for all calls, and not refreshed per call to `_call_galaxy`:
https://github.com/ansible/ansible/blob/c564c6e21e4538b475df2ae4b3f66b73decff160/lib/ansible/galaxy/api.py#L328-L332
Currently, every call to `_call_galaxy` consumes at least one item from the `backoff_iterator`, even when a retry isn't attempted, so after 6 calls no retries would ever be performed.
This may require making `backoff_iterator` take a callable, or something that can regenerate the iterator, instead of acting as a global state iterator.
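For illustration, a minimal self-contained sketch of that idea is below; it does not use the actual `ansible.module_utils.api` helpers, and every name and signature in it is an assumption rather than the real API:

```python
import random
import time


def jittered_backoff(retries=6, delay_base=2, delay_threshold=60):
    """Yield a capped, jittered delay for each retry attempt."""
    for attempt in range(retries):
        yield random.uniform(0, min(delay_threshold, delay_base * 2 ** attempt))


def retry_with_delays(backoff_iterator_factory, should_retry_error):
    """Retry the wrapped call, drawing delays from a fresh iterator per call."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            # A new iterator is created for every call, so one call cannot
            # exhaust the delays available to later calls.
            for delay in backoff_iterator_factory():
                try:
                    return func(*args, **kwargs)
                except Exception as exc:
                    if not should_retry_error(exc):
                        raise
                    time.sleep(delay)
            return func(*args, **kwargs)  # last attempt once delays run out
        return wrapper
    return decorator


@retry_with_delays(
    backoff_iterator_factory=lambda: jittered_backoff(retries=6),
    should_retry_error=lambda exc: isinstance(exc, ConnectionError),
)
def call_galaxy(url):
    ...
```

Passing a factory (a callable that rebuilds the iterator) instead of a shared iterator is one way to keep the retry budget per call rather than global.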
### Issue Type
Bug Report
### Component Name
lib/ansible/galaxy/api.py
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
N/A
### Actual Results
```console
N/A
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80174
|
https://github.com/ansible/ansible/pull/80180
|
cba395243454b0a959edea20425618fe7b9be775
|
2ae013667ef226635fe521be886efd1bf58cd46f
| 2023-03-08T20:33:11Z |
python
| 2023-03-22T16:04:56Z |
changelogs/fragments/galaxy-improve-retries.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,174 |
Use of `retry_with_delays_and_condition` and `generate_jittered_backoff` may lead to no retries
|
### Summary
The way in which we are using `retry_with_delays_and_condition` along with `generate_jittered_backoff` may prevent subsequent failures from retrying if the generator was consumed by previous calls to `_call_galaxy`.
It appears as though the `backoff_iterator` in this case is global for all calls, and not refreshed per call to `_call_galaxy`:
https://github.com/ansible/ansible/blob/c564c6e21e4538b475df2ae4b3f66b73decff160/lib/ansible/galaxy/api.py#L328-L332
Currently, every call to `_call_galaxy` consumes at least one item from the `backoff_iterator`, even when a retry isn't attempted, so after 6 calls no retries would ever be performed.
This may require making `backoff_iterator` take a callable, or something that can regenerate the iterator, instead of acting as a global state iterator.
### Issue Type
Bug Report
### Component Name
lib/ansible/galaxy/api.py
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
N/A
### Actual Results
```console
N/A
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80174
|
https://github.com/ansible/ansible/pull/80180
|
cba395243454b0a959edea20425618fe7b9be775
|
2ae013667ef226635fe521be886efd1bf58cd46f
| 2023-03-08T20:33:11Z |
python
| 2023-03-22T16:04:56Z |
lib/ansible/cli/galaxy.py
|
#!/usr/bin/env python
# Copyright: (c) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2018-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI
import json
import os.path
import pathlib
import re
import shutil
import sys
import textwrap
import time
import typing as t
from dataclasses import dataclass
from yaml.error import YAMLError
import ansible.constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI, GalaxyError
from ansible.galaxy.collection import (
build_collection,
download_collections,
find_existing_collections,
install_collections,
publish_collection,
validate_collection_name,
validate_collection_path,
verify_collections,
SIGNATURE_COUNT_RE,
)
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
from ansible.galaxy.collection.gpg import GPG_ERROR_MAP
from ansible.galaxy.dependency_resolution.dataclasses import Requirement
from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken, NoTokenSentinel
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.module_utils.common.collections import is_iterable
from ansible.module_utils.common.yaml import yaml_dump, yaml_load
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils import six
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.playbook.role.requirement import RoleRequirement
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
urlparse = six.moves.urllib.parse.urlparse
# config definition by position: name, required, type
SERVER_DEF = [
('url', True, 'str'),
('username', False, 'str'),
('password', False, 'str'),
('token', False, 'str'),
('auth_url', False, 'str'),
('v3', False, 'bool'),
('validate_certs', False, 'bool'),
('client_id', False, 'str'),
('timeout', False, 'int'),
]
# config definition fields
SERVER_ADDITIONAL = {
'v3': {'default': 'False'},
'validate_certs': {'cli': [{'name': 'validate_certs'}]},
'timeout': {'default': '60', 'cli': [{'name': 'timeout'}]},
'token': {'default': None},
}
def with_collection_artifacts_manager(wrapped_method):
"""Inject an artifacts manager if not passed explicitly.
This decorator constructs a ConcreteArtifactsManager and maintains
the related temporary directory auto-cleanup around the target
method invocation.
"""
def method_wrapper(*args, **kwargs):
if 'artifacts_manager' in kwargs:
return wrapped_method(*args, **kwargs)
# FIXME: use validate_certs context from Galaxy servers when downloading collections
# .get used here for when this is used in a non-CLI context
artifacts_manager_kwargs = {'validate_certs': context.CLIARGS.get('resolved_validate_certs', True)}
keyring = context.CLIARGS.get('keyring', None)
if keyring is not None:
artifacts_manager_kwargs.update({
'keyring': GalaxyCLI._resolve_path(keyring),
'required_signature_count': context.CLIARGS.get('required_valid_signature_count', None),
'ignore_signature_errors': context.CLIARGS.get('ignore_gpg_errors', None),
})
with ConcreteArtifactsManager.under_tmpdir(
C.DEFAULT_LOCAL_TMP,
**artifacts_manager_kwargs
) as concrete_artifact_cm:
kwargs['artifacts_manager'] = concrete_artifact_cm
return wrapped_method(*args, **kwargs)
return method_wrapper
def _display_header(path, h1, h2, w1=10, w2=7):
display.display('\n# {0}\n{1:{cwidth}} {2:{vwidth}}\n{3} {4}\n'.format(
path,
h1,
h2,
'-' * max([len(h1), w1]), # Make sure that the number of dashes is at least the width of the header
'-' * max([len(h2), w2]),
cwidth=w1,
vwidth=w2,
))
def _display_role(gr):
install_info = gr.install_info
version = None
if install_info:
version = install_info.get("version", None)
if not version:
version = "(unknown version)"
display.display("- %s, %s" % (gr.name, version))
def _display_collection(collection, cwidth=10, vwidth=7, min_cwidth=10, min_vwidth=7):
display.display('{fqcn:{cwidth}} {version:{vwidth}}'.format(
fqcn=to_text(collection.fqcn),
version=collection.ver,
cwidth=max(cwidth, min_cwidth), # Make sure the width isn't smaller than the header
vwidth=max(vwidth, min_vwidth)
))
def _get_collection_widths(collections):
if not is_iterable(collections):
collections = (collections, )
fqcn_set = {to_text(c.fqcn) for c in collections}
version_set = {to_text(c.ver) for c in collections}
fqcn_length = len(max(fqcn_set or [''], key=len))
version_length = len(max(version_set or [''], key=len))
return fqcn_length, version_length
def validate_signature_count(value):
match = re.match(SIGNATURE_COUNT_RE, value)
if match is None:
raise ValueError(f"{value} is not a valid signature count value")
return value
@dataclass
class RoleDistributionServer:
_api: t.Union[GalaxyAPI, None]
api_servers: list[GalaxyAPI]
@property
def api(self):
if self._api:
return self._api
for server in self.api_servers:
try:
if u'v1' in server.available_api_versions:
self._api = server
break
except Exception:
continue
if not self._api:
self._api = self.api_servers[0]
return self._api
class GalaxyCLI(CLI):
'''Command to manage Ansible roles and collections.
None of the CLI tools are designed to run concurrently with themselves.
Use an external scheduler and/or locking to ensure there are no clashing operations.
'''
name = 'ansible-galaxy'
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url")
def __init__(self, args):
self._raw_args = args
self._implicit_role = False
if len(args) > 1:
# Inject role into sys.argv[1] as a backwards compatibility step
if args[1] not in ['-h', '--help', '--version'] and 'role' not in args and 'collection' not in args:
# TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice
args.insert(1, 'role')
self._implicit_role = True
# since argparse doesn't allow hidden subparsers, handle dead login arg from raw args after "role" normalization
if args[1:3] == ['role', 'login']:
display.error(
"The login command was removed in late 2020. An API key is now required to publish roles or collections "
"to Galaxy. The key can be found at https://galaxy.ansible.com/me/preferences, and passed to the "
"ansible-galaxy CLI via a file at {0} or (insecurely) via the `--token` "
"command-line argument.".format(to_text(C.GALAXY_TOKEN_PATH)))
sys.exit(1)
self.api_servers = []
self.galaxy = None
self.lazy_role_api = None
super(GalaxyCLI, self).__init__(args)
def init_parser(self):
''' create an options parser for bin/ansible '''
super(GalaxyCLI, self).init_parser(
desc="Perform various Role and Collection related operations.",
)
# Common arguments that apply to more than 1 action
common = opt_help.argparse.ArgumentParser(add_help=False)
common.add_argument('-s', '--server', dest='api_server', help='The Galaxy API server URL')
common.add_argument('--token', '--api-key', dest='api_key',
help='The Ansible Galaxy API key which can be found at '
'https://galaxy.ansible.com/me/preferences.')
common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs', help='Ignore SSL certificate validation errors.', default=None)
common.add_argument('--timeout', dest='timeout', type=int,
help="The time to wait for operations against the galaxy server, defaults to 60s.")
opt_help.add_verbosity_options(common)
force = opt_help.argparse.ArgumentParser(add_help=False)
force.add_argument('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role or collection')
github = opt_help.argparse.ArgumentParser(add_help=False)
github.add_argument('github_user', help='GitHub username')
github.add_argument('github_repo', help='GitHub repository')
offline = opt_help.argparse.ArgumentParser(add_help=False)
offline.add_argument('--offline', dest='offline', default=False, action='store_true',
help="Don't query the galaxy API when creating roles")
default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '')
roles_path = opt_help.argparse.ArgumentParser(add_help=False)
roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True),
default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction,
help='The path to the directory containing your roles. The default is the first '
'writable one configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path)
collections_path = opt_help.argparse.ArgumentParser(add_help=False)
collections_path.add_argument('-p', '--collections-path', dest='collections_path', type=opt_help.unfrack_path(pathsep=True),
action=opt_help.PrependListAction,
help="One or more directories to search for collections in addition "
"to the default COLLECTIONS_PATHS. Separate multiple paths "
"with '{0}'.".format(os.path.pathsep))
cache_options = opt_help.argparse.ArgumentParser(add_help=False)
cache_options.add_argument('--clear-response-cache', dest='clear_response_cache', action='store_true',
default=False, help='Clear the existing server response cache.')
cache_options.add_argument('--no-cache', dest='no_cache', action='store_true', default=False,
help='Do not use the server response cache.')
# Add sub parser for the Galaxy role type (role or collection)
type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type')
type_parser.required = True
# Add sub parser for the Galaxy collection actions
collection = type_parser.add_parser('collection', help='Manage an Ansible Galaxy collection.')
collection_parser = collection.add_subparsers(metavar='COLLECTION_ACTION', dest='action')
collection_parser.required = True
self.add_download_options(collection_parser, parents=[common, cache_options])
self.add_init_options(collection_parser, parents=[common, force])
self.add_build_options(collection_parser, parents=[common, force])
self.add_publish_options(collection_parser, parents=[common])
self.add_install_options(collection_parser, parents=[common, force, cache_options])
self.add_list_options(collection_parser, parents=[common, collections_path])
self.add_verify_options(collection_parser, parents=[common, collections_path])
# Add sub parser for the Galaxy role actions
role = type_parser.add_parser('role', help='Manage an Ansible Galaxy role.')
role_parser = role.add_subparsers(metavar='ROLE_ACTION', dest='action')
role_parser.required = True
self.add_init_options(role_parser, parents=[common, force, offline])
self.add_remove_options(role_parser, parents=[common, roles_path])
self.add_delete_options(role_parser, parents=[common, github])
self.add_list_options(role_parser, parents=[common, roles_path])
self.add_search_options(role_parser, parents=[common])
self.add_import_options(role_parser, parents=[common, github])
self.add_setup_options(role_parser, parents=[common, roles_path])
self.add_info_options(role_parser, parents=[common, roles_path, offline])
self.add_install_options(role_parser, parents=[common, force, roles_path])
def add_download_options(self, parser, parents=None):
download_parser = parser.add_parser('download', parents=parents,
help='Download collections and their dependencies as a tarball for an '
'offline install.')
download_parser.set_defaults(func=self.execute_download)
download_parser.add_argument('args', help='Collection(s)', metavar='collection', nargs='*')
download_parser.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download collection(s) listed as dependencies.")
download_parser.add_argument('-p', '--download-path', dest='download_path',
default='./collections',
help='The directory to download the collections to.')
download_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be downloaded.')
download_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
def add_init_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
init_parser = parser.add_parser('init', parents=parents,
help='Initialize new {0} with the base structure of a '
'{0}.'.format(galaxy_type))
init_parser.set_defaults(func=self.execute_init)
init_parser.add_argument('--init-path', dest='init_path', default='./',
help='The path in which the skeleton {0} will be created. The default is the '
'current working directory.'.format(galaxy_type))
init_parser.add_argument('--{0}-skeleton'.format(galaxy_type), dest='{0}_skeleton'.format(galaxy_type),
default=C.GALAXY_COLLECTION_SKELETON if galaxy_type == 'collection' else C.GALAXY_ROLE_SKELETON,
help='The path to a {0} skeleton that the new {0} should be based '
'upon.'.format(galaxy_type))
obj_name_kwargs = {}
if galaxy_type == 'collection':
obj_name_kwargs['type'] = validate_collection_name
init_parser.add_argument('{0}_name'.format(galaxy_type), help='{0} name'.format(galaxy_type.capitalize()),
**obj_name_kwargs)
if galaxy_type == 'role':
init_parser.add_argument('--type', dest='role_type', action='store', default='default',
help="Initialize using an alternate role type. Valid types include: 'container', "
"'apb' and 'network'.")
def add_remove_options(self, parser, parents=None):
remove_parser = parser.add_parser('remove', parents=parents, help='Delete roles from roles_path.')
remove_parser.set_defaults(func=self.execute_remove)
remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+')
def add_delete_options(self, parser, parents=None):
delete_parser = parser.add_parser('delete', parents=parents,
help='Removes the role from Galaxy. It does not remove or alter the actual '
'GitHub repository.')
delete_parser.set_defaults(func=self.execute_delete)
def add_list_options(self, parser, parents=None):
galaxy_type = 'role'
if parser.metavar == 'COLLECTION_ACTION':
galaxy_type = 'collection'
list_parser = parser.add_parser('list', parents=parents,
help='Show the name and version of each {0} installed in the {0}s_path.'.format(galaxy_type))
list_parser.set_defaults(func=self.execute_list)
list_parser.add_argument(galaxy_type, help=galaxy_type.capitalize(), nargs='?', metavar=galaxy_type)
if galaxy_type == 'collection':
list_parser.add_argument('--format', dest='output_format', choices=('human', 'yaml', 'json'), default='human',
help="Format to display the list of collections in.")
def add_search_options(self, parser, parents=None):
search_parser = parser.add_parser('search', parents=parents,
help='Search the Galaxy database by tags, platforms, author and multiple '
'keywords.')
search_parser.set_defaults(func=self.execute_search)
search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by')
search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by')
search_parser.add_argument('--author', dest='author', help='GitHub username')
search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*')
def add_import_options(self, parser, parents=None):
import_parser = parser.add_parser('import', parents=parents, help='Import a role into a galaxy server')
import_parser.set_defaults(func=self.execute_import)
import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import results.")
import_parser.add_argument('--branch', dest='reference',
help='The name of a branch to import. Defaults to the repository\'s default branch '
'(usually master)')
import_parser.add_argument('--role-name', dest='role_name',
help='The name the role should have, if different than the repo name')
import_parser.add_argument('--status', dest='check_status', action='store_true', default=False,
help='Check the status of the most recent import request for given github_'
'user/github_repo.')
def add_setup_options(self, parser, parents=None):
setup_parser = parser.add_parser('setup', parents=parents,
help='Manage the integration between Galaxy and the given source.')
setup_parser.set_defaults(func=self.execute_setup)
setup_parser.add_argument('--remove', dest='remove_id', default=None,
help='Remove the integration matching the provided ID value. Use --list to see '
'ID values.')
setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False,
help='List all of your integrations.')
setup_parser.add_argument('source', help='Source')
setup_parser.add_argument('github_user', help='GitHub username')
setup_parser.add_argument('github_repo', help='GitHub repository')
setup_parser.add_argument('secret', help='Secret')
def add_info_options(self, parser, parents=None):
info_parser = parser.add_parser('info', parents=parents, help='View more details about a specific role.')
info_parser.set_defaults(func=self.execute_info)
info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]')
def add_verify_options(self, parser, parents=None):
galaxy_type = 'collection'
verify_parser = parser.add_parser('verify', parents=parents, help='Compare checksums with the collection(s) '
'found on the server and the installed copy. This does not verify dependencies.')
verify_parser.set_defaults(func=self.execute_verify)
verify_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', help='The installed collection(s) name. '
'This is mutually exclusive with --requirements-file.')
verify_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help='Ignore errors during verification and continue with the next specified collection.')
verify_parser.add_argument('--offline', dest='offline', action='store_true', default=False,
help='Validate collection integrity locally without contacting server for '
'canonical manifest hash.')
verify_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be verified.')
verify_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
verify_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before using '
'it to verify the rest of the contents of a collection from a Galaxy server. Use in '
'conjunction with a positional collection name (mutually exclusive with --requirements-file).')
valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \
'or all to signify that all signatures must be used to verify the collection. ' \
'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).'
ignore_gpg_status_help = 'A status code to ignore during signature verification (for example, NO_PUBKEY). ' \
'Provide this option multiple times to ignore a list of status codes. ' \
'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).'
verify_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
verify_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
def add_install_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
args_kwargs = {}
if galaxy_type == 'collection':
args_kwargs['help'] = 'The collection(s) name or path/url to a tar.gz collection artifact. This is ' \
'mutually exclusive with --requirements-file.'
ignore_errors_help = 'Ignore errors during installation and continue with the next specified ' \
'collection. This will not ignore dependency conflict errors.'
else:
args_kwargs['help'] = 'Role name, URL or tar file'
ignore_errors_help = 'Ignore errors and continue with the next specified role.'
install_parser = parser.add_parser('install', parents=parents,
help='Install {0}(s) from file(s), URL(s) or Ansible '
'Galaxy'.format(galaxy_type))
install_parser.set_defaults(func=self.execute_install)
install_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', **args_kwargs)
install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help=ignore_errors_help)
install_exclusive = install_parser.add_mutually_exclusive_group()
install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download {0}s listed as dependencies.".format(galaxy_type))
install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False,
help="Force overwriting an existing {0} and its "
"dependencies.".format(galaxy_type))
valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \
'or -1 to signify that all signatures must be used to verify the collection. ' \
'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).'
ignore_gpg_status_help = 'A status code to ignore during signature verification (for example, NO_PUBKEY). ' \
'Provide this option multiple times to ignore a list of status codes. ' \
'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).'
if galaxy_type == 'collection':
install_parser.add_argument('-p', '--collections-path', dest='collections_path',
default=self._get_default_collection_path(),
help='The path to the directory containing your collections.')
install_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be installed.')
install_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
install_parser.add_argument('-U', '--upgrade', dest='upgrade', action='store_true', default=False,
help='Upgrade installed collection artifacts. This will also update dependencies unless --no-deps is provided')
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before '
'installing the collection from a Galaxy server. Use in conjunction with a positional '
'collection name (mutually exclusive with --requirements-file).')
install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
install_parser.add_argument('--offline', dest='offline', action='store_true', default=False,
help='Install collection artifacts (tarballs) without contacting any distribution servers. '
'This does not apply to collections in remote Git repositories or URLs to remote tarballs.'
)
else:
install_parser.add_argument('-r', '--role-file', dest='requirements',
help='A file containing a list of roles to be installed.')
r_re = re.compile(r'^(?<!-)-[a-zA-Z]*r[a-zA-Z]*') # -r, -fr
contains_r = bool([a for a in self._raw_args if r_re.match(a)])
role_file_re = re.compile(r'--role-file($|=)') # --role-file foo, --role-file=foo
contains_role_file = bool([a for a in self._raw_args if role_file_re.match(a)])
if self._implicit_role and (contains_r or contains_role_file):
# Any collections in the requirements files will also be installed
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during collection signature verification')
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
install_parser.add_argument('-g', '--keep-scm-meta', dest='keep_scm_meta', action='store_true',
default=False,
help='Use tar instead of the scm archive option when packaging the role.')
def add_build_options(self, parser, parents=None):
build_parser = parser.add_parser('build', parents=parents,
help='Build an Ansible collection artifact that can be published to Ansible '
'Galaxy.')
build_parser.set_defaults(func=self.execute_build)
build_parser.add_argument('args', metavar='collection', nargs='*', default=('.',),
help='Path to the collection(s) directory to build. This should be the directory '
'that contains the galaxy.yml file. The default is the current working '
'directory.')
build_parser.add_argument('--output-path', dest='output_path', default='./',
help='The path in which the collection is built. The default is the current '
'working directory.')
def add_publish_options(self, parser, parents=None):
publish_parser = parser.add_parser('publish', parents=parents,
help='Publish a collection artifact to Ansible Galaxy.')
publish_parser.set_defaults(func=self.execute_publish)
publish_parser.add_argument('args', metavar='collection_path',
help='The path to the collection tarball to publish.')
publish_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import validation results.")
publish_parser.add_argument('--import-timeout', dest='import_timeout', type=int, default=0,
help="The time to wait for the collection import process to finish.")
def post_process_args(self, options):
options = super(GalaxyCLI, self).post_process_args(options)
# ensure we have 'usable' cli option
setattr(options, 'validate_certs', (None if options.ignore_certs is None else not options.ignore_certs))
# the default if validate_certs is None
setattr(options, 'resolved_validate_certs', (options.validate_certs if options.validate_certs is not None else not C.GALAXY_IGNORE_CERTS))
display.verbosity = options.verbosity
return options
def run(self):
super(GalaxyCLI, self).run()
self.galaxy = Galaxy()
def server_config_def(section, key, required, option_type):
config_def = {
'description': 'The %s of the %s Galaxy server' % (key, section),
'ini': [
{
'section': 'galaxy_server.%s' % section,
'key': key,
}
],
'env': [
{'name': 'ANSIBLE_GALAXY_SERVER_%s_%s' % (section.upper(), key.upper())},
],
'required': required,
'type': option_type,
}
if key in SERVER_ADDITIONAL:
config_def.update(SERVER_ADDITIONAL[key])
return config_def
galaxy_options = {}
for optional_key in ['clear_response_cache', 'no_cache', 'timeout']:
if optional_key in context.CLIARGS:
galaxy_options[optional_key] = context.CLIARGS[optional_key]
config_servers = []
# Need to filter out empty strings or non truthy values as an empty server list env var is equal to [''].
server_list = [s for s in C.GALAXY_SERVER_LIST or [] if s]
for server_priority, server_key in enumerate(server_list, start=1):
# Abuse the 'plugin config' by making 'galaxy_server' a type of plugin
# Config definitions are looked up dynamically based on the C.GALAXY_SERVER_LIST entry. We look up the
# section [galaxy_server.<server>] for the values url, username, password, and token.
config_dict = dict((k, server_config_def(server_key, k, req, ensure_type)) for k, req, ensure_type in SERVER_DEF)
defs = AnsibleLoader(yaml_dump(config_dict)).get_single_data()
C.config.initialize_plugin_configuration_definitions('galaxy_server', server_key, defs)
# resolve the config created options above with existing config and user options
server_options = C.config.get_plugin_options('galaxy_server', server_key)
# auth_url is used to create the token, but not directly by GalaxyAPI, so
# it doesn't need to be passed as kwarg to GalaxyApi, same for others we pop here
auth_url = server_options.pop('auth_url')
client_id = server_options.pop('client_id')
token_val = server_options['token'] or NoTokenSentinel
username = server_options['username']
v3 = server_options.pop('v3')
if server_options['validate_certs'] is None:
server_options['validate_certs'] = context.CLIARGS['resolved_validate_certs']
validate_certs = server_options['validate_certs']
if v3:
# This allows a user to explicitly indicate the server uses the /v3 API
# This was added for testing against pulp_ansible and I'm not sure it has
# a practical purpose outside of this use case. As such, this option is not
# documented as of now
server_options['available_api_versions'] = {'v3': '/v3'}
# default case if no auth info is provided.
server_options['token'] = None
if username:
server_options['token'] = BasicAuthToken(username, server_options['password'])
else:
if token_val:
if auth_url:
server_options['token'] = KeycloakToken(access_token=token_val,
auth_url=auth_url,
validate_certs=validate_certs,
client_id=client_id)
else:
# The galaxy v1 / github / django / 'Token'
server_options['token'] = GalaxyToken(token=token_val)
server_options.update(galaxy_options)
config_servers.append(GalaxyAPI(
self.galaxy, server_key,
priority=server_priority,
**server_options
))
cmd_server = context.CLIARGS['api_server']
cmd_token = GalaxyToken(token=context.CLIARGS['api_key'])
validate_certs = context.CLIARGS['resolved_validate_certs']
if cmd_server:
# Cmd args take precedence over the config entry but first check if the arg was a name and use that config
# entry, otherwise create a new API entry for the server specified.
config_server = next((s for s in config_servers if s.name == cmd_server), None)
if config_server:
self.api_servers.append(config_server)
else:
self.api_servers.append(GalaxyAPI(
self.galaxy, 'cmd_arg', cmd_server, token=cmd_token,
priority=len(config_servers) + 1,
validate_certs=validate_certs,
**galaxy_options
))
else:
self.api_servers = config_servers
# Default to C.GALAXY_SERVER if no servers were defined
if len(self.api_servers) == 0:
self.api_servers.append(GalaxyAPI(
self.galaxy, 'default', C.GALAXY_SERVER, token=cmd_token,
priority=0,
validate_certs=validate_certs,
**galaxy_options
))
# checks api versions once a GalaxyRole makes an api call
# self.api can be used to evaluate the best server immediately
self.lazy_role_api = RoleDistributionServer(None, self.api_servers)
return context.CLIARGS['func']()
@property
def api(self):
return self.lazy_role_api.api
def _get_default_collection_path(self):
return C.COLLECTIONS_PATHS[0]
def _parse_requirements_file(self, requirements_file, allow_old_format=True, artifacts_manager=None, validate_signature_options=True):
"""
Parses an Ansible requirement.yml file and returns all the roles and/or collections defined in it. There are 2
requirements file format:
# v1 (roles only)
- src: The source of the role, required if include is not set. Can be Galaxy role name, URL to a SCM repo or tarball.
name: Downloads the role to the specified name, defaults to Galaxy name from Galaxy or name of repo if src is a URL.
scm: If src is a URL, specify the SCM. Only git or hg are supported and defaults to git.
version: The version of the role to download. Can also be tag, commit, or branch name and defaults to master.
include: Path to additional requirements.yml files.
# v2 (roles and collections)
---
roles:
# Same as v1 format just under the roles key
collections:
- namespace.collection
- name: namespace.collection
version: version identifier, multiple identifiers are separated by ','
source: the URL or a predefined source name that relates to C.GALAXY_SERVER_LIST
type: git|file|url|galaxy
:param requirements_file: The path to the requirements file.
:param allow_old_format: Will fail if a v1 requirements file is found and this is set to False.
:param artifacts_manager: Artifacts manager.
:return: a dict containing the roles and collections found in the requirements file.
"""
requirements = {
'roles': [],
'collections': [],
}
b_requirements_file = to_bytes(requirements_file, errors='surrogate_or_strict')
if not os.path.exists(b_requirements_file):
raise AnsibleError("The requirements file '%s' does not exist." % to_native(requirements_file))
display.vvv("Reading requirement file at '%s'" % requirements_file)
with open(b_requirements_file, 'rb') as req_obj:
try:
file_requirements = yaml_load(req_obj)
except YAMLError as err:
raise AnsibleError(
"Failed to parse the requirements yml at '%s' with the following error:\n%s"
% (to_native(requirements_file), to_native(err)))
if file_requirements is None:
raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file))
def parse_role_req(requirement):
if "include" not in requirement:
role = RoleRequirement.role_yaml_parse(requirement)
display.vvv("found role %s in yaml file" % to_text(role))
if "name" not in role and "src" not in role:
raise AnsibleError("Must specify name or src for role")
return [GalaxyRole(self.galaxy, self.lazy_role_api, **role)]
else:
b_include_path = to_bytes(requirement["include"], errors="surrogate_or_strict")
if not os.path.isfile(b_include_path):
raise AnsibleError("Failed to find include requirements file '%s' in '%s'"
% (to_native(b_include_path), to_native(requirements_file)))
with open(b_include_path, 'rb') as f_include:
try:
return [GalaxyRole(self.galaxy, self.lazy_role_api, **r) for r in
(RoleRequirement.role_yaml_parse(i) for i in yaml_load(f_include))]
except Exception as e:
raise AnsibleError("Unable to load data from include requirements file: %s %s"
% (to_native(requirements_file), to_native(e)))
if isinstance(file_requirements, list):
# Older format that contains only roles
if not allow_old_format:
raise AnsibleError("Expecting requirements file to be a dict with the key 'collections' that contains "
"a list of collections to install")
for role_req in file_requirements:
requirements['roles'] += parse_role_req(role_req)
else:
# Newer format with a collections and/or roles key
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
if extra_keys:
raise AnsibleError("Expecting only 'roles' and/or 'collections' as base keys in the requirements "
"file. Found: %s" % (to_native(", ".join(extra_keys))))
for role_req in file_requirements.get('roles') or []:
requirements['roles'] += parse_role_req(role_req)
requirements['collections'] = [
Requirement.from_requirement_dict(
self._init_coll_req_dict(collection_req),
artifacts_manager,
validate_signature_options,
)
for collection_req in file_requirements.get('collections') or []
]
return requirements
def _init_coll_req_dict(self, coll_req):
if not isinstance(coll_req, dict):
# Assume it's a string:
return {'name': coll_req}
if (
'name' not in coll_req or
not coll_req.get('source') or
coll_req.get('type', 'galaxy') != 'galaxy'
):
return coll_req
# Try and match up the requirement source with our list of Galaxy API
# servers defined in the config, otherwise create a server with that
# URL without any auth.
coll_req['source'] = next(
iter(
srvr for srvr in self.api_servers
if coll_req['source'] in {srvr.name, srvr.api_server}
),
GalaxyAPI(
self.galaxy,
'explicit_requirement_{name!s}'.format(
name=coll_req['name'],
),
coll_req['source'],
validate_certs=context.CLIARGS['resolved_validate_certs'],
),
)
return coll_req
@staticmethod
def exit_without_ignore(rc=1):
"""
Exits with the specified return code unless the
option --ignore-errors was specified
"""
if not context.CLIARGS['ignore_errors']:
raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.')
@staticmethod
def _display_role_info(role_info):
text = [u"", u"Role: %s" % to_text(role_info['name'])]
# Get the top-level 'description' first, falling back to galaxy_info['galaxy_info']['description'].
galaxy_info = role_info.get('galaxy_info', {})
description = role_info.get('description', galaxy_info.get('description', ''))
text.append(u"\tdescription: %s" % description)
for k in sorted(role_info.keys()):
if k in GalaxyCLI.SKIP_INFO_KEYS:
continue
if isinstance(role_info[k], dict):
text.append(u"\t%s:" % (k))
for key in sorted(role_info[k].keys()):
if key in GalaxyCLI.SKIP_INFO_KEYS:
continue
text.append(u"\t\t%s: %s" % (key, role_info[k][key]))
else:
text.append(u"\t%s: %s" % (k, role_info[k]))
# make sure we have a trailing newline returned
text.append(u"")
return u'\n'.join(text)
@staticmethod
def _resolve_path(path):
return os.path.abspath(os.path.expanduser(os.path.expandvars(path)))
@staticmethod
def _get_skeleton_galaxy_yml(template_path, inject_data):
with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj:
meta_template = to_text(template_obj.read(), errors='surrogate_or_strict')
galaxy_meta = get_collections_galaxy_meta_info()
required_config = []
optional_config = []
for meta_entry in galaxy_meta:
config_list = required_config if meta_entry.get('required', False) else optional_config
value = inject_data.get(meta_entry['key'], None)
if not value:
meta_type = meta_entry.get('type', 'str')
if meta_type == 'str':
value = ''
elif meta_type == 'list':
value = []
elif meta_type == 'dict':
value = {}
meta_entry['value'] = value
config_list.append(meta_entry)
link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)")
const_pattern = re.compile(r"C\(([^)]+)\)")
def comment_ify(v):
if isinstance(v, list):
v = ". ".join([l.rstrip('.') for l in v])
v = link_pattern.sub(r"\1 <\2>", v)
v = const_pattern.sub(r"'\1'", v)
return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False)
loader = DataLoader()
templar = Templar(loader, variables={'required_config': required_config, 'optional_config': optional_config})
templar.environment.filters['comment_ify'] = comment_ify
meta_value = templar.template(meta_template)
return meta_value
def _require_one_of_collections_requirements(
self, collections, requirements_file,
signatures=None,
artifacts_manager=None,
):
if collections and requirements_file:
raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.")
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
elif requirements_file:
if signatures is not None:
raise AnsibleError(
"The --signatures option and --requirements-file are mutually exclusive. "
"Use the --signatures with positional collection_name args or provide a "
"'signatures' key for requirements in the --requirements-file."
)
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._parse_requirements_file(
requirements_file,
allow_old_format=False,
artifacts_manager=artifacts_manager,
)
else:
requirements = {
'collections': [
Requirement.from_string(coll_input, artifacts_manager, signatures)
for coll_input in collections
],
'roles': [],
}
return requirements
############################
# execute actions
############################
def execute_role(self):
"""
Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init
as listed below.
"""
# To satisfy doc build
pass
def execute_collection(self):
"""
Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as
listed below.
"""
# To satisfy doc build
pass
def execute_build(self):
"""
Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy.
By default, this command builds from the current working directory. You can optionally pass in the
collection input path (where the ``galaxy.yml`` file is).
"""
force = context.CLIARGS['force']
output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path'])
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
elif os.path.isfile(b_output_path):
raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path))
for collection_path in context.CLIARGS['args']:
collection_path = GalaxyCLI._resolve_path(collection_path)
build_collection(
to_text(collection_path, errors='surrogate_or_strict'),
to_text(output_path, errors='surrogate_or_strict'),
force,
)
@with_collection_artifacts_manager
def execute_download(self, artifacts_manager=None):
collections = context.CLIARGS['args']
no_deps = context.CLIARGS['no_deps']
download_path = context.CLIARGS['download_path']
requirements_file = context.CLIARGS['requirements']
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._require_one_of_collections_requirements(
collections, requirements_file,
artifacts_manager=artifacts_manager,
)['collections']
download_path = GalaxyCLI._resolve_path(download_path)
b_download_path = to_bytes(download_path, errors='surrogate_or_strict')
if not os.path.exists(b_download_path):
os.makedirs(b_download_path)
download_collections(
requirements, download_path, self.api_servers, no_deps,
context.CLIARGS['allow_pre_release'],
artifacts_manager=artifacts_manager,
)
return 0
def execute_init(self):
"""
Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format.
Requires a role or collection name. The collection name must be in the format ``<namespace>.<collection>``.
"""
galaxy_type = context.CLIARGS['type']
init_path = context.CLIARGS['init_path']
force = context.CLIARGS['force']
obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)]
obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)]
inject_data = dict(
description='your {0} description'.format(galaxy_type),
ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'),
)
if galaxy_type == 'role':
inject_data.update(dict(
author='your name',
company='your company (optional)',
license='license (GPL-2.0-or-later, MIT, etc)',
role_name=obj_name,
role_type=context.CLIARGS['role_type'],
issue_tracker_url='http://example.com/issue/tracker',
repository_url='http://example.com/repository',
documentation_url='http://docs.example.com',
homepage_url='http://example.com',
min_ansible_version=ansible_version[:3], # x.y
dependencies=[],
))
skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE
obj_path = os.path.join(init_path, obj_name)
elif galaxy_type == 'collection':
namespace, collection_name = obj_name.split('.', 1)
inject_data.update(dict(
namespace=namespace,
collection_name=collection_name,
version='1.0.0',
readme='README.md',
authors=['your name <[email protected]>'],
license=['GPL-2.0-or-later'],
repository='http://example.com/repository',
documentation='http://docs.example.com',
homepage='http://example.com',
issues='http://example.com/issue/tracker',
build_ignore=[],
))
skeleton_ignore_expressions = C.GALAXY_COLLECTION_SKELETON_IGNORE
obj_path = os.path.join(init_path, namespace, collection_name)
b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict')
if os.path.exists(b_obj_path):
if os.path.isfile(obj_path):
raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path))
elif not force:
raise AnsibleError("- the directory %s already exists. "
"You can use --force to re-initialize this directory,\n"
"however it will reset any main.yml files that may have\n"
"been modified there already." % to_native(obj_path))
# delete the contents rather than the collection root in case init was run from the root (--init-path ../../)
for root, dirs, files in os.walk(b_obj_path, topdown=True):
for old_dir in dirs:
path = os.path.join(root, old_dir)
shutil.rmtree(path)
for old_file in files:
path = os.path.join(root, old_file)
os.unlink(path)
if obj_skeleton is not None:
own_skeleton = False
else:
own_skeleton = True
obj_skeleton = self.galaxy.default_role_skeleton_path
skeleton_ignore_expressions = ['^.*/.git_keep$']
obj_skeleton = os.path.expanduser(obj_skeleton)
skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions]
if not os.path.exists(obj_skeleton):
raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format(
to_native(obj_skeleton), galaxy_type)
)
loader = DataLoader()
templar = Templar(loader, variables=inject_data)
# create role directory
if not os.path.exists(b_obj_path):
os.makedirs(b_obj_path)
for root, dirs, files in os.walk(obj_skeleton, topdown=True):
rel_root = os.path.relpath(root, obj_skeleton)
rel_dirs = rel_root.split(os.sep)
rel_root_dir = rel_dirs[0]
if galaxy_type == 'collection':
# A collection can contain templates in playbooks/*/templates and roles/*/templates
in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs
else:
in_templates_dir = rel_root_dir == 'templates'
# Filter out ignored directory names
# Use [:] to mutate the list os.walk uses
dirs[:] = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)]
for f in files:
filename, ext = os.path.splitext(f)
if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re):
continue
if galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2':
# Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options
# dynamically which requires special options to be set.
# The templated data's keys must match the key name but the inject data contains collection_name
# instead of name. We just make a copy and change the key back to name for this file.
template_data = inject_data.copy()
template_data['name'] = template_data.pop('collection_name')
meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data)
b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict')
with open(b_dest_file, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict'))
elif ext == ".j2" and not in_templates_dir:
src_template = os.path.join(root, f)
dest_file = os.path.join(obj_path, rel_root, filename)
template_data = to_text(loader._get_file_contents(src_template)[0], errors='surrogate_or_strict')
b_rendered = to_bytes(templar.template(template_data), errors='surrogate_or_strict')
with open(dest_file, 'wb') as df:
df.write(b_rendered)
else:
f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton)
shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path), follow_symlinks=False)
for d in dirs:
b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict')
if os.path.exists(b_dir_path):
continue
b_src_dir = to_bytes(os.path.join(root, d), errors='surrogate_or_strict')
if os.path.islink(b_src_dir):
shutil.copyfile(b_src_dir, b_dir_path, follow_symlinks=False)
else:
os.makedirs(b_dir_path)
display.display("- %s %s was created successfully" % (galaxy_type.title(), obj_name))
def execute_info(self):
"""
Prints out detailed information about an installed role as well as info available from the Galaxy API.
"""
roles_path = context.CLIARGS['roles_path']
data = ''
for role in context.CLIARGS['args']:
role_info = {'path': roles_path}
gr = GalaxyRole(self.galaxy, self.lazy_role_api, role)
install_info = gr.install_info
if install_info:
if 'version' in install_info:
install_info['installed_version'] = install_info['version']
del install_info['version']
role_info.update(install_info)
if not context.CLIARGS['offline']:
remote_data = None
try:
remote_data = self.api.lookup_role_by_name(role, False)
except GalaxyError as e:
if e.http_code == 400 and 'Bad Request' in e.message:
# Role does not exist in Ansible Galaxy
data = u"- the role %s was not found" % role
break
raise AnsibleError("Unable to find info about '%s': %s" % (role, e))
if remote_data:
role_info.update(remote_data)
elif context.CLIARGS['offline'] and not gr._exists:
data = u"- the role %s was not found" % role
break
if gr.metadata:
role_info.update(gr.metadata)
req = RoleRequirement()
role_spec = req.role_yaml_parse({'role': role})
if role_spec:
role_info.update(role_spec)
data += self._display_role_info(role_info)
self.pager(data)
@with_collection_artifacts_manager
def execute_verify(self, artifacts_manager=None):
collections = context.CLIARGS['args']
search_paths = AnsibleCollectionConfig.collection_paths
ignore_errors = context.CLIARGS['ignore_errors']
local_verify_only = context.CLIARGS['offline']
requirements_file = context.CLIARGS['requirements']
signatures = context.CLIARGS['signatures']
if signatures is not None:
signatures = list(signatures)
requirements = self._require_one_of_collections_requirements(
collections, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)['collections']
resolved_paths = [validate_collection_path(GalaxyCLI._resolve_path(path)) for path in search_paths]
results = verify_collections(
requirements, resolved_paths,
self.api_servers, ignore_errors,
local_verify_only=local_verify_only,
artifacts_manager=artifacts_manager,
)
if any(result for result in results if not result.success):
return 1
return 0
@with_collection_artifacts_manager
def execute_install(self, artifacts_manager=None):
"""
Install one or more roles(``ansible-galaxy role install``), or one or more collections(``ansible-galaxy collection install``).
You can pass in a list (roles or collections) or use the file
option listed below (these are mutually exclusive). If you pass in a list, it
can be a name (which will be downloaded via the galaxy API and github), or it can be a local tar archive file.
:param artifacts_manager: Artifacts manager.
"""
install_items = context.CLIARGS['args']
requirements_file = context.CLIARGS['requirements']
collection_path = None
signatures = context.CLIARGS.get('signatures')
if signatures is not None:
signatures = list(signatures)
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
two_type_warning = "The requirements file '%s' contains {0}s which will be ignored. To install these {0}s " \
"run 'ansible-galaxy {0} install -r' or to install both at the same time run " \
"'ansible-galaxy install -r' without a custom install path." % to_text(requirements_file)
# TODO: Would be nice to share the same behaviour with args and -r in collections and roles.
collection_requirements = []
role_requirements = []
if context.CLIARGS['type'] == 'collection':
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['collections_path'])
requirements = self._require_one_of_collections_requirements(
install_items, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)
collection_requirements = requirements['collections']
if requirements['roles']:
display.vvv(two_type_warning.format('role'))
else:
if not install_items and requirements_file is None:
raise AnsibleOptionsError("- you must specify a user/role name or a roles file")
if requirements_file:
if not (requirements_file.endswith('.yaml') or requirements_file.endswith('.yml')):
raise AnsibleError("Invalid role requirements file, it must end with a .yml or .yaml extension")
galaxy_args = self._raw_args
will_install_collections = self._implicit_role and '-p' not in galaxy_args and '--roles-path' not in galaxy_args
requirements = self._parse_requirements_file(
requirements_file,
artifacts_manager=artifacts_manager,
validate_signature_options=will_install_collections,
)
role_requirements = requirements['roles']
# We can only install collections and roles at the same time if the type wasn't specified and the -p
# argument was not used. If collections are present in the requirements then at least display a msg.
if requirements['collections'] and (not self._implicit_role or '-p' in galaxy_args or
'--roles-path' in galaxy_args):
# We only want to display a warning if 'ansible-galaxy install -r ... -p ...'. Other cases the user
# was explicit about the type and shouldn't care that collections were skipped.
display_func = display.warning if self._implicit_role else display.vvv
display_func(two_type_warning.format('collection'))
else:
collection_path = self._get_default_collection_path()
collection_requirements = requirements['collections']
else:
# roles were specified directly, so we'll just go out grab them
# (and their dependencies, unless the user doesn't want us to).
for rname in context.CLIARGS['args']:
role = RoleRequirement.role_yaml_parse(rname.strip())
role_requirements.append(GalaxyRole(self.galaxy, self.lazy_role_api, **role))
if not role_requirements and not collection_requirements:
display.display("Skipping install, no requirements found")
return
if role_requirements:
display.display("Starting galaxy role install process")
self._execute_install_role(role_requirements)
if collection_requirements:
display.display("Starting galaxy collection install process")
# Collections can technically be installed even when ansible-galaxy is in role mode so we need to pass in
# the install path as context.CLIARGS['collections_path'] won't be set (default is calculated above).
self._execute_install_collection(
collection_requirements, collection_path,
artifacts_manager=artifacts_manager,
)
def _execute_install_collection(
self, requirements, path, artifacts_manager,
):
force = context.CLIARGS['force']
ignore_errors = context.CLIARGS['ignore_errors']
no_deps = context.CLIARGS['no_deps']
force_with_deps = context.CLIARGS['force_with_deps']
try:
disable_gpg_verify = context.CLIARGS['disable_gpg_verify']
except KeyError:
if self._implicit_role:
raise AnsibleError(
'Unable to properly parse command line arguments. Please use "ansible-galaxy collection install" '
'instead of "ansible-galaxy install".'
)
raise
# If `ansible-galaxy install` is used, collection-only options aren't available to the user and won't be in context.CLIARGS
allow_pre_release = context.CLIARGS.get('allow_pre_release', False)
upgrade = context.CLIARGS.get('upgrade', False)
collections_path = C.COLLECTIONS_PATHS
if len([p for p in collections_path if p.startswith(path)]) == 0:
display.warning("The specified collections path '%s' is not part of the configured Ansible "
"collections paths '%s'. The installed collection will not be picked up in an Ansible "
"run, unless within a playbook-adjacent collections directory." % (to_text(path), to_text(":".join(collections_path))))
output_path = validate_collection_path(path)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
install_collections(
requirements, output_path, self.api_servers, ignore_errors,
no_deps, force, force_with_deps, upgrade,
allow_pre_release=allow_pre_release,
artifacts_manager=artifacts_manager,
disable_gpg_verify=disable_gpg_verify,
offline=context.CLIARGS.get('offline', False),
)
return 0
def _execute_install_role(self, requirements):
role_file = context.CLIARGS['requirements']
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
force = context.CLIARGS['force'] or force_deps
for role in requirements:
# only process roles in roles files when names matches if given
if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']:
display.vvv('Skipping role %s' % role.name)
continue
display.vvv('Processing role %s ' % role.name)
# query the galaxy API for the role data
if role.install_info is not None:
if role.install_info['version'] != role.version or force:
if force:
display.display('- changing role %s from %s to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
role.remove()
else:
display.warning('- %s (%s) is already installed - use --force to change version to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
continue
else:
if not force:
display.display('- %s is already installed, skipping.' % str(role))
continue
try:
installed = role.install()
except AnsibleError as e:
display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e)))
self.exit_without_ignore()
continue
# install dependencies, if we want them
if not no_deps and installed:
if not role.metadata:
# NOTE: the meta file is also required for installing the role, not just dependencies
display.warning("Meta file %s is empty. Skipping dependencies." % role.path)
else:
role_dependencies = role.metadata_dependencies + role.requirements
for dep in role_dependencies:
display.debug('Installing dep %s' % dep)
dep_req = RoleRequirement()
dep_info = dep_req.role_yaml_parse(dep)
dep_role = GalaxyRole(self.galaxy, self.lazy_role_api, **dep_info)
if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None:
# we know we can skip this, as it's not going to
# be found on galaxy.ansible.com
continue
if dep_role.install_info is None:
if dep_role not in requirements:
display.display('- adding dependency: %s' % to_text(dep_role))
requirements.append(dep_role)
else:
display.display('- dependency %s already pending installation.' % dep_role.name)
else:
if dep_role.install_info['version'] != dep_role.version:
if force_deps:
display.display('- changing dependent role %s from %s to %s' %
(dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified"))
dep_role.remove()
requirements.append(dep_role)
else:
display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' %
(to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version']))
else:
if force_deps:
requirements.append(dep_role)
else:
display.display('- dependency %s is already installed, skipping.' % dep_role.name)
if not installed:
display.warning("- %s was NOT installed successfully." % role.name)
self.exit_without_ignore()
return 0
def execute_remove(self):
"""
Removes the list of roles passed as arguments from the local system.
"""
if not context.CLIARGS['args']:
raise AnsibleOptionsError('- you must specify at least one role to remove.')
for role_name in context.CLIARGS['args']:
role = GalaxyRole(self.galaxy, self.api, role_name)
try:
if role.remove():
display.display('- successfully removed %s' % role_name)
else:
display.display('- %s is not installed, skipping.' % role_name)
except Exception as e:
raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e)))
return 0
def execute_list(self):
"""
List installed collections or roles
"""
if context.CLIARGS['type'] == 'role':
self.execute_list_role()
elif context.CLIARGS['type'] == 'collection':
self.execute_list_collection()
def execute_list_role(self):
"""
List all roles installed on the local system or a specific role
"""
path_found = False
role_found = False
warnings = []
roles_search_paths = context.CLIARGS['roles_path']
role_name = context.CLIARGS['role']
for path in roles_search_paths:
role_path = GalaxyCLI._resolve_path(path)
if os.path.isdir(path):
path_found = True
else:
warnings.append("- the configured path {0} does not exist.".format(path))
continue
if role_name:
# show the requested role, if it exists
gr = GalaxyRole(self.galaxy, self.lazy_role_api, role_name, path=os.path.join(role_path, role_name))
if os.path.isdir(gr.path):
role_found = True
display.display('# %s' % os.path.dirname(gr.path))
_display_role(gr)
break
warnings.append("- the role %s was not found" % role_name)
else:
if not os.path.exists(role_path):
warnings.append("- the configured path %s does not exist." % role_path)
continue
if not os.path.isdir(role_path):
warnings.append("- the configured path %s, exists, but it is not a directory." % role_path)
continue
display.display('# %s' % role_path)
path_files = os.listdir(role_path)
for path_file in path_files:
gr = GalaxyRole(self.galaxy, self.lazy_role_api, path_file, path=path)
if gr.metadata:
_display_role(gr)
# Do not warn if the role was found in any of the search paths
if role_found and role_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError(
"- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type'])
)
return 0
@with_collection_artifacts_manager
def execute_list_collection(self, artifacts_manager=None):
"""
List all collections installed on the local system
:param artifacts_manager: Artifacts manager.
"""
if artifacts_manager is not None:
artifacts_manager.require_build_metadata = False
output_format = context.CLIARGS['output_format']
collection_name = context.CLIARGS['collection']
default_collections_path = set(C.COLLECTIONS_PATHS)
collections_search_paths = (
set(context.CLIARGS['collections_path'] or []) | default_collections_path | set(AnsibleCollectionConfig.collection_paths)
)
collections_in_paths = {}
warnings = []
path_found = False
collection_found = False
namespace_filter = None
collection_filter = None
if collection_name:
# list a specific collection
validate_collection_name(collection_name)
namespace_filter, collection_filter = collection_name.split('.')
collections = list(find_existing_collections(
list(collections_search_paths),
artifacts_manager,
namespace_filter=namespace_filter,
collection_filter=collection_filter,
dedupe=False
))
seen = set()
fqcn_width, version_width = _get_collection_widths(collections)
for collection in sorted(collections, key=lambda c: c.src):
collection_found = True
collection_path = pathlib.Path(to_text(collection.src)).parent.parent.as_posix()
if output_format in {'yaml', 'json'}:
collections_in_paths[collection_path] = {
collection.fqcn: {'version': collection.ver} for collection in collections
}
else:
if collection_path not in seen:
_display_header(
collection_path,
'Collection',
'Version',
fqcn_width,
version_width
)
seen.add(collection_path)
_display_collection(collection, fqcn_width, version_width)
path_found = False
for path in collections_search_paths:
if not os.path.exists(path):
if path in default_collections_path:
# don't warn for missing default paths
continue
warnings.append("- the configured path {0} does not exist.".format(path))
elif os.path.exists(path) and not os.path.isdir(path):
warnings.append("- the configured path {0}, exists, but it is not a directory.".format(path))
else:
path_found = True
# Do not warn if the specific collection was found in any of the search paths
if collection_found and collection_name:
warnings = []
for w in warnings:
display.warning(w)
if not collections and not path_found:
raise AnsibleOptionsError(
"- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type'])
)
if output_format == 'json':
display.display(json.dumps(collections_in_paths))
elif output_format == 'yaml':
display.display(yaml_dump(collections_in_paths))
return 0
def execute_publish(self):
"""
Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish.
"""
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args'])
wait = context.CLIARGS['wait']
timeout = context.CLIARGS['import_timeout']
publish_collection(collection_path, self.api, wait, timeout)
def execute_search(self):
''' Searches for roles on the Ansible Galaxy server. '''
page_size = 1000
search = None
if context.CLIARGS['args']:
search = '+'.join(context.CLIARGS['args'])
if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']:
raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.")
response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'],
tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size)
if response['count'] == 0:
display.warning("No roles match your search.")
return 0
data = [u'']
if response['count'] > page_size:
data.append(u"Found %d roles matching your search. Showing first %s." % (response['count'], page_size))
else:
data.append(u"Found %d roles matching your search:" % response['count'])
max_len = []
for role in response['results']:
max_len.append(len(role['username'] + '.' + role['name']))
name_len = max(max_len)
format_str = u" %%-%ds %%s" % name_len
data.append(u'')
data.append(format_str % (u"Name", u"Description"))
data.append(format_str % (u"----", u"-----------"))
for role in response['results']:
data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description']))
data = u'\n'.join(data)
self.pager(data)
return 0
def execute_import(self):
""" used to import a role into Ansible Galaxy """
colors = {
'INFO': 'normal',
'WARNING': C.COLOR_WARN,
'ERROR': C.COLOR_ERROR,
'SUCCESS': C.COLOR_OK,
'FAILED': C.COLOR_ERROR,
}
github_user = to_text(context.CLIARGS['github_user'], errors='surrogate_or_strict')
github_repo = to_text(context.CLIARGS['github_repo'], errors='surrogate_or_strict')
if context.CLIARGS['check_status']:
task = self.api.get_import_task(github_user=github_user, github_repo=github_repo)
else:
# Submit an import request
task = self.api.create_import_task(github_user, github_repo,
reference=context.CLIARGS['reference'],
role_name=context.CLIARGS['role_name'])
if len(task) > 1:
# found multiple roles associated with github_user/github_repo
display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user, github_repo),
color='yellow')
display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED)
for t in task:
display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED)
display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo),
color=C.COLOR_CHANGED)
return 0
# found a single role as expected
display.display("Successfully submitted import request %d" % task[0]['id'])
if not context.CLIARGS['wait']:
display.display("Role name: %s" % task[0]['summary_fields']['role']['name'])
display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo']))
if context.CLIARGS['check_status'] or context.CLIARGS['wait']:
# Get the status of the import
msg_list = []
finished = False
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
def execute_setup(self):
""" Setup an integration from Github or Travis for Ansible Galaxy roles"""
if context.CLIARGS['setup_list']:
# List existing integration secrets
secrets = self.api.list_secrets()
if len(secrets) == 0:
# None found
display.display("No integrations found.")
return 0
display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK)
display.display("---------- ---------- ----------", color=C.COLOR_OK)
for secret in secrets:
display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'],
secret['github_repo']), color=C.COLOR_OK)
return 0
if context.CLIARGS['remove_id']:
# Remove a secret
self.api.remove_secret(context.CLIARGS['remove_id'])
display.display("Secret removed. Integrations using this secret will not longer work.", color=C.COLOR_OK)
return 0
source = context.CLIARGS['source']
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
secret = context.CLIARGS['secret']
resp = self.api.add_secret(source, github_user, github_repo, secret)
display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))
return 0
def execute_delete(self):
""" Delete a role from Ansible Galaxy. """
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
resp = self.api.delete_role(github_user, github_repo)
if len(resp['deleted_roles']) > 1:
display.display("Deleted the following roles:")
display.display("ID User Name")
display.display("------ --------------- ----------")
for role in resp['deleted_roles']:
display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name))
display.display(resp['status'])
return 0
def main(args=None):
GalaxyCLI.cli_executor(args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,174 |
Use of `retry_with_delays_and_condition` and `generate_jittered_backoff` may lead to no retries
|
### Summary
The way in which we are using `retry_with_delays_and_condition` along with `generate_jittered_backoff` may prevent subsequent failures from retrying if the generator was consumed by previous calls to `_call_galaxy`.
It appears as though the `backoff_iterator` in this case is global for all calls, and not refreshed per call to `_call_galaxy`:
https://github.com/ansible/ansible/blob/c564c6e21e4538b475df2ae4b3f66b73decff160/lib/ansible/galaxy/api.py#L328-L332
Currently every call to `_call_galaxy` consumes at least 1 item in the `backoff_iterator`, even when a retry isn't attempted, so after 6 calls no retries would ever be performed.
This may require making `backoff_iterator` accept a callable, or something else that can regenerate the iterator on each call, instead of acting as a single piece of global iterator state.
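As a rough illustration of the failure mode — a toy sketch with made-up names, not the actual `retry_with_delays_and_condition` implementation:
```python
# Toy demonstration only: a generator that is created once and shared across
# calls gets partially consumed on every call and eventually runs dry.
def jittered_backoff(retries=6, delay_base=2, delay_threshold=40):
    # simplified stand-in for generate_jittered_backoff, without the jitter
    for retry in range(retries):
        yield min(delay_threshold, delay_base * 2 ** retry)

shared_delays = jittered_backoff()  # built once, like the decorator argument

def call_with_retry(request):
    if request():                    # the first attempt always happens
        return True
    for delay in shared_delays:      # retries draw from the *shared* generator
        # a real implementation would sleep(delay) before the next attempt
        if request():
            return True
    return False                     # generator exhausted: no retries possible

# Once earlier calls have drained shared_delays, later failing calls fall
# straight through the loop and are never retried.
```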
### Issue Type
Bug Report
### Component Name
lib/ansible/galaxy/api.py
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
N/A
### Actual Results
```console
N/A
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80174
|
https://github.com/ansible/ansible/pull/80180
|
cba395243454b0a959edea20425618fe7b9be775
|
2ae013667ef226635fe521be886efd1bf58cd46f
| 2023-03-08T20:33:11Z |
python
| 2023-03-22T16:04:56Z |
lib/ansible/galaxy/api.py
|
# (C) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import collections
import datetime
import functools
import hashlib
import json
import os
import stat
import tarfile
import time
import threading
from urllib.error import HTTPError
from urllib.parse import quote as urlquote, urlencode, urlparse, parse_qs, urljoin
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.galaxy.user_agent import user_agent
from ansible.module_utils.api import retry_with_delays_and_condition
from ansible.module_utils.api import generate_jittered_backoff
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.urls import open_url, prepare_multipart
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash_s
from ansible.utils.path import makedirs_safe
display = Display()
_CACHE_LOCK = threading.Lock()
COLLECTION_PAGE_SIZE = 100
RETRY_HTTP_ERROR_CODES = [ # TODO: Allow user-configuration
429, # Too Many Requests
520, # Galaxy rate limit error code (Cloudflare unknown error)
]
def cache_lock(func):
def wrapped(*args, **kwargs):
with _CACHE_LOCK:
return func(*args, **kwargs)
return wrapped
def is_rate_limit_exception(exception):
# Note: cloud.redhat.com masks rate limit errors with 403 (Forbidden) error codes.
# Since 403 could reflect the actual problem (such as an expired token), we should
# not retry by default.
return isinstance(exception, GalaxyError) and exception.http_code in RETRY_HTTP_ERROR_CODES
def g_connect(versions):
"""
Wrapper to lazily initialize connection info to Galaxy and verify the API versions required are available on the
endpoint.
:param versions: A list of API versions that the function supports.
"""
def decorator(method):
def wrapped(self, *args, **kwargs):
if not self._available_api_versions:
display.vvvv("Initial connection to galaxy_server: %s" % self.api_server)
# Determine the type of Galaxy server we are talking to. First try it unauthenticated then with Bearer
# auth for Automation Hub.
n_url = self.api_server
error_context_msg = 'Error when finding available api versions from %s (%s)' % (self.name, n_url)
if self.api_server == 'https://galaxy.ansible.com' or self.api_server == 'https://galaxy.ansible.com/':
n_url = 'https://galaxy.ansible.com/api/'
try:
data = self._call_galaxy(n_url, method='GET', error_context_msg=error_context_msg, cache=True)
except (AnsibleError, GalaxyError, ValueError, KeyError) as err:
# Either the URL doesn't exist or some other error occurred. Or the URL exists, but isn't a galaxy API
# root (not JSON, no 'available_versions') so try appending '/api/'
if n_url.endswith('/api') or n_url.endswith('/api/'):
raise
# Let exceptions here bubble up but raise the original if this returns a 404 (/api/ wasn't found).
n_url = _urljoin(n_url, '/api/')
try:
data = self._call_galaxy(n_url, method='GET', error_context_msg=error_context_msg, cache=True)
except GalaxyError as new_err:
if new_err.http_code == 404:
raise err
raise
if 'available_versions' not in data:
raise AnsibleError("Tried to find galaxy API root at %s but no 'available_versions' are available "
"on %s" % (n_url, self.api_server))
# Update api_server to point to the "real" API root, which in this case could have been the configured
# url + '/api/' appended.
self.api_server = n_url
# Default to only supporting v1, if only v1 is returned we also assume that v2 is available even though
# it isn't returned in the available_versions dict.
available_versions = data.get('available_versions', {u'v1': u'v1/'})
if list(available_versions.keys()) == [u'v1']:
available_versions[u'v2'] = u'v2/'
self._available_api_versions = available_versions
display.vvvv("Found API version '%s' with Galaxy server %s (%s)"
% (', '.join(available_versions.keys()), self.name, self.api_server))
# Verify that the API versions the function works with are available on the server specified.
available_versions = set(self._available_api_versions.keys())
common_versions = set(versions).intersection(available_versions)
if not common_versions:
raise AnsibleError("Galaxy action %s requires API versions '%s' but only '%s' are available on %s %s"
% (method.__name__, ", ".join(versions), ", ".join(available_versions),
self.name, self.api_server))
return method(self, *args, **kwargs)
return wrapped
return decorator
def get_cache_id(url):
""" Gets the cache ID for the URL specified. """
url_info = urlparse(url)
port = None
try:
port = url_info.port
except ValueError:
pass # While the URL is probably invalid, let the caller figure that out when using it
# Cannot use netloc because it could contain credentials if the server specified had them in there.
return '%s:%s' % (url_info.hostname, port or '')
@cache_lock
def _load_cache(b_cache_path):
""" Loads the cache file requested if possible. The file must not be world writable. """
cache_version = 1
if not os.path.isfile(b_cache_path):
display.vvvv("Creating Galaxy API response cache file at '%s'" % to_text(b_cache_path))
with open(b_cache_path, 'w'):
os.chmod(b_cache_path, 0o600)
cache_mode = os.stat(b_cache_path).st_mode
if cache_mode & stat.S_IWOTH:
display.warning("Galaxy cache has world writable access (%s), ignoring it as a cache source."
% to_text(b_cache_path))
return
with open(b_cache_path, mode='rb') as fd:
json_val = to_text(fd.read(), errors='surrogate_or_strict')
try:
cache = json.loads(json_val)
except ValueError:
cache = None
if not isinstance(cache, dict) or cache.get('version', None) != cache_version:
display.vvvv("Galaxy cache file at '%s' has an invalid version, clearing" % to_text(b_cache_path))
cache = {'version': cache_version}
# Set the cache after we've cleared the existing entries
with open(b_cache_path, mode='wb') as fd:
fd.write(to_bytes(json.dumps(cache), errors='surrogate_or_strict'))
return cache
def _urljoin(*args):
return '/'.join(to_native(a, errors='surrogate_or_strict').strip('/') for a in args + ('',) if a)
class GalaxyError(AnsibleError):
""" Error for bad Galaxy server responses. """
def __init__(self, http_error, message):
super(GalaxyError, self).__init__(message)
self.http_code = http_error.code
self.url = http_error.geturl()
try:
http_msg = to_text(http_error.read())
err_info = json.loads(http_msg)
except (AttributeError, ValueError):
err_info = {}
url_split = self.url.split('/')
if 'v2' in url_split:
galaxy_msg = err_info.get('message', http_error.reason)
code = err_info.get('code', 'Unknown')
full_error_msg = u"%s (HTTP Code: %d, Message: %s Code: %s)" % (message, self.http_code, galaxy_msg, code)
elif 'v3' in url_split:
errors = err_info.get('errors', [])
if not errors:
errors = [{}] # Defaults are set below, we just need to make sure 1 error is present.
message_lines = []
for error in errors:
error_msg = error.get('detail') or error.get('title') or http_error.reason
error_code = error.get('code') or 'Unknown'
message_line = u"(HTTP Code: %d, Message: %s Code: %s)" % (self.http_code, error_msg, error_code)
message_lines.append(message_line)
full_error_msg = "%s %s" % (message, ', '.join(message_lines))
else:
# v1 and unknown API endpoints
galaxy_msg = err_info.get('default', http_error.reason)
full_error_msg = u"%s (HTTP Code: %d, Message: %s)" % (message, self.http_code, galaxy_msg)
self.message = to_native(full_error_msg)
# Keep the raw string results for the date. It's too complex to parse as a datetime object and the various APIs return
# them in different formats.
CollectionMetadata = collections.namedtuple('CollectionMetadata', ['namespace', 'name', 'created_str', 'modified_str'])
class CollectionVersionMetadata:
def __init__(self, namespace, name, version, download_url, artifact_sha256, dependencies, signatures_url, signatures):
"""
Contains common information about a collection on a Galaxy server to smooth through API differences for
Collection and define a standard meta info for a collection.
:param namespace: The namespace name.
:param name: The collection name.
:param version: The version that the metadata refers to.
:param download_url: The URL to download the collection.
:param artifact_sha256: The SHA256 of the collection artifact for later verification.
:param dependencies: A dict of dependencies of the collection.
:param signatures_url: The URL to the specific version of the collection.
:param signatures: The list of signatures found at the signatures_url.
"""
self.namespace = namespace
self.name = name
self.version = version
self.download_url = download_url
self.artifact_sha256 = artifact_sha256
self.dependencies = dependencies
self.signatures_url = signatures_url
self.signatures = signatures
@functools.total_ordering
class GalaxyAPI:
""" This class is meant to be used as a API client for an Ansible Galaxy server """
def __init__(
self, galaxy, name, url,
username=None, password=None, token=None, validate_certs=True,
available_api_versions=None,
clear_response_cache=False, no_cache=True,
priority=float('inf'),
timeout=60,
):
self.galaxy = galaxy
self.name = name
self.username = username
self.password = password
self.token = token
self.api_server = url
self.validate_certs = validate_certs
self.timeout = timeout
self._available_api_versions = available_api_versions or {}
self._priority = priority
self._server_timeout = timeout
b_cache_dir = to_bytes(C.GALAXY_CACHE_DIR, errors='surrogate_or_strict')
makedirs_safe(b_cache_dir, mode=0o700)
self._b_cache_path = os.path.join(b_cache_dir, b'api.json')
if clear_response_cache:
with _CACHE_LOCK:
if os.path.exists(self._b_cache_path):
display.vvvv("Clearing cache file (%s)" % to_text(self._b_cache_path))
os.remove(self._b_cache_path)
self._cache = None
if not no_cache:
self._cache = _load_cache(self._b_cache_path)
display.debug('Validate TLS certificates for %s: %s' % (self.api_server, self.validate_certs))
def __str__(self):
# type: (GalaxyAPI) -> str
"""Render GalaxyAPI as a native string representation."""
return to_native(self.name)
def __unicode__(self):
# type: (GalaxyAPI) -> str
"""Render GalaxyAPI as a unicode/text string representation."""
return to_text(self.name)
def __repr__(self):
# type: (GalaxyAPI) -> str
"""Render GalaxyAPI as an inspectable string representation."""
return (
'<{instance!s} "{name!s}" @ {url!s} with priority {priority!s}>'.
format(
instance=self, name=self.name,
priority=self._priority, url=self.api_server,
)
)
def __lt__(self, other_galaxy_api):
# type: (GalaxyAPI, GalaxyAPI) -> bool
"""Return whether the instance priority is higher than other."""
if not isinstance(other_galaxy_api, self.__class__):
return NotImplemented
return (
self._priority > other_galaxy_api._priority or
self.name < other_galaxy_api.name
)
@property # type: ignore[misc] # https://github.com/python/mypy/issues/1362
@g_connect(['v1', 'v2', 'v3'])
def available_api_versions(self):
# Calling g_connect will populate self._available_api_versions
return self._available_api_versions
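# NOTE: generate_jittered_backoff() below is evaluated once, when this class body is
# defined, so all calls to _call_galaxy share (and gradually consume) the same
# backoff iterator rather than receiving a fresh one per call.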
@retry_with_delays_and_condition(
backoff_iterator=generate_jittered_backoff(retries=6, delay_base=2, delay_threshold=40),
should_retry_error=is_rate_limit_exception
)
def _call_galaxy(self, url, args=None, headers=None, method=None, auth_required=False, error_context_msg=None,
cache=False, cache_key=None):
url_info = urlparse(url)
cache_id = get_cache_id(url)
if not cache_key:
cache_key = url_info.path
query = parse_qs(url_info.query)
if cache and self._cache:
server_cache = self._cache.setdefault(cache_id, {})
iso_datetime_format = '%Y-%m-%dT%H:%M:%SZ'
valid = False
if cache_key in server_cache:
expires = datetime.datetime.strptime(server_cache[cache_key]['expires'], iso_datetime_format)
valid = datetime.datetime.utcnow() < expires
is_paginated_url = 'page' in query or 'offset' in query
if valid and not is_paginated_url:
# Got a hit on the cache and we aren't getting a paginated response
path_cache = server_cache[cache_key]
if path_cache.get('paginated'):
if '/v3/' in cache_key:
res = {'links': {'next': None}}
else:
res = {'next': None}
# Technically some v3 paginated APIs return in 'data' but the caller checks the keys for this so
# always returning the cache under results is fine.
res['results'] = []
for result in path_cache['results']:
res['results'].append(result)
else:
res = path_cache['results']
return res
elif not is_paginated_url:
# The cache entry had expired or does not exist, start a new blank entry to be filled later.
expires = datetime.datetime.utcnow()
expires += datetime.timedelta(days=1)
server_cache[cache_key] = {
'expires': expires.strftime(iso_datetime_format),
'paginated': False,
}
headers = headers or {}
self._add_auth_token(headers, url, required=auth_required)
try:
display.vvvv("Calling Galaxy at %s" % url)
resp = open_url(to_native(url), data=args, validate_certs=self.validate_certs, headers=headers,
method=method, timeout=self._server_timeout, http_agent=user_agent(), follow_redirects='safe')
except HTTPError as e:
raise GalaxyError(e, error_context_msg)
except Exception as e:
raise AnsibleError("Unknown error when attempting to call Galaxy at '%s': %s" % (url, to_native(e)))
resp_data = to_text(resp.read(), errors='surrogate_or_strict')
try:
data = json.loads(resp_data)
except ValueError:
raise AnsibleError("Failed to parse Galaxy response from '%s' as JSON:\n%s"
% (resp.url, to_native(resp_data)))
if cache and self._cache:
path_cache = self._cache[cache_id][cache_key]
# v3 can return data or results for paginated results. Scan the result so we can determine what to cache.
paginated_key = None
for key in ['data', 'results']:
if key in data:
paginated_key = key
break
if paginated_key:
path_cache['paginated'] = True
results = path_cache.setdefault('results', [])
for result in data[paginated_key]:
results.append(result)
else:
path_cache['results'] = data
return data
def _add_auth_token(self, headers, url, token_type=None, required=False):
# Don't add the auth token if one is already present
if 'Authorization' in headers:
return
if not self.token and required:
raise AnsibleError("No access token or username set. A token can be set with --api-key "
"or at {0}.".format(to_native(C.GALAXY_TOKEN_PATH)))
if self.token:
headers.update(self.token.headers())
@cache_lock
def _set_cache(self):
with open(self._b_cache_path, mode='wb') as fd:
fd.write(to_bytes(json.dumps(self._cache), errors='surrogate_or_strict'))
@g_connect(['v1'])
def authenticate(self, github_token):
"""
Retrieve an authentication token
"""
url = _urljoin(self.api_server, self.available_api_versions['v1'], "tokens") + '/'
args = urlencode({"github_token": github_token})
try:
resp = open_url(url, data=args, validate_certs=self.validate_certs, method="POST", http_agent=user_agent(), timeout=self._server_timeout)
except HTTPError as e:
raise GalaxyError(e, 'Attempting to authenticate to galaxy')
except Exception as e:
raise AnsibleError('Unable to authenticate to galaxy: %s' % to_native(e), orig_exc=e)
data = json.loads(to_text(resp.read(), errors='surrogate_or_strict'))
return data
@g_connect(['v1'])
def create_import_task(self, github_user, github_repo, reference=None, role_name=None):
"""
Post an import request
"""
url = _urljoin(self.api_server, self.available_api_versions['v1'], "imports") + '/'
args = {
"github_user": github_user,
"github_repo": github_repo,
"github_reference": reference if reference else ""
}
if role_name:
args['alternate_role_name'] = role_name
elif github_repo.startswith('ansible-role'):
args['alternate_role_name'] = github_repo[len('ansible-role') + 1:]
data = self._call_galaxy(url, args=urlencode(args), method="POST")
if data.get('results', None):
return data['results']
return data
@g_connect(['v1'])
def get_import_task(self, task_id=None, github_user=None, github_repo=None):
"""
Check the status of an import task.
"""
url = _urljoin(self.api_server, self.available_api_versions['v1'], "imports")
if task_id is not None:
url = "%s?id=%d" % (url, task_id)
elif github_user is not None and github_repo is not None:
url = "%s?github_user=%s&github_repo=%s" % (url, github_user, github_repo)
else:
raise AnsibleError("Expected task_id or github_user and github_repo")
data = self._call_galaxy(url)
return data['results']
@g_connect(['v1'])
def lookup_role_by_name(self, role_name, notify=True):
"""
Find a role by name.
"""
role_name = to_text(urlquote(to_bytes(role_name)))
try:
parts = role_name.split(".")
user_name = ".".join(parts[0:-1])
role_name = parts[-1]
if notify:
display.display("- downloading role '%s', owned by %s" % (role_name, user_name))
except Exception:
raise AnsibleError("Invalid role name (%s). Specify role as format: username.rolename" % role_name)
url = _urljoin(self.api_server, self.available_api_versions['v1'], "roles",
"?owner__username=%s&name=%s" % (user_name, role_name))
data = self._call_galaxy(url)
if len(data["results"]) != 0:
return data["results"][0]
return None
@g_connect(['v1'])
def fetch_role_related(self, related, role_id):
"""
Fetch the list of related items for the given role.
The url comes from the 'related' field of the role.
"""
results = []
try:
url = _urljoin(self.api_server, self.available_api_versions['v1'], "roles", role_id, related,
"?page_size=50")
data = self._call_galaxy(url)
results = data['results']
done = (data.get('next_link', None) is None)
# https://github.com/ansible/ansible/issues/64355
# api_server contains part of the API path but next_link includes the /api part so strip it out.
url_info = urlparse(self.api_server)
base_url = "%s://%s/" % (url_info.scheme, url_info.netloc)
while not done:
url = _urljoin(base_url, data['next_link'])
data = self._call_galaxy(url)
results += data['results']
done = (data.get('next_link', None) is None)
except Exception as e:
display.warning("Unable to retrieve role (id=%s) data (%s), but this is not fatal so we continue: %s"
% (role_id, related, to_text(e)))
return results
@g_connect(['v1'])
def get_list(self, what):
"""
Fetch the list of items specified.
"""
try:
url = _urljoin(self.api_server, self.available_api_versions['v1'], what, "?page_size")
data = self._call_galaxy(url)
if "results" in data:
results = data['results']
else:
results = data
done = True
if "next" in data:
done = (data.get('next_link', None) is None)
while not done:
url = _urljoin(self.api_server, data['next_link'])
data = self._call_galaxy(url)
results += data['results']
done = (data.get('next_link', None) is None)
return results
except Exception as error:
raise AnsibleError("Failed to download the %s list: %s" % (what, to_native(error)))
@g_connect(['v1'])
def search_roles(self, search, **kwargs):
search_url = _urljoin(self.api_server, self.available_api_versions['v1'], "search", "roles", "?")
if search:
search_url += '&autocomplete=' + to_text(urlquote(to_bytes(search)))
tags = kwargs.get('tags', None)
platforms = kwargs.get('platforms', None)
page_size = kwargs.get('page_size', None)
author = kwargs.get('author', None)
if tags and isinstance(tags, string_types):
tags = tags.split(',')
search_url += '&tags_autocomplete=' + '+'.join(tags)
if platforms and isinstance(platforms, string_types):
platforms = platforms.split(',')
search_url += '&platforms_autocomplete=' + '+'.join(platforms)
if page_size:
search_url += '&page_size=%s' % page_size
if author:
search_url += '&username_autocomplete=%s' % author
data = self._call_galaxy(search_url)
return data
@g_connect(['v1'])
def add_secret(self, source, github_user, github_repo, secret):
url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets") + '/'
args = urlencode({
"source": source,
"github_user": github_user,
"github_repo": github_repo,
"secret": secret
})
data = self._call_galaxy(url, args=args, method="POST")
return data
@g_connect(['v1'])
def list_secrets(self):
url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets")
data = self._call_galaxy(url, auth_required=True)
return data
@g_connect(['v1'])
def remove_secret(self, secret_id):
url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets", secret_id) + '/'
data = self._call_galaxy(url, auth_required=True, method='DELETE')
return data
@g_connect(['v1'])
def delete_role(self, github_user, github_repo):
url = _urljoin(self.api_server, self.available_api_versions['v1'], "removerole",
"?github_user=%s&github_repo=%s" % (github_user, github_repo))
data = self._call_galaxy(url, auth_required=True, method='DELETE')
return data
# Collection APIs #
@g_connect(['v2', 'v3'])
def publish_collection(self, collection_path):
"""
Publishes a collection to a Galaxy server and returns the import task URI.
:param collection_path: The path to the collection tarball to publish.
:return: The import task URI that contains the import results.
"""
display.display("Publishing collection artifact '%s' to %s %s" % (collection_path, self.name, self.api_server))
b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
if not os.path.exists(b_collection_path):
raise AnsibleError("The collection path specified '%s' does not exist." % to_native(collection_path))
elif not tarfile.is_tarfile(b_collection_path):
raise AnsibleError("The collection path specified '%s' is not a tarball, use 'ansible-galaxy collection "
"build' to create a proper release artifact." % to_native(collection_path))
with open(b_collection_path, 'rb') as collection_tar:
sha256 = secure_hash_s(collection_tar.read(), hash_func=hashlib.sha256)
content_type, b_form_data = prepare_multipart(
{
'sha256': sha256,
'file': {
'filename': b_collection_path,
'mime_type': 'application/octet-stream',
},
}
)
headers = {
'Content-type': content_type,
'Content-length': len(b_form_data),
}
if 'v3' in self.available_api_versions:
n_url = _urljoin(self.api_server, self.available_api_versions['v3'], 'artifacts', 'collections') + '/'
else:
n_url = _urljoin(self.api_server, self.available_api_versions['v2'], 'collections') + '/'
resp = self._call_galaxy(n_url, args=b_form_data, headers=headers, method='POST', auth_required=True,
error_context_msg='Error when publishing collection to %s (%s)'
% (self.name, self.api_server))
return resp['task']
@g_connect(['v2', 'v3'])
def wait_import_task(self, task_id, timeout=0):
"""
Waits until the import process on the Galaxy server has completed or the timeout is reached.
:param task_id: The id of the import task to wait for. This can be parsed out of the return
value for GalaxyAPI.publish_collection.
:param timeout: The timeout in seconds, 0 is no timeout.
"""
state = 'waiting'
data = None
# Construct the appropriate URL per version
if 'v3' in self.available_api_versions:
full_url = _urljoin(self.api_server, self.available_api_versions['v3'],
'imports/collections', task_id, '/')
else:
full_url = _urljoin(self.api_server, self.available_api_versions['v2'],
'collection-imports', task_id, '/')
display.display("Waiting until Galaxy import task %s has completed" % full_url)
start = time.time()
wait = 2
while timeout == 0 or (time.time() - start) < timeout:
try:
data = self._call_galaxy(full_url, method='GET', auth_required=True,
error_context_msg='Error when getting import task results at %s' % full_url)
except GalaxyError as e:
if e.http_code != 404:
raise
# The import job may not have started, and as such, the task url may not yet exist
display.vvv('Galaxy import process has not started, wait %s seconds before trying again' % wait)
time.sleep(wait)
continue
state = data.get('state', 'waiting')
if data.get('finished_at', None):
break
display.vvv('Galaxy import process has a status of %s, wait %d seconds before trying again'
% (state, wait))
time.sleep(wait)
# poor man's exponential backoff algo so we don't flood the Galaxy API, cap at 30 seconds.
wait = min(30, wait * 1.5)
if state == 'waiting':
raise AnsibleError("Timeout while waiting for the Galaxy import process to finish, check progress at '%s'"
% to_native(full_url))
for message in data.get('messages', []):
level = message['level']
if level.lower() == 'error':
display.error("Galaxy import error message: %s" % message['message'])
elif level.lower() == 'warning':
display.warning("Galaxy import warning message: %s" % message['message'])
else:
display.vvv("Galaxy import message: %s - %s" % (level, message['message']))
if state == 'failed':
code = to_native(data['error'].get('code', 'UNKNOWN'))
description = to_native(
data['error'].get('description', "Unknown error, see %s for more details" % full_url))
raise AnsibleError("Galaxy import process failed: %s (Code: %s)" % (description, code))
@g_connect(['v2', 'v3'])
def get_collection_metadata(self, namespace, name):
"""
Gets the collection information from the Galaxy server about a specific Collection.
:param namespace: The collection namespace.
:param name: The collection name.
:return: CollectionMetadata about the collection.
"""
if 'v3' in self.available_api_versions:
api_path = self.available_api_versions['v3']
field_map = [
('created_str', 'created_at'),
('modified_str', 'updated_at'),
]
else:
api_path = self.available_api_versions['v2']
field_map = [
('created_str', 'created'),
('modified_str', 'modified'),
]
info_url = _urljoin(self.api_server, api_path, 'collections', namespace, name, '/')
error_context_msg = 'Error when getting the collection info for %s.%s from %s (%s)' \
% (namespace, name, self.name, self.api_server)
data = self._call_galaxy(info_url, error_context_msg=error_context_msg)
metadata = {}
for name, api_field in field_map:
metadata[name] = data.get(api_field, None)
return CollectionMetadata(namespace, name, **metadata)
@g_connect(['v2', 'v3'])
def get_collection_version_metadata(self, namespace, name, version):
"""
Gets the collection information from the Galaxy server about a specific Collection version.
:param namespace: The collection namespace.
:param name: The collection name.
:param version: Version of the collection to get the information for.
:return: CollectionVersionMetadata about the collection at the version requested.
"""
api_path = self.available_api_versions.get('v3', self.available_api_versions.get('v2'))
url_paths = [self.api_server, api_path, 'collections', namespace, name, 'versions', version, '/']
n_collection_url = _urljoin(*url_paths)
error_context_msg = 'Error when getting collection version metadata for %s.%s:%s from %s (%s)' \
% (namespace, name, version, self.name, self.api_server)
data = self._call_galaxy(n_collection_url, error_context_msg=error_context_msg, cache=True)
self._set_cache()
signatures = data.get('signatures') or []
return CollectionVersionMetadata(data['namespace']['name'], data['collection']['name'], data['version'],
data['download_url'], data['artifact']['sha256'],
data['metadata']['dependencies'], data['href'], signatures)
@g_connect(['v2', 'v3'])
def get_collection_versions(self, namespace, name):
"""
Gets a list of available versions for a collection on a Galaxy server.
:param namespace: The collection namespace.
:param name: The collection name.
:return: A list of versions that are available.
"""
relative_link = False
if 'v3' in self.available_api_versions:
api_path = self.available_api_versions['v3']
pagination_path = ['links', 'next']
relative_link = True  # AH pagination results are relative and not absolute URIs.
else:
api_path = self.available_api_versions['v2']
pagination_path = ['next']
page_size_name = 'limit' if 'v3' in self.available_api_versions else 'page_size'
versions_url = _urljoin(self.api_server, api_path, 'collections', namespace, name, 'versions', '/?%s=%d' % (page_size_name, COLLECTION_PAGE_SIZE))
versions_url_info = urlparse(versions_url)
cache_key = versions_url_info.path
# We should only rely on the cache if the collection has not changed. This may slow things down but it ensures
# we are not waiting a day before finding any new collections that have been published.
if self._cache:
server_cache = self._cache.setdefault(get_cache_id(versions_url), {})
modified_cache = server_cache.setdefault('modified', {})
try:
modified_date = self.get_collection_metadata(namespace, name).modified_str
except GalaxyError as err:
if err.http_code != 404:
raise
# No collection found, return an empty list to keep things consistent with the various APIs
return []
cached_modified_date = modified_cache.get('%s.%s' % (namespace, name), None)
if cached_modified_date != modified_date:
modified_cache['%s.%s' % (namespace, name)] = modified_date
if versions_url_info.path in server_cache:
del server_cache[cache_key]
self._set_cache()
error_context_msg = 'Error when getting available collection versions for %s.%s from %s (%s)' \
% (namespace, name, self.name, self.api_server)
try:
data = self._call_galaxy(versions_url, error_context_msg=error_context_msg, cache=True, cache_key=cache_key)
except GalaxyError as err:
if err.http_code != 404:
raise
# v3 doesn't raise a 404 so we need to mimic the empty response from APIs that do.
return []
if 'data' in data:
# v3 automation-hub is the only known API that uses `data`
# since v3 pulp_ansible does not, we cannot rely on version
# to indicate which key to use
results_key = 'data'
else:
results_key = 'results'
versions = []
while True:
versions += [v['version'] for v in data[results_key]]
next_link = data
for path in pagination_path:
next_link = next_link.get(path, {})
if not next_link:
break
elif relative_link:
# TODO: This assumes the pagination result is relative to the root server. Will need to be verified
# with someone who knows the AH API.
# Remove the query string from the versions_url to use the next_link's query
versions_url = urljoin(versions_url, urlparse(versions_url).path)
next_link = versions_url.replace(versions_url_info.path, next_link)
data = self._call_galaxy(to_native(next_link, errors='surrogate_or_strict'),
error_context_msg=error_context_msg, cache=True, cache_key=cache_key)
self._set_cache()
return versions
@g_connect(['v2', 'v3'])
def get_collection_signatures(self, namespace, name, version):
"""
Gets the collection signatures from the Galaxy server about a specific Collection version.
:param namespace: The collection namespace.
:param name: The collection name.
:param version: Version of the collection to get the information for.
:return: A list of signature strings.
"""
api_path = self.available_api_versions.get('v3', self.available_api_versions.get('v2'))
url_paths = [self.api_server, api_path, 'collections', namespace, name, 'versions', version, '/']
n_collection_url = _urljoin(*url_paths)
error_context_msg = 'Error when getting collection version metadata for %s.%s:%s from %s (%s)' \
% (namespace, name, version, self.name, self.api_server)
data = self._call_galaxy(n_collection_url, error_context_msg=error_context_msg, cache=True)
self._set_cache()
try:
signatures = data["signatures"]
except KeyError:
# Noisy since this is used by the dep resolver, so require more verbosity than Galaxy calls
display.vvvvvv(f"Server {self.api_server} has not signed {namespace}.{name}:{version}")
return []
else:
return [signature_info["signature"] for signature_info in signatures]
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,174 |
Use of `retry_with_delays_and_condition` and `generate_jittered_backoff` may lead to no retries
|
### Summary
The way in which we are using `retry_with_delays_and_condition` along with `generate_jittered_backoff` may prevent subsequent failures from retrying if the generator was consumed by previous calls to `_call_galaxy`.
It appears as though the `backoff_iterator` in this case is global for all calls, and not refreshed per call to `_call_galaxy`:
https://github.com/ansible/ansible/blob/c564c6e21e4538b475df2ae4b3f66b73decff160/lib/ansible/galaxy/api.py#L328-L332
Currently every call to `_call_galaxy` consumes at least 1 item in the `backoff_iterator`, even when a retry isn't attempted, so after 6 calls no retries would ever be performed.
This may require making `backoff_iterator` accept a callable, or something else that can regenerate the iterator on each call, instead of acting as a single piece of global iterator state.
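One possible shape of a fix — a hedged sketch under the assumption that the decorator is changed to take a factory; the name `backoff_iterator_factory` and the retry loop below are illustrative, not the merged implementation:
```python
import functools
import time

def retry_with_delays_and_condition(backoff_iterator_factory, should_retry_error):
    """Decorator that rebuilds the backoff iterator for every decorated call."""
    def decorator(func):
        @functools.wraps(func)
        def wrapped(*args, **kwargs):
            for delay in backoff_iterator_factory():  # fresh iterator per call
                try:
                    return func(*args, **kwargs)
                except Exception as err:
                    if not should_retry_error(err):
                        raise
                    time.sleep(delay)
            return func(*args, **kwargs)  # final attempt once the delays run out
        return wrapped
    return decorator

# Call sites would then pass a factory instead of an already-started generator:
#
# @retry_with_delays_and_condition(
#     backoff_iterator_factory=lambda: generate_jittered_backoff(
#         retries=6, delay_base=2, delay_threshold=40),
#     should_retry_error=is_rate_limit_exception,
# )
# def _call_galaxy(self, url, ...):
#     ...
```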
### Issue Type
Bug Report
### Component Name
lib/ansible/galaxy/api.py
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
N/A
### Actual Results
```console
N/A
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80174
|
https://github.com/ansible/ansible/pull/80180
|
cba395243454b0a959edea20425618fe7b9be775
|
2ae013667ef226635fe521be886efd1bf58cd46f
| 2023-03-08T20:33:11Z |
python
| 2023-03-22T16:04:56Z |
lib/ansible/galaxy/collection/concrete_artifact_manager.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2020-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Concrete collection candidate management helper module."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import tarfile
import subprocess
import typing as t
from contextlib import contextmanager
from hashlib import sha256
from urllib.error import URLError
from urllib.parse import urldefrag
from shutil import rmtree
from tempfile import mkdtemp
if t.TYPE_CHECKING:
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate, Requirement,
)
from ansible.galaxy.token import GalaxyToken
from ansible.errors import AnsibleError
from ansible.galaxy import get_collections_galaxy_meta_info
from ansible.galaxy.dependency_resolution.dataclasses import _GALAXY_YAML
from ansible.galaxy.user_agent import user_agent
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.yaml import yaml_load
from ansible.module_utils.urls import open_url
from ansible.utils.display import Display
from ansible.utils.sentinel import Sentinel
import yaml
display = Display()
MANIFEST_FILENAME = 'MANIFEST.json'
class ConcreteArtifactsManager:
"""Manager for on-disk collection artifacts.
It is responsible for:
* downloading remote collections from Galaxy-compatible servers and
direct links to tarballs or SCM repositories
* keeping track of local ones
* keeping track of Galaxy API tokens for downloads from Galaxy'ish
as well as the artifact hashes
* keeping track of Galaxy API signatures for downloads from Galaxy'ish
* caching all of above
* retrieving the metadata out of the downloaded artifacts
"""
def __init__(self, b_working_directory, validate_certs=True, keyring=None, timeout=60, required_signature_count=None, ignore_signature_errors=None):
# type: (bytes, bool, str, int, str, list[str]) -> None
"""Initialize ConcreteArtifactsManager caches and costraints."""
self._validate_certs = validate_certs # type: bool
self._artifact_cache = {} # type: dict[bytes, bytes]
self._galaxy_artifact_cache = {} # type: dict[Candidate | Requirement, bytes]
self._artifact_meta_cache = {} # type: dict[bytes, dict[str, str | list[str] | dict[str, str] | None | t.Type[Sentinel]]]
self._galaxy_collection_cache = {} # type: dict[Candidate | Requirement, tuple[str, str, GalaxyToken]]
self._galaxy_collection_origin_cache = {} # type: dict[Candidate, tuple[str, list[dict[str, str]]]]
self._b_working_directory = b_working_directory # type: bytes
self._supplemental_signature_cache = {} # type: dict[str, str]
self._keyring = keyring # type: str
self.timeout = timeout # type: int
self._required_signature_count = required_signature_count # type: str
self._ignore_signature_errors = ignore_signature_errors # type: list[str]
self._require_build_metadata = True # type: bool
@property
def keyring(self):
return self._keyring
@property
def required_successful_signature_count(self):
return self._required_signature_count
@property
def ignore_signature_errors(self):
if self._ignore_signature_errors is None:
return []
return self._ignore_signature_errors
@property
def require_build_metadata(self):
# type: () -> bool
return self._require_build_metadata
@require_build_metadata.setter
def require_build_metadata(self, value):
# type: (bool) -> None
self._require_build_metadata = value
def get_galaxy_artifact_source_info(self, collection):
# type: (Candidate) -> dict[str, t.Union[str, list[dict[str, str]]]]
server = collection.src.api_server
try:
download_url = self._galaxy_collection_cache[collection][0]
signatures_url, signatures = self._galaxy_collection_origin_cache[collection]
except KeyError as key_err:
raise RuntimeError(
'There is no known source for {coll!s}'.
format(coll=collection),
) from key_err
return {
"format_version": "1.0.0",
"namespace": collection.namespace,
"name": collection.name,
"version": collection.ver,
"server": server,
"version_url": signatures_url,
"download_url": download_url,
"signatures": signatures,
}
def get_galaxy_artifact_path(self, collection):
# type: (t.Union[Candidate, Requirement]) -> bytes
"""Given a Galaxy-stored collection, return a cached path.
If it's not yet on disk, this method downloads the artifact first.
"""
try:
return self._galaxy_artifact_cache[collection]
except KeyError:
pass
try:
url, sha256_hash, token = self._galaxy_collection_cache[collection]
except KeyError as key_err:
raise RuntimeError(
'There is no known source for {coll!s}'.
format(coll=collection),
) from key_err
display.vvvv(
"Fetching a collection tarball for '{collection!s}' from "
'Ansible Galaxy'.format(collection=collection),
)
try:
b_artifact_path = _download_file(
url,
self._b_working_directory,
expected_hash=sha256_hash,
validate_certs=self._validate_certs,
token=token,
) # type: bytes
except URLError as err:
raise AnsibleError(
'Failed to download collection tar '
"from '{coll_src!s}': {download_err!s}".
format(
coll_src=to_native(collection.src),
download_err=to_native(err),
),
) from err
else:
display.vvv(
"Collection '{coll!s}' obtained from "
'server {server!s} {url!s}'.format(
coll=collection, server=collection.src or 'Galaxy',
url=collection.src.api_server if collection.src is not None
else '',
)
)
self._galaxy_artifact_cache[collection] = b_artifact_path
return b_artifact_path
def get_artifact_path(self, collection):
# type: (t.Union[Candidate, Requirement]) -> bytes
"""Given a concrete collection pointer, return a cached path.
If it's not yet on disk, this method downloads the artifact first.
"""
try:
return self._artifact_cache[collection.src]
except KeyError:
pass
# NOTE: SCM needs to be special-cased as it may contain either
# NOTE: one collection in its root, or a number of top-level
# NOTE: collection directories instead.
# NOTE: The idea is to store the SCM collection as unpacked
# NOTE: directory structure under the temporary location and use
# NOTE: a "virtual" collection that has pinned requirements on
# NOTE: the directories under that SCM checkout that correspond
# NOTE: to collections.
# NOTE: This brings us to the idea that we need two separate
# NOTE: virtual Requirement/Candidate types --
# NOTE: (single) dir + (multidir) subdirs
if collection.is_url:
display.vvvv(
"Collection requirement '{collection!s}' is a URL "
'to a tar artifact'.format(collection=collection.fqcn),
)
try:
b_artifact_path = _download_file(
collection.src,
self._b_working_directory,
expected_hash=None, # NOTE: URLs don't support checksums
validate_certs=self._validate_certs,
timeout=self.timeout
)
except Exception as err:
raise AnsibleError(
'Failed to download collection tar '
"from '{coll_src!s}': {download_err!s}".
format(
coll_src=to_native(collection.src),
download_err=to_native(err),
),
) from err
elif collection.is_scm:
b_artifact_path = _extract_collection_from_git(
collection.src,
collection.ver,
self._b_working_directory,
)
elif collection.is_file or collection.is_dir or collection.is_subdirs:
b_artifact_path = to_bytes(collection.src)
else:
# NOTE: This may happen `if collection.is_online_index_pointer`
raise RuntimeError(
'The artifact is of an unexpected type {art_type!s}'.
format(art_type=collection.type)
)
self._artifact_cache[collection.src] = b_artifact_path
return b_artifact_path
def _get_direct_collection_namespace(self, collection):
# type: (Candidate) -> t.Optional[str]
return self.get_direct_collection_meta(collection)['namespace'] # type: ignore[return-value]
def _get_direct_collection_name(self, collection):
# type: (Candidate) -> t.Optional[str]
return self.get_direct_collection_meta(collection)['name'] # type: ignore[return-value]
def get_direct_collection_fqcn(self, collection):
# type: (Candidate) -> t.Optional[str]
"""Extract FQCN from the given on-disk collection artifact.
If the collection is virtual, ``None`` is returned instead
of a string.
"""
if collection.is_virtual:
# NOTE: should it be something like "<virtual>"?
return None
return '.'.join(( # type: ignore[type-var]
self._get_direct_collection_namespace(collection), # type: ignore[arg-type]
self._get_direct_collection_name(collection),
))
def get_direct_collection_version(self, collection):
# type: (t.Union[Candidate, Requirement]) -> str
"""Extract version from the given on-disk collection artifact."""
return self.get_direct_collection_meta(collection)['version'] # type: ignore[return-value]
def get_direct_collection_dependencies(self, collection):
# type: (t.Union[Candidate, Requirement]) -> dict[str, str]
"""Extract deps from the given on-disk collection artifact."""
collection_dependencies = self.get_direct_collection_meta(collection)['dependencies']
if collection_dependencies is None:
collection_dependencies = {}
return collection_dependencies # type: ignore[return-value]
def get_direct_collection_meta(self, collection):
# type: (t.Union[Candidate, Requirement]) -> dict[str, t.Union[str, dict[str, str], list[str], None, t.Type[Sentinel]]]
"""Extract meta from the given on-disk collection artifact."""
try: # FIXME: use unique collection identifier as a cache key?
return self._artifact_meta_cache[collection.src]
except KeyError:
b_artifact_path = self.get_artifact_path(collection)
if collection.is_url or collection.is_file:
collection_meta = _get_meta_from_tar(b_artifact_path)
elif collection.is_dir: # should we just build a coll instead?
# FIXME: what if there's subdirs?
try:
collection_meta = _get_meta_from_dir(b_artifact_path, self.require_build_metadata)
except LookupError as lookup_err:
raise AnsibleError(
'Failed to find the collection dir deps: {err!s}'.
format(err=to_native(lookup_err)),
) from lookup_err
elif collection.is_scm:
collection_meta = {
'name': None,
'namespace': None,
'dependencies': {to_native(b_artifact_path): '*'},
'version': '*',
}
elif collection.is_subdirs:
collection_meta = {
'name': None,
'namespace': None,
# NOTE: Dropping b_artifact_path since it's based on src anyway
'dependencies': dict.fromkeys(
map(to_native, collection.namespace_collection_paths),
'*',
),
'version': '*',
}
else:
raise RuntimeError
self._artifact_meta_cache[collection.src] = collection_meta
return collection_meta
def save_collection_source(self, collection, url, sha256_hash, token, signatures_url, signatures):
# type: (Candidate, str, str, GalaxyToken, str, list[dict[str, str]]) -> None
"""Store collection URL, SHA256 hash and Galaxy API token.
This is a hook that is supposed to be called before attempting to
download Galaxy-based collections with ``get_galaxy_artifact_path()``.
"""
self._galaxy_collection_cache[collection] = url, sha256_hash, token
self._galaxy_collection_origin_cache[collection] = signatures_url, signatures
@classmethod
@contextmanager
def under_tmpdir(
cls,
temp_dir_base, # type: str
validate_certs=True, # type: bool
keyring=None, # type: str
required_signature_count=None, # type: str
ignore_signature_errors=None, # type: list[str]
require_build_metadata=True, # type: bool
): # type: (...) -> t.Iterator[ConcreteArtifactsManager]
"""Custom ConcreteArtifactsManager constructor with temp dir.
This method returns a context manager that allocates and cleans
up a temporary directory for caching the collection artifacts
during the dependency resolution process.
"""
# NOTE: Can't use `with tempfile.TemporaryDirectory:`
# NOTE: because it's not in Python 2 stdlib.
temp_path = mkdtemp(
dir=to_bytes(temp_dir_base, errors='surrogate_or_strict'),
)
b_temp_path = to_bytes(temp_path, errors='surrogate_or_strict')
try:
yield cls(
b_temp_path,
validate_certs,
keyring=keyring,
required_signature_count=required_signature_count,
ignore_signature_errors=ignore_signature_errors
)
finally:
rmtree(b_temp_path)
def parse_scm(collection, version):
"""Extract name, version, path and subdir out of the SCM pointer."""
if ',' in collection:
collection, version = collection.split(',', 1)
elif version == '*' or not version:
version = 'HEAD'
if collection.startswith('git+'):
path = collection[4:]
else:
path = collection
path, fragment = urldefrag(path)
fragment = fragment.strip(os.path.sep)
if path.endswith(os.path.sep + '.git'):
name = path.split(os.path.sep)[-2]
elif '://' not in path and '@' not in path:
name = path
else:
name = path.split('/')[-1]
if name.endswith('.git'):
name = name[:-4]
return name, version, path, fragment
def _extract_collection_from_git(repo_url, coll_ver, b_path):
name, version, git_url, fragment = parse_scm(repo_url, coll_ver)
b_checkout_path = mkdtemp(
dir=b_path,
prefix=to_bytes(name, errors='surrogate_or_strict'),
) # type: bytes
try:
git_executable = get_bin_path('git')
except ValueError as err:
raise AnsibleError(
"Could not find git executable to extract the collection from the Git repository `{repo_url!s}`.".
format(repo_url=to_native(git_url))
) from err
# Perform a shallow clone if simply cloning HEAD
if version == 'HEAD':
git_clone_cmd = git_executable, 'clone', '--depth=1', git_url, to_text(b_checkout_path)
else:
git_clone_cmd = git_executable, 'clone', git_url, to_text(b_checkout_path)
# FIXME: '--branch', version
try:
subprocess.check_call(git_clone_cmd)
except subprocess.CalledProcessError as proc_err:
raise AnsibleError( # should probably be LookupError
'Failed to clone a Git repository from `{repo_url!s}`.'.
format(repo_url=to_native(git_url)),
) from proc_err
git_switch_cmd = git_executable, 'checkout', to_text(version)
try:
subprocess.check_call(git_switch_cmd, cwd=b_checkout_path)
except subprocess.CalledProcessError as proc_err:
raise AnsibleError( # should probably be LookupError
'Failed to switch a cloned Git repo `{repo_url!s}` '
'to the requested revision `{commitish!s}`.'.
format(
commitish=to_native(version),
repo_url=to_native(git_url),
),
) from proc_err
return (
os.path.join(b_checkout_path, to_bytes(fragment))
if fragment else b_checkout_path
)
# FIXME: use random subdirs while preserving the file names
def _download_file(url, b_path, expected_hash, validate_certs, token=None, timeout=60):
# type: (str, bytes, t.Optional[str], bool, GalaxyToken, int) -> bytes
# ^ NOTE: used in download and verify_collections ^
b_tarball_name = to_bytes(
url.rsplit('/', 1)[1], errors='surrogate_or_strict',
)
b_file_name = b_tarball_name[:-len('.tar.gz')]
b_tarball_dir = mkdtemp(
dir=b_path,
prefix=b'-'.join((b_file_name, b'')),
) # type: bytes
b_file_path = os.path.join(b_tarball_dir, b_tarball_name)
display.display("Downloading %s to %s" % (url, to_text(b_tarball_dir)))
# NOTE: Galaxy redirects downloads to S3 which rejects the request
# NOTE: if an Authorization header is attached so don't redirect it
resp = open_url(
to_native(url, errors='surrogate_or_strict'),
validate_certs=validate_certs,
headers=None if token is None else token.headers(),
unredirected_headers=['Authorization'], http_agent=user_agent(),
timeout=timeout
)
with open(b_file_path, 'wb') as download_file: # type: t.BinaryIO
actual_hash = _consume_file(resp, write_to=download_file)
if expected_hash:
display.vvvv(
'Validating downloaded file hash {actual_hash!s} with '
'expected hash {expected_hash!s}'.
format(actual_hash=actual_hash, expected_hash=expected_hash)
)
if expected_hash != actual_hash:
raise AnsibleError('Mismatch artifact hash with downloaded file')
return b_file_path
def _consume_file(read_from, write_to=None):
# type: (t.BinaryIO, t.BinaryIO) -> str
bufsize = 65536
sha256_digest = sha256()
data = read_from.read(bufsize)
while data:
if write_to is not None:
write_to.write(data)
write_to.flush()
sha256_digest.update(data)
data = read_from.read(bufsize)
return sha256_digest.hexdigest()
def _normalize_galaxy_yml_manifest(
galaxy_yml, # type: dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
b_galaxy_yml_path, # type: bytes
require_build_metadata=True, # type: bool
):
# type: (...) -> dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
galaxy_yml_schema = (
get_collections_galaxy_meta_info()
) # type: list[dict[str, t.Any]] # FIXME: <--
# FIXME: maybe precise type: list[dict[str, t.Union[bool, str, list[str]]]]
mandatory_keys = set()
string_keys = set() # type: set[str]
list_keys = set() # type: set[str]
dict_keys = set() # type: set[str]
sentinel_keys = set() # type: set[str]
for info in galaxy_yml_schema:
if info.get('required', False):
mandatory_keys.add(info['key'])
key_list_type = {
'str': string_keys,
'list': list_keys,
'dict': dict_keys,
'sentinel': sentinel_keys,
}[info.get('type', 'str')]
key_list_type.add(info['key'])
all_keys = frozenset(mandatory_keys | string_keys | list_keys | dict_keys | sentinel_keys)
set_keys = set(galaxy_yml.keys())
missing_keys = mandatory_keys.difference(set_keys)
if missing_keys:
msg = (
"The collection galaxy.yml at '%s' is missing the following mandatory keys: %s"
% (to_native(b_galaxy_yml_path), ", ".join(sorted(missing_keys)))
)
if require_build_metadata:
raise AnsibleError(msg)
display.warning(msg)
raise ValueError(msg)
extra_keys = set_keys.difference(all_keys)
if len(extra_keys) > 0:
display.warning("Found unknown keys in collection galaxy.yml at '%s': %s"
% (to_text(b_galaxy_yml_path), ", ".join(extra_keys)))
# Add the defaults if they have not been set
for optional_string in string_keys:
if optional_string not in galaxy_yml:
galaxy_yml[optional_string] = None
for optional_list in list_keys:
list_val = galaxy_yml.get(optional_list, None)
if list_val is None:
galaxy_yml[optional_list] = []
elif not isinstance(list_val, list):
galaxy_yml[optional_list] = [list_val] # type: ignore[list-item]
for optional_dict in dict_keys:
if optional_dict not in galaxy_yml:
galaxy_yml[optional_dict] = {}
for optional_sentinel in sentinel_keys:
if optional_sentinel not in galaxy_yml:
galaxy_yml[optional_sentinel] = Sentinel
# NOTE: `version: null` is only allowed for `galaxy.yml`
# NOTE: and not `MANIFEST.json`. The use-case for it is collections
# NOTE: that generate the version from Git before building a
# NOTE: distributable tarball artifact.
if not galaxy_yml.get('version'):
galaxy_yml['version'] = '*'
return galaxy_yml
def _get_meta_from_dir(
b_path, # type: bytes
require_build_metadata=True, # type: bool
): # type: (...) -> dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
try:
return _get_meta_from_installed_dir(b_path)
except LookupError:
return _get_meta_from_src_dir(b_path, require_build_metadata)
def _get_meta_from_src_dir(
b_path, # type: bytes
require_build_metadata=True, # type: bool
): # type: (...) -> dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
galaxy_yml = os.path.join(b_path, _GALAXY_YAML)
if not os.path.isfile(galaxy_yml):
raise LookupError(
"The collection galaxy.yml path '{path!s}' does not exist.".
format(path=to_native(galaxy_yml))
)
with open(galaxy_yml, 'rb') as manifest_file_obj:
try:
manifest = yaml_load(manifest_file_obj)
except yaml.error.YAMLError as yaml_err:
raise AnsibleError(
"Failed to parse the galaxy.yml at '{path!s}' with "
'the following error:\n{err_txt!s}'.
format(
path=to_native(galaxy_yml),
err_txt=to_native(yaml_err),
),
) from yaml_err
if not isinstance(manifest, dict):
if require_build_metadata:
raise AnsibleError(f"The collection galaxy.yml at '{to_native(galaxy_yml)}' is incorrectly formatted.")
# Valid build metadata is not required by ansible-galaxy list. Raise ValueError to fall back to implicit metadata.
display.warning(f"The collection galaxy.yml at '{to_native(galaxy_yml)}' is incorrectly formatted.")
raise ValueError(f"The collection galaxy.yml at '{to_native(galaxy_yml)}' is incorrectly formatted.")
return _normalize_galaxy_yml_manifest(manifest, galaxy_yml, require_build_metadata)
def _get_json_from_installed_dir(
b_path, # type: bytes
filename, # type: str
): # type: (...) -> dict
b_json_filepath = os.path.join(b_path, to_bytes(filename, errors='surrogate_or_strict'))
try:
with open(b_json_filepath, 'rb') as manifest_fd:
b_json_text = manifest_fd.read()
except (IOError, OSError):
raise LookupError(
"The collection {manifest!s} path '{path!s}' does not exist.".
format(
manifest=filename,
path=to_native(b_json_filepath),
)
)
manifest_txt = to_text(b_json_text, errors='surrogate_or_strict')
try:
manifest = json.loads(manifest_txt)
except ValueError:
raise AnsibleError(
'Collection tar file member {member!s} does not '
'contain a valid json string.'.
format(member=filename),
)
return manifest
def _get_meta_from_installed_dir(
b_path, # type: bytes
): # type: (...) -> dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
manifest = _get_json_from_installed_dir(b_path, MANIFEST_FILENAME)
collection_info = manifest['collection_info']
version = collection_info.get('version')
if not version:
raise AnsibleError(
u'Collection metadata file `{manifest_filename!s}` at `{meta_file!s}` is expected '
u'to have a valid SemVer version value but got {version!s}'.
format(
manifest_filename=MANIFEST_FILENAME,
meta_file=to_text(b_path),
version=to_text(repr(version)),
),
)
return collection_info
def _get_meta_from_tar(
b_path, # type: bytes
): # type: (...) -> dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
if not tarfile.is_tarfile(b_path):
raise AnsibleError(
"Collection artifact at '{path!s}' is not a valid tar file.".
format(path=to_native(b_path)),
)
with tarfile.open(b_path, mode='r') as collection_tar: # type: tarfile.TarFile
try:
member = collection_tar.getmember(MANIFEST_FILENAME)
except KeyError:
raise AnsibleError(
"Collection at '{path!s}' does not contain the "
'required file {manifest_file!s}.'.
format(
path=to_native(b_path),
manifest_file=MANIFEST_FILENAME,
),
)
with _tarfile_extract(collection_tar, member) as (_member, member_obj):
if member_obj is None:
raise AnsibleError(
'Collection tar file does not contain '
'member {member!s}'.format(member=MANIFEST_FILENAME),
)
text_content = to_text(
member_obj.read(),
errors='surrogate_or_strict',
)
try:
manifest = json.loads(text_content)
except ValueError:
raise AnsibleError(
'Collection tar file member {member!s} does not '
'contain a valid json string.'.
format(member=MANIFEST_FILENAME),
)
return manifest['collection_info']
@contextmanager
def _tarfile_extract(
tar, # type: tarfile.TarFile
member, # type: tarfile.TarInfo
):
# type: (...) -> t.Iterator[tuple[tarfile.TarInfo, t.Optional[t.IO[bytes]]]]
tar_obj = tar.extractfile(member)
try:
yield member, tar_obj
finally:
if tar_obj is not None:
tar_obj.close()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,174 |
Use of `retry_with_delays_and_condition` and `generate_jittered_backoff` may lead to no retries
|
### Summary
The way in which we are using `retry_with_delays_and_condition` along with `generate_jittered_backoff` may prevent subsequent failures from retrying if the generator was consumed by previous calls to `_call_galaxy`.
It appears as though the `backoff_iterator` in this case is global for all calls, and not refreshed per call to `_call_galaxy`:
https://github.com/ansible/ansible/blob/c564c6e21e4538b475df2ae4b3f66b73decff160/lib/ansible/galaxy/api.py#L328-L332
Currently, every call to `_call_galaxy` consumes at least one item from the `backoff_iterator`, even when a retry isn't attempted, so after 6 calls no retries would ever be performed.
This may require making `backoff_iterator` take a callable, or something that can regenerate the iterator, instead of acting as a global state iterator.
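A rough sketch of that direction, for illustration only (the factory parameter name and the commented call-site below are assumptions, not the final API):
```python
# Illustrative sketch: accept a zero-argument factory so every invocation of the
# decorated function gets a fresh backoff iterator, instead of sharing (and
# eventually exhausting) a single generator across calls.
import functools
import random
import time


def generate_jittered_backoff(retries=10, delay_base=3, delay_threshold=60):
    for retry in range(0, retries):
        yield random.randint(0, min(delay_threshold, delay_base * 2 ** retry))


def retry_with_delays_and_condition(backoff_iterator_factory, should_retry_error):
    def function_wrapper(function):
        @functools.wraps(function)
        def run_function(*args, **kwargs):
            call_retryable_function = functools.partial(function, *args, **kwargs)
            for delay in backoff_iterator_factory():  # new iterator for this call
                try:
                    return call_retryable_function()
                except Exception as e:
                    if not should_retry_error(e):
                        raise
                    time.sleep(delay)
            return call_retryable_function()  # only or final attempt
        return run_function
    return function_wrapper


# Call-site shape, mirroring the snippet referenced above:
# @retry_with_delays_and_condition(
#     backoff_iterator_factory=lambda: generate_jittered_backoff(retries=6, delay_base=2, delay_threshold=40),
#     should_retry_error=is_rate_limit_exception,
# )
# def _call_galaxy(self, url, ...):
#     ...
```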
### Issue Type
Bug Report
### Component Name
lib/ansible/galaxy/api.py
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
N/A
### Actual Results
```console
N/A
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80174
|
https://github.com/ansible/ansible/pull/80180
|
cba395243454b0a959edea20425618fe7b9be775
|
2ae013667ef226635fe521be886efd1bf58cd46f
| 2023-03-08T20:33:11Z |
python
| 2023-03-22T16:04:56Z |
lib/ansible/module_utils/api.py
|
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright: (c) 2015, Brian Coca, <[email protected]>
#
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
"""
This module adds shared support for generic api modules
In order to use this module, include it as part of a custom
module as shown below.
The 'api' module provides the following common argument specs:
* rate limit spec
- rate: number of requests per time unit (int)
- rate_limit: time window in which the limit is applied in seconds
* retry spec
- retries: number of attempts
- retry_pause: delay between attempts in seconds
"""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import functools
import random
import sys
import time
def rate_limit_argument_spec(spec=None):
"""Creates an argument spec for working with rate limiting"""
arg_spec = (dict(
rate=dict(type='int'),
rate_limit=dict(type='int'),
))
if spec:
arg_spec.update(spec)
return arg_spec
def retry_argument_spec(spec=None):
"""Creates an argument spec for working with retrying"""
arg_spec = (dict(
retries=dict(type='int'),
retry_pause=dict(type='float', default=1),
))
if spec:
arg_spec.update(spec)
return arg_spec
def basic_auth_argument_spec(spec=None):
arg_spec = (dict(
api_username=dict(type='str'),
api_password=dict(type='str', no_log=True),
api_url=dict(type='str'),
validate_certs=dict(type='bool', default=True)
))
if spec:
arg_spec.update(spec)
return arg_spec
def rate_limit(rate=None, rate_limit=None):
"""rate limiting decorator"""
minrate = None
if rate is not None and rate_limit is not None:
minrate = float(rate_limit) / float(rate)
def wrapper(f):
last = [0.0]
def ratelimited(*args, **kwargs):
if sys.version_info >= (3, 8):
real_time = time.process_time
else:
real_time = time.clock
if minrate is not None:
elapsed = real_time() - last[0]
left = minrate - elapsed
if left > 0:
time.sleep(left)
last[0] = real_time()
ret = f(*args, **kwargs)
return ret
return ratelimited
return wrapper
def retry(retries=None, retry_pause=1):
"""Retry decorator"""
def wrapper(f):
def retried(*args, **kwargs):
retry_count = 0
if retries is not None:
ret = None
while True:
retry_count += 1
if retry_count >= retries:
raise Exception("Retry limit exceeded: %d" % retries)
try:
ret = f(*args, **kwargs)
except Exception:
pass
if ret:
break
time.sleep(retry_pause)
return ret
return retried
return wrapper
def generate_jittered_backoff(retries=10, delay_base=3, delay_threshold=60):
"""The "Full Jitter" backoff strategy.
Ref: https://www.awsarchitectureblog.com/2015/03/backoff.html
:param retries: The number of delays to generate.
:param delay_base: The base time in seconds used to calculate the exponential backoff.
:param delay_threshold: The maximum time in seconds for any delay.
"""
for retry in range(0, retries):
yield random.randint(0, min(delay_threshold, delay_base * 2 ** retry))
def retry_never(exception_or_result):
return False
def retry_with_delays_and_condition(backoff_iterator, should_retry_error=None):
"""Generic retry decorator.
:param backoff_iterator: An iterable of delays in seconds.
:param should_retry_error: A callable that takes an exception of the decorated function and decides whether to retry or not (returns a bool).
"""
if should_retry_error is None:
should_retry_error = retry_never
def function_wrapper(function):
@functools.wraps(function)
def run_function(*args, **kwargs):
"""This assumes the function has not already been called.
If backoff_iterator is empty, we should still run the function a single time with no delay.
"""
call_retryable_function = functools.partial(function, *args, **kwargs)
for delay in backoff_iterator:
try:
return call_retryable_function()
except Exception as e:
if not should_retry_error(e):
raise
time.sleep(delay)
# Only or final attempt
return call_retryable_function()
return run_function
return function_wrapper
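# Usage sketch (illustrative, assuming only the helpers defined above): composing the
# jittered backoff generator with the retry decorator. Note that the generator is
# created once here, so its delays are shared by every call to the decorated function.
if __name__ == '__main__':
    class _TransientError(Exception):
        pass

    attempts = []

    @retry_with_delays_and_condition(
        backoff_iterator=generate_jittered_backoff(retries=3, delay_base=1, delay_threshold=2),
        should_retry_error=lambda error: isinstance(error, _TransientError),
    )
    def flaky():
        attempts.append(1)
        if len(attempts) < 3:
            raise _TransientError('try again')
        return 'ok'

    print(flaky(), 'after', len(attempts), 'attempts')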
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,170 |
Extend use of `retry_with_delays_and_condition` within Galaxy API requests to retry on `TimeoutError`
|
### Summary
As of now, we only retry galaxy API requests when they result in error codes defined within `RETRY_HTTP_ERROR_CODES`.
Sometimes there are also transient timeout errors that are not represented by these status codes and instead raise a `TimeoutError`.
Evaluate extending the function used by `should_retry_error` to also retry on `TimeoutError`.
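One possible shape for such a predicate, sketched here for illustration only (the status-code attribute and constant values are assumptions, not the actual implementation):
```python
# Illustrative only; not the real code in lib/ansible/galaxy/api.py.
RETRY_HTTP_ERROR_CODES = frozenset((429, 520))  # assumed values for the sketch


def is_retryable_galaxy_error(exception):
    # Transient timeouts surface as TimeoutError and are worth retrying too.
    if isinstance(exception, TimeoutError):
        return True
    # Keep the existing status-code based behaviour (attribute name assumed).
    return getattr(exception, 'http_code', None) in RETRY_HTTP_ERROR_CODES
```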
### Issue Type
Bug Report
### Component Name
lib/ansible/galaxy/api.py
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
N/A
### Actual Results
```console
N/A
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80170
|
https://github.com/ansible/ansible/pull/80180
|
cba395243454b0a959edea20425618fe7b9be775
|
2ae013667ef226635fe521be886efd1bf58cd46f
| 2023-03-08T18:32:42Z |
python
| 2023-03-22T16:04:56Z |
changelogs/fragments/galaxy-improve-retries.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,170 |
Extend use of `retry_with_delays_and_condition` within Galaxy API requests to retry on `TimeoutError`
|
### Summary
As of now, we only retry galaxy API requests when they result in error codes defined within `RETRY_HTTP_ERROR_CODES`.
Sometimes there are also transient timeout errors that are not represented by these status codes and instead raise a `TimeoutError`.
Evaluate extending the function used by `should_retry_error` to also retry on `TimeoutError`.
### Issue Type
Bug Report
### Component Name
lib/ansible/galaxy/api.py
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
N/A
### Actual Results
```console
N/A
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80170
|
https://github.com/ansible/ansible/pull/80180
|
cba395243454b0a959edea20425618fe7b9be775
|
2ae013667ef226635fe521be886efd1bf58cd46f
| 2023-03-08T18:32:42Z |
python
| 2023-03-22T16:04:56Z |
lib/ansible/cli/galaxy.py
|
#!/usr/bin/env python
# Copyright: (c) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2018-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI
import json
import os.path
import pathlib
import re
import shutil
import sys
import textwrap
import time
import typing as t
from dataclasses import dataclass
from yaml.error import YAMLError
import ansible.constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI, GalaxyError
from ansible.galaxy.collection import (
build_collection,
download_collections,
find_existing_collections,
install_collections,
publish_collection,
validate_collection_name,
validate_collection_path,
verify_collections,
SIGNATURE_COUNT_RE,
)
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
from ansible.galaxy.collection.gpg import GPG_ERROR_MAP
from ansible.galaxy.dependency_resolution.dataclasses import Requirement
from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken, NoTokenSentinel
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.module_utils.common.collections import is_iterable
from ansible.module_utils.common.yaml import yaml_dump, yaml_load
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils import six
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.playbook.role.requirement import RoleRequirement
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
urlparse = six.moves.urllib.parse.urlparse
# config definition by position: name, required, type
SERVER_DEF = [
('url', True, 'str'),
('username', False, 'str'),
('password', False, 'str'),
('token', False, 'str'),
('auth_url', False, 'str'),
('v3', False, 'bool'),
('validate_certs', False, 'bool'),
('client_id', False, 'str'),
('timeout', False, 'int'),
]
# config definition fields
SERVER_ADDITIONAL = {
'v3': {'default': 'False'},
'validate_certs': {'cli': [{'name': 'validate_certs'}]},
'timeout': {'default': '60', 'cli': [{'name': 'timeout'}]},
'token': {'default': None},
}
def with_collection_artifacts_manager(wrapped_method):
"""Inject an artifacts manager if not passed explicitly.
This decorator constructs a ConcreteArtifactsManager and maintains
the related temporary directory auto-cleanup around the target
method invocation.
"""
def method_wrapper(*args, **kwargs):
if 'artifacts_manager' in kwargs:
return wrapped_method(*args, **kwargs)
# FIXME: use validate_certs context from Galaxy servers when downloading collections
# .get used here for when this is used in a non-CLI context
artifacts_manager_kwargs = {'validate_certs': context.CLIARGS.get('resolved_validate_certs', True)}
keyring = context.CLIARGS.get('keyring', None)
if keyring is not None:
artifacts_manager_kwargs.update({
'keyring': GalaxyCLI._resolve_path(keyring),
'required_signature_count': context.CLIARGS.get('required_valid_signature_count', None),
'ignore_signature_errors': context.CLIARGS.get('ignore_gpg_errors', None),
})
with ConcreteArtifactsManager.under_tmpdir(
C.DEFAULT_LOCAL_TMP,
**artifacts_manager_kwargs
) as concrete_artifact_cm:
kwargs['artifacts_manager'] = concrete_artifact_cm
return wrapped_method(*args, **kwargs)
return method_wrapper
def _display_header(path, h1, h2, w1=10, w2=7):
display.display('\n# {0}\n{1:{cwidth}} {2:{vwidth}}\n{3} {4}\n'.format(
path,
h1,
h2,
'-' * max([len(h1), w1]), # Make sure that the number of dashes is at least the width of the header
'-' * max([len(h2), w2]),
cwidth=w1,
vwidth=w2,
))
def _display_role(gr):
install_info = gr.install_info
version = None
if install_info:
version = install_info.get("version", None)
if not version:
version = "(unknown version)"
display.display("- %s, %s" % (gr.name, version))
def _display_collection(collection, cwidth=10, vwidth=7, min_cwidth=10, min_vwidth=7):
display.display('{fqcn:{cwidth}} {version:{vwidth}}'.format(
fqcn=to_text(collection.fqcn),
version=collection.ver,
cwidth=max(cwidth, min_cwidth), # Make sure the width isn't smaller than the header
vwidth=max(vwidth, min_vwidth)
))
def _get_collection_widths(collections):
if not is_iterable(collections):
collections = (collections, )
fqcn_set = {to_text(c.fqcn) for c in collections}
version_set = {to_text(c.ver) for c in collections}
fqcn_length = len(max(fqcn_set or [''], key=len))
version_length = len(max(version_set or [''], key=len))
return fqcn_length, version_length
def validate_signature_count(value):
match = re.match(SIGNATURE_COUNT_RE, value)
if match is None:
raise ValueError(f"{value} is not a valid signature count value")
return value
@dataclass
class RoleDistributionServer:
_api: t.Union[GalaxyAPI, None]
api_servers: list[GalaxyAPI]
@property
def api(self):
if self._api:
return self._api
for server in self.api_servers:
try:
if u'v1' in server.available_api_versions:
self._api = server
break
except Exception:
continue
if not self._api:
self._api = self.api_servers[0]
return self._api
class GalaxyCLI(CLI):
'''Command to manage Ansible roles and collections.
None of the CLI tools are designed to run concurrently with themselves.
Use an external scheduler and/or locking to ensure there are no clashing operations.
'''
name = 'ansible-galaxy'
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url")
def __init__(self, args):
self._raw_args = args
self._implicit_role = False
if len(args) > 1:
# Inject role into sys.argv[1] as a backwards compatibility step
if args[1] not in ['-h', '--help', '--version'] and 'role' not in args and 'collection' not in args:
# TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice
args.insert(1, 'role')
self._implicit_role = True
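# For example (illustrative): `ansible-galaxy install geerlingguy.apache` is handled
# as if it were `ansible-galaxy role install geerlingguy.apache`.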
# since argparse doesn't allow hidden subparsers, handle dead login arg from raw args after "role" normalization
if args[1:3] == ['role', 'login']:
display.error(
"The login command was removed in late 2020. An API key is now required to publish roles or collections "
"to Galaxy. The key can be found at https://galaxy.ansible.com/me/preferences, and passed to the "
"ansible-galaxy CLI via a file at {0} or (insecurely) via the `--token` "
"command-line argument.".format(to_text(C.GALAXY_TOKEN_PATH)))
sys.exit(1)
self.api_servers = []
self.galaxy = None
self.lazy_role_api = None
super(GalaxyCLI, self).__init__(args)
def init_parser(self):
''' create an options parser for bin/ansible '''
super(GalaxyCLI, self).init_parser(
desc="Perform various Role and Collection related operations.",
)
# Common arguments that apply to more than 1 action
common = opt_help.argparse.ArgumentParser(add_help=False)
common.add_argument('-s', '--server', dest='api_server', help='The Galaxy API server URL')
common.add_argument('--token', '--api-key', dest='api_key',
help='The Ansible Galaxy API key which can be found at '
'https://galaxy.ansible.com/me/preferences.')
common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs', help='Ignore SSL certificate validation errors.', default=None)
common.add_argument('--timeout', dest='timeout', type=int,
help="The time to wait for operations against the galaxy server, defaults to 60s.")
opt_help.add_verbosity_options(common)
force = opt_help.argparse.ArgumentParser(add_help=False)
force.add_argument('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role or collection')
github = opt_help.argparse.ArgumentParser(add_help=False)
github.add_argument('github_user', help='GitHub username')
github.add_argument('github_repo', help='GitHub repository')
offline = opt_help.argparse.ArgumentParser(add_help=False)
offline.add_argument('--offline', dest='offline', default=False, action='store_true',
help="Don't query the galaxy API when creating roles")
default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '')
roles_path = opt_help.argparse.ArgumentParser(add_help=False)
roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True),
default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction,
help='The path to the directory containing your roles. The default is the first '
'writable one configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path)
collections_path = opt_help.argparse.ArgumentParser(add_help=False)
collections_path.add_argument('-p', '--collections-path', dest='collections_path', type=opt_help.unfrack_path(pathsep=True),
action=opt_help.PrependListAction,
help="One or more directories to search for collections in addition "
"to the default COLLECTIONS_PATHS. Separate multiple paths "
"with '{0}'.".format(os.path.pathsep))
cache_options = opt_help.argparse.ArgumentParser(add_help=False)
cache_options.add_argument('--clear-response-cache', dest='clear_response_cache', action='store_true',
default=False, help='Clear the existing server response cache.')
cache_options.add_argument('--no-cache', dest='no_cache', action='store_true', default=False,
help='Do not use the server response cache.')
# Add sub parser for the Galaxy role type (role or collection)
type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type')
type_parser.required = True
# Add sub parser for the Galaxy collection actions
collection = type_parser.add_parser('collection', help='Manage an Ansible Galaxy collection.')
collection_parser = collection.add_subparsers(metavar='COLLECTION_ACTION', dest='action')
collection_parser.required = True
self.add_download_options(collection_parser, parents=[common, cache_options])
self.add_init_options(collection_parser, parents=[common, force])
self.add_build_options(collection_parser, parents=[common, force])
self.add_publish_options(collection_parser, parents=[common])
self.add_install_options(collection_parser, parents=[common, force, cache_options])
self.add_list_options(collection_parser, parents=[common, collections_path])
self.add_verify_options(collection_parser, parents=[common, collections_path])
# Add sub parser for the Galaxy role actions
role = type_parser.add_parser('role', help='Manage an Ansible Galaxy role.')
role_parser = role.add_subparsers(metavar='ROLE_ACTION', dest='action')
role_parser.required = True
self.add_init_options(role_parser, parents=[common, force, offline])
self.add_remove_options(role_parser, parents=[common, roles_path])
self.add_delete_options(role_parser, parents=[common, github])
self.add_list_options(role_parser, parents=[common, roles_path])
self.add_search_options(role_parser, parents=[common])
self.add_import_options(role_parser, parents=[common, github])
self.add_setup_options(role_parser, parents=[common, roles_path])
self.add_info_options(role_parser, parents=[common, roles_path, offline])
self.add_install_options(role_parser, parents=[common, force, roles_path])
def add_download_options(self, parser, parents=None):
download_parser = parser.add_parser('download', parents=parents,
help='Download collections and their dependencies as a tarball for an '
'offline install.')
download_parser.set_defaults(func=self.execute_download)
download_parser.add_argument('args', help='Collection(s)', metavar='collection', nargs='*')
download_parser.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download collection(s) listed as dependencies.")
download_parser.add_argument('-p', '--download-path', dest='download_path',
default='./collections',
help='The directory to download the collections to.')
download_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be downloaded.')
download_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
def add_init_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
init_parser = parser.add_parser('init', parents=parents,
help='Initialize new {0} with the base structure of a '
'{0}.'.format(galaxy_type))
init_parser.set_defaults(func=self.execute_init)
init_parser.add_argument('--init-path', dest='init_path', default='./',
help='The path in which the skeleton {0} will be created. The default is the '
'current working directory.'.format(galaxy_type))
init_parser.add_argument('--{0}-skeleton'.format(galaxy_type), dest='{0}_skeleton'.format(galaxy_type),
default=C.GALAXY_COLLECTION_SKELETON if galaxy_type == 'collection' else C.GALAXY_ROLE_SKELETON,
help='The path to a {0} skeleton that the new {0} should be based '
'upon.'.format(galaxy_type))
obj_name_kwargs = {}
if galaxy_type == 'collection':
obj_name_kwargs['type'] = validate_collection_name
init_parser.add_argument('{0}_name'.format(galaxy_type), help='{0} name'.format(galaxy_type.capitalize()),
**obj_name_kwargs)
if galaxy_type == 'role':
init_parser.add_argument('--type', dest='role_type', action='store', default='default',
help="Initialize using an alternate role type. Valid types include: 'container', "
"'apb' and 'network'.")
def add_remove_options(self, parser, parents=None):
remove_parser = parser.add_parser('remove', parents=parents, help='Delete roles from roles_path.')
remove_parser.set_defaults(func=self.execute_remove)
remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+')
def add_delete_options(self, parser, parents=None):
delete_parser = parser.add_parser('delete', parents=parents,
help='Removes the role from Galaxy. It does not remove or alter the actual '
'GitHub repository.')
delete_parser.set_defaults(func=self.execute_delete)
def add_list_options(self, parser, parents=None):
galaxy_type = 'role'
if parser.metavar == 'COLLECTION_ACTION':
galaxy_type = 'collection'
list_parser = parser.add_parser('list', parents=parents,
help='Show the name and version of each {0} installed in the {0}s_path.'.format(galaxy_type))
list_parser.set_defaults(func=self.execute_list)
list_parser.add_argument(galaxy_type, help=galaxy_type.capitalize(), nargs='?', metavar=galaxy_type)
if galaxy_type == 'collection':
list_parser.add_argument('--format', dest='output_format', choices=('human', 'yaml', 'json'), default='human',
help="Format to display the list of collections in.")
def add_search_options(self, parser, parents=None):
search_parser = parser.add_parser('search', parents=parents,
help='Search the Galaxy database by tags, platforms, author and multiple '
'keywords.')
search_parser.set_defaults(func=self.execute_search)
search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by')
search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by')
search_parser.add_argument('--author', dest='author', help='GitHub username')
search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*')
def add_import_options(self, parser, parents=None):
import_parser = parser.add_parser('import', parents=parents, help='Import a role into a galaxy server')
import_parser.set_defaults(func=self.execute_import)
import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import results.")
import_parser.add_argument('--branch', dest='reference',
help='The name of a branch to import. Defaults to the repository\'s default branch '
'(usually master)')
import_parser.add_argument('--role-name', dest='role_name',
help='The name the role should have, if different than the repo name')
import_parser.add_argument('--status', dest='check_status', action='store_true', default=False,
help='Check the status of the most recent import request for given github_'
'user/github_repo.')
def add_setup_options(self, parser, parents=None):
setup_parser = parser.add_parser('setup', parents=parents,
help='Manage the integration between Galaxy and the given source.')
setup_parser.set_defaults(func=self.execute_setup)
setup_parser.add_argument('--remove', dest='remove_id', default=None,
help='Remove the integration matching the provided ID value. Use --list to see '
'ID values.')
setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False,
help='List all of your integrations.')
setup_parser.add_argument('source', help='Source')
setup_parser.add_argument('github_user', help='GitHub username')
setup_parser.add_argument('github_repo', help='GitHub repository')
setup_parser.add_argument('secret', help='Secret')
def add_info_options(self, parser, parents=None):
info_parser = parser.add_parser('info', parents=parents, help='View more details about a specific role.')
info_parser.set_defaults(func=self.execute_info)
info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]')
def add_verify_options(self, parser, parents=None):
galaxy_type = 'collection'
verify_parser = parser.add_parser('verify', parents=parents, help='Compare checksums with the collection(s) '
'found on the server and the installed copy. This does not verify dependencies.')
verify_parser.set_defaults(func=self.execute_verify)
verify_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', help='The installed collection(s) name. '
'This is mutually exclusive with --requirements-file.')
verify_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help='Ignore errors during verification and continue with the next specified collection.')
verify_parser.add_argument('--offline', dest='offline', action='store_true', default=False,
help='Validate collection integrity locally without contacting server for '
'canonical manifest hash.')
verify_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be verified.')
verify_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
verify_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before using '
'it to verify the rest of the contents of a collection from a Galaxy server. Use in '
'conjunction with a positional collection name (mutually exclusive with --requirements-file).')
valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \
'or all to signify that all signatures must be used to verify the collection. ' \
'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).'
ignore_gpg_status_help = 'A status code to ignore during signature verification (for example, NO_PUBKEY). ' \
'Provide this option multiple times to ignore a list of status codes. ' \
'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).'
verify_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
verify_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
def add_install_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
args_kwargs = {}
if galaxy_type == 'collection':
args_kwargs['help'] = 'The collection(s) name or path/url to a tar.gz collection artifact. This is ' \
'mutually exclusive with --requirements-file.'
ignore_errors_help = 'Ignore errors during installation and continue with the next specified ' \
'collection. This will not ignore dependency conflict errors.'
else:
args_kwargs['help'] = 'Role name, URL or tar file'
ignore_errors_help = 'Ignore errors and continue with the next specified role.'
install_parser = parser.add_parser('install', parents=parents,
help='Install {0}(s) from file(s), URL(s) or Ansible '
'Galaxy'.format(galaxy_type))
install_parser.set_defaults(func=self.execute_install)
install_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', **args_kwargs)
install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help=ignore_errors_help)
install_exclusive = install_parser.add_mutually_exclusive_group()
install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download {0}s listed as dependencies.".format(galaxy_type))
install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False,
help="Force overwriting an existing {0} and its "
"dependencies.".format(galaxy_type))
valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \
'or -1 to signify that all signatures must be used to verify the collection. ' \
'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).'
ignore_gpg_status_help = 'A status code to ignore during signature verification (for example, NO_PUBKEY). ' \
'Provide this option multiple times to ignore a list of status codes. ' \
'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).'
if galaxy_type == 'collection':
install_parser.add_argument('-p', '--collections-path', dest='collections_path',
default=self._get_default_collection_path(),
help='The path to the directory containing your collections.')
install_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be installed.')
install_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
install_parser.add_argument('-U', '--upgrade', dest='upgrade', action='store_true', default=False,
help='Upgrade installed collection artifacts. This will also update dependencies unless --no-deps is provided')
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before '
'installing the collection from a Galaxy server. Use in conjunction with a positional '
'collection name (mutually exclusive with --requirements-file).')
install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
install_parser.add_argument('--offline', dest='offline', action='store_true', default=False,
help='Install collection artifacts (tarballs) without contacting any distribution servers. '
'This does not apply to collections in remote Git repositories or URLs to remote tarballs.'
)
else:
install_parser.add_argument('-r', '--role-file', dest='requirements',
help='A file containing a list of roles to be installed.')
r_re = re.compile(r'^(?<!-)-[a-zA-Z]*r[a-zA-Z]*') # -r, -fr
contains_r = bool([a for a in self._raw_args if r_re.match(a)])
role_file_re = re.compile(r'--role-file($|=)') # --role-file foo, --role-file=foo
contains_role_file = bool([a for a in self._raw_args if role_file_re.match(a)])
if self._implicit_role and (contains_r or contains_role_file):
# Any collections in the requirements files will also be installed
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during collection signature verification')
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
install_parser.add_argument('-g', '--keep-scm-meta', dest='keep_scm_meta', action='store_true',
default=False,
help='Use tar instead of the scm archive option when packaging the role.')
def add_build_options(self, parser, parents=None):
build_parser = parser.add_parser('build', parents=parents,
help='Build an Ansible collection artifact that can be published to Ansible '
'Galaxy.')
build_parser.set_defaults(func=self.execute_build)
build_parser.add_argument('args', metavar='collection', nargs='*', default=('.',),
help='Path to the collection(s) directory to build. This should be the directory '
'that contains the galaxy.yml file. The default is the current working '
'directory.')
build_parser.add_argument('--output-path', dest='output_path', default='./',
help='The path in which the collection is built to. The default is the current '
'working directory.')
def add_publish_options(self, parser, parents=None):
publish_parser = parser.add_parser('publish', parents=parents,
help='Publish a collection artifact to Ansible Galaxy.')
publish_parser.set_defaults(func=self.execute_publish)
publish_parser.add_argument('args', metavar='collection_path',
help='The path to the collection tarball to publish.')
publish_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import validation results.")
publish_parser.add_argument('--import-timeout', dest='import_timeout', type=int, default=0,
help="The time to wait for the collection import process to finish.")
def post_process_args(self, options):
options = super(GalaxyCLI, self).post_process_args(options)
# ensure we have 'usable' cli option
setattr(options, 'validate_certs', (None if options.ignore_certs is None else not options.ignore_certs))
# the default if validate_certs is None
setattr(options, 'resolved_validate_certs', (options.validate_certs if options.validate_certs is not None else not C.GALAXY_IGNORE_CERTS))
display.verbosity = options.verbosity
return options
def run(self):
super(GalaxyCLI, self).run()
self.galaxy = Galaxy()
def server_config_def(section, key, required, option_type):
config_def = {
'description': 'The %s of the %s Galaxy server' % (key, section),
'ini': [
{
'section': 'galaxy_server.%s' % section,
'key': key,
}
],
'env': [
{'name': 'ANSIBLE_GALAXY_SERVER_%s_%s' % (section.upper(), key.upper())},
],
'required': required,
'type': option_type,
}
if key in SERVER_ADDITIONAL:
config_def.update(SERVER_ADDITIONAL[key])
return config_def
galaxy_options = {}
for optional_key in ['clear_response_cache', 'no_cache', 'timeout']:
if optional_key in context.CLIARGS:
galaxy_options[optional_key] = context.CLIARGS[optional_key]
config_servers = []
# Need to filter out empty strings or non truthy values as an empty server list env var is equal to [''].
server_list = [s for s in C.GALAXY_SERVER_LIST or [] if s]
for server_priority, server_key in enumerate(server_list, start=1):
# Abuse the 'plugin config' by making 'galaxy_server' a type of plugin
# Config definitions are looked up dynamically based on the C.GALAXY_SERVER_LIST entry. We look up the
# section [galaxy_server.<server>] for the values url, username, password, and token.
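# For illustration, an assumed ansible.cfg layout that feeds this lookup:
#
#     [galaxy]
#     server_list = release_galaxy
#
#     [galaxy_server.release_galaxy]
#     url = https://galaxy.ansible.com/
#     token = <api token>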
config_dict = dict((k, server_config_def(server_key, k, req, ensure_type)) for k, req, ensure_type in SERVER_DEF)
defs = AnsibleLoader(yaml_dump(config_dict)).get_single_data()
C.config.initialize_plugin_configuration_definitions('galaxy_server', server_key, defs)
# resolve the config created options above with existing config and user options
server_options = C.config.get_plugin_options('galaxy_server', server_key)
# auth_url is used to create the token, but not directly by GalaxyAPI, so
# it doesn't need to be passed as kwarg to GalaxyApi, same for others we pop here
auth_url = server_options.pop('auth_url')
client_id = server_options.pop('client_id')
token_val = server_options['token'] or NoTokenSentinel
username = server_options['username']
v3 = server_options.pop('v3')
if server_options['validate_certs'] is None:
server_options['validate_certs'] = context.CLIARGS['resolved_validate_certs']
validate_certs = server_options['validate_certs']
if v3:
# This allows a user to explicitly indicate the server uses the /v3 API
# This was added for testing against pulp_ansible and I'm not sure it has
# a practical purpose outside of this use case. As such, this option is not
# documented as of now
server_options['available_api_versions'] = {'v3': '/v3'}
# default case if no auth info is provided.
server_options['token'] = None
if username:
server_options['token'] = BasicAuthToken(username, server_options['password'])
else:
if token_val:
if auth_url:
server_options['token'] = KeycloakToken(access_token=token_val,
auth_url=auth_url,
validate_certs=validate_certs,
client_id=client_id)
else:
# The galaxy v1 / github / django / 'Token'
server_options['token'] = GalaxyToken(token=token_val)
server_options.update(galaxy_options)
config_servers.append(GalaxyAPI(
self.galaxy, server_key,
priority=server_priority,
**server_options
))
cmd_server = context.CLIARGS['api_server']
cmd_token = GalaxyToken(token=context.CLIARGS['api_key'])
validate_certs = context.CLIARGS['resolved_validate_certs']
if cmd_server:
# Cmd args take precedence over the config entry but first check if the arg was a name and use that config
# entry, otherwise create a new API entry for the server specified.
config_server = next((s for s in config_servers if s.name == cmd_server), None)
if config_server:
self.api_servers.append(config_server)
else:
self.api_servers.append(GalaxyAPI(
self.galaxy, 'cmd_arg', cmd_server, token=cmd_token,
priority=len(config_servers) + 1,
validate_certs=validate_certs,
**galaxy_options
))
else:
self.api_servers = config_servers
# Default to C.GALAXY_SERVER if no servers were defined
if len(self.api_servers) == 0:
self.api_servers.append(GalaxyAPI(
self.galaxy, 'default', C.GALAXY_SERVER, token=cmd_token,
priority=0,
validate_certs=validate_certs,
**galaxy_options
))
# checks api versions once a GalaxyRole makes an api call
# self.api can be used to evaluate the best server immediately
self.lazy_role_api = RoleDistributionServer(None, self.api_servers)
return context.CLIARGS['func']()
@property
def api(self):
return self.lazy_role_api.api
def _get_default_collection_path(self):
return C.COLLECTIONS_PATHS[0]
def _parse_requirements_file(self, requirements_file, allow_old_format=True, artifacts_manager=None, validate_signature_options=True):
"""
Parses an Ansible requirement.yml file and returns all the roles and/or collections defined in it. There are 2
requirements file formats:
# v1 (roles only)
- src: The source of the role, required if include is not set. Can be a Galaxy role name, a URL to an SCM repo, or a tarball.
name: Downloads the role to the specified name; defaults to the name from Galaxy, or the name of the repo if src is a URL.
scm: If src is a URL, specify the SCM. Only git or hg are supported and defaults to git.
version: The version of the role to download. Can also be a tag, commit, or branch name and defaults to master.
include: Path to additional requirements.yml files.
# v2 (roles and collections)
---
roles:
# Same as v1 format just under the roles key
collections:
- namespace.collection
- name: namespace.collection
version: version identifier, multiple identifiers are separated by ','
source: the URL or a predefined source name that relates to C.GALAXY_SERVER_LIST
type: git|file|url|galaxy
:param requirements_file: The path to the requirements file.
:param allow_old_format: Will fail if a v1 requirements file is found and this is set to False.
:param artifacts_manager: Artifacts manager.
:return: a dict containing the roles and collections found in the requirements file.
"""
requirements = {
'roles': [],
'collections': [],
}
b_requirements_file = to_bytes(requirements_file, errors='surrogate_or_strict')
if not os.path.exists(b_requirements_file):
raise AnsibleError("The requirements file '%s' does not exist." % to_native(requirements_file))
display.vvv("Reading requirement file at '%s'" % requirements_file)
with open(b_requirements_file, 'rb') as req_obj:
try:
file_requirements = yaml_load(req_obj)
except YAMLError as err:
raise AnsibleError(
"Failed to parse the requirements yml at '%s' with the following error:\n%s"
% (to_native(requirements_file), to_native(err)))
if file_requirements is None:
raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file))
def parse_role_req(requirement):
if "include" not in requirement:
role = RoleRequirement.role_yaml_parse(requirement)
display.vvv("found role %s in yaml file" % to_text(role))
if "name" not in role and "src" not in role:
raise AnsibleError("Must specify name or src for role")
return [GalaxyRole(self.galaxy, self.lazy_role_api, **role)]
else:
b_include_path = to_bytes(requirement["include"], errors="surrogate_or_strict")
if not os.path.isfile(b_include_path):
raise AnsibleError("Failed to find include requirements file '%s' in '%s'"
% (to_native(b_include_path), to_native(requirements_file)))
with open(b_include_path, 'rb') as f_include:
try:
return [GalaxyRole(self.galaxy, self.lazy_role_api, **r) for r in
(RoleRequirement.role_yaml_parse(i) for i in yaml_load(f_include))]
except Exception as e:
raise AnsibleError("Unable to load data from include requirements file: %s %s"
% (to_native(requirements_file), to_native(e)))
if isinstance(file_requirements, list):
# Older format that contains only roles
if not allow_old_format:
raise AnsibleError("Expecting requirements file to be a dict with the key 'collections' that contains "
"a list of collections to install")
for role_req in file_requirements:
requirements['roles'] += parse_role_req(role_req)
else:
# Newer format with a collections and/or roles key
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
if extra_keys:
raise AnsibleError("Expecting only 'roles' and/or 'collections' as base keys in the requirements "
"file. Found: %s" % (to_native(", ".join(extra_keys))))
for role_req in file_requirements.get('roles') or []:
requirements['roles'] += parse_role_req(role_req)
requirements['collections'] = [
Requirement.from_requirement_dict(
self._init_coll_req_dict(collection_req),
artifacts_manager,
validate_signature_options,
)
for collection_req in file_requirements.get('collections') or []
]
return requirements
def _init_coll_req_dict(self, coll_req):
if not isinstance(coll_req, dict):
# Assume it's a string:
return {'name': coll_req}
if (
'name' not in coll_req or
not coll_req.get('source') or
coll_req.get('type', 'galaxy') != 'galaxy'
):
return coll_req
# Try and match up the requirement source with our list of Galaxy API
# servers defined in the config, otherwise create a server with that
# URL without any auth.
coll_req['source'] = next(
iter(
srvr for srvr in self.api_servers
if coll_req['source'] in {srvr.name, srvr.api_server}
),
GalaxyAPI(
self.galaxy,
'explicit_requirement_{name!s}'.format(
name=coll_req['name'],
),
coll_req['source'],
validate_certs=context.CLIARGS['resolved_validate_certs'],
),
)
return coll_req
@staticmethod
def exit_without_ignore(rc=1):
"""
Exits with the specified return code unless the
option --ignore-errors was specified
"""
if not context.CLIARGS['ignore_errors']:
raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.')
@staticmethod
def _display_role_info(role_info):
text = [u"", u"Role: %s" % to_text(role_info['name'])]
# Get the top-level 'description' first, falling back to galaxy_info['galaxy_info']['description'].
galaxy_info = role_info.get('galaxy_info', {})
description = role_info.get('description', galaxy_info.get('description', ''))
text.append(u"\tdescription: %s" % description)
for k in sorted(role_info.keys()):
if k in GalaxyCLI.SKIP_INFO_KEYS:
continue
if isinstance(role_info[k], dict):
text.append(u"\t%s:" % (k))
for key in sorted(role_info[k].keys()):
if key in GalaxyCLI.SKIP_INFO_KEYS:
continue
text.append(u"\t\t%s: %s" % (key, role_info[k][key]))
else:
text.append(u"\t%s: %s" % (k, role_info[k]))
# make sure we have a trailing newline returned
text.append(u"")
return u'\n'.join(text)
@staticmethod
def _resolve_path(path):
return os.path.abspath(os.path.expanduser(os.path.expandvars(path)))
@staticmethod
def _get_skeleton_galaxy_yml(template_path, inject_data):
with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj:
meta_template = to_text(template_obj.read(), errors='surrogate_or_strict')
galaxy_meta = get_collections_galaxy_meta_info()
required_config = []
optional_config = []
for meta_entry in galaxy_meta:
config_list = required_config if meta_entry.get('required', False) else optional_config
value = inject_data.get(meta_entry['key'], None)
if not value:
meta_type = meta_entry.get('type', 'str')
if meta_type == 'str':
value = ''
elif meta_type == 'list':
value = []
elif meta_type == 'dict':
value = {}
meta_entry['value'] = value
config_list.append(meta_entry)
link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)")
const_pattern = re.compile(r"C\(([^)]+)\)")
def comment_ify(v):
if isinstance(v, list):
v = ". ".join([l.rstrip('.') for l in v])
v = link_pattern.sub(r"\1 <\2>", v)
v = const_pattern.sub(r"'\1'", v)
return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False)
loader = DataLoader()
templar = Templar(loader, variables={'required_config': required_config, 'optional_config': optional_config})
templar.environment.filters['comment_ify'] = comment_ify
meta_value = templar.template(meta_template)
return meta_value
def _require_one_of_collections_requirements(
self, collections, requirements_file,
signatures=None,
artifacts_manager=None,
):
if collections and requirements_file:
raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.")
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
elif requirements_file:
if signatures is not None:
raise AnsibleError(
"The --signatures option and --requirements-file are mutually exclusive. "
"Use the --signatures with positional collection_name args or provide a "
"'signatures' key for requirements in the --requirements-file."
)
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._parse_requirements_file(
requirements_file,
allow_old_format=False,
artifacts_manager=artifacts_manager,
)
else:
requirements = {
'collections': [
Requirement.from_string(coll_input, artifacts_manager, signatures)
for coll_input in collections
],
'roles': [],
}
return requirements
############################
# execute actions
############################
def execute_role(self):
"""
Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init
as listed below.
"""
# To satisfy doc build
pass
def execute_collection(self):
"""
Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as
listed below.
"""
# To satisfy doc build
pass
def execute_build(self):
"""
Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy.
By default, this command builds from the current working directory. You can optionally pass in the
collection input path (where the ``galaxy.yml`` file is).
"""
force = context.CLIARGS['force']
output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path'])
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
elif os.path.isfile(b_output_path):
raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path))
for collection_path in context.CLIARGS['args']:
collection_path = GalaxyCLI._resolve_path(collection_path)
build_collection(
to_text(collection_path, errors='surrogate_or_strict'),
to_text(output_path, errors='surrogate_or_strict'),
force,
)
@with_collection_artifacts_manager
def execute_download(self, artifacts_manager=None):
collections = context.CLIARGS['args']
no_deps = context.CLIARGS['no_deps']
download_path = context.CLIARGS['download_path']
requirements_file = context.CLIARGS['requirements']
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._require_one_of_collections_requirements(
collections, requirements_file,
artifacts_manager=artifacts_manager,
)['collections']
download_path = GalaxyCLI._resolve_path(download_path)
b_download_path = to_bytes(download_path, errors='surrogate_or_strict')
if not os.path.exists(b_download_path):
os.makedirs(b_download_path)
download_collections(
requirements, download_path, self.api_servers, no_deps,
context.CLIARGS['allow_pre_release'],
artifacts_manager=artifacts_manager,
)
return 0
def execute_init(self):
"""
Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format.
Requires a role or collection name. The collection name must be in the format ``<namespace>.<collection>``.
"""
galaxy_type = context.CLIARGS['type']
init_path = context.CLIARGS['init_path']
force = context.CLIARGS['force']
obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)]
obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)]
inject_data = dict(
description='your {0} description'.format(galaxy_type),
ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'),
)
if galaxy_type == 'role':
inject_data.update(dict(
author='your name',
company='your company (optional)',
license='license (GPL-2.0-or-later, MIT, etc)',
role_name=obj_name,
role_type=context.CLIARGS['role_type'],
issue_tracker_url='http://example.com/issue/tracker',
repository_url='http://example.com/repository',
documentation_url='http://docs.example.com',
homepage_url='http://example.com',
min_ansible_version=ansible_version[:3], # x.y
dependencies=[],
))
skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE
obj_path = os.path.join(init_path, obj_name)
elif galaxy_type == 'collection':
namespace, collection_name = obj_name.split('.', 1)
inject_data.update(dict(
namespace=namespace,
collection_name=collection_name,
version='1.0.0',
readme='README.md',
authors=['your name <[email protected]>'],
license=['GPL-2.0-or-later'],
repository='http://example.com/repository',
documentation='http://docs.example.com',
homepage='http://example.com',
issues='http://example.com/issue/tracker',
build_ignore=[],
))
skeleton_ignore_expressions = C.GALAXY_COLLECTION_SKELETON_IGNORE
obj_path = os.path.join(init_path, namespace, collection_name)
b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict')
if os.path.exists(b_obj_path):
if os.path.isfile(obj_path):
raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path))
elif not force:
raise AnsibleError("- the directory %s already exists. "
"You can use --force to re-initialize this directory,\n"
"however it will reset any main.yml files that may have\n"
"been modified there already." % to_native(obj_path))
# delete the contents rather than the collection root in case init was run from the root (--init-path ../../)
for root, dirs, files in os.walk(b_obj_path, topdown=True):
for old_dir in dirs:
path = os.path.join(root, old_dir)
shutil.rmtree(path)
for old_file in files:
path = os.path.join(root, old_file)
os.unlink(path)
if obj_skeleton is not None:
own_skeleton = False
else:
own_skeleton = True
obj_skeleton = self.galaxy.default_role_skeleton_path
skeleton_ignore_expressions = ['^.*/.git_keep$']
obj_skeleton = os.path.expanduser(obj_skeleton)
skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions]
if not os.path.exists(obj_skeleton):
raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format(
to_native(obj_skeleton), galaxy_type)
)
loader = DataLoader()
templar = Templar(loader, variables=inject_data)
# create role directory
if not os.path.exists(b_obj_path):
os.makedirs(b_obj_path)
for root, dirs, files in os.walk(obj_skeleton, topdown=True):
rel_root = os.path.relpath(root, obj_skeleton)
rel_dirs = rel_root.split(os.sep)
rel_root_dir = rel_dirs[0]
if galaxy_type == 'collection':
# A collection can contain templates in playbooks/*/templates and roles/*/templates
in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs
else:
in_templates_dir = rel_root_dir == 'templates'
# Filter out ignored directory names
# Use [:] to mutate the list os.walk uses
dirs[:] = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)]
for f in files:
filename, ext = os.path.splitext(f)
if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re):
continue
if galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2':
# Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options
# dynamically which requires special options to be set.
# The templated data's keys must match the key name but the inject data contains collection_name
# instead of name. We just make a copy and change the key back to name for this file.
template_data = inject_data.copy()
template_data['name'] = template_data.pop('collection_name')
meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data)
b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict')
with open(b_dest_file, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict'))
elif ext == ".j2" and not in_templates_dir:
src_template = os.path.join(root, f)
dest_file = os.path.join(obj_path, rel_root, filename)
template_data = to_text(loader._get_file_contents(src_template)[0], errors='surrogate_or_strict')
b_rendered = to_bytes(templar.template(template_data), errors='surrogate_or_strict')
with open(dest_file, 'wb') as df:
df.write(b_rendered)
else:
f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton)
shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path), follow_symlinks=False)
for d in dirs:
b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict')
if os.path.exists(b_dir_path):
continue
b_src_dir = to_bytes(os.path.join(root, d), errors='surrogate_or_strict')
if os.path.islink(b_src_dir):
shutil.copyfile(b_src_dir, b_dir_path, follow_symlinks=False)
else:
os.makedirs(b_dir_path)
display.display("- %s %s was created successfully" % (galaxy_type.title(), obj_name))
def execute_info(self):
"""
prints out detailed information about an installed role as well as info available from the galaxy API.
"""
roles_path = context.CLIARGS['roles_path']
data = ''
for role in context.CLIARGS['args']:
role_info = {'path': roles_path}
gr = GalaxyRole(self.galaxy, self.lazy_role_api, role)
install_info = gr.install_info
if install_info:
if 'version' in install_info:
install_info['installed_version'] = install_info['version']
del install_info['version']
role_info.update(install_info)
if not context.CLIARGS['offline']:
remote_data = None
try:
remote_data = self.api.lookup_role_by_name(role, False)
except GalaxyError as e:
if e.http_code == 400 and 'Bad Request' in e.message:
# Role does not exist in Ansible Galaxy
data = u"- the role %s was not found" % role
break
raise AnsibleError("Unable to find info about '%s': %s" % (role, e))
if remote_data:
role_info.update(remote_data)
elif context.CLIARGS['offline'] and not gr._exists:
data = u"- the role %s was not found" % role
break
if gr.metadata:
role_info.update(gr.metadata)
req = RoleRequirement()
role_spec = req.role_yaml_parse({'role': role})
if role_spec:
role_info.update(role_spec)
data += self._display_role_info(role_info)
self.pager(data)
@with_collection_artifacts_manager
def execute_verify(self, artifacts_manager=None):
collections = context.CLIARGS['args']
search_paths = AnsibleCollectionConfig.collection_paths
ignore_errors = context.CLIARGS['ignore_errors']
local_verify_only = context.CLIARGS['offline']
requirements_file = context.CLIARGS['requirements']
signatures = context.CLIARGS['signatures']
if signatures is not None:
signatures = list(signatures)
requirements = self._require_one_of_collections_requirements(
collections, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)['collections']
resolved_paths = [validate_collection_path(GalaxyCLI._resolve_path(path)) for path in search_paths]
results = verify_collections(
requirements, resolved_paths,
self.api_servers, ignore_errors,
local_verify_only=local_verify_only,
artifacts_manager=artifacts_manager,
)
if any(result for result in results if not result.success):
return 1
return 0
@with_collection_artifacts_manager
def execute_install(self, artifacts_manager=None):
"""
Install one or more roles(``ansible-galaxy role install``), or one or more collections(``ansible-galaxy collection install``).
You can pass in a list (roles or collections) or use the file
option listed below (these are mutually exclusive). If you pass in a list, it
can be a name (which will be downloaded via the galaxy API and github), or it can be a local tar archive file.
:param artifacts_manager: Artifacts manager.
"""
install_items = context.CLIARGS['args']
requirements_file = context.CLIARGS['requirements']
collection_path = None
signatures = context.CLIARGS.get('signatures')
if signatures is not None:
signatures = list(signatures)
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
two_type_warning = "The requirements file '%s' contains {0}s which will be ignored. To install these {0}s " \
"run 'ansible-galaxy {0} install -r' or to install both at the same time run " \
"'ansible-galaxy install -r' without a custom install path." % to_text(requirements_file)
# TODO: Would be nice to share the same behaviour with args and -r in collections and roles.
collection_requirements = []
role_requirements = []
if context.CLIARGS['type'] == 'collection':
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['collections_path'])
requirements = self._require_one_of_collections_requirements(
install_items, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)
collection_requirements = requirements['collections']
if requirements['roles']:
display.vvv(two_type_warning.format('role'))
else:
if not install_items and requirements_file is None:
raise AnsibleOptionsError("- you must specify a user/role name or a roles file")
if requirements_file:
if not (requirements_file.endswith('.yaml') or requirements_file.endswith('.yml')):
raise AnsibleError("Invalid role requirements file, it must end with a .yml or .yaml extension")
galaxy_args = self._raw_args
will_install_collections = self._implicit_role and '-p' not in galaxy_args and '--roles-path' not in galaxy_args
requirements = self._parse_requirements_file(
requirements_file,
artifacts_manager=artifacts_manager,
validate_signature_options=will_install_collections,
)
role_requirements = requirements['roles']
# We can only install collections and roles at the same time if the type wasn't specified and the -p
# argument was not used. If collections are present in the requirements then at least display a msg.
if requirements['collections'] and (not self._implicit_role or '-p' in galaxy_args or
'--roles-path' in galaxy_args):
# We only want to display a warning if 'ansible-galaxy install -r ... -p ...'. Other cases the user
# was explicit about the type and shouldn't care that collections were skipped.
display_func = display.warning if self._implicit_role else display.vvv
display_func(two_type_warning.format('collection'))
else:
collection_path = self._get_default_collection_path()
collection_requirements = requirements['collections']
else:
# roles were specified directly, so we'll just go out grab them
# (and their dependencies, unless the user doesn't want us to).
for rname in context.CLIARGS['args']:
role = RoleRequirement.role_yaml_parse(rname.strip())
role_requirements.append(GalaxyRole(self.galaxy, self.lazy_role_api, **role))
if not role_requirements and not collection_requirements:
display.display("Skipping install, no requirements found")
return
if role_requirements:
display.display("Starting galaxy role install process")
self._execute_install_role(role_requirements)
if collection_requirements:
display.display("Starting galaxy collection install process")
# Collections can technically be installed even when ansible-galaxy is in role mode so we need to pass in
# the install path as context.CLIARGS['collections_path'] won't be set (default is calculated above).
self._execute_install_collection(
collection_requirements, collection_path,
artifacts_manager=artifacts_manager,
)
def _execute_install_collection(
self, requirements, path, artifacts_manager,
):
force = context.CLIARGS['force']
ignore_errors = context.CLIARGS['ignore_errors']
no_deps = context.CLIARGS['no_deps']
force_with_deps = context.CLIARGS['force_with_deps']
try:
disable_gpg_verify = context.CLIARGS['disable_gpg_verify']
except KeyError:
if self._implicit_role:
raise AnsibleError(
'Unable to properly parse command line arguments. Please use "ansible-galaxy collection install" '
'instead of "ansible-galaxy install".'
)
raise
# If `ansible-galaxy install` is used, collection-only options aren't available to the user and won't be in context.CLIARGS
allow_pre_release = context.CLIARGS.get('allow_pre_release', False)
upgrade = context.CLIARGS.get('upgrade', False)
collections_path = C.COLLECTIONS_PATHS
if len([p for p in collections_path if p.startswith(path)]) == 0:
display.warning("The specified collections path '%s' is not part of the configured Ansible "
"collections paths '%s'. The installed collection will not be picked up in an Ansible "
"run, unless within a playbook-adjacent collections directory." % (to_text(path), to_text(":".join(collections_path))))
output_path = validate_collection_path(path)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
install_collections(
requirements, output_path, self.api_servers, ignore_errors,
no_deps, force, force_with_deps, upgrade,
allow_pre_release=allow_pre_release,
artifacts_manager=artifacts_manager,
disable_gpg_verify=disable_gpg_verify,
offline=context.CLIARGS.get('offline', False),
)
return 0
def _execute_install_role(self, requirements):
role_file = context.CLIARGS['requirements']
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
force = context.CLIARGS['force'] or force_deps
for role in requirements:
# only process roles in roles files when names matches if given
if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']:
display.vvv('Skipping role %s' % role.name)
continue
display.vvv('Processing role %s ' % role.name)
# query the galaxy API for the role data
if role.install_info is not None:
if role.install_info['version'] != role.version or force:
if force:
display.display('- changing role %s from %s to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
role.remove()
else:
display.warning('- %s (%s) is already installed - use --force to change version to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
continue
else:
if not force:
display.display('- %s is already installed, skipping.' % str(role))
continue
try:
installed = role.install()
except AnsibleError as e:
display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e)))
self.exit_without_ignore()
continue
# install dependencies, if we want them
if not no_deps and installed:
if not role.metadata:
# NOTE: the meta file is also required for installing the role, not just dependencies
display.warning("Meta file %s is empty. Skipping dependencies." % role.path)
else:
role_dependencies = role.metadata_dependencies + role.requirements
for dep in role_dependencies:
display.debug('Installing dep %s' % dep)
dep_req = RoleRequirement()
dep_info = dep_req.role_yaml_parse(dep)
dep_role = GalaxyRole(self.galaxy, self.lazy_role_api, **dep_info)
if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None:
# we know we can skip this, as it's not going to
# be found on galaxy.ansible.com
continue
if dep_role.install_info is None:
if dep_role not in requirements:
display.display('- adding dependency: %s' % to_text(dep_role))
requirements.append(dep_role)
else:
display.display('- dependency %s already pending installation.' % dep_role.name)
else:
if dep_role.install_info['version'] != dep_role.version:
if force_deps:
display.display('- changing dependent role %s from %s to %s' %
(dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified"))
dep_role.remove()
requirements.append(dep_role)
else:
display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' %
(to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version']))
else:
if force_deps:
requirements.append(dep_role)
else:
display.display('- dependency %s is already installed, skipping.' % dep_role.name)
if not installed:
display.warning("- %s was NOT installed successfully." % role.name)
self.exit_without_ignore()
return 0
def execute_remove(self):
"""
removes the list of roles passed as arguments from the local system.
"""
if not context.CLIARGS['args']:
raise AnsibleOptionsError('- you must specify at least one role to remove.')
for role_name in context.CLIARGS['args']:
role = GalaxyRole(self.galaxy, self.api, role_name)
try:
if role.remove():
display.display('- successfully removed %s' % role_name)
else:
display.display('- %s is not installed, skipping.' % role_name)
except Exception as e:
raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e)))
return 0
def execute_list(self):
"""
List installed collections or roles
"""
if context.CLIARGS['type'] == 'role':
self.execute_list_role()
elif context.CLIARGS['type'] == 'collection':
self.execute_list_collection()
def execute_list_role(self):
"""
List all roles installed on the local system or a specific role
"""
path_found = False
role_found = False
warnings = []
roles_search_paths = context.CLIARGS['roles_path']
role_name = context.CLIARGS['role']
for path in roles_search_paths:
role_path = GalaxyCLI._resolve_path(path)
if os.path.isdir(path):
path_found = True
else:
warnings.append("- the configured path {0} does not exist.".format(path))
continue
if role_name:
# show the requested role, if it exists
gr = GalaxyRole(self.galaxy, self.lazy_role_api, role_name, path=os.path.join(role_path, role_name))
if os.path.isdir(gr.path):
role_found = True
display.display('# %s' % os.path.dirname(gr.path))
_display_role(gr)
break
warnings.append("- the role %s was not found" % role_name)
else:
if not os.path.exists(role_path):
warnings.append("- the configured path %s does not exist." % role_path)
continue
if not os.path.isdir(role_path):
warnings.append("- the configured path %s, exists, but it is not a directory." % role_path)
continue
display.display('# %s' % role_path)
path_files = os.listdir(role_path)
for path_file in path_files:
gr = GalaxyRole(self.galaxy, self.lazy_role_api, path_file, path=path)
if gr.metadata:
_display_role(gr)
# Do not warn if the role was found in any of the search paths
if role_found and role_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError(
"- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type'])
)
return 0
@with_collection_artifacts_manager
def execute_list_collection(self, artifacts_manager=None):
"""
List all collections installed on the local system
:param artifacts_manager: Artifacts manager.
"""
if artifacts_manager is not None:
artifacts_manager.require_build_metadata = False
output_format = context.CLIARGS['output_format']
collection_name = context.CLIARGS['collection']
default_collections_path = set(C.COLLECTIONS_PATHS)
collections_search_paths = (
set(context.CLIARGS['collections_path'] or []) | default_collections_path | set(AnsibleCollectionConfig.collection_paths)
)
collections_in_paths = {}
warnings = []
path_found = False
collection_found = False
namespace_filter = None
collection_filter = None
if collection_name:
# list a specific collection
validate_collection_name(collection_name)
namespace_filter, collection_filter = collection_name.split('.')
collections = list(find_existing_collections(
list(collections_search_paths),
artifacts_manager,
namespace_filter=namespace_filter,
collection_filter=collection_filter,
dedupe=False
))
seen = set()
fqcn_width, version_width = _get_collection_widths(collections)
for collection in sorted(collections, key=lambda c: c.src):
collection_found = True
collection_path = pathlib.Path(to_text(collection.src)).parent.parent.as_posix()
if output_format in {'yaml', 'json'}:
collections_in_paths[collection_path] = {
collection.fqcn: {'version': collection.ver} for collection in collections
}
else:
if collection_path not in seen:
_display_header(
collection_path,
'Collection',
'Version',
fqcn_width,
version_width
)
seen.add(collection_path)
_display_collection(collection, fqcn_width, version_width)
path_found = False
for path in collections_search_paths:
if not os.path.exists(path):
if path in default_collections_path:
# don't warn for missing default paths
continue
warnings.append("- the configured path {0} does not exist.".format(path))
elif os.path.exists(path) and not os.path.isdir(path):
warnings.append("- the configured path {0}, exists, but it is not a directory.".format(path))
else:
path_found = True
# Do not warn if the specific collection was found in any of the search paths
if collection_found and collection_name:
warnings = []
for w in warnings:
display.warning(w)
if not collections and not path_found:
raise AnsibleOptionsError(
"- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type'])
)
if output_format == 'json':
display.display(json.dumps(collections_in_paths))
elif output_format == 'yaml':
display.display(yaml_dump(collections_in_paths))
return 0
def execute_publish(self):
"""
Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish.
"""
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args'])
wait = context.CLIARGS['wait']
timeout = context.CLIARGS['import_timeout']
publish_collection(collection_path, self.api, wait, timeout)
def execute_search(self):
''' searches for roles on the Ansible Galaxy server'''
page_size = 1000
search = None
if context.CLIARGS['args']:
search = '+'.join(context.CLIARGS['args'])
if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']:
raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.")
response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'],
tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size)
if response['count'] == 0:
display.warning("No roles match your search.")
return 0
data = [u'']
if response['count'] > page_size:
data.append(u"Found %d roles matching your search. Showing first %s." % (response['count'], page_size))
else:
data.append(u"Found %d roles matching your search:" % response['count'])
max_len = []
for role in response['results']:
max_len.append(len(role['username'] + '.' + role['name']))
name_len = max(max_len)
format_str = u" %%-%ds %%s" % name_len
data.append(u'')
data.append(format_str % (u"Name", u"Description"))
data.append(format_str % (u"----", u"-----------"))
for role in response['results']:
data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description']))
data = u'\n'.join(data)
self.pager(data)
return 0
def execute_import(self):
""" used to import a role into Ansible Galaxy """
colors = {
'INFO': 'normal',
'WARNING': C.COLOR_WARN,
'ERROR': C.COLOR_ERROR,
'SUCCESS': C.COLOR_OK,
'FAILED': C.COLOR_ERROR,
}
github_user = to_text(context.CLIARGS['github_user'], errors='surrogate_or_strict')
github_repo = to_text(context.CLIARGS['github_repo'], errors='surrogate_or_strict')
if context.CLIARGS['check_status']:
task = self.api.get_import_task(github_user=github_user, github_repo=github_repo)
else:
# Submit an import request
task = self.api.create_import_task(github_user, github_repo,
reference=context.CLIARGS['reference'],
role_name=context.CLIARGS['role_name'])
if len(task) > 1:
# found multiple roles associated with github_user/github_repo
display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user, github_repo),
color='yellow')
display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED)
for t in task:
display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED)
display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo),
color=C.COLOR_CHANGED)
return 0
# found a single role as expected
display.display("Successfully submitted import request %d" % task[0]['id'])
if not context.CLIARGS['wait']:
display.display("Role name: %s" % task[0]['summary_fields']['role']['name'])
display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo']))
if context.CLIARGS['check_status'] or context.CLIARGS['wait']:
# Get the status of the import
msg_list = []
finished = False
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
def execute_setup(self):
""" Setup an integration from Github or Travis for Ansible Galaxy roles"""
if context.CLIARGS['setup_list']:
# List existing integration secrets
secrets = self.api.list_secrets()
if len(secrets) == 0:
# None found
display.display("No integrations found.")
return 0
display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK)
display.display("---------- ---------- ----------", color=C.COLOR_OK)
for secret in secrets:
display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'],
secret['github_repo']), color=C.COLOR_OK)
return 0
if context.CLIARGS['remove_id']:
# Remove a secret
self.api.remove_secret(context.CLIARGS['remove_id'])
display.display("Secret removed. Integrations using this secret will not longer work.", color=C.COLOR_OK)
return 0
source = context.CLIARGS['source']
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
secret = context.CLIARGS['secret']
resp = self.api.add_secret(source, github_user, github_repo, secret)
display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))
return 0
def execute_delete(self):
""" Delete a role from Ansible Galaxy. """
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
resp = self.api.delete_role(github_user, github_repo)
if len(resp['deleted_roles']) > 1:
display.display("Deleted the following roles:")
display.display("ID User Name")
display.display("------ --------------- ----------")
for role in resp['deleted_roles']:
display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name))
display.display(resp['status'])
return 0
def main(args=None):
GalaxyCLI.cli_executor(args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,170 |
Extend use of `retry_with_delays_and_condition` within Galaxy API requests to retry on `TimeoutError`
|
### Summary
As of now, we only retry galaxy API requests when they result in error codes defined within `RETRY_HTTP_ERROR_CODES`.
Sometimes there are also transient timeout errors that are not represented by these status codes and instead raise a `TimeoutError`.
Evaluate extending the function used by `should_retry_error` to also retry on `TimeoutError`.
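One possible shape for this (an illustrative sketch only, not the change merged in the linked PR) is to widen the predicate passed to `should_retry_error` so it accepts `TimeoutError` alongside the throttling-related `GalaxyError` codes:
```python
# Sketch only. GalaxyError and RETRY_HTTP_ERROR_CODES are the existing names in
# lib/ansible/galaxy/api.py; the function name below is made up for illustration.
def is_rate_limit_or_timeout_exception(exception):
    if isinstance(exception, TimeoutError):
        return True
    return (
        isinstance(exception, GalaxyError)
        and exception.http_code in RETRY_HTTP_ERROR_CODES
    )

# It could then replace is_rate_limit_exception in the existing decorator, e.g.:
# @retry_with_delays_and_condition(
#     backoff_iterator=generate_jittered_backoff(retries=6, delay_base=2, delay_threshold=40),
#     should_retry_error=is_rate_limit_or_timeout_exception,
# )
```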
### Issue Type
Bug Report
### Component Name
lib/ansible/galaxy/api.py
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
N/A
### Actual Results
```console
N/A
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80170
|
https://github.com/ansible/ansible/pull/80180
|
cba395243454b0a959edea20425618fe7b9be775
|
2ae013667ef226635fe521be886efd1bf58cd46f
| 2023-03-08T18:32:42Z |
python
| 2023-03-22T16:04:56Z |
lib/ansible/galaxy/api.py
|
# (C) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import collections
import datetime
import functools
import hashlib
import json
import os
import stat
import tarfile
import time
import threading
from urllib.error import HTTPError
from urllib.parse import quote as urlquote, urlencode, urlparse, parse_qs, urljoin
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.galaxy.user_agent import user_agent
from ansible.module_utils.api import retry_with_delays_and_condition
from ansible.module_utils.api import generate_jittered_backoff
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.urls import open_url, prepare_multipart
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash_s
from ansible.utils.path import makedirs_safe
display = Display()
_CACHE_LOCK = threading.Lock()
COLLECTION_PAGE_SIZE = 100
RETRY_HTTP_ERROR_CODES = [ # TODO: Allow user-configuration
429, # Too Many Requests
520, # Galaxy rate limit error code (Cloudflare unknown error)
]
def cache_lock(func):
def wrapped(*args, **kwargs):
with _CACHE_LOCK:
return func(*args, **kwargs)
return wrapped
def is_rate_limit_exception(exception):
# Note: cloud.redhat.com masks rate limit errors with 403 (Forbidden) error codes.
# Since 403 could reflect the actual problem (such as an expired token), we should
# not retry by default.
return isinstance(exception, GalaxyError) and exception.http_code in RETRY_HTTP_ERROR_CODES
def g_connect(versions):
"""
Wrapper to lazily initialize connection info to Galaxy and verify the API versions required are available on the
endpoint.
:param versions: A list of API versions that the function supports.
"""
def decorator(method):
def wrapped(self, *args, **kwargs):
if not self._available_api_versions:
display.vvvv("Initial connection to galaxy_server: %s" % self.api_server)
# Determine the type of Galaxy server we are talking to. First try it unauthenticated then with Bearer
# auth for Automation Hub.
n_url = self.api_server
error_context_msg = 'Error when finding available api versions from %s (%s)' % (self.name, n_url)
if self.api_server == 'https://galaxy.ansible.com' or self.api_server == 'https://galaxy.ansible.com/':
n_url = 'https://galaxy.ansible.com/api/'
try:
data = self._call_galaxy(n_url, method='GET', error_context_msg=error_context_msg, cache=True)
except (AnsibleError, GalaxyError, ValueError, KeyError) as err:
# Either the URL doesn't exist or some other error occurred. Or the URL exists, but isn't a galaxy API
# root (not JSON, no 'available_versions') so try appending '/api/'
if n_url.endswith('/api') or n_url.endswith('/api/'):
raise
# Let exceptions here bubble up but raise the original if this returns a 404 (/api/ wasn't found).
n_url = _urljoin(n_url, '/api/')
try:
data = self._call_galaxy(n_url, method='GET', error_context_msg=error_context_msg, cache=True)
except GalaxyError as new_err:
if new_err.http_code == 404:
raise err
raise
if 'available_versions' not in data:
raise AnsibleError("Tried to find galaxy API root at %s but no 'available_versions' are available "
"on %s" % (n_url, self.api_server))
# Update api_server to point to the "real" API root, which in this case could have been the configured
# url + '/api/' appended.
self.api_server = n_url
# Default to only supporting v1, if only v1 is returned we also assume that v2 is available even though
# it isn't returned in the available_versions dict.
available_versions = data.get('available_versions', {u'v1': u'v1/'})
if list(available_versions.keys()) == [u'v1']:
available_versions[u'v2'] = u'v2/'
self._available_api_versions = available_versions
display.vvvv("Found API version '%s' with Galaxy server %s (%s)"
% (', '.join(available_versions.keys()), self.name, self.api_server))
# Verify that the API versions the function works with are available on the server specified.
available_versions = set(self._available_api_versions.keys())
common_versions = set(versions).intersection(available_versions)
if not common_versions:
raise AnsibleError("Galaxy action %s requires API versions '%s' but only '%s' are available on %s %s"
% (method.__name__, ", ".join(versions), ", ".join(available_versions),
self.name, self.api_server))
return method(self, *args, **kwargs)
return wrapped
return decorator
def get_cache_id(url):
""" Gets the cache ID for the URL specified. """
url_info = urlparse(url)
port = None
try:
port = url_info.port
except ValueError:
pass # While the URL is probably invalid, let the caller figure that out when using it
# Cannot use netloc because it could contain credentials if the server specified had them in there.
return '%s:%s' % (url_info.hostname, port or '')
@cache_lock
def _load_cache(b_cache_path):
""" Loads the cache file requested if possible. The file must not be world writable. """
cache_version = 1
if not os.path.isfile(b_cache_path):
display.vvvv("Creating Galaxy API response cache file at '%s'" % to_text(b_cache_path))
with open(b_cache_path, 'w'):
os.chmod(b_cache_path, 0o600)
cache_mode = os.stat(b_cache_path).st_mode
if cache_mode & stat.S_IWOTH:
display.warning("Galaxy cache has world writable access (%s), ignoring it as a cache source."
% to_text(b_cache_path))
return
with open(b_cache_path, mode='rb') as fd:
json_val = to_text(fd.read(), errors='surrogate_or_strict')
try:
cache = json.loads(json_val)
except ValueError:
cache = None
if not isinstance(cache, dict) or cache.get('version', None) != cache_version:
display.vvvv("Galaxy cache file at '%s' has an invalid version, clearing" % to_text(b_cache_path))
cache = {'version': cache_version}
# Set the cache after we've cleared the existing entries
with open(b_cache_path, mode='wb') as fd:
fd.write(to_bytes(json.dumps(cache), errors='surrogate_or_strict'))
return cache
def _urljoin(*args):
return '/'.join(to_native(a, errors='surrogate_or_strict').strip('/') for a in args + ('',) if a)
class GalaxyError(AnsibleError):
""" Error for bad Galaxy server responses. """
def __init__(self, http_error, message):
super(GalaxyError, self).__init__(message)
self.http_code = http_error.code
self.url = http_error.geturl()
try:
http_msg = to_text(http_error.read())
err_info = json.loads(http_msg)
except (AttributeError, ValueError):
err_info = {}
url_split = self.url.split('/')
if 'v2' in url_split:
galaxy_msg = err_info.get('message', http_error.reason)
code = err_info.get('code', 'Unknown')
full_error_msg = u"%s (HTTP Code: %d, Message: %s Code: %s)" % (message, self.http_code, galaxy_msg, code)
elif 'v3' in url_split:
errors = err_info.get('errors', [])
if not errors:
errors = [{}] # Defaults are set below, we just need to make sure 1 error is present.
message_lines = []
for error in errors:
error_msg = error.get('detail') or error.get('title') or http_error.reason
error_code = error.get('code') or 'Unknown'
message_line = u"(HTTP Code: %d, Message: %s Code: %s)" % (self.http_code, error_msg, error_code)
message_lines.append(message_line)
full_error_msg = "%s %s" % (message, ', '.join(message_lines))
else:
# v1 and unknown API endpoints
galaxy_msg = err_info.get('default', http_error.reason)
full_error_msg = u"%s (HTTP Code: %d, Message: %s)" % (message, self.http_code, galaxy_msg)
self.message = to_native(full_error_msg)
# Keep the raw string results for the date. It's too complex to parse as a datetime object and the various APIs return
# them in different formats.
CollectionMetadata = collections.namedtuple('CollectionMetadata', ['namespace', 'name', 'created_str', 'modified_str'])
class CollectionVersionMetadata:
def __init__(self, namespace, name, version, download_url, artifact_sha256, dependencies, signatures_url, signatures):
"""
Contains common information about a collection on a Galaxy server to smooth through API differences for
Collection and define a standard meta info for a collection.
:param namespace: The namespace name.
:param name: The collection name.
:param version: The version that the metadata refers to.
:param download_url: The URL to download the collection.
:param artifact_sha256: The SHA256 of the collection artifact for later verification.
:param dependencies: A dict of dependencies of the collection.
:param signatures_url: The URL to the specific version of the collection.
:param signatures: The list of signatures found at the signatures_url.
"""
self.namespace = namespace
self.name = name
self.version = version
self.download_url = download_url
self.artifact_sha256 = artifact_sha256
self.dependencies = dependencies
self.signatures_url = signatures_url
self.signatures = signatures
@functools.total_ordering
class GalaxyAPI:
""" This class is meant to be used as a API client for an Ansible Galaxy server """
def __init__(
self, galaxy, name, url,
username=None, password=None, token=None, validate_certs=True,
available_api_versions=None,
clear_response_cache=False, no_cache=True,
priority=float('inf'),
timeout=60,
):
self.galaxy = galaxy
self.name = name
self.username = username
self.password = password
self.token = token
self.api_server = url
self.validate_certs = validate_certs
self.timeout = timeout
self._available_api_versions = available_api_versions or {}
self._priority = priority
self._server_timeout = timeout
b_cache_dir = to_bytes(C.GALAXY_CACHE_DIR, errors='surrogate_or_strict')
makedirs_safe(b_cache_dir, mode=0o700)
self._b_cache_path = os.path.join(b_cache_dir, b'api.json')
if clear_response_cache:
with _CACHE_LOCK:
if os.path.exists(self._b_cache_path):
display.vvvv("Clearing cache file (%s)" % to_text(self._b_cache_path))
os.remove(self._b_cache_path)
self._cache = None
if not no_cache:
self._cache = _load_cache(self._b_cache_path)
display.debug('Validate TLS certificates for %s: %s' % (self.api_server, self.validate_certs))
def __str__(self):
# type: (GalaxyAPI) -> str
"""Render GalaxyAPI as a native string representation."""
return to_native(self.name)
def __unicode__(self):
# type: (GalaxyAPI) -> str
"""Render GalaxyAPI as a unicode/text string representation."""
return to_text(self.name)
def __repr__(self):
# type: (GalaxyAPI) -> str
"""Render GalaxyAPI as an inspectable string representation."""
return (
'<{instance!s} "{name!s}" @ {url!s} with priority {priority!s}>'.
format(
instance=self, name=self.name,
priority=self._priority, url=self.api_server,
)
)
def __lt__(self, other_galaxy_api):
# type: (GalaxyAPI, GalaxyAPI) -> bool
"""Return whether the instance priority is higher than other."""
if not isinstance(other_galaxy_api, self.__class__):
return NotImplemented
return (
self._priority > other_galaxy_api._priority or
self.name < other_galaxy_api.name
)
@property # type: ignore[misc] # https://github.com/python/mypy/issues/1362
@g_connect(['v1', 'v2', 'v3'])
def available_api_versions(self):
# Calling g_connect will populate self._available_api_versions
return self._available_api_versions
@retry_with_delays_and_condition(
backoff_iterator=generate_jittered_backoff(retries=6, delay_base=2, delay_threshold=40),
should_retry_error=is_rate_limit_exception
)
def _call_galaxy(self, url, args=None, headers=None, method=None, auth_required=False, error_context_msg=None,
cache=False, cache_key=None):
url_info = urlparse(url)
cache_id = get_cache_id(url)
if not cache_key:
cache_key = url_info.path
query = parse_qs(url_info.query)
if cache and self._cache:
server_cache = self._cache.setdefault(cache_id, {})
iso_datetime_format = '%Y-%m-%dT%H:%M:%SZ'
valid = False
if cache_key in server_cache:
expires = datetime.datetime.strptime(server_cache[cache_key]['expires'], iso_datetime_format)
valid = datetime.datetime.utcnow() < expires
is_paginated_url = 'page' in query or 'offset' in query
if valid and not is_paginated_url:
# Got a hit on the cache and we aren't getting a paginated response
path_cache = server_cache[cache_key]
if path_cache.get('paginated'):
if '/v3/' in cache_key:
res = {'links': {'next': None}}
else:
res = {'next': None}
# Technically some v3 paginated APIs return in 'data' but the caller checks the keys for this so
# always returning the cache under results is fine.
res['results'] = []
for result in path_cache['results']:
res['results'].append(result)
else:
res = path_cache['results']
return res
elif not is_paginated_url:
# The cache entry had expired or does not exist, start a new blank entry to be filled later.
expires = datetime.datetime.utcnow()
expires += datetime.timedelta(days=1)
server_cache[cache_key] = {
'expires': expires.strftime(iso_datetime_format),
'paginated': False,
}
headers = headers or {}
self._add_auth_token(headers, url, required=auth_required)
try:
display.vvvv("Calling Galaxy at %s" % url)
resp = open_url(to_native(url), data=args, validate_certs=self.validate_certs, headers=headers,
method=method, timeout=self._server_timeout, http_agent=user_agent(), follow_redirects='safe')
except HTTPError as e:
raise GalaxyError(e, error_context_msg)
except Exception as e:
raise AnsibleError("Unknown error when attempting to call Galaxy at '%s': %s" % (url, to_native(e)))
resp_data = to_text(resp.read(), errors='surrogate_or_strict')
try:
data = json.loads(resp_data)
except ValueError:
raise AnsibleError("Failed to parse Galaxy response from '%s' as JSON:\n%s"
% (resp.url, to_native(resp_data)))
if cache and self._cache:
path_cache = self._cache[cache_id][cache_key]
# v3 can return data or results for paginated results. Scan the result so we can determine what to cache.
paginated_key = None
for key in ['data', 'results']:
if key in data:
paginated_key = key
break
if paginated_key:
path_cache['paginated'] = True
results = path_cache.setdefault('results', [])
for result in data[paginated_key]:
results.append(result)
else:
path_cache['results'] = data
return data
def _add_auth_token(self, headers, url, token_type=None, required=False):
# Don't add the auth token if one is already present
if 'Authorization' in headers:
return
if not self.token and required:
raise AnsibleError("No access token or username set. A token can be set with --api-key "
"or at {0}.".format(to_native(C.GALAXY_TOKEN_PATH)))
if self.token:
headers.update(self.token.headers())
@cache_lock
def _set_cache(self):
with open(self._b_cache_path, mode='wb') as fd:
fd.write(to_bytes(json.dumps(self._cache), errors='surrogate_or_strict'))
@g_connect(['v1'])
def authenticate(self, github_token):
"""
Retrieve an authentication token
"""
url = _urljoin(self.api_server, self.available_api_versions['v1'], "tokens") + '/'
args = urlencode({"github_token": github_token})
try:
resp = open_url(url, data=args, validate_certs=self.validate_certs, method="POST", http_agent=user_agent(), timeout=self._server_timeout)
except HTTPError as e:
raise GalaxyError(e, 'Attempting to authenticate to galaxy')
except Exception as e:
raise AnsibleError('Unable to authenticate to galaxy: %s' % to_native(e), orig_exc=e)
data = json.loads(to_text(resp.read(), errors='surrogate_or_strict'))
return data
@g_connect(['v1'])
def create_import_task(self, github_user, github_repo, reference=None, role_name=None):
"""
Post an import request
"""
url = _urljoin(self.api_server, self.available_api_versions['v1'], "imports") + '/'
args = {
"github_user": github_user,
"github_repo": github_repo,
"github_reference": reference if reference else ""
}
if role_name:
args['alternate_role_name'] = role_name
elif github_repo.startswith('ansible-role'):
args['alternate_role_name'] = github_repo[len('ansible-role') + 1:]
data = self._call_galaxy(url, args=urlencode(args), method="POST")
if data.get('results', None):
return data['results']
return data
@g_connect(['v1'])
def get_import_task(self, task_id=None, github_user=None, github_repo=None):
"""
Check the status of an import task.
"""
url = _urljoin(self.api_server, self.available_api_versions['v1'], "imports")
if task_id is not None:
url = "%s?id=%d" % (url, task_id)
elif github_user is not None and github_repo is not None:
url = "%s?github_user=%s&github_repo=%s" % (url, github_user, github_repo)
else:
raise AnsibleError("Expected task_id or github_user and github_repo")
data = self._call_galaxy(url)
return data['results']
@g_connect(['v1'])
def lookup_role_by_name(self, role_name, notify=True):
"""
Find a role by name.
"""
role_name = to_text(urlquote(to_bytes(role_name)))
try:
parts = role_name.split(".")
user_name = ".".join(parts[0:-1])
role_name = parts[-1]
if notify:
display.display("- downloading role '%s', owned by %s" % (role_name, user_name))
except Exception:
raise AnsibleError("Invalid role name (%s). Specify role as format: username.rolename" % role_name)
url = _urljoin(self.api_server, self.available_api_versions['v1'], "roles",
"?owner__username=%s&name=%s" % (user_name, role_name))
data = self._call_galaxy(url)
if len(data["results"]) != 0:
return data["results"][0]
return None
@g_connect(['v1'])
def fetch_role_related(self, related, role_id):
"""
Fetch the list of related items for the given role.
The url comes from the 'related' field of the role.
"""
results = []
try:
url = _urljoin(self.api_server, self.available_api_versions['v1'], "roles", role_id, related,
"?page_size=50")
data = self._call_galaxy(url)
results = data['results']
done = (data.get('next_link', None) is None)
# https://github.com/ansible/ansible/issues/64355
# api_server contains part of the API path but next_link includes the /api part so strip it out.
url_info = urlparse(self.api_server)
base_url = "%s://%s/" % (url_info.scheme, url_info.netloc)
while not done:
url = _urljoin(base_url, data['next_link'])
data = self._call_galaxy(url)
results += data['results']
done = (data.get('next_link', None) is None)
except Exception as e:
display.warning("Unable to retrieve role (id=%s) data (%s), but this is not fatal so we continue: %s"
% (role_id, related, to_text(e)))
return results
@g_connect(['v1'])
def get_list(self, what):
"""
Fetch the list of items specified.
"""
try:
url = _urljoin(self.api_server, self.available_api_versions['v1'], what, "?page_size")
data = self._call_galaxy(url)
if "results" in data:
results = data['results']
else:
results = data
done = True
if "next" in data:
done = (data.get('next_link', None) is None)
while not done:
url = _urljoin(self.api_server, data['next_link'])
data = self._call_galaxy(url)
results += data['results']
done = (data.get('next_link', None) is None)
return results
except Exception as error:
raise AnsibleError("Failed to download the %s list: %s" % (what, to_native(error)))
@g_connect(['v1'])
def search_roles(self, search, **kwargs):
search_url = _urljoin(self.api_server, self.available_api_versions['v1'], "search", "roles", "?")
if search:
search_url += '&autocomplete=' + to_text(urlquote(to_bytes(search)))
tags = kwargs.get('tags', None)
platforms = kwargs.get('platforms', None)
page_size = kwargs.get('page_size', None)
author = kwargs.get('author', None)
if tags and isinstance(tags, string_types):
tags = tags.split(',')
search_url += '&tags_autocomplete=' + '+'.join(tags)
if platforms and isinstance(platforms, string_types):
platforms = platforms.split(',')
search_url += '&platforms_autocomplete=' + '+'.join(platforms)
if page_size:
search_url += '&page_size=%s' % page_size
if author:
search_url += '&username_autocomplete=%s' % author
data = self._call_galaxy(search_url)
return data
@g_connect(['v1'])
def add_secret(self, source, github_user, github_repo, secret):
url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets") + '/'
args = urlencode({
"source": source,
"github_user": github_user,
"github_repo": github_repo,
"secret": secret
})
data = self._call_galaxy(url, args=args, method="POST")
return data
@g_connect(['v1'])
def list_secrets(self):
url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets")
data = self._call_galaxy(url, auth_required=True)
return data
@g_connect(['v1'])
def remove_secret(self, secret_id):
url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets", secret_id) + '/'
data = self._call_galaxy(url, auth_required=True, method='DELETE')
return data
@g_connect(['v1'])
def delete_role(self, github_user, github_repo):
url = _urljoin(self.api_server, self.available_api_versions['v1'], "removerole",
"?github_user=%s&github_repo=%s" % (github_user, github_repo))
data = self._call_galaxy(url, auth_required=True, method='DELETE')
return data
# Collection APIs #
@g_connect(['v2', 'v3'])
def publish_collection(self, collection_path):
"""
Publishes a collection to a Galaxy server and returns the import task URI.
:param collection_path: The path to the collection tarball to publish.
:return: The import task URI that contains the import results.
"""
display.display("Publishing collection artifact '%s' to %s %s" % (collection_path, self.name, self.api_server))
b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
if not os.path.exists(b_collection_path):
raise AnsibleError("The collection path specified '%s' does not exist." % to_native(collection_path))
elif not tarfile.is_tarfile(b_collection_path):
raise AnsibleError("The collection path specified '%s' is not a tarball, use 'ansible-galaxy collection "
"build' to create a proper release artifact." % to_native(collection_path))
with open(b_collection_path, 'rb') as collection_tar:
sha256 = secure_hash_s(collection_tar.read(), hash_func=hashlib.sha256)
content_type, b_form_data = prepare_multipart(
{
'sha256': sha256,
'file': {
'filename': b_collection_path,
'mime_type': 'application/octet-stream',
},
}
)
headers = {
'Content-type': content_type,
'Content-length': len(b_form_data),
}
if 'v3' in self.available_api_versions:
n_url = _urljoin(self.api_server, self.available_api_versions['v3'], 'artifacts', 'collections') + '/'
else:
n_url = _urljoin(self.api_server, self.available_api_versions['v2'], 'collections') + '/'
resp = self._call_galaxy(n_url, args=b_form_data, headers=headers, method='POST', auth_required=True,
error_context_msg='Error when publishing collection to %s (%s)'
% (self.name, self.api_server))
return resp['task']
@g_connect(['v2', 'v3'])
def wait_import_task(self, task_id, timeout=0):
"""
Waits until the import process on the Galaxy server has completed or the timeout is reached.
:param task_id: The id of the import task to wait for. This can be parsed out of the return
value for GalaxyAPI.publish_collection.
:param timeout: The timeout in seconds, 0 is no timeout.
"""
state = 'waiting'
data = None
# Construct the appropriate URL per version
if 'v3' in self.available_api_versions:
full_url = _urljoin(self.api_server, self.available_api_versions['v3'],
'imports/collections', task_id, '/')
else:
full_url = _urljoin(self.api_server, self.available_api_versions['v2'],
'collection-imports', task_id, '/')
display.display("Waiting until Galaxy import task %s has completed" % full_url)
start = time.time()
wait = 2
while timeout == 0 or (time.time() - start) < timeout:
try:
data = self._call_galaxy(full_url, method='GET', auth_required=True,
error_context_msg='Error when getting import task results at %s' % full_url)
except GalaxyError as e:
if e.http_code != 404:
raise
# The import job may not have started, and as such, the task url may not yet exist
display.vvv('Galaxy import process has not started, wait %s seconds before trying again' % wait)
time.sleep(wait)
continue
state = data.get('state', 'waiting')
if data.get('finished_at', None):
break
display.vvv('Galaxy import process has a status of %s, wait %d seconds before trying again'
% (state, wait))
time.sleep(wait)
# poor man's exponential backoff algo so we don't flood the Galaxy API, cap at 30 seconds.
wait = min(30, wait * 1.5)
if state == 'waiting':
raise AnsibleError("Timeout while waiting for the Galaxy import process to finish, check progress at '%s'"
% to_native(full_url))
for message in data.get('messages', []):
level = message['level']
if level.lower() == 'error':
display.error("Galaxy import error message: %s" % message['message'])
elif level.lower() == 'warning':
display.warning("Galaxy import warning message: %s" % message['message'])
else:
display.vvv("Galaxy import message: %s - %s" % (level, message['message']))
if state == 'failed':
code = to_native(data['error'].get('code', 'UNKNOWN'))
description = to_native(
data['error'].get('description', "Unknown error, see %s for more details" % full_url))
raise AnsibleError("Galaxy import process failed: %s (Code: %s)" % (description, code))
@g_connect(['v2', 'v3'])
def get_collection_metadata(self, namespace, name):
"""
Gets the collection information from the Galaxy server about a specific Collection.
:param namespace: The collection namespace.
:param name: The collection name.
:return: CollectionMetadata about the collection.
"""
if 'v3' in self.available_api_versions:
api_path = self.available_api_versions['v3']
field_map = [
('created_str', 'created_at'),
('modified_str', 'updated_at'),
]
else:
api_path = self.available_api_versions['v2']
field_map = [
('created_str', 'created'),
('modified_str', 'modified'),
]
info_url = _urljoin(self.api_server, api_path, 'collections', namespace, name, '/')
error_context_msg = 'Error when getting the collection info for %s.%s from %s (%s)' \
% (namespace, name, self.name, self.api_server)
data = self._call_galaxy(info_url, error_context_msg=error_context_msg)
metadata = {}
for name, api_field in field_map:
metadata[name] = data.get(api_field, None)
return CollectionMetadata(namespace, name, **metadata)
@g_connect(['v2', 'v3'])
def get_collection_version_metadata(self, namespace, name, version):
"""
Gets the collection information from the Galaxy server about a specific Collection version.
:param namespace: The collection namespace.
:param name: The collection name.
:param version: Version of the collection to get the information for.
:return: CollectionVersionMetadata about the collection at the version requested.
"""
api_path = self.available_api_versions.get('v3', self.available_api_versions.get('v2'))
url_paths = [self.api_server, api_path, 'collections', namespace, name, 'versions', version, '/']
n_collection_url = _urljoin(*url_paths)
error_context_msg = 'Error when getting collection version metadata for %s.%s:%s from %s (%s)' \
% (namespace, name, version, self.name, self.api_server)
data = self._call_galaxy(n_collection_url, error_context_msg=error_context_msg, cache=True)
self._set_cache()
signatures = data.get('signatures') or []
return CollectionVersionMetadata(data['namespace']['name'], data['collection']['name'], data['version'],
data['download_url'], data['artifact']['sha256'],
data['metadata']['dependencies'], data['href'], signatures)
@g_connect(['v2', 'v3'])
def get_collection_versions(self, namespace, name):
"""
Gets a list of available versions for a collection on a Galaxy server.
:param namespace: The collection namespace.
:param name: The collection name.
:return: A list of versions that are available.
"""
relative_link = False
if 'v3' in self.available_api_versions:
api_path = self.available_api_versions['v3']
pagination_path = ['links', 'next']
relative_link = True  # AH pagination results are relative and not absolute URIs.
else:
api_path = self.available_api_versions['v2']
pagination_path = ['next']
page_size_name = 'limit' if 'v3' in self.available_api_versions else 'page_size'
versions_url = _urljoin(self.api_server, api_path, 'collections', namespace, name, 'versions', '/?%s=%d' % (page_size_name, COLLECTION_PAGE_SIZE))
versions_url_info = urlparse(versions_url)
cache_key = versions_url_info.path
# We should only rely on the cache if the collection has not changed. This may slow things down but it ensures
# we are not waiting a day before finding any new collections that have been published.
if self._cache:
server_cache = self._cache.setdefault(get_cache_id(versions_url), {})
modified_cache = server_cache.setdefault('modified', {})
try:
modified_date = self.get_collection_metadata(namespace, name).modified_str
except GalaxyError as err:
if err.http_code != 404:
raise
# No collection found, return an empty list to keep things consistent with the various APIs
return []
cached_modified_date = modified_cache.get('%s.%s' % (namespace, name), None)
if cached_modified_date != modified_date:
modified_cache['%s.%s' % (namespace, name)] = modified_date
if versions_url_info.path in server_cache:
del server_cache[cache_key]
self._set_cache()
error_context_msg = 'Error when getting available collection versions for %s.%s from %s (%s)' \
% (namespace, name, self.name, self.api_server)
try:
data = self._call_galaxy(versions_url, error_context_msg=error_context_msg, cache=True, cache_key=cache_key)
except GalaxyError as err:
if err.http_code != 404:
raise
# v3 doesn't raise a 404 so we need to mimic the empty response from APIs that do.
return []
if 'data' in data:
# v3 automation-hub is the only known API that uses `data`
# since v3 pulp_ansible does not, we cannot rely on version
# to indicate which key to use
results_key = 'data'
else:
results_key = 'results'
versions = []
while True:
versions += [v['version'] for v in data[results_key]]
next_link = data
for path in pagination_path:
next_link = next_link.get(path, {})
if not next_link:
break
elif relative_link:
# TODO: This assumes the pagination result is relative to the root server. Will need to be verified
# with someone who knows the AH API.
# Remove the query string from the versions_url to use the next_link's query
versions_url = urljoin(versions_url, urlparse(versions_url).path)
next_link = versions_url.replace(versions_url_info.path, next_link)
data = self._call_galaxy(to_native(next_link, errors='surrogate_or_strict'),
error_context_msg=error_context_msg, cache=True, cache_key=cache_key)
self._set_cache()
return versions
@g_connect(['v2', 'v3'])
def get_collection_signatures(self, namespace, name, version):
"""
Gets the collection signatures from the Galaxy server about a specific Collection version.
:param namespace: The collection namespace.
:param name: The collection name.
:param version: Version of the collection to get the information for.
:return: A list of signature strings.
"""
api_path = self.available_api_versions.get('v3', self.available_api_versions.get('v2'))
url_paths = [self.api_server, api_path, 'collections', namespace, name, 'versions', version, '/']
n_collection_url = _urljoin(*url_paths)
error_context_msg = 'Error when getting collection version metadata for %s.%s:%s from %s (%s)' \
% (namespace, name, version, self.name, self.api_server)
data = self._call_galaxy(n_collection_url, error_context_msg=error_context_msg, cache=True)
self._set_cache()
try:
signatures = data["signatures"]
except KeyError:
# Noisy since this is used by the dep resolver, so require more verbosity than Galaxy calls
display.vvvvvv(f"Server {self.api_server} has not signed {namespace}.{name}:{version}")
return []
else:
return [signature_info["signature"] for signature_info in signatures]
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,170 |
Extend use of `retry_with_delays_and_condition` within Galaxy API requests to retry on `TimeoutError`
|
### Summary
As of now, we only retry galaxy API requests when they result in error codes defined within `RETRY_HTTP_ERROR_CODES`.
Sometimes there are also transient timeout errors that are not represented by these status codes and instead raise a `TimeoutError`.
Evaluate extending the function used by `should_retry_error` to also retry on `TimeoutError`.
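A minimal sketch of the idea, reusing the existing `retry_with_delays_and_condition` and `generate_jittered_backoff` helpers from `lib/ansible/module_utils/api.py` (the function name and the HTTP code list below are illustrative assumptions, not the actual implementation):
```python
from urllib.error import HTTPError

from ansible.module_utils.api import generate_jittered_backoff, retry_with_delays_and_condition

# Illustrative only -- the real list lives in lib/ansible/galaxy/api.py.
RETRYABLE_HTTP_CODES = (429, 500, 502, 503, 504)


def should_retry_error(exception):
    # Keep retrying on the HTTP codes we already retry on, and additionally
    # on transient timeouts surfaced as TimeoutError.
    if isinstance(exception, TimeoutError):
        return True
    return isinstance(exception, HTTPError) and exception.code in RETRYABLE_HTTP_CODES


@retry_with_delays_and_condition(
    backoff_iterator=generate_jittered_backoff(retries=6, delay_base=2, delay_threshold=40),
    should_retry_error=should_retry_error,
)
def call_galaxy(url):
    ...  # perform the request; retried on the codes above and on TimeoutError
```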
### Issue Type
Bug Report
### Component Name
lib/ansible/galaxy/api.py
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
N/A
### Actual Results
```console
N/A
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80170
|
https://github.com/ansible/ansible/pull/80180
|
cba395243454b0a959edea20425618fe7b9be775
|
2ae013667ef226635fe521be886efd1bf58cd46f
| 2023-03-08T18:32:42Z |
python
| 2023-03-22T16:04:56Z |
lib/ansible/galaxy/collection/concrete_artifact_manager.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2020-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Concrete collection candidate management helper module."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import tarfile
import subprocess
import typing as t
from contextlib import contextmanager
from hashlib import sha256
from urllib.error import URLError
from urllib.parse import urldefrag
from shutil import rmtree
from tempfile import mkdtemp
if t.TYPE_CHECKING:
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate, Requirement,
)
from ansible.galaxy.token import GalaxyToken
from ansible.errors import AnsibleError
from ansible.galaxy import get_collections_galaxy_meta_info
from ansible.galaxy.dependency_resolution.dataclasses import _GALAXY_YAML
from ansible.galaxy.user_agent import user_agent
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.yaml import yaml_load
from ansible.module_utils.urls import open_url
from ansible.utils.display import Display
from ansible.utils.sentinel import Sentinel
import yaml
display = Display()
MANIFEST_FILENAME = 'MANIFEST.json'
class ConcreteArtifactsManager:
"""Manager for on-disk collection artifacts.
It is responsible for:
* downloading remote collections from Galaxy-compatible servers and
direct links to tarballs or SCM repositories
* keeping track of local ones
* keeping track of Galaxy API tokens for downloads from Galaxy'ish
as well as the artifact hashes
* keeping track of Galaxy API signatures for downloads from Galaxy'ish
* caching all of above
* retrieving the metadata out of the downloaded artifacts
"""
def __init__(self, b_working_directory, validate_certs=True, keyring=None, timeout=60, required_signature_count=None, ignore_signature_errors=None):
# type: (bytes, bool, str, int, str, list[str]) -> None
"""Initialize ConcreteArtifactsManager caches and costraints."""
self._validate_certs = validate_certs # type: bool
self._artifact_cache = {} # type: dict[bytes, bytes]
self._galaxy_artifact_cache = {} # type: dict[Candidate | Requirement, bytes]
self._artifact_meta_cache = {} # type: dict[bytes, dict[str, str | list[str] | dict[str, str] | None | t.Type[Sentinel]]]
self._galaxy_collection_cache = {} # type: dict[Candidate | Requirement, tuple[str, str, GalaxyToken]]
self._galaxy_collection_origin_cache = {} # type: dict[Candidate, tuple[str, list[dict[str, str]]]]
self._b_working_directory = b_working_directory # type: bytes
self._supplemental_signature_cache = {} # type: dict[str, str]
self._keyring = keyring # type: str
self.timeout = timeout # type: int
self._required_signature_count = required_signature_count # type: str
self._ignore_signature_errors = ignore_signature_errors # type: list[str]
self._require_build_metadata = True # type: bool
@property
def keyring(self):
return self._keyring
@property
def required_successful_signature_count(self):
return self._required_signature_count
@property
def ignore_signature_errors(self):
if self._ignore_signature_errors is None:
return []
return self._ignore_signature_errors
@property
def require_build_metadata(self):
# type: () -> bool
return self._require_build_metadata
@require_build_metadata.setter
def require_build_metadata(self, value):
# type: (bool) -> None
self._require_build_metadata = value
def get_galaxy_artifact_source_info(self, collection):
# type: (Candidate) -> dict[str, t.Union[str, list[dict[str, str]]]]
server = collection.src.api_server
try:
download_url = self._galaxy_collection_cache[collection][0]
signatures_url, signatures = self._galaxy_collection_origin_cache[collection]
except KeyError as key_err:
raise RuntimeError(
'There is no known source for {coll!s}'.
format(coll=collection),
) from key_err
return {
"format_version": "1.0.0",
"namespace": collection.namespace,
"name": collection.name,
"version": collection.ver,
"server": server,
"version_url": signatures_url,
"download_url": download_url,
"signatures": signatures,
}
def get_galaxy_artifact_path(self, collection):
# type: (t.Union[Candidate, Requirement]) -> bytes
"""Given a Galaxy-stored collection, return a cached path.
If it's not yet on disk, this method downloads the artifact first.
"""
try:
return self._galaxy_artifact_cache[collection]
except KeyError:
pass
try:
url, sha256_hash, token = self._galaxy_collection_cache[collection]
except KeyError as key_err:
raise RuntimeError(
'There is no known source for {coll!s}'.
format(coll=collection),
) from key_err
display.vvvv(
"Fetching a collection tarball for '{collection!s}' from "
'Ansible Galaxy'.format(collection=collection),
)
try:
b_artifact_path = _download_file(
url,
self._b_working_directory,
expected_hash=sha256_hash,
validate_certs=self._validate_certs,
token=token,
) # type: bytes
except URLError as err:
raise AnsibleError(
'Failed to download collection tar '
"from '{coll_src!s}': {download_err!s}".
format(
coll_src=to_native(collection.src),
download_err=to_native(err),
),
) from err
else:
display.vvv(
"Collection '{coll!s}' obtained from "
'server {server!s} {url!s}'.format(
coll=collection, server=collection.src or 'Galaxy',
url=collection.src.api_server if collection.src is not None
else '',
)
)
self._galaxy_artifact_cache[collection] = b_artifact_path
return b_artifact_path
def get_artifact_path(self, collection):
# type: (t.Union[Candidate, Requirement]) -> bytes
"""Given a concrete collection pointer, return a cached path.
If it's not yet on disk, this method downloads the artifact first.
"""
try:
return self._artifact_cache[collection.src]
except KeyError:
pass
# NOTE: SCM needs to be special-cased as it may contain either
# NOTE: one collection in its root, or a number of top-level
# NOTE: collection directories instead.
# NOTE: The idea is to store the SCM collection as unpacked
# NOTE: directory structure under the temporary location and use
# NOTE: a "virtual" collection that has pinned requirements on
# NOTE: the directories under that SCM checkout that correspond
# NOTE: to collections.
# NOTE: This brings us to the idea that we need two separate
# NOTE: virtual Requirement/Candidate types --
# NOTE: (single) dir + (multidir) subdirs
if collection.is_url:
display.vvvv(
"Collection requirement '{collection!s}' is a URL "
'to a tar artifact'.format(collection=collection.fqcn),
)
try:
b_artifact_path = _download_file(
collection.src,
self._b_working_directory,
expected_hash=None, # NOTE: URLs don't support checksums
validate_certs=self._validate_certs,
timeout=self.timeout
)
except Exception as err:
raise AnsibleError(
'Failed to download collection tar '
"from '{coll_src!s}': {download_err!s}".
format(
coll_src=to_native(collection.src),
download_err=to_native(err),
),
) from err
elif collection.is_scm:
b_artifact_path = _extract_collection_from_git(
collection.src,
collection.ver,
self._b_working_directory,
)
elif collection.is_file or collection.is_dir or collection.is_subdirs:
b_artifact_path = to_bytes(collection.src)
else:
# NOTE: This may happen `if collection.is_online_index_pointer`
raise RuntimeError(
'The artifact is of an unexpected type {art_type!s}'.
format(art_type=collection.type)
)
self._artifact_cache[collection.src] = b_artifact_path
return b_artifact_path
def _get_direct_collection_namespace(self, collection):
# type: (Candidate) -> t.Optional[str]
return self.get_direct_collection_meta(collection)['namespace'] # type: ignore[return-value]
def _get_direct_collection_name(self, collection):
# type: (Candidate) -> t.Optional[str]
return self.get_direct_collection_meta(collection)['name'] # type: ignore[return-value]
def get_direct_collection_fqcn(self, collection):
# type: (Candidate) -> t.Optional[str]
"""Extract FQCN from the given on-disk collection artifact.
If the collection is virtual, ``None`` is returned instead
of a string.
"""
if collection.is_virtual:
# NOTE: should it be something like "<virtual>"?
return None
return '.'.join(( # type: ignore[type-var]
self._get_direct_collection_namespace(collection), # type: ignore[arg-type]
self._get_direct_collection_name(collection),
))
def get_direct_collection_version(self, collection):
# type: (t.Union[Candidate, Requirement]) -> str
"""Extract version from the given on-disk collection artifact."""
return self.get_direct_collection_meta(collection)['version'] # type: ignore[return-value]
def get_direct_collection_dependencies(self, collection):
# type: (t.Union[Candidate, Requirement]) -> dict[str, str]
"""Extract deps from the given on-disk collection artifact."""
collection_dependencies = self.get_direct_collection_meta(collection)['dependencies']
if collection_dependencies is None:
collection_dependencies = {}
return collection_dependencies # type: ignore[return-value]
def get_direct_collection_meta(self, collection):
# type: (t.Union[Candidate, Requirement]) -> dict[str, t.Union[str, dict[str, str], list[str], None, t.Type[Sentinel]]]
"""Extract meta from the given on-disk collection artifact."""
try: # FIXME: use unique collection identifier as a cache key?
return self._artifact_meta_cache[collection.src]
except KeyError:
b_artifact_path = self.get_artifact_path(collection)
if collection.is_url or collection.is_file:
collection_meta = _get_meta_from_tar(b_artifact_path)
elif collection.is_dir: # should we just build a coll instead?
# FIXME: what if there's subdirs?
try:
collection_meta = _get_meta_from_dir(b_artifact_path, self.require_build_metadata)
except LookupError as lookup_err:
raise AnsibleError(
'Failed to find the collection dir deps: {err!s}'.
format(err=to_native(lookup_err)),
) from lookup_err
elif collection.is_scm:
collection_meta = {
'name': None,
'namespace': None,
'dependencies': {to_native(b_artifact_path): '*'},
'version': '*',
}
elif collection.is_subdirs:
collection_meta = {
'name': None,
'namespace': None,
# NOTE: Dropping b_artifact_path since it's based on src anyway
'dependencies': dict.fromkeys(
map(to_native, collection.namespace_collection_paths),
'*',
),
'version': '*',
}
else:
raise RuntimeError
self._artifact_meta_cache[collection.src] = collection_meta
return collection_meta
def save_collection_source(self, collection, url, sha256_hash, token, signatures_url, signatures):
# type: (Candidate, str, str, GalaxyToken, str, list[dict[str, str]]) -> None
"""Store collection URL, SHA256 hash and Galaxy API token.
This is a hook that is supposed to be called before attempting to
download Galaxy-based collections with ``get_galaxy_artifact_path()``.
"""
self._galaxy_collection_cache[collection] = url, sha256_hash, token
self._galaxy_collection_origin_cache[collection] = signatures_url, signatures
@classmethod
@contextmanager
def under_tmpdir(
cls,
temp_dir_base, # type: str
validate_certs=True, # type: bool
keyring=None, # type: str
required_signature_count=None, # type: str
ignore_signature_errors=None, # type: list[str]
require_build_metadata=True, # type: bool
): # type: (...) -> t.Iterator[ConcreteArtifactsManager]
"""Custom ConcreteArtifactsManager constructor with temp dir.
This method returns a context manager that allocates and cleans
up a temporary directory for caching the collection artifacts
during the dependency resolution process.
"""
# NOTE: Can't use `with tempfile.TemporaryDirectory:`
# NOTE: because it's not in Python 2 stdlib.
temp_path = mkdtemp(
dir=to_bytes(temp_dir_base, errors='surrogate_or_strict'),
)
b_temp_path = to_bytes(temp_path, errors='surrogate_or_strict')
try:
yield cls(
b_temp_path,
validate_certs,
keyring=keyring,
required_signature_count=required_signature_count,
ignore_signature_errors=ignore_signature_errors
)
finally:
rmtree(b_temp_path)
def parse_scm(collection, version):
"""Extract name, version, path and subdir out of the SCM pointer."""
if ',' in collection:
collection, version = collection.split(',', 1)
elif version == '*' or not version:
version = 'HEAD'
if collection.startswith('git+'):
path = collection[4:]
else:
path = collection
path, fragment = urldefrag(path)
fragment = fragment.strip(os.path.sep)
if path.endswith(os.path.sep + '.git'):
name = path.split(os.path.sep)[-2]
elif '://' not in path and '@' not in path:
name = path
else:
name = path.split('/')[-1]
if name.endswith('.git'):
name = name[:-4]
return name, version, path, fragment
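# Illustration (not part of the original module): for a typical SCM requirement
# string the components come apart as follows:
#
#   parse_scm('git+https://github.com/org/repo.git#collections/my_coll,v1.2.3', '*')
#   # -> ('repo', 'v1.2.3', 'https://github.com/org/repo.git', 'collections/my_coll')
#
# i.e. a trailing ',<version>' overrides the version argument, the 'git+' prefix
# is stripped, and the URL fragment selects a subdirectory inside the checkout.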
def _extract_collection_from_git(repo_url, coll_ver, b_path):
name, version, git_url, fragment = parse_scm(repo_url, coll_ver)
b_checkout_path = mkdtemp(
dir=b_path,
prefix=to_bytes(name, errors='surrogate_or_strict'),
) # type: bytes
try:
git_executable = get_bin_path('git')
except ValueError as err:
raise AnsibleError(
"Could not find git executable to extract the collection from the Git repository `{repo_url!s}`.".
format(repo_url=to_native(git_url))
) from err
# Perform a shallow clone if simply cloning HEAD
if version == 'HEAD':
git_clone_cmd = git_executable, 'clone', '--depth=1', git_url, to_text(b_checkout_path)
else:
git_clone_cmd = git_executable, 'clone', git_url, to_text(b_checkout_path)
# FIXME: '--branch', version
try:
subprocess.check_call(git_clone_cmd)
except subprocess.CalledProcessError as proc_err:
raise AnsibleError( # should probably be LookupError
'Failed to clone a Git repository from `{repo_url!s}`.'.
format(repo_url=to_native(git_url)),
) from proc_err
git_switch_cmd = git_executable, 'checkout', to_text(version)
try:
subprocess.check_call(git_switch_cmd, cwd=b_checkout_path)
except subprocess.CalledProcessError as proc_err:
raise AnsibleError( # should probably be LookupError
'Failed to switch a cloned Git repo `{repo_url!s}` '
'to the requested revision `{commitish!s}`.'.
format(
commitish=to_native(version),
repo_url=to_native(git_url),
),
) from proc_err
return (
os.path.join(b_checkout_path, to_bytes(fragment))
if fragment else b_checkout_path
)
# FIXME: use random subdirs while preserving the file names
def _download_file(url, b_path, expected_hash, validate_certs, token=None, timeout=60):
# type: (str, bytes, t.Optional[str], bool, GalaxyToken, int) -> bytes
# ^ NOTE: used in download and verify_collections ^
b_tarball_name = to_bytes(
url.rsplit('/', 1)[1], errors='surrogate_or_strict',
)
b_file_name = b_tarball_name[:-len('.tar.gz')]
b_tarball_dir = mkdtemp(
dir=b_path,
prefix=b'-'.join((b_file_name, b'')),
) # type: bytes
b_file_path = os.path.join(b_tarball_dir, b_tarball_name)
display.display("Downloading %s to %s" % (url, to_text(b_tarball_dir)))
# NOTE: Galaxy redirects downloads to S3 which rejects the request
# NOTE: if an Authorization header is attached so don't redirect it
resp = open_url(
to_native(url, errors='surrogate_or_strict'),
validate_certs=validate_certs,
headers=None if token is None else token.headers(),
unredirected_headers=['Authorization'], http_agent=user_agent(),
timeout=timeout
)
with open(b_file_path, 'wb') as download_file: # type: t.BinaryIO
actual_hash = _consume_file(resp, write_to=download_file)
if expected_hash:
display.vvvv(
'Validating downloaded file hash {actual_hash!s} with '
'expected hash {expected_hash!s}'.
format(actual_hash=actual_hash, expected_hash=expected_hash)
)
if expected_hash != actual_hash:
raise AnsibleError('Mismatch artifact hash with downloaded file')
return b_file_path
def _consume_file(read_from, write_to=None):
# type: (t.BinaryIO, t.BinaryIO) -> str
bufsize = 65536
sha256_digest = sha256()
data = read_from.read(bufsize)
while data:
if write_to is not None:
write_to.write(data)
write_to.flush()
sha256_digest.update(data)
data = read_from.read(bufsize)
return sha256_digest.hexdigest()
def _normalize_galaxy_yml_manifest(
galaxy_yml, # type: dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
b_galaxy_yml_path, # type: bytes
require_build_metadata=True, # type: bool
):
# type: (...) -> dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
galaxy_yml_schema = (
get_collections_galaxy_meta_info()
) # type: list[dict[str, t.Any]] # FIXME: <--
# FIXME: maybe precise type: list[dict[str, t.Union[bool, str, list[str]]]]
mandatory_keys = set()
string_keys = set() # type: set[str]
list_keys = set() # type: set[str]
dict_keys = set() # type: set[str]
sentinel_keys = set() # type: set[str]
for info in galaxy_yml_schema:
if info.get('required', False):
mandatory_keys.add(info['key'])
key_list_type = {
'str': string_keys,
'list': list_keys,
'dict': dict_keys,
'sentinel': sentinel_keys,
}[info.get('type', 'str')]
key_list_type.add(info['key'])
all_keys = frozenset(mandatory_keys | string_keys | list_keys | dict_keys | sentinel_keys)
set_keys = set(galaxy_yml.keys())
missing_keys = mandatory_keys.difference(set_keys)
if missing_keys:
msg = (
"The collection galaxy.yml at '%s' is missing the following mandatory keys: %s"
% (to_native(b_galaxy_yml_path), ", ".join(sorted(missing_keys)))
)
if require_build_metadata:
raise AnsibleError(msg)
display.warning(msg)
raise ValueError(msg)
extra_keys = set_keys.difference(all_keys)
if len(extra_keys) > 0:
display.warning("Found unknown keys in collection galaxy.yml at '%s': %s"
% (to_text(b_galaxy_yml_path), ", ".join(extra_keys)))
# Add the defaults if they have not been set
for optional_string in string_keys:
if optional_string not in galaxy_yml:
galaxy_yml[optional_string] = None
for optional_list in list_keys:
list_val = galaxy_yml.get(optional_list, None)
if list_val is None:
galaxy_yml[optional_list] = []
elif not isinstance(list_val, list):
galaxy_yml[optional_list] = [list_val] # type: ignore[list-item]
for optional_dict in dict_keys:
if optional_dict not in galaxy_yml:
galaxy_yml[optional_dict] = {}
for optional_sentinel in sentinel_keys:
if optional_sentinel not in galaxy_yml:
galaxy_yml[optional_sentinel] = Sentinel
# NOTE: `version: null` is only allowed for `galaxy.yml`
# NOTE: and not `MANIFEST.json`. The use-case for it is collections
# NOTE: that generate the version from Git before building a
# NOTE: distributable tarball artifact.
if not galaxy_yml.get('version'):
galaxy_yml['version'] = '*'
return galaxy_yml
def _get_meta_from_dir(
b_path, # type: bytes
require_build_metadata=True, # type: bool
): # type: (...) -> dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
try:
return _get_meta_from_installed_dir(b_path)
except LookupError:
return _get_meta_from_src_dir(b_path, require_build_metadata)
def _get_meta_from_src_dir(
b_path, # type: bytes
require_build_metadata=True, # type: bool
): # type: (...) -> dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
galaxy_yml = os.path.join(b_path, _GALAXY_YAML)
if not os.path.isfile(galaxy_yml):
raise LookupError(
"The collection galaxy.yml path '{path!s}' does not exist.".
format(path=to_native(galaxy_yml))
)
with open(galaxy_yml, 'rb') as manifest_file_obj:
try:
manifest = yaml_load(manifest_file_obj)
except yaml.error.YAMLError as yaml_err:
raise AnsibleError(
"Failed to parse the galaxy.yml at '{path!s}' with "
'the following error:\n{err_txt!s}'.
format(
path=to_native(galaxy_yml),
err_txt=to_native(yaml_err),
),
) from yaml_err
if not isinstance(manifest, dict):
if require_build_metadata:
raise AnsibleError(f"The collection galaxy.yml at '{to_native(galaxy_yml)}' is incorrectly formatted.")
# Valid build metadata is not required by ansible-galaxy list. Raise ValueError to fall back to implicit metadata.
display.warning(f"The collection galaxy.yml at '{to_native(galaxy_yml)}' is incorrectly formatted.")
raise ValueError(f"The collection galaxy.yml at '{to_native(galaxy_yml)}' is incorrectly formatted.")
return _normalize_galaxy_yml_manifest(manifest, galaxy_yml, require_build_metadata)
def _get_json_from_installed_dir(
b_path, # type: bytes
filename, # type: str
): # type: (...) -> dict
b_json_filepath = os.path.join(b_path, to_bytes(filename, errors='surrogate_or_strict'))
try:
with open(b_json_filepath, 'rb') as manifest_fd:
b_json_text = manifest_fd.read()
except (IOError, OSError):
raise LookupError(
"The collection {manifest!s} path '{path!s}' does not exist.".
format(
manifest=filename,
path=to_native(b_json_filepath),
)
)
manifest_txt = to_text(b_json_text, errors='surrogate_or_strict')
try:
manifest = json.loads(manifest_txt)
except ValueError:
raise AnsibleError(
'Collection tar file member {member!s} does not '
'contain a valid json string.'.
format(member=filename),
)
return manifest
def _get_meta_from_installed_dir(
b_path, # type: bytes
): # type: (...) -> dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
manifest = _get_json_from_installed_dir(b_path, MANIFEST_FILENAME)
collection_info = manifest['collection_info']
version = collection_info.get('version')
if not version:
raise AnsibleError(
u'Collection metadata file `{manifest_filename!s}` at `{meta_file!s}` is expected '
u'to have a valid SemVer version value but got {version!s}'.
format(
manifest_filename=MANIFEST_FILENAME,
meta_file=to_text(b_path),
version=to_text(repr(version)),
),
)
return collection_info
def _get_meta_from_tar(
b_path, # type: bytes
): # type: (...) -> dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
if not tarfile.is_tarfile(b_path):
raise AnsibleError(
"Collection artifact at '{path!s}' is not a valid tar file.".
format(path=to_native(b_path)),
)
with tarfile.open(b_path, mode='r') as collection_tar: # type: tarfile.TarFile
try:
member = collection_tar.getmember(MANIFEST_FILENAME)
except KeyError:
raise AnsibleError(
"Collection at '{path!s}' does not contain the "
'required file {manifest_file!s}.'.
format(
path=to_native(b_path),
manifest_file=MANIFEST_FILENAME,
),
)
with _tarfile_extract(collection_tar, member) as (_member, member_obj):
if member_obj is None:
raise AnsibleError(
'Collection tar file does not contain '
'member {member!s}'.format(member=MANIFEST_FILENAME),
)
text_content = to_text(
member_obj.read(),
errors='surrogate_or_strict',
)
try:
manifest = json.loads(text_content)
except ValueError:
raise AnsibleError(
'Collection tar file member {member!s} does not '
'contain a valid json string.'.
format(member=MANIFEST_FILENAME),
)
return manifest['collection_info']
@contextmanager
def _tarfile_extract(
tar, # type: tarfile.TarFile
member, # type: tarfile.TarInfo
):
# type: (...) -> t.Iterator[tuple[tarfile.TarInfo, t.Optional[t.IO[bytes]]]]
tar_obj = tar.extractfile(member)
try:
yield member, tar_obj
finally:
if tar_obj is not None:
tar_obj.close()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,170 |
Extend use of `retry_with_delays_and_condition` within Galaxy API requests to retry on `TimeoutError`
|
### Summary
As of now, we only retry galaxy API requests when they result in error codes defined within `RETRY_HTTP_ERROR_CODES`.
Sometimes there are also transient timeout errors that are not represented by these status codes and instead raise a `TimeoutError`.
Evaluate extending the function used by `should_retry_error` to also retry on `TimeoutError`.
### Issue Type
Bug Report
### Component Name
lib/ansible/galaxy/api.py
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
N/A
### Actual Results
```console
N/A
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80170
|
https://github.com/ansible/ansible/pull/80180
|
cba395243454b0a959edea20425618fe7b9be775
|
2ae013667ef226635fe521be886efd1bf58cd46f
| 2023-03-08T18:32:42Z |
python
| 2023-03-22T16:04:56Z |
lib/ansible/module_utils/api.py
|
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright: (c) 2015, Brian Coca, <[email protected]>
#
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
"""
This module adds shared support for generic api modules
In order to use this module, include it as part of a custom
module as shown below.
The 'api' module provides the following common argument specs:
* rate limit spec
- rate: number of requests per time unit (int)
- rate_limit: time window in which the limit is applied in seconds
* retry spec
- retries: number of attempts
- retry_pause: delay between attempts in seconds
"""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import functools
import random
import sys
import time
def rate_limit_argument_spec(spec=None):
"""Creates an argument spec for working with rate limiting"""
arg_spec = (dict(
rate=dict(type='int'),
rate_limit=dict(type='int'),
))
if spec:
arg_spec.update(spec)
return arg_spec
def retry_argument_spec(spec=None):
"""Creates an argument spec for working with retrying"""
arg_spec = (dict(
retries=dict(type='int'),
retry_pause=dict(type='float', default=1),
))
if spec:
arg_spec.update(spec)
return arg_spec
def basic_auth_argument_spec(spec=None):
arg_spec = (dict(
api_username=dict(type='str'),
api_password=dict(type='str', no_log=True),
api_url=dict(type='str'),
validate_certs=dict(type='bool', default=True)
))
if spec:
arg_spec.update(spec)
return arg_spec
def rate_limit(rate=None, rate_limit=None):
"""rate limiting decorator"""
minrate = None
if rate is not None and rate_limit is not None:
minrate = float(rate_limit) / float(rate)
def wrapper(f):
last = [0.0]
def ratelimited(*args, **kwargs):
if sys.version_info >= (3, 8):
real_time = time.process_time
else:
real_time = time.clock
if minrate is not None:
elapsed = real_time() - last[0]
left = minrate - elapsed
if left > 0:
time.sleep(left)
last[0] = real_time()
ret = f(*args, **kwargs)
return ret
return ratelimited
return wrapper
def retry(retries=None, retry_pause=1):
"""Retry decorator"""
def wrapper(f):
def retried(*args, **kwargs):
retry_count = 0
if retries is not None:
ret = None
while True:
retry_count += 1
if retry_count >= retries:
raise Exception("Retry limit exceeded: %d" % retries)
try:
ret = f(*args, **kwargs)
except Exception:
pass
if ret:
break
time.sleep(retry_pause)
return ret
return retried
return wrapper
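# Illustrative usage of the two decorators above (not part of the original
# module); the function and parameter names are hypothetical:
#
#   @rate_limit(rate=module.params.get('rate'), rate_limit=module.params.get('rate_limit'))
#   @retry(retries=module.params.get('retries'), retry_pause=module.params.get('retry_pause'))
#   def fetch_page(url):
#       ...
#
# retry() swallows exceptions and treats a falsey return value as a failure,
# so it suits simple "call until a truthy result" helpers.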
def generate_jittered_backoff(retries=10, delay_base=3, delay_threshold=60):
"""The "Full Jitter" backoff strategy.
Ref: https://www.awsarchitectureblog.com/2015/03/backoff.html
:param retries: The number of delays to generate.
:param delay_base: The base time in seconds used to calculate the exponential backoff.
:param delay_threshold: The maximum time in seconds for any delay.
"""
for retry in range(0, retries):
yield random.randint(0, min(delay_threshold, delay_base * 2 ** retry))
def retry_never(exception_or_result):
return False
def retry_with_delays_and_condition(backoff_iterator, should_retry_error=None):
"""Generic retry decorator.
:param backoff_iterator: An iterable of delays in seconds.
:param should_retry_error: A callable that takes an exception of the decorated function and decides whether to retry or not (returns a bool).
"""
if should_retry_error is None:
should_retry_error = retry_never
def function_wrapper(function):
@functools.wraps(function)
def run_function(*args, **kwargs):
"""This assumes the function has not already been called.
If backoff_iterator is empty, we should still run the function a single time with no delay.
"""
call_retryable_function = functools.partial(function, *args, **kwargs)
for delay in backoff_iterator:
try:
return call_retryable_function()
except Exception as e:
if not should_retry_error(e):
raise
time.sleep(delay)
# Only or final attempt
return call_retryable_function()
return run_function
return function_wrapper
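# Illustrative usage (not part of the original module): pairing the jittered
# backoff generator with the condition-based retry decorator; the names below
# are hypothetical:
#
#   @retry_with_delays_and_condition(
#       backoff_iterator=generate_jittered_backoff(retries=6, delay_base=2),
#       should_retry_error=lambda e: isinstance(e, TimeoutError),
#   )
#   def call_api():
#       ...
#
# Note that the backoff iterator is shared between calls of the decorated
# function, so once it is exhausted each later call gets a single attempt
# with no delay.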
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,252 |
ansible.builtin.password fail with an unhandled exception when using encrypt=bcrypt
|
### Summary
The first execution of ```lookup('ansible.builtin.password', 'password.txt encrypt=bcrypt')``` creates the text file.
The second execution fails with ```An unhandled exception occurred while running the lookup plugin 'ansible.builtin.password'. Error was a <class 'ValueError'>, original message: invalid characters in bcrypt salt. invalid characters in bcrypt salt```.
### Issue Type
Bug Report
### Component Name
ansible.builtin.password
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/eagle/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/eagle/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Arch Linux
### Steps to Reproduce
Run the following command 2 times:
```shell
ansible -m debug -a "msg={{lookup('ansible.builtin.password', 'password.txt encrypt=bcrypt') }}" localhost
```
Example output:
```shell
$ ansible -m debug -a "msg={{lookup('ansible.builtin.password', 'password.txt encrypt=bcrypt') }}" localhost
[WARNING]: No inventory was parsed, only implicit localhost is available
localhost | SUCCESS => {
"msg": "$2b$12$UYPgwPMJVaBFMU9ext22n.LxxvQFDFNYvLWshdEC7jRkSPTu1.VyK"
}
$ cat password.txt
z2fH1h5k.J1Oy6phsP73 salt=UYPgwPMJVaBFMU9ext22n/ ident=2b
$ ansible -m debug -a "msg={{lookup('ansible.builtin.password', 'password.txt encrypt=bcrypt') }}" localhost
[WARNING]: No inventory was parsed, only implicit localhost is available
localhost | FAILED! => {
"msg": "An unhandled exception occurred while running the lookup plugin 'ansible.builtin.password'. Error was a <class 'ValueError'>, original message: invalid characters in bcrypt salt. invalid characters in bcrypt salt"
}
$ cat password.txt
z2fH1h5k.J1Oy6phsP73 salt=UYPgwPMJVaBFMU9ext22n/ ident=2b ident=2b
```
### Expected Results
The second run should work.
### Actual Results
```console
$ ansible -m debug -a "msg={{lookup('ansible.builtin.password', 'password.txt encrypt=bcrypt') }}" localhost -vvvv
ansible [core 2.14.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/eagle/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/eagle/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python3.10/site-packages/ansible/plugins/callback/minimal.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
exception during Jinja2 execution: Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/ansible/template/__init__.py", line 831, in _lookup
ran = instance.run(loop_terms, variables=self._available_variables, **kwargs)
File "/usr/lib/python3.10/site-packages/ansible/plugins/lookup/password.py", line 384, in run
password = do_encrypt(plaintext_password, encrypt, salt=salt, ident=ident)
File "/usr/lib/python3.10/site-packages/ansible/utils/encrypt.py", line 272, in do_encrypt
return passlib_or_crypt(result, encrypt, salt_size=salt_size, salt=salt, ident=ident)
File "/usr/lib/python3.10/site-packages/ansible/utils/encrypt.py", line 265, in passlib_or_crypt
return PasslibHash(algorithm).hash(secret, salt=salt, salt_size=salt_size, rounds=rounds, ident=ident)
File "/usr/lib/python3.10/site-packages/ansible/utils/encrypt.py", line 191, in hash
return self._hash(secret, salt=salt, salt_size=salt_size, rounds=rounds, ident=ident)
File "/usr/lib/python3.10/site-packages/ansible/utils/encrypt.py", line 244, in _hash
result = self.crypt_algo.using(**settings).hash(secret)
File "/usr/lib/python3.10/site-packages/passlib/utils/handlers.py", line 455, in using
subcls = super(TruncateMixin, cls).using(**kwds)
File "/usr/lib/python3.10/site-packages/passlib/utils/handlers.py", line 1137, in using
subcls = super(HasManyIdents, cls).using(**kwds)
File "/usr/lib/python3.10/site-packages/passlib/utils/handlers.py", line 1653, in using
subcls = super(HasRounds, cls).using(**kwds)
File "/usr/lib/python3.10/site-packages/passlib/utils/handlers.py", line 1350, in using
salt = subcls._norm_salt(salt, relaxed=relaxed)
File "/usr/lib/python3.10/site-packages/passlib/handlers/bcrypt.py", line 237, in _norm_salt
salt = super(_BcryptCommon, cls)._norm_salt(salt, **kwds)
File "/usr/lib/python3.10/site-packages/passlib/utils/handlers.py", line 1459, in _norm_salt
raise ValueError("invalid characters in %s salt" % cls.name)
ValueError: invalid characters in bcrypt salt
localhost | FAILED! => {
"msg": "An unhandled exception occurred while running the lookup plugin 'ansible.builtin.password'. Error was a <class 'ValueError'>, original message: invalid characters in bcrypt salt. invalid characters in bcrypt salt"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80252
|
https://github.com/ansible/ansible/pull/80251
|
016b7f71b10539c90ddbb3246f19f9cbf0e65428
|
0fd88717c953b92ed8a50495d55e630eb5d59166
| 2023-03-17T19:24:21Z |
python
| 2023-03-27T14:22:18Z |
lib/ansible/plugins/lookup/password.py
|
# (c) 2012, Daniel Hokka Zakrisson <[email protected]>
# (c) 2013, Javier Candeira <[email protected]>
# (c) 2013, Maykel Moya <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
name: password
version_added: "1.1"
author:
- Daniel Hokka Zakrisson (!UNKNOWN) <[email protected]>
- Javier Candeira (!UNKNOWN) <[email protected]>
- Maykel Moya (!UNKNOWN) <[email protected]>
short_description: retrieve or generate a random password, stored in a file
description:
- Generates a random plaintext password and stores it in a file at a given filepath.
- If the file exists previously, it will retrieve its contents, behaving just like with_file.
- 'Usage of variables like C("{{ inventory_hostname }}") in the filepath can be used to set up random passwords per host,
which simplifies password management in C("host_vars") variables.'
- A special case is using /dev/null as a path. The password lookup will generate a new random password each time,
but will not write it to /dev/null. This can be used when you need a password without storing it on the controller.
options:
_terms:
description:
- path to the file that stores/will store the passwords
required: True
encrypt:
description:
- Which hash scheme to encrypt the returning password, should be one hash scheme from C(passlib.hash; md5_crypt, bcrypt, sha256_crypt, sha512_crypt).
- If not provided, the password will be returned in plain text.
- Note that the password is always stored as plain text, only the returning password is encrypted.
- Encrypt also forces saving the salt value for idempotence.
- Note that before 2.6 this option was incorrectly labeled as a boolean for a long time.
ident:
description:
- Specify version of Bcrypt algorithm to be used while using C(encrypt) as C(bcrypt).
- The parameter is only available for C(bcrypt) - U(https://passlib.readthedocs.io/en/stable/lib/passlib.hash.bcrypt.html#passlib.hash.bcrypt).
- Other hash types will simply ignore this parameter.
- 'Valid values for this parameter are: C(2), C(2a), C(2y), C(2b).'
type: string
version_added: "2.12"
chars:
version_added: "1.4"
description:
- A list of names that compose a custom character set in the generated passwords.
- 'By default generated passwords contain a random mix of upper and lowercase ASCII letters, the numbers 0-9, and punctuation (". , : - _").'
- "They can be either parts of Python's string module attributes or represented literally ( :, -)."
- "Though string modules can vary by Python version, valid values for both major releases include:
'ascii_lowercase', 'ascii_uppercase', 'digits', 'hexdigits', 'octdigits', 'printable', 'punctuation' and 'whitespace'."
- Be aware that Python's 'hexdigits' includes lower and upper case versions of a-f, so it is not a good choice as it doubles
the chances of those values for systems that won't distinguish case, distorting the expected entropy.
- "when using a comma separated string, to enter comma use two commas ',,' somewhere - preferably at the end.
Quotes and double quotes are not supported."
type: list
elements: str
default: ['ascii_letters', 'digits', ".,:-_"]
length:
description: The length of the generated password.
default: 20
type: integer
seed:
version_added: "2.12"
description:
- A seed to initialize the random number generator.
- Identical seeds will yield identical passwords.
- Use this for random-but-idempotent password generation.
type: str
notes:
- A great alternative to the password lookup plugin,
if you don't need to generate random passwords on a per-host basis,
would be to use Vault in playbooks.
Read the documentation there and consider using it first,
it will be more desirable for most applications.
- If the file already exists, no data will be written to it.
If the file has contents, those contents will be read in as the password.
Empty files cause the password to return as an empty string.
- 'As all lookups, this runs on the Ansible host as the user running the playbook, and "become" does not apply,
the target file must be readable by the playbook user, or, if it does not exist,
the playbook user must have sufficient privileges to create it.
(So, for example, attempts to write into areas such as /etc will fail unless the entire playbook is being run as root).'
"""
EXAMPLES = """
- name: create a mysql user with a random password
community.mysql.mysql_user:
name: "{{ client }}"
password: "{{ lookup('ansible.builtin.password', 'credentials/' + client + '/' + tier + '/' + role + '/mysqlpassword', length=15) }}"
priv: "{{ client }}_{{ tier }}_{{ role }}.*:ALL"
- name: create a mysql user with a random password using only ascii letters
community.mysql.mysql_user:
name: "{{ client }}"
password: "{{ lookup('ansible.builtin.password', '/tmp/passwordfile', chars=['ascii_letters']) }}"
priv: '{{ client }}_{{ tier }}_{{ role }}.*:ALL'
- name: create a mysql user with an 8 character random password using only digits
community.mysql.mysql_user:
name: "{{ client }}"
password: "{{ lookup('ansible.builtin.password', '/tmp/passwordfile', length=8, chars=['digits']) }}"
priv: "{{ client }}_{{ tier }}_{{ role }}.*:ALL"
- name: create a mysql user with a random password using many different char sets
community.mysql.mysql_user:
name: "{{ client }}"
password: "{{ lookup('ansible.builtin.password', '/tmp/passwordfile', chars=['ascii_letters', 'digits', 'punctuation']) }}"
priv: "{{ client }}_{{ tier }}_{{ role }}.*:ALL"
- name: create lowercase 8 character name for Kubernetes pod name
ansible.builtin.set_fact:
random_pod_name: "web-{{ lookup('ansible.builtin.password', '/dev/null', chars=['ascii_lowercase', 'digits'], length=8) }}"
- name: create random but idempotent password
ansible.builtin.set_fact:
password: "{{ lookup('ansible.builtin.password', '/dev/null', seed=inventory_hostname) }}"
"""
RETURN = """
_raw:
description:
- a password
type: list
elements: str
"""
import os
import string
import time
import hashlib
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.six import string_types
from ansible.parsing.splitter import parse_kv
from ansible.plugins.lookup import LookupBase
from ansible.utils.encrypt import BaseHash, do_encrypt, random_password, random_salt
from ansible.utils.path import makedirs_safe
VALID_PARAMS = frozenset(('length', 'encrypt', 'chars', 'ident', 'seed'))
def _read_password_file(b_path):
"""Read the contents of a password file and return it
:arg b_path: A byte string containing the path to the password file
:returns: a text string containing the contents of the password file or
None if no password file was present.
"""
content = None
if os.path.exists(b_path):
with open(b_path, 'rb') as f:
b_content = f.read().rstrip()
content = to_text(b_content, errors='surrogate_or_strict')
return content
def _gen_candidate_chars(characters):
'''Generate a string containing all valid chars as defined by ``characters``
:arg characters: A list of character specs. The character specs are
shorthand names for sets of characters like 'digits', 'ascii_letters',
or 'punctuation' or a string to be included verbatim.
The values of each char spec can be:
* a name of an attribute in the 'strings' module ('digits' for example).
The value of the attribute will be added to the candidate chars.
* a string of characters. If the string isn't an attribute in 'string'
module, the string will be directly added to the candidate chars.
For example::
characters=['digits', '?|']
will match ``string.digits`` and add all ascii digits. ``'?|'`` will add
the question mark and pipe characters directly. Return will be the string::
u'0123456789?|'
'''
chars = []
for chars_spec in characters:
# getattr from string expands things like "ascii_letters" and "digits"
# into a set of characters.
chars.append(to_text(getattr(string, to_native(chars_spec), chars_spec), errors='strict'))
chars = u''.join(chars).replace(u'"', u'').replace(u"'", u'')
return chars
def _parse_content(content):
'''parse our password data format into password and salt
:arg content: The data read from the file
:returns: password and salt
'''
password = content
salt = None
salt_slug = u' salt='
try:
sep = content.rindex(salt_slug)
except ValueError:
# No salt
pass
else:
salt = password[sep + len(salt_slug):]
password = content[:sep]
return password, salt
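# Illustration only (not the upstream fix): _format_content() below may also
# append an 'ident=' section, so a parser symmetric with it would need to
# split that part off as well, roughly:
#
#   def _parse_content(content):
#       salt = ident = None
#       if u' ident=' in content:
#           content, ident = content.rsplit(u' ident=', 1)
#       if u' salt=' in content:
#           content, salt = content.rsplit(u' salt=', 1)
#       return content, salt, ident
#
# Without that, re-reading 'pass salt=XYZ ident=2b' leaves ' ident=2b' glued to
# the salt, which is what produces the "invalid characters in bcrypt salt"
# error described in the issue above.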
def _format_content(password, salt, encrypt=None, ident=None):
"""Format the password and salt for saving
:arg password: the plaintext password to save
:arg salt: the salt to use when encrypting a password
:arg encrypt: Which method the user requests that this password is encrypted.
Note that the password is saved in clear. Encrypt just tells us if we
must save the salt value for idempotence. Defaults to None.
:arg ident: Which version of BCrypt algorithm to be used.
Valid only if value of encrypt is bcrypt.
Defaults to None.
:returns: a text string containing the formatted information
.. warning:: Passwords are saved in clear. This is because the playbooks
expect to get cleartext passwords from this lookup.
"""
if not encrypt and not salt:
return password
# At this point, the calling code should have assured us that there is a salt value.
if not salt:
raise AnsibleAssertionError('_format_content was called with encryption requested but no salt value')
if ident:
return u'%s salt=%s ident=%s' % (password, salt, ident)
return u'%s salt=%s' % (password, salt)
def _write_password_file(b_path, content):
b_pathdir = os.path.dirname(b_path)
makedirs_safe(b_pathdir, mode=0o700)
with open(b_path, 'wb') as f:
os.chmod(b_path, 0o600)
b_content = to_bytes(content, errors='surrogate_or_strict') + b'\n'
f.write(b_content)
def _get_lock(b_path):
"""Get the lock for writing password file."""
first_process = False
b_pathdir = os.path.dirname(b_path)
lockfile_name = to_bytes("%s.ansible_lockfile" % hashlib.sha1(b_path).hexdigest())
lockfile = os.path.join(b_pathdir, lockfile_name)
if not os.path.exists(lockfile) and b_path != to_bytes('/dev/null'):
try:
makedirs_safe(b_pathdir, mode=0o700)
fd = os.open(lockfile, os.O_CREAT | os.O_EXCL)
os.close(fd)
first_process = True
except OSError as e:
if e.strerror != 'File exists':
raise
counter = 0
# if the lock is got by other process, wait until it's released
while os.path.exists(lockfile) and not first_process:
time.sleep(2 ** counter)
if counter >= 2:
raise AnsibleError("Password lookup cannot get the lock in 7 seconds, abort..."
"This may caused by un-removed lockfile"
"you can manually remove it from controller machine at %s and try again" % lockfile)
counter += 1
return first_process, lockfile
def _release_lock(lockfile):
"""Release the lock so other processes can read the password file."""
if os.path.exists(lockfile):
os.remove(lockfile)
class LookupModule(LookupBase):
def _parse_parameters(self, term):
"""Hacky parsing of params
See https://github.com/ansible/ansible-modules-core/issues/1968#issuecomment-136842156
and the first_found lookup for how we want to fix this later
"""
first_split = term.split(' ', 1)
if len(first_split) <= 1:
# Only a single argument given, therefore it's a path
relpath = term
params = dict()
else:
relpath = first_split[0]
params = parse_kv(first_split[1])
if '_raw_params' in params:
# Spaces in the path?
relpath = u' '.join((relpath, params['_raw_params']))
del params['_raw_params']
# Check that we parsed the params correctly
if not term.startswith(relpath):
# Likely, the user had a non parameter following a parameter.
# Reject this as a user typo
raise AnsibleError('Unrecognized value after key=value parameters given to password lookup')
# No _raw_params means we already found the complete path when
# we split it initially
# Check for invalid parameters. Probably a user typo
invalid_params = frozenset(params.keys()).difference(VALID_PARAMS)
if invalid_params:
raise AnsibleError('Unrecognized parameter(s) given to password lookup: %s' % ', '.join(invalid_params))
# Set defaults
params['length'] = int(params.get('length', self.get_option('length')))
params['encrypt'] = params.get('encrypt', self.get_option('encrypt'))
params['ident'] = params.get('ident', self.get_option('ident'))
params['seed'] = params.get('seed', self.get_option('seed'))
params['chars'] = params.get('chars', self.get_option('chars'))
if params['chars'] and isinstance(params['chars'], string_types):
tmp_chars = []
if u',,' in params['chars']:
tmp_chars.append(u',')
tmp_chars.extend(c for c in params['chars'].replace(u',,', u',').split(u',') if c)
params['chars'] = tmp_chars
return relpath, params
def run(self, terms, variables, **kwargs):
ret = []
self.set_options(var_options=variables, direct=kwargs)
for term in terms:
relpath, params = self._parse_parameters(term)
path = self._loader.path_dwim(relpath)
b_path = to_bytes(path, errors='surrogate_or_strict')
chars = _gen_candidate_chars(params['chars'])
changed = None
# make sure only one process finishes all the job first
first_process, lockfile = _get_lock(b_path)
content = _read_password_file(b_path)
if content is None or b_path == to_bytes('/dev/null'):
plaintext_password = random_password(params['length'], chars, params['seed'])
salt = None
changed = True
else:
plaintext_password, salt = _parse_content(content)
encrypt = params['encrypt']
if encrypt and not salt:
changed = True
try:
salt = random_salt(BaseHash.algorithms[encrypt].salt_size)
except KeyError:
salt = random_salt()
ident = params['ident']
if encrypt and not ident:
try:
ident = BaseHash.algorithms[encrypt].implicit_ident
except KeyError:
ident = None
if ident:
changed = True
if changed and b_path != to_bytes('/dev/null'):
content = _format_content(plaintext_password, salt, encrypt=encrypt, ident=ident)
_write_password_file(b_path, content)
if first_process:
# let other processes continue
_release_lock(lockfile)
if encrypt:
password = do_encrypt(plaintext_password, encrypt, salt=salt, ident=ident)
ret.append(password)
else:
ret.append(plaintext_password)
return ret
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,252 |
ansible.builtin.password fail with an unhandled exception when using encrypt=bcrypt
|
### Summary
The first execution of ```lookup('ansible.builtin.password', 'password.txt encrypt=bcrypt')``` creates the text file.
The second execution fails with ```An unhandled exception occurred while running the lookup plugin 'ansible.builtin.password'. Error was a <class 'ValueError'>, original message: invalid characters in bcrypt salt. invalid characters in bcrypt salt```.
### Issue Type
Bug Report
### Component Name
ansible.builtin.password
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/eagle/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/eagle/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Arch Linux
### Steps to Reproduce
Run the following command 2 times:
```shell
ansible -m debug -a "msg={{lookup('ansible.builtin.password', 'password.txt encrypt=bcrypt') }}" localhost
```
Example output:
```shell
$ ansible -m debug -a "msg={{lookup('ansible.builtin.password', 'password.txt encrypt=bcrypt') }}" localhost
[WARNING]: No inventory was parsed, only implicit localhost is available
localhost | SUCCESS => {
"msg": "$2b$12$UYPgwPMJVaBFMU9ext22n.LxxvQFDFNYvLWshdEC7jRkSPTu1.VyK"
}
$ cat password.txt
z2fH1h5k.J1Oy6phsP73 salt=UYPgwPMJVaBFMU9ext22n/ ident=2b
$ ansible -m debug -a "msg={{lookup('ansible.builtin.password', 'password.txt encrypt=bcrypt') }}" localhost
[WARNING]: No inventory was parsed, only implicit localhost is available
localhost | FAILED! => {
"msg": "An unhandled exception occurred while running the lookup plugin 'ansible.builtin.password'. Error was a <class 'ValueError'>, original message: invalid characters in bcrypt salt. invalid characters in bcrypt salt"
}
$ cat password.txt
z2fH1h5k.J1Oy6phsP73 salt=UYPgwPMJVaBFMU9ext22n/ ident=2b ident=2b
```
### Expected Results
The second run should work.
### Actual Results
```console
$ ansible -m debug -a "msg={{lookup('ansible.builtin.password', 'password.txt encrypt=bcrypt') }}" localhost -vvvv
ansible [core 2.14.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/eagle/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/eagle/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python3.10/site-packages/ansible/plugins/callback/minimal.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
exception during Jinja2 execution: Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/ansible/template/__init__.py", line 831, in _lookup
ran = instance.run(loop_terms, variables=self._available_variables, **kwargs)
File "/usr/lib/python3.10/site-packages/ansible/plugins/lookup/password.py", line 384, in run
password = do_encrypt(plaintext_password, encrypt, salt=salt, ident=ident)
File "/usr/lib/python3.10/site-packages/ansible/utils/encrypt.py", line 272, in do_encrypt
return passlib_or_crypt(result, encrypt, salt_size=salt_size, salt=salt, ident=ident)
File "/usr/lib/python3.10/site-packages/ansible/utils/encrypt.py", line 265, in passlib_or_crypt
return PasslibHash(algorithm).hash(secret, salt=salt, salt_size=salt_size, rounds=rounds, ident=ident)
File "/usr/lib/python3.10/site-packages/ansible/utils/encrypt.py", line 191, in hash
return self._hash(secret, salt=salt, salt_size=salt_size, rounds=rounds, ident=ident)
File "/usr/lib/python3.10/site-packages/ansible/utils/encrypt.py", line 244, in _hash
result = self.crypt_algo.using(**settings).hash(secret)
File "/usr/lib/python3.10/site-packages/passlib/utils/handlers.py", line 455, in using
subcls = super(TruncateMixin, cls).using(**kwds)
File "/usr/lib/python3.10/site-packages/passlib/utils/handlers.py", line 1137, in using
subcls = super(HasManyIdents, cls).using(**kwds)
File "/usr/lib/python3.10/site-packages/passlib/utils/handlers.py", line 1653, in using
subcls = super(HasRounds, cls).using(**kwds)
File "/usr/lib/python3.10/site-packages/passlib/utils/handlers.py", line 1350, in using
salt = subcls._norm_salt(salt, relaxed=relaxed)
File "/usr/lib/python3.10/site-packages/passlib/handlers/bcrypt.py", line 237, in _norm_salt
salt = super(_BcryptCommon, cls)._norm_salt(salt, **kwds)
File "/usr/lib/python3.10/site-packages/passlib/utils/handlers.py", line 1459, in _norm_salt
raise ValueError("invalid characters in %s salt" % cls.name)
ValueError: invalid characters in bcrypt salt
localhost | FAILED! => {
"msg": "An unhandled exception occurred while running the lookup plugin 'ansible.builtin.password'. Error was a <class 'ValueError'>, original message: invalid characters in bcrypt salt. invalid characters in bcrypt salt"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80252
|
https://github.com/ansible/ansible/pull/80251
|
016b7f71b10539c90ddbb3246f19f9cbf0e65428
|
0fd88717c953b92ed8a50495d55e630eb5d59166
| 2023-03-17T19:24:21Z |
python
| 2023-03-27T14:22:18Z |
lib/ansible/utils/encrypt.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import multiprocessing
import random
import re
import string
import sys
from collections import namedtuple
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible.module_utils.six import text_type
from ansible.module_utils._text import to_text, to_bytes
from ansible.utils.display import Display
PASSLIB_E = CRYPT_E = None
HAS_CRYPT = PASSLIB_AVAILABLE = False
try:
import passlib
import passlib.hash
from passlib.utils.handlers import HasRawSalt, PrefixWrapper
try:
from passlib.utils.binary import bcrypt64
except ImportError:
from passlib.utils import bcrypt64
PASSLIB_AVAILABLE = True
except Exception as e:
PASSLIB_E = e
try:
import crypt
HAS_CRYPT = True
except Exception as e:
CRYPT_E = e
display = Display()
__all__ = ['do_encrypt']
_LOCK = multiprocessing.Lock()
DEFAULT_PASSWORD_LENGTH = 20
def random_password(length=DEFAULT_PASSWORD_LENGTH, chars=C.DEFAULT_PASSWORD_CHARS, seed=None):
'''Return a random password string of length containing only chars
:kwarg length: The number of characters in the new password. Defaults to 20.
:kwarg chars: The characters to choose from. The default is all ascii
letters, ascii digits, and these symbols ``.,:-_``
'''
if not isinstance(chars, text_type):
raise AnsibleAssertionError('%s (%s) is not a text_type' % (chars, type(chars)))
if seed is None:
random_generator = random.SystemRandom()
else:
random_generator = random.Random(seed)
return u''.join(random_generator.choice(chars) for dummy in range(length))
def random_salt(length=8):
"""Return a text string suitable for use as a salt for the hash functions we use to encrypt passwords.
"""
# Note passlib salt values must be pure ascii so we can't let the user
# configure this
salt_chars = string.ascii_letters + string.digits + u'./'
return random_password(length=length, chars=salt_chars)
class BaseHash(object):
algo = namedtuple('algo', ['crypt_id', 'salt_size', 'implicit_rounds', 'salt_exact', 'implicit_ident'])
algorithms = {
'md5_crypt': algo(crypt_id='1', salt_size=8, implicit_rounds=None, salt_exact=False, implicit_ident=None),
'bcrypt': algo(crypt_id='2b', salt_size=22, implicit_rounds=12, salt_exact=True, implicit_ident='2b'),
'sha256_crypt': algo(crypt_id='5', salt_size=16, implicit_rounds=535000, salt_exact=False, implicit_ident=None),
'sha512_crypt': algo(crypt_id='6', salt_size=16, implicit_rounds=656000, salt_exact=False, implicit_ident=None),
}
def __init__(self, algorithm):
self.algorithm = algorithm
class CryptHash(BaseHash):
def __init__(self, algorithm):
super(CryptHash, self).__init__(algorithm)
if not HAS_CRYPT:
raise AnsibleError("crypt.crypt cannot be used as the 'crypt' python library is not installed or is unusable.", orig_exc=CRYPT_E)
if sys.platform.startswith('darwin'):
raise AnsibleError("crypt.crypt not supported on Mac OS X/Darwin, install passlib python module")
if algorithm not in self.algorithms:
raise AnsibleError("crypt.crypt does not support '%s' algorithm" % self.algorithm)
display.deprecated(
"Encryption using the Python crypt module is deprecated. The "
"Python crypt module is deprecated and will be removed from "
"Python 3.13. Install the passlib library for continued "
"encryption functionality.",
version=2.17
)
self.algo_data = self.algorithms[algorithm]
def hash(self, secret, salt=None, salt_size=None, rounds=None, ident=None):
salt = self._salt(salt, salt_size)
rounds = self._rounds(rounds)
ident = self._ident(ident)
return self._hash(secret, salt, rounds, ident)
def _salt(self, salt, salt_size):
salt_size = salt_size or self.algo_data.salt_size
ret = salt or random_salt(salt_size)
if re.search(r'[^./0-9A-Za-z]', ret):
raise AnsibleError("invalid characters in salt")
if self.algo_data.salt_exact and len(ret) != self.algo_data.salt_size:
raise AnsibleError("invalid salt size")
elif not self.algo_data.salt_exact and len(ret) > self.algo_data.salt_size:
raise AnsibleError("invalid salt size")
return ret
def _rounds(self, rounds):
if rounds == self.algo_data.implicit_rounds:
# Passlib does not include the rounds if it is the same as implicit_rounds.
# Make crypt lib behave the same, by not explicitly specifying the rounds in that case.
return None
else:
return rounds
def _ident(self, ident):
if not ident:
return self.algo_data.crypt_id
if self.algorithm == 'bcrypt':
return ident
return None
def _hash(self, secret, salt, rounds, ident):
saltstring = ""
if ident:
saltstring = "$%s" % ident
if rounds:
saltstring += "$rounds=%d" % rounds
saltstring += "$%s" % salt
# crypt.crypt on Python < 3.9 returns None if it cannot parse saltstring
# On Python >= 3.9, it throws OSError.
try:
result = crypt.crypt(secret, saltstring)
orig_exc = None
except OSError as e:
result = None
orig_exc = e
        # None as result would be interpreted by some modules (e.g. the user module)
        # as no password at all.
if not result:
raise AnsibleError(
"crypt.crypt does not support '%s' algorithm" % self.algorithm,
orig_exc=orig_exc,
)
return result
class PasslibHash(BaseHash):
def __init__(self, algorithm):
super(PasslibHash, self).__init__(algorithm)
if not PASSLIB_AVAILABLE:
raise AnsibleError("passlib must be installed and usable to hash with '%s'" % algorithm, orig_exc=PASSLIB_E)
try:
self.crypt_algo = getattr(passlib.hash, algorithm)
except Exception:
raise AnsibleError("passlib does not support '%s' algorithm" % algorithm)
def hash(self, secret, salt=None, salt_size=None, rounds=None, ident=None):
salt = self._clean_salt(salt)
rounds = self._clean_rounds(rounds)
ident = self._clean_ident(ident)
return self._hash(secret, salt=salt, salt_size=salt_size, rounds=rounds, ident=ident)
def _clean_ident(self, ident):
ret = None
if not ident:
if self.algorithm in self.algorithms:
return self.algorithms.get(self.algorithm).implicit_ident
return ret
if self.algorithm == 'bcrypt':
return ident
return ret
def _clean_salt(self, salt):
if not salt:
return None
elif issubclass(self.crypt_algo.wrapped if isinstance(self.crypt_algo, PrefixWrapper) else self.crypt_algo, HasRawSalt):
ret = to_bytes(salt, encoding='ascii', errors='strict')
else:
ret = to_text(salt, encoding='ascii', errors='strict')
# Ensure the salt has the correct padding
if self.algorithm == 'bcrypt':
ret = bcrypt64.repair_unused(ret)
return ret
def _clean_rounds(self, rounds):
algo_data = self.algorithms.get(self.algorithm)
if rounds:
return rounds
elif algo_data and algo_data.implicit_rounds:
# The default rounds used by passlib depend on the passlib version.
# For consistency ensure that passlib behaves the same as crypt in case no rounds were specified.
# Thus use the crypt defaults.
return algo_data.implicit_rounds
else:
return None
def _hash(self, secret, salt, salt_size, rounds, ident):
# Not every hash algorithm supports every parameter.
# Thus create the settings dict only with set parameters.
settings = {}
if salt:
settings['salt'] = salt
if salt_size:
settings['salt_size'] = salt_size
if rounds:
settings['rounds'] = rounds
if ident:
settings['ident'] = ident
# starting with passlib 1.7 'using' and 'hash' should be used instead of 'encrypt'
if hasattr(self.crypt_algo, 'hash'):
result = self.crypt_algo.using(**settings).hash(secret)
elif hasattr(self.crypt_algo, 'encrypt'):
result = self.crypt_algo.encrypt(secret, **settings)
else:
raise AnsibleError("installed passlib version %s not supported" % passlib.__version__)
# passlib.hash should always return something or raise an exception.
# Still ensure that there is always a result.
# Otherwise an empty password might be assumed by some modules, like the user module.
if not result:
raise AnsibleError("failed to hash with algorithm '%s'" % self.algorithm)
# Hashes from passlib.hash should be represented as ascii strings of hex
# digits so this should not traceback. If it's not representable as such
# we need to traceback and then block such algorithms because it may
# impact calling code.
return to_text(result, errors='strict')
def passlib_or_crypt(secret, algorithm, salt=None, salt_size=None, rounds=None, ident=None):
if PASSLIB_AVAILABLE:
return PasslibHash(algorithm).hash(secret, salt=salt, salt_size=salt_size, rounds=rounds, ident=ident)
if HAS_CRYPT:
return CryptHash(algorithm).hash(secret, salt=salt, salt_size=salt_size, rounds=rounds, ident=ident)
raise AnsibleError("Unable to encrypt nor hash, either crypt or passlib must be installed.", orig_exc=CRYPT_E)
def do_encrypt(result, encrypt, salt_size=None, salt=None, ident=None):
return passlib_or_crypt(result, encrypt, salt_size=salt_size, salt=salt, ident=ident)
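For reference, a minimal usage sketch of the helpers defined above, assuming passlib is installed; the secret and the salt length are illustrative only:
```python
from ansible.utils.encrypt import do_encrypt, random_salt

# random_salt() draws only from ascii letters, digits, '.' and '/',
# which is the character set the hashers above accept.
salt = random_salt(16)

# Hash a secret with sha512_crypt; with passlib present this goes through
# PasslibHash and applies the implicit 656000 rounds from the algorithms table.
hashed = do_encrypt("hunter42", "sha512_crypt", salt=salt)
print(hashed)  # e.g. $6$rounds=656000$<salt>$<digest>
```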
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,252 |
ansible.builtin.password fail with an unhandled exception when using encrypt=bcrypt
|
### Summary
The first execution of ```lookup('ansible.builtin.password', 'password.txt encrypt=bcrypt')``` creates the text file.
The second execution fails with ```An unhandled exception occurred while running the lookup plugin 'ansible.builtin.password'. Error was a <class 'ValueError'>, original message: invalid characters in bcrypt salt. invalid characters in bcrypt salt```.
### Issue Type
Bug Report
### Component Name
ansible.builtin.password
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/eagle/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/eagle/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Arch Linux
### Steps to Reproduce
Run the following command 2 times:
```shell
ansible -m debug -a "msg={{lookup('ansible.builtin.password', 'password.txt encrypt=bcrypt') }}" localhost
```
Example output:
```shell
$ ansible -m debug -a "msg={{lookup('ansible.builtin.password', 'password.txt encrypt=bcrypt') }}" localhost
[WARNING]: No inventory was parsed, only implicit localhost is available
localhost | SUCCESS => {
"msg": "$2b$12$UYPgwPMJVaBFMU9ext22n.LxxvQFDFNYvLWshdEC7jRkSPTu1.VyK"
}
$ cat password.txt
z2fH1h5k.J1Oy6phsP73 salt=UYPgwPMJVaBFMU9ext22n/ ident=2b
$ ansible -m debug -a "msg={{lookup('ansible.builtin.password', 'password.txt encrypt=bcrypt') }}" localhost
[WARNING]: No inventory was parsed, only implicit localhost is available
localhost | FAILED! => {
"msg": "An unhandled exception occurred while running the lookup plugin 'ansible.builtin.password'. Error was a <class 'ValueError'>, original message: invalid characters in bcrypt salt. invalid characters in bcrypt salt"
}
$ cat password.txt
z2fH1h5k.J1Oy6phsP73 salt=UYPgwPMJVaBFMU9ext22n/ ident=2b ident=2b
```
### Expected Results
The second run should work.
### Actual Results
```console
$ ansible -m debug -a "msg={{lookup('ansible.builtin.password', 'password.txt encrypt=bcrypt') }}" localhost -vvvv
ansible [core 2.14.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/eagle/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/eagle/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python3.10/site-packages/ansible/plugins/callback/minimal.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
exception during Jinja2 execution: Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/ansible/template/__init__.py", line 831, in _lookup
ran = instance.run(loop_terms, variables=self._available_variables, **kwargs)
File "/usr/lib/python3.10/site-packages/ansible/plugins/lookup/password.py", line 384, in run
password = do_encrypt(plaintext_password, encrypt, salt=salt, ident=ident)
File "/usr/lib/python3.10/site-packages/ansible/utils/encrypt.py", line 272, in do_encrypt
return passlib_or_crypt(result, encrypt, salt_size=salt_size, salt=salt, ident=ident)
File "/usr/lib/python3.10/site-packages/ansible/utils/encrypt.py", line 265, in passlib_or_crypt
return PasslibHash(algorithm).hash(secret, salt=salt, salt_size=salt_size, rounds=rounds, ident=ident)
File "/usr/lib/python3.10/site-packages/ansible/utils/encrypt.py", line 191, in hash
return self._hash(secret, salt=salt, salt_size=salt_size, rounds=rounds, ident=ident)
File "/usr/lib/python3.10/site-packages/ansible/utils/encrypt.py", line 244, in _hash
result = self.crypt_algo.using(**settings).hash(secret)
File "/usr/lib/python3.10/site-packages/passlib/utils/handlers.py", line 455, in using
subcls = super(TruncateMixin, cls).using(**kwds)
File "/usr/lib/python3.10/site-packages/passlib/utils/handlers.py", line 1137, in using
subcls = super(HasManyIdents, cls).using(**kwds)
File "/usr/lib/python3.10/site-packages/passlib/utils/handlers.py", line 1653, in using
subcls = super(HasRounds, cls).using(**kwds)
File "/usr/lib/python3.10/site-packages/passlib/utils/handlers.py", line 1350, in using
salt = subcls._norm_salt(salt, relaxed=relaxed)
File "/usr/lib/python3.10/site-packages/passlib/handlers/bcrypt.py", line 237, in _norm_salt
salt = super(_BcryptCommon, cls)._norm_salt(salt, **kwds)
File "/usr/lib/python3.10/site-packages/passlib/utils/handlers.py", line 1459, in _norm_salt
raise ValueError("invalid characters in %s salt" % cls.name)
ValueError: invalid characters in bcrypt salt
localhost | FAILED! => {
"msg": "An unhandled exception occurred while running the lookup plugin 'ansible.builtin.password'. Error was a <class 'ValueError'>, original message: invalid characters in bcrypt salt. invalid characters in bcrypt salt"
}
```
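For illustration, a minimal sketch of why the corrupted salt is rejected, assuming passlib and a bcrypt backend are installed; the salt strings below are made up but mimic what the second run parses out of the password file once `ident=2b` has been appended twice:
```python
from passlib.hash import bcrypt

good_salt = "UYPgwPMJVaBFMU9ext22n."          # 22 chars from the ./0-9A-Za-z alphabet
bad_salt = "UYPgwPMJVaBFMU9ext22n. ident=2b"  # space and '=' are not valid bcrypt salt chars

bcrypt.using(salt=good_salt, ident="2b").hash("z2fH1h5k.J1Oy6phsP73")  # succeeds

try:
    bcrypt.using(salt=bad_salt, ident="2b").hash("z2fH1h5k.J1Oy6phsP73")
except ValueError as exc:
    print(exc)  # invalid characters in bcrypt salt
```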
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80252
|
https://github.com/ansible/ansible/pull/80251
|
016b7f71b10539c90ddbb3246f19f9cbf0e65428
|
0fd88717c953b92ed8a50495d55e630eb5d59166
| 2023-03-17T19:24:21Z |
python
| 2023-03-27T14:22:18Z |
test/units/plugins/lookup/test_password.py
|
# -*- coding: utf-8 -*-
# (c) 2015, Toshio Kuratomi <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
try:
import passlib
from passlib.handlers import pbkdf2
except ImportError:
passlib = None
pbkdf2 = None
import pytest
from units.mock.loader import DictDataLoader
from units.compat import unittest
from unittest.mock import mock_open, patch
from ansible.errors import AnsibleError
from ansible.module_utils.six import text_type
from ansible.module_utils.six.moves import builtins
from ansible.module_utils._text import to_bytes
from ansible.plugins.loader import PluginLoader, lookup_loader
from ansible.plugins.lookup import password
DEFAULT_LENGTH = 20
DEFAULT_CHARS = sorted([u'ascii_letters', u'digits', u".,:-_"])
DEFAULT_CANDIDATE_CHARS = u'.,:-_abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
# Currently there isn't a new-style
old_style_params_data = (
# Simple case
dict(
term=u'/path/to/file',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=DEFAULT_CHARS, seed=None),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
# Special characters in path
dict(
term=u'/path/with/embedded spaces and/file',
filename=u'/path/with/embedded spaces and/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=DEFAULT_CHARS, seed=None),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
dict(
term=u'/path/with/equals/cn=com.ansible',
filename=u'/path/with/equals/cn=com.ansible',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=DEFAULT_CHARS, seed=None),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
dict(
        term=u'/path/with/unicode/くらとみ/file',
        filename=u'/path/with/unicode/くらとみ/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=DEFAULT_CHARS, seed=None),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
# Mix several special chars
dict(
        term=u'/path/with/utf 8 and spaces/くらとみ/file',
        filename=u'/path/with/utf 8 and spaces/くらとみ/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=DEFAULT_CHARS, seed=None),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
dict(
        term=u'/path/with/encoding=unicode/くらとみ/file',
        filename=u'/path/with/encoding=unicode/くらとみ/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=DEFAULT_CHARS, seed=None),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
dict(
        term=u'/path/with/encoding=unicode/くらとみ/and spaces file',
        filename=u'/path/with/encoding=unicode/くらとみ/and spaces file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=DEFAULT_CHARS, seed=None),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
# Simple parameters
dict(
term=u'/path/to/file length=42',
filename=u'/path/to/file',
params=dict(length=42, encrypt=None, ident=None, chars=DEFAULT_CHARS, seed=None),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
dict(
term=u'/path/to/file encrypt=pbkdf2_sha256',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt='pbkdf2_sha256', ident=None, chars=DEFAULT_CHARS, seed=None),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
dict(
term=u'/path/to/file chars=abcdefghijklmnop',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=[u'abcdefghijklmnop'], seed=None),
candidate_chars=u'abcdefghijklmnop',
),
dict(
term=u'/path/to/file chars=digits,abc,def',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None,
chars=sorted([u'digits', u'abc', u'def']), seed=None),
candidate_chars=u'abcdef0123456789',
),
dict(
term=u'/path/to/file seed=1',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=DEFAULT_CHARS, seed='1'),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
# Including comma in chars
dict(
term=u'/path/to/file chars=abcdefghijklmnop,,digits',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None,
chars=sorted([u'abcdefghijklmnop', u',', u'digits']), seed=None),
candidate_chars=u',abcdefghijklmnop0123456789',
),
dict(
term=u'/path/to/file chars=,,',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None,
chars=[u','], seed=None),
candidate_chars=u',',
),
# Including = in chars
dict(
term=u'/path/to/file chars=digits,=,,',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None,
chars=sorted([u'digits', u'=', u',']), seed=None),
candidate_chars=u',=0123456789',
),
dict(
term=u'/path/to/file chars=digits,abc=def',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None,
chars=sorted([u'digits', u'abc=def']), seed=None),
candidate_chars=u'abc=def0123456789',
),
# Including unicode in chars
dict(
        term=u'/path/to/file chars=digits,くらとみ,,',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None,
                    chars=sorted([u'digits', u'くらとみ', u',']), seed=None),
        candidate_chars=u',0123456789くらとみ',
),
# Including only unicode in chars
dict(
        term=u'/path/to/file chars=くらとみ',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None,
                    chars=sorted([u'くらとみ']), seed=None),
        candidate_chars=u'くらとみ',
),
# Include ':' in path
dict(
term=u'/path/to/file_with:colon chars=ascii_letters,digits',
filename=u'/path/to/file_with:colon',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None,
chars=sorted([u'ascii_letters', u'digits']), seed=None),
candidate_chars=u'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789',
),
# Including special chars in both path and chars
# Special characters in path
dict(
term=u'/path/with/embedded spaces and/file chars=abc=def',
filename=u'/path/with/embedded spaces and/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=[u'abc=def'], seed=None),
candidate_chars=u'abc=def',
),
dict(
term=u'/path/with/equals/cn=com.ansible chars=abc=def',
filename=u'/path/with/equals/cn=com.ansible',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=[u'abc=def'], seed=None),
candidate_chars=u'abc=def',
),
dict(
        term=u'/path/with/unicode/くらとみ/file chars=くらとみ',
        filename=u'/path/with/unicode/くらとみ/file',
        params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=[u'くらとみ'], seed=None),
        candidate_chars=u'くらとみ',
),
)
class TestParseParameters(unittest.TestCase):
def setUp(self):
self.fake_loader = DictDataLoader({'/path/to/somewhere': 'sdfsdf'})
self.password_lookup = lookup_loader.get('password')
self.password_lookup._loader = self.fake_loader
def test(self):
for testcase in old_style_params_data:
filename, params = self.password_lookup._parse_parameters(testcase['term'])
params['chars'].sort()
self.assertEqual(filename, testcase['filename'])
self.assertEqual(params, testcase['params'])
def test_unrecognized_value(self):
        testcase = dict(term=u'/path/to/file chars=くらとみi sdfsdf',
filename=u'/path/to/file',
                        params=dict(length=DEFAULT_LENGTH, encrypt=None, chars=[u'くらとみ']),
                        candidate_chars=u'くらとみ')
self.assertRaises(AnsibleError, self.password_lookup._parse_parameters, testcase['term'])
def test_invalid_params(self):
        testcase = dict(term=u'/path/to/file chars=くらとみi somethign_invalid=123',
filename=u'/path/to/file',
                        params=dict(length=DEFAULT_LENGTH, encrypt=None, chars=[u'くらとみ']),
                        candidate_chars=u'くらとみ')
self.assertRaises(AnsibleError, self.password_lookup._parse_parameters, testcase['term'])
class TestReadPasswordFile(unittest.TestCase):
def setUp(self):
self.os_path_exists = password.os.path.exists
def tearDown(self):
password.os.path.exists = self.os_path_exists
def test_no_password_file(self):
password.os.path.exists = lambda x: False
self.assertEqual(password._read_password_file(b'/nonexistent'), None)
def test_with_password_file(self):
password.os.path.exists = lambda x: True
with patch.object(builtins, 'open', mock_open(read_data=b'Testing\n')) as m:
self.assertEqual(password._read_password_file(b'/etc/motd'), u'Testing')
class TestGenCandidateChars(unittest.TestCase):
def _assert_gen_candidate_chars(self, testcase):
expected_candidate_chars = testcase['candidate_chars']
params = testcase['params']
chars_spec = params['chars']
res = password._gen_candidate_chars(chars_spec)
self.assertEqual(res, expected_candidate_chars)
def test_gen_candidate_chars(self):
for testcase in old_style_params_data:
self._assert_gen_candidate_chars(testcase)
class TestRandomPassword(unittest.TestCase):
def _assert_valid_chars(self, res, chars):
for res_char in res:
self.assertIn(res_char, chars)
def test_default(self):
res = password.random_password()
self.assertEqual(len(res), DEFAULT_LENGTH)
self.assertTrue(isinstance(res, text_type))
self._assert_valid_chars(res, DEFAULT_CANDIDATE_CHARS)
def test_zero_length(self):
res = password.random_password(length=0)
self.assertEqual(len(res), 0)
self.assertTrue(isinstance(res, text_type))
self._assert_valid_chars(res, u',')
def test_just_a_common(self):
res = password.random_password(length=1, chars=u',')
self.assertEqual(len(res), 1)
self.assertEqual(res, u',')
def test_free_will(self):
# A Rush and Spinal Tap reference twofer
res = password.random_password(length=11, chars=u'a')
self.assertEqual(len(res), 11)
self.assertEqual(res, 'aaaaaaaaaaa')
self._assert_valid_chars(res, u'a')
def test_unicode(self):
        res = password.random_password(length=11, chars=u'くらとみ')
        self._assert_valid_chars(res, u'くらとみ')
self.assertEqual(len(res), 11)
def test_seed(self):
pw1 = password.random_password(seed=1)
pw2 = password.random_password(seed=1)
pw3 = password.random_password(seed=2)
self.assertEqual(pw1, pw2)
self.assertNotEqual(pw1, pw3)
def test_gen_password(self):
for testcase in old_style_params_data:
params = testcase['params']
candidate_chars = testcase['candidate_chars']
params_chars_spec = password._gen_candidate_chars(params['chars'])
password_string = password.random_password(length=params['length'],
chars=params_chars_spec)
self.assertEqual(len(password_string),
params['length'],
msg='generated password=%s has length (%s) instead of expected length (%s)' %
(password_string, len(password_string), params['length']))
for char in password_string:
self.assertIn(char, candidate_chars,
                              msg='%s not found in %s from chars spec %s' %
(char, candidate_chars, params['chars']))
class TestParseContent(unittest.TestCase):
def test_empty_password_file(self):
plaintext_password, salt = password._parse_content(u'')
self.assertEqual(plaintext_password, u'')
self.assertEqual(salt, None)
def test(self):
expected_content = u'12345678'
file_content = expected_content
plaintext_password, salt = password._parse_content(file_content)
self.assertEqual(plaintext_password, expected_content)
self.assertEqual(salt, None)
def test_with_salt(self):
expected_content = u'12345678 salt=87654321'
file_content = expected_content
plaintext_password, salt = password._parse_content(file_content)
self.assertEqual(plaintext_password, u'12345678')
self.assertEqual(salt, u'87654321')
class TestFormatContent(unittest.TestCase):
def test_no_encrypt(self):
self.assertEqual(
password._format_content(password=u'hunter42',
salt=u'87654321',
encrypt=False),
u'hunter42 salt=87654321')
def test_no_encrypt_no_salt(self):
self.assertEqual(
password._format_content(password=u'hunter42',
salt=None,
encrypt=None),
u'hunter42')
def test_encrypt(self):
self.assertEqual(
password._format_content(password=u'hunter42',
salt=u'87654321',
encrypt='pbkdf2_sha256'),
u'hunter42 salt=87654321')
def test_encrypt_no_salt(self):
self.assertRaises(AssertionError, password._format_content, u'hunter42', None, 'pbkdf2_sha256')
class TestWritePasswordFile(unittest.TestCase):
def setUp(self):
self.makedirs_safe = password.makedirs_safe
self.os_chmod = password.os.chmod
password.makedirs_safe = lambda path, mode: None
password.os.chmod = lambda path, mode: None
def tearDown(self):
password.makedirs_safe = self.makedirs_safe
password.os.chmod = self.os_chmod
def test_content_written(self):
with patch.object(builtins, 'open', mock_open()) as m:
            password._write_password_file(b'/this/is/a/test/caf\xc3\xa9', u'Testing Café')
m.assert_called_once_with(b'/this/is/a/test/caf\xc3\xa9', 'wb')
            m().write.assert_called_once_with(u'Testing Café\n'.encode('utf-8'))
class BaseTestLookupModule(unittest.TestCase):
def setUp(self):
self.fake_loader = DictDataLoader({'/path/to/somewhere': 'sdfsdf'})
self.password_lookup = lookup_loader.get('password')
self.password_lookup._loader = self.fake_loader
self.os_path_exists = password.os.path.exists
self.os_open = password.os.open
password.os.open = lambda path, flag: None
self.os_close = password.os.close
password.os.close = lambda fd: None
self.os_remove = password.os.remove
password.os.remove = lambda path: None
self.makedirs_safe = password.makedirs_safe
password.makedirs_safe = lambda path, mode: None
def tearDown(self):
password.os.path.exists = self.os_path_exists
password.os.open = self.os_open
password.os.close = self.os_close
password.os.remove = self.os_remove
password.makedirs_safe = self.makedirs_safe
class TestLookupModuleWithoutPasslib(BaseTestLookupModule):
@patch.object(PluginLoader, '_get_paths')
@patch('ansible.plugins.lookup.password._write_password_file')
def test_no_encrypt(self, mock_get_paths, mock_write_file):
mock_get_paths.return_value = ['/path/one', '/path/two', '/path/three']
results = self.password_lookup.run([u'/path/to/somewhere'], None)
# FIXME: assert something useful
for result in results:
assert len(result) == DEFAULT_LENGTH
assert isinstance(result, text_type)
@patch.object(PluginLoader, '_get_paths')
@patch('ansible.plugins.lookup.password._write_password_file')
def test_password_already_created_no_encrypt(self, mock_get_paths, mock_write_file):
mock_get_paths.return_value = ['/path/one', '/path/two', '/path/three']
password.os.path.exists = lambda x: x == to_bytes('/path/to/somewhere')
with patch.object(builtins, 'open', mock_open(read_data=b'hunter42 salt=87654321\n')) as m:
results = self.password_lookup.run([u'/path/to/somewhere chars=anything'], None)
for result in results:
self.assertEqual(result, u'hunter42')
@patch.object(PluginLoader, '_get_paths')
@patch('ansible.plugins.lookup.password._write_password_file')
def test_only_a(self, mock_get_paths, mock_write_file):
mock_get_paths.return_value = ['/path/one', '/path/two', '/path/three']
results = self.password_lookup.run([u'/path/to/somewhere chars=a'], None)
for result in results:
self.assertEqual(result, u'a' * DEFAULT_LENGTH)
@patch('time.sleep')
def test_lock_been_held(self, mock_sleep):
# pretend the lock file is here
password.os.path.exists = lambda x: True
try:
with patch.object(builtins, 'open', mock_open(read_data=b'hunter42 salt=87654321\n')) as m:
# should timeout here
results = self.password_lookup.run([u'/path/to/somewhere chars=anything'], None)
                self.fail("Lookup didn't time out when the lock was already held")
except AnsibleError:
pass
def test_lock_not_been_held(self):
# pretend now there is password file but no lock
password.os.path.exists = lambda x: x == to_bytes('/path/to/somewhere')
try:
with patch.object(builtins, 'open', mock_open(read_data=b'hunter42 salt=87654321\n')) as m:
# should not timeout here
results = self.password_lookup.run([u'/path/to/somewhere chars=anything'], None)
except AnsibleError:
            self.fail('Lookup timed out even though the lock was free')
for result in results:
self.assertEqual(result, u'hunter42')
@pytest.mark.skipif(passlib is None, reason='passlib must be installed to run these tests')
class TestLookupModuleWithPasslib(BaseTestLookupModule):
def setUp(self):
super(TestLookupModuleWithPasslib, self).setUp()
# Different releases of passlib default to a different number of rounds
self.sha256 = passlib.registry.get_crypt_handler('pbkdf2_sha256')
sha256_for_tests = pbkdf2.create_pbkdf2_hash("sha256", 32, 20000)
passlib.registry.register_crypt_handler(sha256_for_tests, force=True)
def tearDown(self):
super(TestLookupModuleWithPasslib, self).tearDown()
passlib.registry.register_crypt_handler(self.sha256, force=True)
@patch.object(PluginLoader, '_get_paths')
@patch('ansible.plugins.lookup.password._write_password_file')
def test_encrypt(self, mock_get_paths, mock_write_file):
mock_get_paths.return_value = ['/path/one', '/path/two', '/path/three']
results = self.password_lookup.run([u'/path/to/somewhere encrypt=pbkdf2_sha256'], None)
# pbkdf2 format plus hash
expected_password_length = 76
for result in results:
self.assertEqual(len(result), expected_password_length)
# result should have 5 parts split by '$'
str_parts = result.split('$', 5)
# verify the result is parseable by the passlib
crypt_parts = passlib.hash.pbkdf2_sha256.parsehash(result)
# verify it used the right algo type
self.assertEqual(str_parts[1], 'pbkdf2-sha256')
self.assertEqual(len(str_parts), 5)
# verify the string and parsehash agree on the number of rounds
self.assertEqual(int(str_parts[2]), crypt_parts['rounds'])
self.assertIsInstance(result, text_type)
@patch('ansible.plugins.lookup.password._write_password_file')
def test_password_already_created_encrypt(self, mock_write_file):
password.os.path.exists = lambda x: x == to_bytes('/path/to/somewhere')
with patch.object(builtins, 'open', mock_open(read_data=b'hunter42 salt=87654321\n')) as m:
results = self.password_lookup.run([u'/path/to/somewhere chars=anything encrypt=pbkdf2_sha256'], None)
for result in results:
self.assertEqual(result, u'$pbkdf2-sha256$20000$ODc2NTQzMjE$Uikde0cv0BKaRaAXMrUQB.zvG4GmnjClwjghwIRf2gU')
# Assert the password file is not rewritten
mock_write_file.assert_not_called()
@pytest.mark.skipif(passlib is None, reason='passlib must be installed to run these tests')
class TestLookupModuleWithPasslibWrappedAlgo(BaseTestLookupModule):
def setUp(self):
super(TestLookupModuleWithPasslibWrappedAlgo, self).setUp()
self.os_path_exists = password.os.path.exists
def tearDown(self):
super(TestLookupModuleWithPasslibWrappedAlgo, self).tearDown()
password.os.path.exists = self.os_path_exists
@patch('ansible.plugins.lookup.password._write_password_file')
def test_encrypt_wrapped_crypt_algo(self, mock_write_file):
password.os.path.exists = self.password_lookup._loader.path_exists
with patch.object(builtins, 'open', mock_open(read_data=self.password_lookup._loader._get_file_contents('/path/to/somewhere')[0])) as m:
results = self.password_lookup.run([u'/path/to/somewhere encrypt=ldap_sha256_crypt'], None)
wrapper = getattr(passlib.hash, 'ldap_sha256_crypt')
self.assertEqual(len(results), 1)
result = results[0]
self.assertIsInstance(result, text_type)
expected_password_length = 76
self.assertEqual(len(result), expected_password_length)
# result should have 5 parts split by '$'
str_parts = result.split('$')
self.assertEqual(len(str_parts), 5)
# verify the string and passlib agree on the number of rounds
self.assertEqual(str_parts[2], "rounds=%s" % wrapper.default_rounds)
# verify it used the right algo type
self.assertEqual(str_parts[0], '{CRYPT}')
# verify it used the right algo type
self.assertTrue(wrapper.verify(self.password_lookup._loader._get_file_contents('/path/to/somewhere')[0], result))
# verify a password with a non default rounds value
# generated with: echo test | mkpasswd -s --rounds 660000 -m sha-256 --salt testansiblepass.
hashpw = '{CRYPT}$5$rounds=660000$testansiblepass.$KlRSdA3iFXoPI.dEwh7AixiXW3EtCkLrlQvlYA2sluD'
self.assertTrue(wrapper.verify('test', hashpw))
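A condensed round-trip of the module-level helpers exercised by TestParseContent and TestFormatContent above, assuming the same before-fix plugin version as these tests:
```python
from ansible.plugins.lookup import password

content = u'hunter42 salt=87654321'

# _parse_content splits the stored line back into plaintext and salt ...
plaintext, salt = password._parse_content(content)
assert (plaintext, salt) == (u'hunter42', u'87654321')

# ... and _format_content rebuilds the same line when an encrypt scheme is requested.
assert password._format_content(password=plaintext, salt=salt, encrypt='pbkdf2_sha256') == content
```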
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,294 |
Windows: ansible-moduletmp directory needs a more random string to avoid conflicts
|
### Summary
Due to a limitation of the C# Random() function, the same moduletmp directory name can be generated when multiple Ansible connections run module executions at the same time.
source code in devel:
https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/csharp/Ansible.Basic.cs#L179
C# reference:
https://learn.microsoft.com/en-us/dotnet/api/system.random?view=net-7.0
```
On most Windows systems, Random objects created within 15 milliseconds of one another are likely to have identical seed values.
```
### Issue Type
Bug Report
### Component Name
Ansible.Basic.cs
And all windows related modules
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.10]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /home/runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.13 (default, Jun 14 2022, 17:49:07) [GCC 8.5.0 20210514 (Red Hat 8.5.0-13)]
jinja version = 2.11.3
libyaml = True
```
I believe the issue can be reproduced with core 2.13 and 2.14.
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
Ansible Automation Platform 2.1 (ansible-core 2.12.x)
### Steps to Reproduce
Run a task such as `win_ping` against the same Windows managed host from multiple connections at the same time.
### Expected Results
The concurrent job executions work independently, as expected.
### Actual Results
Here is a fragment of the job output from the customer.
```console
"msg": "Unhandled exception while executing module: Exception calling \"ExitJson\" with \"0\" argument(s): \"Could not find a part of the path 'C:\\Users\\XXXXXXXX\\AppData\\Local\\Temp\\ansible-moduletmp-133151580377544360-1777981702'.\""}}
```
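For illustration, a minimal Python analogy of the collision (the affected code is C#, not Python, so this is not the actual implementation or fix): two generators seeded from the same timestamp yield the same suffix, while a high-entropy token does not. The timestamp value is taken from the error message above.
```python
import random
import secrets

file_time = 133151580377544360  # pretend two connections observe the same timestamp

# Seeding the RNG from the shared timestamp reproduces the failure mode:
# both connections compute an identical "random" directory name.
name_a = "ansible-moduletmp-%d-%d" % (file_time, random.Random(file_time).randint(0, 2**31 - 1))
name_b = "ansible-moduletmp-%d-%d" % (file_time, random.Random(file_time).randint(0, 2**31 - 1))
assert name_a == name_b

# A cryptographically random suffix keeps the names unique even when created simultaneously.
unique_a = "ansible-moduletmp-%d-%s" % (file_time, secrets.token_hex(8))
unique_b = "ansible-moduletmp-%d-%s" % (file_time, secrets.token_hex(8))
assert unique_a != unique_b
```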
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80294
|
https://github.com/ansible/ansible/pull/80328
|
8600a1b92769f06f2645a0d45e70a15f27ddebdc
|
fb6b90fe4255e9995706905e2a9cde205648c0d2
| 2023-03-24T04:19:45Z |
python
| 2023-03-28T02:25:10Z |
changelogs/fragments/ansible-basic-tmpdir-uniqueness.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,294 |
Windows: ansible-moduletmp directory needs a more random string to avoid conflicts
|
### Summary
Due to a limitation of the C# Random() function, the same moduletmp directory name can be generated when multiple Ansible connections run module executions at the same time.
source code in devel:
https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/csharp/Ansible.Basic.cs#L179
C# reference:
https://learn.microsoft.com/en-us/dotnet/api/system.random?view=net-7.0
```
On most Windows systems, Random objects created within 15 milliseconds of one another are likely to have identical seed values.
```
### Issue Type
Bug Report
### Component Name
Ansible.Basic.cs
And all windows related modules
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.10]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /home/runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.13 (default, Jun 14 2022, 17:49:07) [GCC 8.5.0 20210514 (Red Hat 8.5.0-13)]
jinja version = 2.11.3
libyaml = True
```
I believe the issue can be reproduced with core 2.13 and 2.14.
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
Ansible Automation Platform 2.1 (ansible-core 2.12.x)
### Steps to Reproduce
Run a task such as `win_ping` against the same Windows managed host from multiple connections at the same time.
### Expected Results
The concurrent job executions work independently, as expected.
### Actual Results
Here is a fragment of the job output from the customer.
```console
"msg": "Unhandled exception while executing module: Exception calling \"ExitJson\" with \"0\" argument(s): \"Could not find a part of the path 'C:\\Users\\XXXXXXXX\\AppData\\Local\\Temp\\ansible-moduletmp-133151580377544360-1777981702'.\""}}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80294
|
https://github.com/ansible/ansible/pull/80328
|
8600a1b92769f06f2645a0d45e70a15f27ddebdc
|
fb6b90fe4255e9995706905e2a9cde205648c0d2
| 2023-03-24T04:19:45Z |
python
| 2023-03-28T02:25:10Z |
lib/ansible/module_utils/csharp/Ansible.Basic.cs
|
using Microsoft.Win32.SafeHandles;
using System;
using System.Collections;
using System.Collections.Generic;
using System.ComponentModel;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Management.Automation;
using System.Management.Automation.Runspaces;
using System.Reflection;
using System.Runtime.InteropServices;
using System.Security.AccessControl;
using System.Security.Principal;
#if CORECLR
using Newtonsoft.Json;
#else
using System.Web.Script.Serialization;
#endif
// Newtonsoft.Json may reference a different System.Runtime version (6.x) than loaded by PowerShell 7.3 (7.x).
// Ignore CS1701 so the code can be compiled when warnings are reported as errors.
//NoWarn -Name CS1701 -CLR Core
// System.Diagnostics.EventLog.dll references different versioned dlls that are
// loaded in PSCore, ignore CS1702 so the code will ignore this warning
//NoWarn -Name CS1702 -CLR Core
//AssemblyReference -Type Newtonsoft.Json.JsonConvert -CLR Core
//AssemblyReference -Type System.Diagnostics.EventLog -CLR Core
//AssemblyReference -Type System.Security.AccessControl.NativeObjectSecurity -CLR Core
//AssemblyReference -Type System.Security.AccessControl.DirectorySecurity -CLR Core
//AssemblyReference -Type System.Security.Principal.IdentityReference -CLR Core
//AssemblyReference -Name System.Web.Extensions.dll -CLR Framework
namespace Ansible.Basic
{
public class AnsibleModule
{
public delegate void ExitHandler(int rc);
public static ExitHandler Exit = new ExitHandler(ExitModule);
public delegate void WriteLineHandler(string line);
public static WriteLineHandler WriteLine = new WriteLineHandler(WriteLineModule);
public static bool _DebugArgSpec = false;
private static List<string> BOOLEANS_TRUE = new List<string>() { "y", "yes", "on", "1", "true", "t", "1.0" };
private static List<string> BOOLEANS_FALSE = new List<string>() { "n", "no", "off", "0", "false", "f", "0.0" };
private string remoteTmp = Path.GetTempPath();
private string tmpdir = null;
private HashSet<string> noLogValues = new HashSet<string>();
private List<string> optionsContext = new List<string>();
private List<string> warnings = new List<string>();
private List<Dictionary<string, string>> deprecations = new List<Dictionary<string, string>>();
private List<string> cleanupFiles = new List<string>();
private Dictionary<string, string> passVars = new Dictionary<string, string>()
{
// null values means no mapping, not used in Ansible.Basic.AnsibleModule
{ "check_mode", "CheckMode" },
{ "debug", "DebugMode" },
{ "diff", "DiffMode" },
{ "keep_remote_files", "KeepRemoteFiles" },
{ "module_name", "ModuleName" },
{ "no_log", "NoLog" },
{ "remote_tmp", "remoteTmp" },
{ "selinux_special_fs", null },
{ "shell_executable", null },
{ "socket", null },
{ "string_conversion_action", null },
{ "syslog_facility", null },
{ "tmpdir", "tmpdir" },
{ "verbosity", "Verbosity" },
{ "version", "AnsibleVersion" },
};
private List<string> passBools = new List<string>() { "check_mode", "debug", "diff", "keep_remote_files", "no_log" };
private List<string> passInts = new List<string>() { "verbosity" };
private Dictionary<string, List<object>> specDefaults = new Dictionary<string, List<object>>()
{
// key - (default, type) - null is freeform
{ "apply_defaults", new List<object>() { false, typeof(bool) } },
{ "aliases", new List<object>() { typeof(List<string>), typeof(List<string>) } },
{ "choices", new List<object>() { typeof(List<object>), typeof(List<object>) } },
{ "default", new List<object>() { null, null } },
{ "deprecated_aliases", new List<object>() { typeof(List<Hashtable>), typeof(List<Hashtable>) } },
{ "elements", new List<object>() { null, null } },
{ "mutually_exclusive", new List<object>() { typeof(List<List<string>>), typeof(List<object>) } },
{ "no_log", new List<object>() { false, typeof(bool) } },
{ "options", new List<object>() { typeof(Hashtable), typeof(Hashtable) } },
{ "removed_in_version", new List<object>() { null, typeof(string) } },
{ "removed_at_date", new List<object>() { null, typeof(DateTime) } },
{ "removed_from_collection", new List<object>() { null, typeof(string) } },
{ "required", new List<object>() { false, typeof(bool) } },
{ "required_by", new List<object>() { typeof(Hashtable), typeof(Hashtable) } },
{ "required_if", new List<object>() { typeof(List<List<object>>), typeof(List<object>) } },
{ "required_one_of", new List<object>() { typeof(List<List<string>>), typeof(List<object>) } },
{ "required_together", new List<object>() { typeof(List<List<string>>), typeof(List<object>) } },
{ "supports_check_mode", new List<object>() { false, typeof(bool) } },
{ "type", new List<object>() { "str", null } },
};
private Dictionary<string, Delegate> optionTypes = new Dictionary<string, Delegate>()
{
{ "bool", new Func<object, bool>(ParseBool) },
{ "dict", new Func<object, Dictionary<string, object>>(ParseDict) },
{ "float", new Func<object, float>(ParseFloat) },
{ "int", new Func<object, int>(ParseInt) },
{ "json", new Func<object, string>(ParseJson) },
{ "list", new Func<object, List<object>>(ParseList) },
{ "path", new Func<object, string>(ParsePath) },
{ "raw", new Func<object, object>(ParseRaw) },
{ "sid", new Func<object, SecurityIdentifier>(ParseSid) },
{ "str", new Func<object, string>(ParseStr) },
};
public Dictionary<string, object> Diff = new Dictionary<string, object>();
public IDictionary Params = null;
public Dictionary<string, object> Result = new Dictionary<string, object>() { { "changed", false } };
public bool CheckMode { get; private set; }
public bool DebugMode { get; private set; }
public bool DiffMode { get; private set; }
public bool KeepRemoteFiles { get; private set; }
public string ModuleName { get; private set; }
public bool NoLog { get; private set; }
public int Verbosity { get; private set; }
public string AnsibleVersion { get; private set; }
public string Tmpdir
{
get
{
if (tmpdir == null)
{
#if WINDOWS
SecurityIdentifier user = WindowsIdentity.GetCurrent().User;
DirectorySecurity dirSecurity = new DirectorySecurity();
dirSecurity.SetOwner(user);
dirSecurity.SetAccessRuleProtection(true, false); // disable inheritance rules
FileSystemAccessRule ace = new FileSystemAccessRule(user, FileSystemRights.FullControl,
InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
PropagationFlags.None, AccessControlType.Allow);
dirSecurity.AddAccessRule(ace);
string baseDir = Path.GetFullPath(Environment.ExpandEnvironmentVariables(remoteTmp));
if (!Directory.Exists(baseDir))
{
string failedMsg = null;
try
{
#if CORECLR
DirectoryInfo createdDir = Directory.CreateDirectory(baseDir);
FileSystemAclExtensions.SetAccessControl(createdDir, dirSecurity);
#else
Directory.CreateDirectory(baseDir, dirSecurity);
#endif
}
catch (Exception e)
{
failedMsg = String.Format("Failed to create base tmpdir '{0}': {1}", baseDir, e.Message);
}
if (failedMsg != null)
{
string envTmp = Path.GetTempPath();
Warn(String.Format("Unable to use '{0}' as temporary directory, falling back to system tmp '{1}': {2}", baseDir, envTmp, failedMsg));
baseDir = envTmp;
}
else
{
NTAccount currentUser = (NTAccount)user.Translate(typeof(NTAccount));
string warnMsg = String.Format("Module remote_tmp {0} did not exist and was created with FullControl to {1}, ", baseDir, currentUser.ToString());
warnMsg += "this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually";
Warn(warnMsg);
}
}
string dateTime = DateTime.Now.ToFileTime().ToString();
string dirName = String.Format("ansible-moduletmp-{0}-{1}", dateTime, new Random().Next(0, int.MaxValue));
string newTmpdir = Path.Combine(baseDir, dirName);
#if CORECLR
DirectoryInfo tmpdirInfo = Directory.CreateDirectory(newTmpdir);
FileSystemAclExtensions.SetAccessControl(tmpdirInfo, dirSecurity);
#else
Directory.CreateDirectory(newTmpdir, dirSecurity);
#endif
tmpdir = newTmpdir;
if (!KeepRemoteFiles)
cleanupFiles.Add(tmpdir);
#else
throw new NotImplementedException("Tmpdir is only supported on Windows");
#endif
}
return tmpdir;
}
}
public AnsibleModule(string[] args, IDictionary argumentSpec, IDictionary[] fragments = null)
{
// NoLog is not set yet, we cannot rely on FailJson to sanitize the output
// Do the minimum amount to get this running before we actually parse the params
Dictionary<string, string> aliases = new Dictionary<string, string>();
try
{
ValidateArgumentSpec(argumentSpec);
// Merge the fragments if present into the main arg spec.
if (fragments != null)
{
foreach (IDictionary fragment in fragments)
{
ValidateArgumentSpec(fragment);
MergeFragmentSpec(argumentSpec, fragment);
}
}
// Used by ansible-test to retrieve the module argument spec, not designed for public use.
if (_DebugArgSpec)
{
// Cannot call exit here because it will be caught with the catch (Exception e) below. Instead
// just throw a new exception with a specific message and the exception block will handle it.
ScriptBlock.Create("Set-Variable -Name ansibleTestArgSpec -Value $args[0] -Scope Global"
).Invoke(argumentSpec);
throw new Exception("ansible-test validate-modules check");
}
// Now make sure all the metadata keys are set to their defaults, this must be done after we've
// potentially output the arg spec for ansible-test.
SetArgumentSpecDefaults(argumentSpec);
Params = GetParams(args);
aliases = GetAliases(argumentSpec, Params);
SetNoLogValues(argumentSpec, Params);
}
catch (Exception e)
{
if (e.Message == "ansible-test validate-modules check")
Exit(0);
Dictionary<string, object> result = new Dictionary<string, object>
{
{ "failed", true },
{ "msg", String.Format("internal error: {0}", e.Message) },
{ "exception", e.ToString() }
};
WriteLine(ToJson(result));
Exit(1);
}
// Initialise public properties to the defaults before we parse the actual inputs
CheckMode = false;
DebugMode = false;
DiffMode = false;
KeepRemoteFiles = false;
ModuleName = "undefined win module";
NoLog = (bool)argumentSpec["no_log"];
Verbosity = 0;
AppDomain.CurrentDomain.ProcessExit += CleanupFiles;
List<string> legalInputs = passVars.Keys.Select(v => "_ansible_" + v).ToList();
legalInputs.AddRange(((IDictionary)argumentSpec["options"]).Keys.Cast<string>().ToList());
legalInputs.AddRange(aliases.Keys.Cast<string>().ToList());
CheckArguments(argumentSpec, Params, legalInputs);
            // Set an Ansible friendly invocation value in the result object
Dictionary<string, object> invocation = new Dictionary<string, object>() { { "module_args", Params } };
Result["invocation"] = RemoveNoLogValues(invocation, noLogValues);
if (!NoLog)
LogEvent(String.Format("Invoked with:\r\n {0}", FormatLogData(Params, 2)), sanitise: false);
}
public static AnsibleModule Create(string[] args, IDictionary argumentSpec, IDictionary[] fragments = null)
{
return new AnsibleModule(args, argumentSpec, fragments);
}
public void Debug(string message)
{
if (DebugMode)
LogEvent(String.Format("[DEBUG] {0}", message));
}
public void Deprecate(string message, string version)
{
Deprecate(message, version, null);
}
public void Deprecate(string message, string version, string collectionName)
{
deprecations.Add(new Dictionary<string, string>() {
{ "msg", message }, { "version", version }, { "collection_name", collectionName } });
LogEvent(String.Format("[DEPRECATION WARNING] {0} {1}", message, version));
}
public void Deprecate(string message, DateTime date)
{
Deprecate(message, date, null);
}
public void Deprecate(string message, DateTime date, string collectionName)
{
string isoDate = date.ToString("yyyy-MM-dd");
deprecations.Add(new Dictionary<string, string>() {
{ "msg", message }, { "date", isoDate }, { "collection_name", collectionName } });
LogEvent(String.Format("[DEPRECATION WARNING] {0} {1}", message, isoDate));
}
public void ExitJson()
{
CleanupFiles(null, null);
WriteLine(GetFormattedResults(Result));
Exit(0);
}
public void FailJson(string message) { FailJson(message, null, null); }
public void FailJson(string message, ErrorRecord psErrorRecord) { FailJson(message, psErrorRecord, null); }
public void FailJson(string message, Exception exception) { FailJson(message, null, exception); }
private void FailJson(string message, ErrorRecord psErrorRecord, Exception exception)
{
Result["failed"] = true;
Result["msg"] = RemoveNoLogValues(message, noLogValues);
if (!Result.ContainsKey("exception") && (Verbosity > 2 || DebugMode))
{
if (psErrorRecord != null)
{
string traceback = String.Format("{0}\r\n{1}", psErrorRecord.ToString(), psErrorRecord.InvocationInfo.PositionMessage);
traceback += String.Format("\r\n + CategoryInfo : {0}", psErrorRecord.CategoryInfo.ToString());
traceback += String.Format("\r\n + FullyQualifiedErrorId : {0}", psErrorRecord.FullyQualifiedErrorId.ToString());
traceback += String.Format("\r\n\r\nScriptStackTrace:\r\n{0}", psErrorRecord.ScriptStackTrace);
Result["exception"] = traceback;
}
else if (exception != null)
Result["exception"] = exception.ToString();
}
CleanupFiles(null, null);
WriteLine(GetFormattedResults(Result));
Exit(1);
}
public void LogEvent(string message, EventLogEntryType logEntryType = EventLogEntryType.Information, bool sanitise = true)
{
if (NoLog)
return;
#if WINDOWS
string logSource = "Ansible";
bool logSourceExists = false;
try
{
logSourceExists = EventLog.SourceExists(logSource);
}
catch (System.Security.SecurityException) { } // non admin users may not have permission
if (!logSourceExists)
{
try
{
EventLog.CreateEventSource(logSource, "Application");
}
catch (System.Security.SecurityException)
{
// Cannot call Warn as that calls LogEvent and we get stuck in a loop
warnings.Add(String.Format("Access error when creating EventLog source {0}, logging to the Application source instead", logSource));
logSource = "Application";
}
}
if (sanitise)
message = (string)RemoveNoLogValues(message, noLogValues);
message = String.Format("{0} - {1}", ModuleName, message);
using (EventLog eventLog = new EventLog("Application"))
{
eventLog.Source = logSource;
try
{
eventLog.WriteEntry(message, logEntryType, 0);
}
catch (System.InvalidOperationException) { } // Ignore permission errors on the Application event log
catch (System.Exception e)
{
// Cannot call Warn as that calls LogEvent and we get stuck in a loop
warnings.Add(String.Format("Unknown error when creating event log entry: {0}", e.Message));
}
}
#else
// Windows Event Log is only available on Windows
return;
#endif
}
public void Warn(string message)
{
warnings.Add(message);
LogEvent(String.Format("[WARNING] {0}", message), EventLogEntryType.Warning);
}
public static object FromJson(string json) { return FromJson<object>(json); }
public static T FromJson<T>(string json)
{
#if CORECLR
return JsonConvert.DeserializeObject<T>(json);
#else
JavaScriptSerializer jss = new JavaScriptSerializer();
jss.MaxJsonLength = int.MaxValue;
jss.RecursionLimit = int.MaxValue;
return jss.Deserialize<T>(json);
#endif
}
public static string ToJson(object obj)
{
// Using PowerShell to serialize the JSON is preferable over the native .NET libraries as it handles
// PS Objects a lot better than the alternatives. In case we are debugging in Visual Studio we have a
// fallback to the other libraries as we won't be dealing with PowerShell objects there.
if (Runspace.DefaultRunspace != null)
{
PSObject rawOut = ScriptBlock.Create("ConvertTo-Json -InputObject $args[0] -Depth 99 -Compress").Invoke(obj)[0];
return rawOut.BaseObject as string;
}
else
{
#if CORECLR
return JsonConvert.SerializeObject(obj);
#else
JavaScriptSerializer jss = new JavaScriptSerializer();
jss.MaxJsonLength = int.MaxValue;
jss.RecursionLimit = int.MaxValue;
return jss.Serialize(obj);
#endif
}
}
public static IDictionary GetParams(string[] args)
{
if (args.Length > 0)
{
string inputJson = File.ReadAllText(args[0]);
Dictionary<string, object> rawParams = FromJson<Dictionary<string, object>>(inputJson);
if (!rawParams.ContainsKey("ANSIBLE_MODULE_ARGS"))
throw new ArgumentException("Module was unable to get ANSIBLE_MODULE_ARGS value from the argument path json");
return (IDictionary)rawParams["ANSIBLE_MODULE_ARGS"];
}
else
{
// $complex_args is already a Hashtable, no need to waste time converting to a dictionary
PSObject rawArgs = ScriptBlock.Create("$complex_args").Invoke()[0];
return rawArgs.BaseObject as Hashtable;
}
}
public static bool ParseBool(object value)
{
if (value.GetType() == typeof(bool))
return (bool)value;
List<string> booleans = new List<string>();
booleans.AddRange(BOOLEANS_TRUE);
booleans.AddRange(BOOLEANS_FALSE);
string stringValue = ParseStr(value).ToLowerInvariant().Trim();
if (BOOLEANS_TRUE.Contains(stringValue))
return true;
else if (BOOLEANS_FALSE.Contains(stringValue))
return false;
string msg = String.Format("The value '{0}' is not a valid boolean. Valid booleans include: {1}",
stringValue, String.Join(", ", booleans));
throw new ArgumentException(msg);
}
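// ParseDict below accepts an IDictionary, a JSON object string ("{...}"), or a shorthand
// "key=value" string. An illustrative example of the shorthand parsing implemented below:
//   ParseDict("name=web port=8080 comment='front end'")
//   -> { name = "web", port = "8080", comment = "front end" }
// Values stay strings at this point; per-option type conversion happens later in CheckArguments().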
public static Dictionary<string, object> ParseDict(object value)
{
Type valueType = value.GetType();
if (valueType == typeof(Dictionary<string, object>))
return (Dictionary<string, object>)value;
else if (value is IDictionary)
return ((IDictionary)value).Cast<DictionaryEntry>().ToDictionary(kvp => (string)kvp.Key, kvp => kvp.Value);
else if (valueType == typeof(string))
{
string stringValue = (string)value;
if (stringValue.StartsWith("{") && stringValue.EndsWith("}"))
return FromJson<Dictionary<string, object>>((string)value);
else if (stringValue.IndexOfAny(new char[1] { '=' }) != -1)
{
List<string> fields = new List<string>();
List<char> fieldBuffer = new List<char>();
char? inQuote = null;
bool inEscape = false;
string field;
foreach (char c in stringValue.ToCharArray())
{
if (inEscape)
{
fieldBuffer.Add(c);
inEscape = false;
}
else if (c == '\\')
inEscape = true;
else if (inQuote == null && (c == '\'' || c == '"'))
inQuote = c;
else if (inQuote != null && c == inQuote)
inQuote = null;
else if (inQuote == null && (c == ',' || c == ' '))
{
field = String.Join("", fieldBuffer);
if (field != "")
fields.Add(field);
fieldBuffer = new List<char>();
}
else
fieldBuffer.Add(c);
}
field = String.Join("", fieldBuffer);
if (field != "")
fields.Add(field);
return fields.Distinct().Select(i => i.Split(new[] { '=' }, 2)).ToDictionary(i => i[0], i => i.Length > 1 ? (object)i[1] : null);
}
else
throw new ArgumentException("string cannot be converted to a dict, must either be a JSON string or in the key=value form");
}
throw new ArgumentException(String.Format("{0} cannot be converted to a dict", valueType.FullName));
}
public static float ParseFloat(object value)
{
if (value.GetType() == typeof(float))
return (float)value;
string valueStr = ParseStr(value);
return float.Parse(valueStr);
}
public static int ParseInt(object value)
{
Type valueType = value.GetType();
if (valueType == typeof(int))
return (int)value;
else
return Int32.Parse(ParseStr(value));
}
public static string ParseJson(object value)
{
// mostly used to ensure a dict is a json string as it may
// have been converted on the controller side
Type valueType = value.GetType();
if (value is IDictionary)
return ToJson(value);
else if (valueType == typeof(string))
return (string)value;
else
throw new ArgumentException(String.Format("{0} cannot be converted to json", valueType.FullName));
}
public static List<object> ParseList(object value)
{
if (value == null)
return null;
Type valueType = value.GetType();
if (valueType.IsGenericType && valueType.GetGenericTypeDefinition() == typeof(List<>))
return (List<object>)value;
else if (valueType == typeof(ArrayList))
return ((ArrayList)value).Cast<object>().ToList();
else if (valueType.IsArray)
return ((object[])value).ToList();
else if (valueType == typeof(string))
return ((string)value).Split(',').Select(s => s.Trim()).ToList<object>();
else if (valueType == typeof(int))
return new List<object>() { value };
else
throw new ArgumentException(String.Format("{0} cannot be converted to a list", valueType.FullName));
}
public static string ParsePath(object value)
{
string stringValue = ParseStr(value);
// do not validate or expand the env vars if it starts with \\?\ as
// it is a special path designed for the NT kernel to interpret
if (stringValue.StartsWith(@"\\?\"))
return stringValue;
stringValue = Environment.ExpandEnvironmentVariables(stringValue);
if (stringValue.IndexOfAny(Path.GetInvalidPathChars()) != -1)
throw new ArgumentException("string value contains invalid path characters, cannot convert to path");
// will fire an exception if it contains any invalid chars
Path.GetFullPath(stringValue);
return stringValue;
}
public static object ParseRaw(object value) { return value; }
public static SecurityIdentifier ParseSid(object value)
{
string stringValue = ParseStr(value);
try
{
return new SecurityIdentifier(stringValue);
}
catch (ArgumentException) { } // ignore failures string may not have been a SID
NTAccount account = new NTAccount(stringValue);
return (SecurityIdentifier)account.Translate(typeof(SecurityIdentifier));
}
public static string ParseStr(object value) { return value.ToString(); }
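// ValidateArgumentSpec below checks the hashtable a module passes in, e.g. (illustrative option names):
//   @{
//       options = @{
//           path  = @{ type = "path"; required = $true }
//           state = @{ type = "str"; choices = @("absent", "present"); default = "present" }
//       }
//       supports_check_mode = $true
//   }
// Only keys defined in specDefaults are accepted; anything else raises an ArgumentException.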
private void ValidateArgumentSpec(IDictionary argumentSpec)
{
Dictionary<string, object> changedValues = new Dictionary<string, object>();
foreach (DictionaryEntry entry in argumentSpec)
{
string key = (string)entry.Key;
// validate the key is a valid argument spec key
if (!specDefaults.ContainsKey(key))
{
string msg = String.Format("argument spec entry contains an invalid key '{0}', valid keys: {1}",
key, String.Join(", ", specDefaults.Keys));
throw new ArgumentException(FormatOptionsContext(msg, " - "));
}
// ensure the value is casted to the type we expect
Type optionType = null;
if (entry.Value != null)
optionType = (Type)specDefaults[key][1];
if (optionType != null)
{
Type actualType = entry.Value.GetType();
bool invalid = false;
if (optionType.IsGenericType && optionType.GetGenericTypeDefinition() == typeof(List<>))
{
// verify the actual type is not just a single value of the list type
Type entryType = optionType.GetGenericArguments()[0];
object[] arrayElementTypes = new object[]
{
null, // ArrayList does not have an ElementType
entryType,
typeof(object), // Hope the object is actually entryType or it can at least be casted.
};
bool isArray = entry.Value is IList && arrayElementTypes.Contains(actualType.GetElementType());
if (actualType == entryType || isArray)
{
object rawArray;
if (isArray)
rawArray = entry.Value;
else
rawArray = new object[1] { entry.Value };
MethodInfo castMethod = typeof(Enumerable).GetMethod("Cast").MakeGenericMethod(entryType);
MethodInfo toListMethod = typeof(Enumerable).GetMethod("ToList").MakeGenericMethod(entryType);
var enumerable = castMethod.Invoke(null, new object[1] { rawArray });
var newList = toListMethod.Invoke(null, new object[1] { enumerable });
changedValues.Add(key, newList);
}
else if (actualType != optionType && !(actualType == typeof(List<object>)))
invalid = true;
}
else
invalid = actualType != optionType;
if (invalid)
{
string msg = String.Format("argument spec for '{0}' did not match expected type {1}: actual type {2}",
key, optionType.FullName, actualType.FullName);
throw new ArgumentException(FormatOptionsContext(msg, " - "));
}
}
// recursively validate the spec
if (key == "options" && entry.Value != null)
{
IDictionary optionsSpec = (IDictionary)entry.Value;
foreach (DictionaryEntry optionEntry in optionsSpec)
{
optionsContext.Add((string)optionEntry.Key);
IDictionary optionMeta = (IDictionary)optionEntry.Value;
ValidateArgumentSpec(optionMeta);
optionsContext.RemoveAt(optionsContext.Count - 1);
}
}
// validate the type and elements key type values are known types
if (key == "type" || key == "elements" && entry.Value != null)
{
Type valueType = entry.Value.GetType();
if (valueType == typeof(string))
{
string typeValue = (string)entry.Value;
if (!optionTypes.ContainsKey(typeValue))
{
string msg = String.Format("{0} '{1}' is unsupported", key, typeValue);
msg = String.Format("{0}. Valid types are: {1}", FormatOptionsContext(msg, " - "), String.Join(", ", optionTypes.Keys));
throw new ArgumentException(msg);
}
}
else if (!(entry.Value is Delegate))
{
string msg = String.Format("{0} must either be a string or delegate, was: {1}", key, valueType.FullName);
throw new ArgumentException(FormatOptionsContext(msg, " - "));
}
}
}
// Outside of the spec iterator, change the values that were casted above
foreach (KeyValuePair<string, object> changedValue in changedValues)
argumentSpec[changedValue.Key] = changedValue.Value;
}
private void MergeFragmentSpec(IDictionary argumentSpec, IDictionary fragment)
{
foreach (DictionaryEntry fragmentEntry in fragment)
{
string fragmentKey = fragmentEntry.Key.ToString();
if (argumentSpec.Contains(fragmentKey))
{
// We only want to add new list entries and merge dictionary new keys and values. Leave the other
// values as is in the argument spec as that takes priority over the fragment.
if (fragmentEntry.Value is IDictionary)
{
MergeFragmentSpec((IDictionary)argumentSpec[fragmentKey], (IDictionary)fragmentEntry.Value);
}
else if (fragmentEntry.Value is IList)
{
IList specValue = (IList)argumentSpec[fragmentKey];
foreach (object fragmentValue in (IList)fragmentEntry.Value)
specValue.Add(fragmentValue);
}
}
else
argumentSpec[fragmentKey] = fragmentEntry.Value;
}
}
private void SetArgumentSpecDefaults(IDictionary argumentSpec)
{
foreach (KeyValuePair<string, List<object>> metadataEntry in specDefaults)
{
List<object> defaults = metadataEntry.Value;
object defaultValue = defaults[0];
if (defaultValue != null && defaultValue.GetType() == typeof(Type).GetType())
defaultValue = Activator.CreateInstance((Type)defaultValue);
if (!argumentSpec.Contains(metadataEntry.Key))
argumentSpec[metadataEntry.Key] = defaultValue;
}
// Recursively set the defaults for any inner options.
foreach (DictionaryEntry entry in argumentSpec)
{
if (entry.Value == null || entry.Key.ToString() != "options")
continue;
IDictionary optionsSpec = (IDictionary)entry.Value;
foreach (DictionaryEntry optionEntry in optionsSpec)
{
optionsContext.Add((string)optionEntry.Key);
IDictionary optionMeta = (IDictionary)optionEntry.Value;
SetArgumentSpecDefaults(optionMeta);
optionsContext.RemoveAt(optionsContext.Count - 1);
}
}
}
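// GetAliases below expects alias metadata shaped like (illustrative values):
//   aliases = @("dest")
//   deprecated_aliases = @(@{ name = "dest"; version = "2.0.0"; collection_name = "ns.col" })
// where 'name' is mandatory and exactly one of 'version' or 'date' (a DateTime) must be present.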
private Dictionary<string, string> GetAliases(IDictionary argumentSpec, IDictionary parameters)
{
Dictionary<string, string> aliasResults = new Dictionary<string, string>();
foreach (DictionaryEntry entry in (IDictionary)argumentSpec["options"])
{
string k = (string)entry.Key;
Hashtable v = (Hashtable)entry.Value;
List<string> aliases = (List<string>)v["aliases"];
object defaultValue = v["default"];
bool required = (bool)v["required"];
if (defaultValue != null && required)
throw new ArgumentException(String.Format("required and default are mutually exclusive for {0}", k));
foreach (string alias in aliases)
{
aliasResults.Add(alias, k);
if (parameters.Contains(alias))
parameters[k] = parameters[alias];
}
List<Hashtable> deprecatedAliases = (List<Hashtable>)v["deprecated_aliases"];
foreach (Hashtable depInfo in deprecatedAliases)
{
foreach (string keyName in new List<string> { "name" })
{
if (!depInfo.ContainsKey(keyName))
{
string msg = String.Format("{0} is required in a deprecated_aliases entry", keyName);
throw new ArgumentException(FormatOptionsContext(msg, " - "));
}
}
if (!depInfo.ContainsKey("version") && !depInfo.ContainsKey("date"))
{
string msg = "One of version or date is required in a deprecated_aliases entry";
throw new ArgumentException(FormatOptionsContext(msg, " - "));
}
if (depInfo.ContainsKey("version") && depInfo.ContainsKey("date"))
{
string msg = "Only one of version or date is allowed in a deprecated_aliases entry";
throw new ArgumentException(FormatOptionsContext(msg, " - "));
}
if (depInfo.ContainsKey("date") && depInfo["date"].GetType() != typeof(DateTime))
{
string msg = "A deprecated_aliases date must be a DateTime object";
throw new ArgumentException(FormatOptionsContext(msg, " - "));
}
string collectionName = null;
if (depInfo.ContainsKey("collection_name"))
{
collectionName = (string)depInfo["collection_name"];
}
string aliasName = (string)depInfo["name"];
if (parameters.Contains(aliasName))
{
string msg = String.Format("Alias '{0}' is deprecated. See the module docs for more information", aliasName);
if (depInfo.ContainsKey("version"))
{
string depVersion = (string)depInfo["version"];
Deprecate(FormatOptionsContext(msg, " - "), depVersion, collectionName);
}
if (depInfo.ContainsKey("date"))
{
DateTime depDate = (DateTime)depInfo["date"];
Deprecate(FormatOptionsContext(msg, " - "), depDate, collectionName);
}
}
}
}
return aliasResults;
}
private void SetNoLogValues(IDictionary argumentSpec, IDictionary parameters)
{
foreach (DictionaryEntry entry in (IDictionary)argumentSpec["options"])
{
string k = (string)entry.Key;
Hashtable v = (Hashtable)entry.Value;
if ((bool)v["no_log"])
{
object noLogObject = parameters.Contains(k) ? parameters[k] : null;
string noLogString = noLogObject == null ? "" : noLogObject.ToString();
if (!String.IsNullOrEmpty(noLogString))
noLogValues.Add(noLogString);
}
string collectionName = null;
if (v.ContainsKey("removed_from_collection"))
{
collectionName = (string)v["removed_from_collection"];
}
object removedInVersion = v["removed_in_version"];
if (removedInVersion != null && parameters.Contains(k))
Deprecate(String.Format("Param '{0}' is deprecated. See the module docs for more information", k),
removedInVersion.ToString(), collectionName);
object removedAtDate = v["removed_at_date"];
if (removedAtDate != null && parameters.Contains(k))
Deprecate(String.Format("Param '{0}' is deprecated. See the module docs for more information", k),
(DateTime)removedAtDate, collectionName);
}
}
private void CheckArguments(IDictionary spec, IDictionary param, List<string> legalInputs)
{
// initially parse the params and check for unsupported ones and set internal vars
CheckUnsupportedArguments(param, legalInputs);
// Only run this check if we are at the root argument (optionsContext.Count == 0)
if (CheckMode && !(bool)spec["supports_check_mode"] && optionsContext.Count == 0)
{
Result["skipped"] = true;
Result["msg"] = String.Format("remote module ({0}) does not support check mode", ModuleName);
ExitJson();
}
IDictionary optionSpec = (IDictionary)spec["options"];
CheckMutuallyExclusive(param, (IList)spec["mutually_exclusive"]);
CheckRequiredArguments(optionSpec, param);
// set the parameter types based on the type spec value
foreach (DictionaryEntry entry in optionSpec)
{
string k = (string)entry.Key;
Hashtable v = (Hashtable)entry.Value;
object value = param.Contains(k) ? param[k] : null;
if (value != null)
{
// convert the current value to the wanted type
Delegate typeConverter;
string type;
if (v["type"].GetType() == typeof(string))
{
type = (string)v["type"];
typeConverter = optionTypes[type];
}
else
{
type = "delegate";
typeConverter = (Delegate)v["type"];
}
try
{
value = typeConverter.DynamicInvoke(value);
param[k] = value;
}
catch (Exception e)
{
string msg = String.Format("argument for {0} is of type {1} and we were unable to convert to {2}: {3}",
k, value.GetType(), type, e.InnerException.Message);
FailJson(FormatOptionsContext(msg));
}
// ensure it matches the choices if there are choices set
List<string> choices = ((List<object>)v["choices"]).Select(x => x.ToString()).Cast<string>().ToList();
if (choices.Count > 0)
{
List<string> values;
string choiceMsg;
if (type == "list")
{
values = ((List<object>)value).Select(x => x.ToString()).Cast<string>().ToList();
choiceMsg = "one or more of";
}
else
{
values = new List<string>() { value.ToString() };
choiceMsg = "one of";
}
List<string> diffList = values.Except(choices, StringComparer.OrdinalIgnoreCase).ToList();
List<string> caseDiffList = values.Except(choices).ToList();
if (diffList.Count > 0)
{
string msg = String.Format("value of {0} must be {1}: {2}. Got no match for: {3}",
k, choiceMsg, String.Join(", ", choices), String.Join(", ", diffList));
FailJson(FormatOptionsContext(msg));
}
/*
For now we will just silently accept case insensitive choices, uncomment this if we want to add it back in
else if (caseDiffList.Count > 0)
{
// For backwards compatibility with Legacy.psm1 we need to be matching choices that are not case sensitive.
// We will warn the user it was case insensitive and tell them this will become case sensitive in the future.
string msg = String.Format(
"value of {0} was a case insensitive match of {1}: {2}. Checking of choices will be case sensitive in a future Ansible release. Case insensitive matches were: {3}",
k, choiceMsg, String.Join(", ", choices), String.Join(", ", caseDiffList.Select(x => RemoveNoLogValues(x, noLogValues)))
);
Warn(FormatOptionsContext(msg));
}*/
}
}
}
CheckRequiredTogether(param, (IList)spec["required_together"]);
CheckRequiredOneOf(param, (IList)spec["required_one_of"]);
CheckRequiredIf(param, (IList)spec["required_if"]);
CheckRequiredBy(param, (IDictionary)spec["required_by"]);
// finally ensure all missing parameters are set to null and handle sub options
foreach (DictionaryEntry entry in optionSpec)
{
string k = (string)entry.Key;
IDictionary v = (IDictionary)entry.Value;
if (!param.Contains(k))
param[k] = null;
CheckSubOption(param, k, v);
}
}
private void CheckUnsupportedArguments(IDictionary param, List<string> legalInputs)
{
HashSet<string> unsupportedParameters = new HashSet<string>();
HashSet<string> caseUnsupportedParameters = new HashSet<string>();
List<string> removedParameters = new List<string>();
foreach (DictionaryEntry entry in param)
{
string paramKey = (string)entry.Key;
if (!legalInputs.Contains(paramKey, StringComparer.OrdinalIgnoreCase))
unsupportedParameters.Add(paramKey);
else if (!legalInputs.Contains(paramKey))
// For backwards compatibility we do not care about the case but we need to warn the users as this will
// change in a future Ansible release.
caseUnsupportedParameters.Add(paramKey);
else if (paramKey.StartsWith("_ansible_"))
{
removedParameters.Add(paramKey);
string key = paramKey.Replace("_ansible_", "");
// skip setting NoLog if NoLog is already set to true (set by the module)
// or there's no mapping for this key
if ((key == "no_log" && NoLog == true) || (passVars[key] == null))
continue;
object value = entry.Value;
if (passBools.Contains(key))
value = ParseBool(value);
else if (passInts.Contains(key))
value = ParseInt(value);
string propertyName = passVars[key];
PropertyInfo property = typeof(AnsibleModule).GetProperty(propertyName);
FieldInfo field = typeof(AnsibleModule).GetField(propertyName, BindingFlags.NonPublic | BindingFlags.Instance);
if (property != null)
property.SetValue(this, value, null);
else if (field != null)
field.SetValue(this, value);
else
FailJson(String.Format("implementation error: unknown AnsibleModule property {0}", propertyName));
}
}
foreach (string parameter in removedParameters)
param.Remove(parameter);
if (unsupportedParameters.Count > 0)
{
legalInputs.RemoveAll(x => passVars.Keys.Contains(x.Replace("_ansible_", "")));
string msg = String.Format("Unsupported parameters for ({0}) module: {1}", ModuleName, String.Join(", ", unsupportedParameters));
msg = String.Format("{0}. Supported parameters include: {1}", FormatOptionsContext(msg), String.Join(", ", legalInputs));
FailJson(msg);
}
/*
// Uncomment when we want to start warning users around options that are not a case sensitive match to the spec
if (caseUnsupportedParameters.Count > 0)
{
legalInputs.RemoveAll(x => passVars.Keys.Contains(x.Replace("_ansible_", "")));
string msg = String.Format("Parameters for ({0}) was a case insensitive match: {1}", ModuleName, String.Join(", ", caseUnsupportedParameters));
msg = String.Format("{0}. Module options will become case sensitive in a future Ansible release. Supported parameters include: {1}",
FormatOptionsContext(msg), String.Join(", ", legalInputs));
Warn(msg);
}*/
// Make sure we convert all the incorrect case params to the ones set by the module spec
foreach (string key in caseUnsupportedParameters)
{
string correctKey = legalInputs[legalInputs.FindIndex(s => s.Equals(key, StringComparison.OrdinalIgnoreCase))];
object value = param[key];
param.Remove(key);
param.Add(correctKey, value);
}
}
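// The restriction checks below consume the spec-level lists, e.g. (illustrative option names):
//   mutually_exclusive = @(,@("path", "content"))      - fails when both 'path' and 'content' are supplied
//   required_together  = @(,@("username", "password")) - fails when only one of the pair is supplied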
private void CheckMutuallyExclusive(IDictionary param, IList mutuallyExclusive)
{
if (mutuallyExclusive == null)
return;
foreach (object check in mutuallyExclusive)
{
List<string> mutualCheck = ((IList)check).Cast<string>().ToList();
int count = 0;
foreach (string entry in mutualCheck)
if (param.Contains(entry))
count++;
if (count > 1)
{
string msg = String.Format("parameters are mutually exclusive: {0}", String.Join(", ", mutualCheck));
FailJson(FormatOptionsContext(msg));
}
}
}
private void CheckRequiredArguments(IDictionary spec, IDictionary param)
{
List<string> missing = new List<string>();
foreach (DictionaryEntry entry in spec)
{
string k = (string)entry.Key;
Hashtable v = (Hashtable)entry.Value;
// set defaults for values not already set
object defaultValue = v["default"];
if (defaultValue != null && !param.Contains(k))
param[k] = defaultValue;
// check required arguments
bool required = (bool)v["required"];
if (required && !param.Contains(k))
missing.Add(k);
}
if (missing.Count > 0)
{
string msg = String.Format("missing required arguments: {0}", String.Join(", ", missing));
FailJson(FormatOptionsContext(msg));
}
}
private void CheckRequiredTogether(IDictionary param, IList requiredTogether)
{
if (requiredTogether == null)
return;
foreach (object check in requiredTogether)
{
List<string> requiredCheck = ((IList)check).Cast<string>().ToList();
List<bool> found = new List<bool>();
foreach (string field in requiredCheck)
if (param.Contains(field))
found.Add(true);
else
found.Add(false);
if (found.Contains(true) && found.Contains(false))
{
string msg = String.Format("parameters are required together: {0}", String.Join(", ", requiredCheck));
FailJson(FormatOptionsContext(msg));
}
}
}
private void CheckRequiredOneOf(IDictionary param, IList requiredOneOf)
{
if (requiredOneOf == null)
return;
foreach (object check in requiredOneOf)
{
List<string> requiredCheck = ((IList)check).Cast<string>().ToList();
int count = 0;
foreach (string field in requiredCheck)
if (param.Contains(field))
count++;
if (count == 0)
{
string msg = String.Format("one of the following is required: {0}", String.Join(", ", requiredCheck));
FailJson(FormatOptionsContext(msg));
}
}
}
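// required_if entries are 3 or 4 element lists, e.g. (illustrative option names):
//   @("state", "present", @("path"))                    - 'path' is required when state=present
//   @("state", "present", @("path", "content"), $true)  - the $true flag means any one of the listed options is enough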
private void CheckRequiredIf(IDictionary param, IList requiredIf)
{
if (requiredIf == null)
return;
foreach (object check in requiredIf)
{
IList requiredCheck = (IList)check;
List<string> missing = new List<string>();
List<string> missingFields = new List<string>();
int maxMissingCount = 1;
bool oneRequired = false;
if (requiredCheck.Count < 3 || requiredCheck.Count > 4)
FailJson(String.Format("internal error: invalid required_if value count of {0}, expecting 3 or 4 entries", requiredCheck.Count));
else if (requiredCheck.Count == 4)
oneRequired = (bool)requiredCheck[3];
string key = (string)requiredCheck[0];
object val = requiredCheck[1];
IList requirements = (IList)requiredCheck[2];
if (ParseStr(param[key]) != ParseStr(val))
continue;
string term = "all";
if (oneRequired)
{
maxMissingCount = requirements.Count;
term = "any";
}
foreach (string required in requirements.Cast<string>())
if (!param.Contains(required))
missing.Add(required);
if (missing.Count >= maxMissingCount)
{
string msg = String.Format("{0} is {1} but {2} of the following are missing: {3}",
key, val.ToString(), term, String.Join(", ", missing));
FailJson(FormatOptionsContext(msg));
}
}
}
private void CheckRequiredBy(IDictionary param, IDictionary requiredBy)
{
foreach (DictionaryEntry entry in requiredBy)
{
string key = (string)entry.Key;
if (!param.Contains(key))
continue;
List<string> missing = new List<string>();
List<string> requires = ParseList(entry.Value).Cast<string>().ToList();
foreach (string required in requires)
if (!param.Contains(required))
missing.Add(required);
if (missing.Count > 0)
{
string msg = String.Format("missing parameter(s) required by '{0}': {1}", key, String.Join(", ", missing));
FailJson(FormatOptionsContext(msg));
}
}
}
private void CheckSubOption(IDictionary param, string key, IDictionary spec)
{
object value = param[key];
string type;
if (spec["type"].GetType() == typeof(string))
type = (string)spec["type"];
else
type = "delegate";
string elements = null;
Delegate typeConverter = null;
if (spec["elements"] != null && spec["elements"].GetType() == typeof(string))
{
elements = (string)spec["elements"];
typeConverter = optionTypes[elements];
}
else if (spec["elements"] != null)
{
elements = "delegate";
typeConverter = (Delegate)spec["elements"];
}
if (!(type == "dict" || (type == "list" && elements != null)))
// either not a dict, or list with the elements set, so continue
return;
else if (type == "list")
{
// cast each list element to the type specified
if (value == null)
return;
List<object> newValue = new List<object>();
foreach (object element in (List<object>)value)
{
if (elements == "dict")
newValue.Add(ParseSubSpec(spec, element, key));
else
{
try
{
object newElement = typeConverter.DynamicInvoke(element);
newValue.Add(newElement);
}
catch (Exception e)
{
string msg = String.Format("argument for list entry {0} is of type {1} and we were unable to convert to {2}: {3}",
key, element.GetType(), elements, e.Message);
FailJson(FormatOptionsContext(msg));
}
}
}
param[key] = newValue;
}
else
param[key] = ParseSubSpec(spec, value, key);
}
private object ParseSubSpec(IDictionary spec, object value, string context)
{
bool applyDefaults = (bool)spec["apply_defaults"];
// set entry to an empty dict if apply_defaults is set
IDictionary optionsSpec = (IDictionary)spec["options"];
if (applyDefaults && optionsSpec.Keys.Count > 0 && value == null)
value = new Dictionary<string, object>();
else if (optionsSpec.Keys.Count == 0 || value == null)
return value;
optionsContext.Add(context);
Dictionary<string, object> newValue = (Dictionary<string, object>)ParseDict(value);
Dictionary<string, string> aliases = GetAliases(spec, newValue);
SetNoLogValues(spec, newValue);
List<string> subLegalInputs = optionsSpec.Keys.Cast<string>().ToList();
subLegalInputs.AddRange(aliases.Keys.Cast<string>().ToList());
CheckArguments(spec, newValue, subLegalInputs);
optionsContext.RemoveAt(optionsContext.Count - 1);
return newValue;
}
private string GetFormattedResults(Dictionary<string, object> result)
{
if (!result.ContainsKey("invocation"))
result["invocation"] = new Dictionary<string, object>() { { "module_args", RemoveNoLogValues(Params, noLogValues) } };
if (warnings.Count > 0)
result["warnings"] = warnings;
if (deprecations.Count > 0)
result["deprecations"] = deprecations;
if (Diff.Count > 0 && DiffMode)
result["diff"] = Diff;
return ToJson(result);
}
private string FormatLogData(object data, int indentLevel)
{
if (data == null)
return "$null";
string msg = "";
if (data is IList)
{
string newMsg = "";
foreach (object value in (IList)data)
{
string entryValue = FormatLogData(value, indentLevel + 2);
newMsg += String.Format("\r\n{0}- {1}", new String(' ', indentLevel), entryValue);
}
msg += newMsg;
}
else if (data is IDictionary)
{
bool start = true;
foreach (DictionaryEntry entry in (IDictionary)data)
{
string newMsg = FormatLogData(entry.Value, indentLevel + 2);
if (!start)
msg += String.Format("\r\n{0}", new String(' ', indentLevel));
msg += String.Format("{0}: {1}", (string)entry.Key, newMsg);
start = false;
}
}
else
msg = (string)RemoveNoLogValues(ParseStr(data), noLogValues);
return msg;
}
private object RemoveNoLogValues(object value, HashSet<string> noLogStrings)
{
Queue<Tuple<object, object>> deferredRemovals = new Queue<Tuple<object, object>>();
object newValue = RemoveValueConditions(value, noLogStrings, deferredRemovals);
while (deferredRemovals.Count > 0)
{
Tuple<object, object> data = deferredRemovals.Dequeue();
object oldData = data.Item1;
object newData = data.Item2;
if (oldData is IDictionary)
{
foreach (DictionaryEntry entry in (IDictionary)oldData)
{
object newElement = RemoveValueConditions(entry.Value, noLogStrings, deferredRemovals);
((IDictionary)newData).Add((string)entry.Key, newElement);
}
}
else
{
foreach (object element in (IList)oldData)
{
object newElement = RemoveValueConditions(element, noLogStrings, deferredRemovals);
((IList)newData).Add(newElement);
}
}
}
return newValue;
}
private object RemoveValueConditions(object value, HashSet<string> noLogStrings, Queue<Tuple<object, object>> deferredRemovals)
{
if (value == null)
return value;
Type valueType = value.GetType();
HashSet<Type> numericTypes = new HashSet<Type>
{
typeof(byte), typeof(sbyte), typeof(short), typeof(ushort), typeof(int), typeof(uint),
typeof(long), typeof(ulong), typeof(decimal), typeof(double), typeof(float)
};
if (numericTypes.Contains(valueType) || valueType == typeof(bool))
{
string valueString = ParseStr(value);
if (noLogStrings.Contains(valueString))
return "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER";
foreach (string omitMe in noLogStrings)
if (valueString.Contains(omitMe))
return "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER";
}
else if (valueType == typeof(DateTime))
value = ((DateTime)value).ToString("o");
else if (value is IList)
{
List<object> newValue = new List<object>();
deferredRemovals.Enqueue(new Tuple<object, object>((IList)value, newValue));
value = newValue;
}
else if (value is IDictionary)
{
Hashtable newValue = new Hashtable();
deferredRemovals.Enqueue(new Tuple<object, object>((IDictionary)value, newValue));
value = newValue;
}
else
{
string stringValue = value.ToString();
if (noLogStrings.Contains(stringValue))
return "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER";
foreach (string omitMe in noLogStrings)
if (stringValue.Contains(omitMe))
return (stringValue).Replace(omitMe, "********");
value = stringValue;
}
return value;
}
private void CleanupFiles(object s, EventArgs ev)
{
foreach (string path in cleanupFiles)
{
try
{
#if WINDOWS
FileCleaner.Delete(path);
#else
if (File.Exists(path))
File.Delete(path);
else if (Directory.Exists(path))
Directory.Delete(path, true);
#endif
}
catch (Exception e)
{
Warn(string.Format("Failure cleaning temp path '{0}': {1} {2}",
path, e.GetType().Name, e.Message));
}
}
cleanupFiles = new List<string>();
}
private string FormatOptionsContext(string msg, string prefix = " ")
{
if (optionsContext.Count > 0)
msg += String.Format("{0}found in {1}", prefix, String.Join(" -> ", optionsContext));
return msg;
}
[DllImport("kernel32.dll")]
private static extern IntPtr GetConsoleWindow();
private static void ExitModule(int rc)
{
// When running in a Runspace Environment.Exit will kill the entire
// process which is not what we want, detect if we are in a
// Runspace and call a ScriptBlock with exit instead.
if (Runspace.DefaultRunspace != null)
ScriptBlock.Create("Set-Variable -Name LASTEXITCODE -Value $args[0] -Scope Global; exit $args[0]").Invoke(rc);
else
{
// Used for local debugging in Visual Studio
if (System.Diagnostics.Debugger.IsAttached)
{
Console.WriteLine("Press enter to continue...");
Console.ReadLine();
}
Environment.Exit(rc);
}
}
private static void WriteLineModule(string line)
{
Console.WriteLine(line);
}
}
#if WINDOWS
// Windows is tricky as AVs and other software might still
// have an open handle to files causing a failure. Use a
// custom deletion mechanism to remove the files/dirs.
// https://github.com/ansible/ansible/pull/80247
internal static class FileCleaner
{
private const int FileDispositionInformation = 13;
private const int FileDispositionInformationEx = 64;
private const int ERROR_INVALID_PARAMETER = 0x00000057;
private const int ERROR_DIR_NOT_EMPTY = 0x00000091;
private static bool? _supportsPosixDelete = null;
[Flags()]
public enum DispositionFlags : uint
{
FILE_DISPOSITION_DO_NOT_DELETE = 0x00000000,
FILE_DISPOSITION_DELETE = 0x00000001,
FILE_DISPOSITION_POSIX_SEMANTICS = 0x00000002,
FILE_DISPOSITION_FORCE_IMAGE_SECTION_CHECK = 0x00000004,
FILE_DISPOSITION_ON_CLOSE = 0x00000008,
FILE_DISPOSITION_IGNORE_READONLY_ATTRIBUTE = 0x00000010,
}
[Flags()]
public enum FileFlags : uint
{
FILE_FLAG_OPEN_NO_RECALL = 0x00100000,
FILE_FLAG_OPEN_REPARSE_POINT = 0x00200000,
FILE_FLAG_SESSION_AWARE = 0x00800000,
FILE_FLAG_POSIX_SEMANTICS = 0x01000000,
FILE_FLAG_BACKUP_SEMANTICS = 0x02000000,
FILE_FLAG_DELETE_ON_CLOSE = 0x04000000,
FILE_FLAG_SEQUENTIAL_SCAN = 0x08000000,
FILE_FLAG_RANDOM_ACCESS = 0x10000000,
FILE_FLAG_NO_BUFFERING = 0x20000000,
FILE_FLAG_OVERLAPPED = 0x40000000,
FILE_FLAG_WRITE_THROUGH = 0x80000000,
}
[DllImport("Kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
private static extern SafeFileHandle CreateFileW(
[MarshalAs(UnmanagedType.LPWStr)] string lpFileName,
FileSystemRights dwDesiredAccess,
FileShare dwShareMode,
IntPtr lpSecurityAttributes,
FileMode dwCreationDisposition,
uint dwFlagsAndAttributes,
IntPtr hTemplateFile);
private static SafeFileHandle CreateFile(string path, FileSystemRights access, FileShare share, FileMode mode,
FileAttributes attributes, FileFlags flags)
{
uint flagsAndAttributes = (uint)attributes | (uint)flags;
SafeFileHandle handle = CreateFileW(path, access, share, IntPtr.Zero, mode, flagsAndAttributes,
IntPtr.Zero);
if (handle.IsInvalid)
{
int errCode = Marshal.GetLastWin32Error();
string msg = string.Format("CreateFileW({0}) failed 0x{1:X8}: {2}",
path, errCode, new Win32Exception(errCode).Message);
throw new Win32Exception(errCode, msg);
}
return handle;
}
[DllImport("Ntdll.dll")]
private static extern int NtSetInformationFile(
SafeFileHandle FileHandle,
out IntPtr IoStatusBlock,
ref int FileInformation,
int Length,
int FileInformationClass);
[DllImport("Ntdll.dll")]
private static extern int RtlNtStatusToDosError(
int Status);
public static void Delete(string path)
{
if (File.Exists(path))
{
DeleteEntry(path, FileAttributes.ReadOnly);
}
else if (Directory.Exists(path))
{
Queue<DirectoryInfo> dirQueue = new Queue<DirectoryInfo>();
dirQueue.Enqueue(new DirectoryInfo(path));
bool nonEmptyDirs = false;
HashSet<string> processedDirs = new HashSet<string>();
while (dirQueue.Count > 0)
{
DirectoryInfo currentDir = dirQueue.Dequeue();
bool deleteDir = true;
if (processedDirs.Add(currentDir.FullName))
{
foreach (FileSystemInfo entry in currentDir.EnumerateFileSystemInfos())
{
// Tries to delete each entry. Failures are ignored
// as they will be picked up when the dir is
// deleted and not empty.
if (entry is DirectoryInfo)
{
if ((entry.Attributes & FileAttributes.ReparsePoint) != 0)
{
// If it's a reparse point, just delete it directly.
DeleteEntry(entry.FullName, entry.Attributes, ignoreFailure: true);
}
else
{
// Add the dir to the queue to delete and it will be processed next round.
dirQueue.Enqueue((DirectoryInfo)entry);
deleteDir = false;
}
}
else
{
DeleteEntry(entry.FullName, entry.Attributes, ignoreFailure: true);
}
}
}
if (deleteDir)
{
try
{
DeleteEntry(currentDir.FullName, FileAttributes.Directory);
}
catch (Win32Exception e)
{
if (e.NativeErrorCode == ERROR_DIR_NOT_EMPTY)
{
nonEmptyDirs = true;
}
else
{
throw;
}
}
}
else
{
dirQueue.Enqueue(currentDir);
}
}
if (nonEmptyDirs)
{
throw new IOException("Directory contains files still open by other processes");
}
}
}
private static void DeleteEntry(string path, FileAttributes attr, bool ignoreFailure = false)
{
try
{
if ((attr & FileAttributes.ReadOnly) != 0)
{
// Windows does not allow files set with ReadOnly to be
// deleted. Pre-emptively unset the attribute.
// FILE_DISPOSITION_IGNORE_READONLY_ATTRIBUTE is quite new,
// look at using that flag with POSIX delete once Server 2019
// is the baseline.
File.SetAttributes(path, FileAttributes.Normal);
}
// REPARSE - Only touch the symlink itself and not the target
// BACKUP - Needed for dir handles, bypasses access checks for admins
// DELETE_ON_CLOSE is not used as it interferes with the POSIX delete
FileFlags flags = FileFlags.FILE_FLAG_OPEN_REPARSE_POINT |
FileFlags.FILE_FLAG_BACKUP_SEMANTICS;
using (SafeFileHandle fileHandle = CreateFile(path, FileSystemRights.Delete,
FileShare.ReadWrite | FileShare.Delete, FileMode.Open, FileAttributes.Normal, flags))
{
if (_supportsPosixDelete == null || _supportsPosixDelete == true)
{
// A POSIX delete will delete the filesystem entry even if
// it's still opened by another process so favour that if
// available.
DispositionFlags deleteFlags = DispositionFlags.FILE_DISPOSITION_DELETE |
DispositionFlags.FILE_DISPOSITION_POSIX_SEMANTICS;
SetInformationFile(fileHandle, FileDispositionInformationEx, (int)deleteFlags);
if (_supportsPosixDelete == true)
{
return;
}
}
// FileDispositionInformation takes in a struct with only a BOOLEAN value.
// Using an int will also do the same thing to set that flag to true.
SetInformationFile(fileHandle, FileDispositionInformation, Int32.MaxValue);
}
}
catch
{
if (!ignoreFailure)
{
throw;
}
}
}
private static void SetInformationFile(SafeFileHandle handle, int infoClass, int value)
{
IntPtr ioStatusBlock = IntPtr.Zero;
int ntStatus = NtSetInformationFile(handle, out ioStatusBlock, ref value,
Marshal.SizeOf(typeof(int)), infoClass);
if (ntStatus != 0)
{
int errCode = RtlNtStatusToDosError(ntStatus);
// The POSIX delete was added in Server 2016 (Win 10 14393/Redstone 1)
// Mark this flag so we don't try again.
if (infoClass == FileDispositionInformationEx && _supportsPosixDelete == null &&
errCode == ERROR_INVALID_PARAMETER)
{
_supportsPosixDelete = false;
return;
}
string msg = string.Format("NtSetInformationFile() failed 0x{0:X8}: {1}",
errCode, new Win32Exception(errCode).Message);
throw new Win32Exception(errCode, msg);
}
if (infoClass == FileDispositionInformationEx)
{
_supportsPosixDelete = true;
}
}
}
#endif
}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,304 |
import_role: rolespec_validate cannot be set dynamically
|
### Summary
Because of slowness I do not want to run argument spec validation on every run, even though it is really helpful during inventory creation.
Also, the automatically inserted validation cannot be skipped by tags, so it behaves like `tags: always`.
Fortunately there is a `rolespec_validate` option, which can turn this behaviour off. The trouble is that it cannot be set with `module_defaults` nor with any variable/fact.
It is also not possible to control this with ansible.cfg.
### Issue Type
Bug Report
### Component Name
import_role
### Ansible Version
```console
ansible [core 2.14.3]
config file = /hidden/ansible.cfg
configured module search path = ['/hidden/plugins/modules']
ansible python module location = /home/vooon/.cache/pypoetry/virtualenvs/test-3I0hE4B9-py3.10/lib/python3.10/site-packages/ansible
python version = 3.10.7 (main, Mar 10 2023, 10:47:39) [GCC 12.2.0] (/home/vooon/.cache/pypoetry/virtualenvs/test-3I0hE4B9-py3.10/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /hidden/ansible.cfg
DEFAULT_ACTION_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/action']
DEFAULT_FILTER_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/filter']
DEFAULT_FORCE_HANDLERS(/hidden/ansible.cfg) = True
DEFAULT_HOST_LIST(/hidden/ansible.cfg) = ['/hidden/hosts']
DEFAULT_MODULE_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/modules']
DEFAULT_MODULE_UTILS_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/module_utils']
DEFAULT_TEST_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/test_plugins']
INVENTORY_ENABLED(/hidden/ansible.cfg) = ['yaml', 'openstack', 'host_list', 'script', 'ini', 'auto']
```
### OS / Environment
Ubuntu 22.10, CentOS 8 Stream, doesn't matter.
### Steps to Reproduce
Example playbook:
```yaml
- name: Some play
module_defaults:
ansible.builtin.import_role:
rolespec_validate: false # doesn't work
```
Or example task:
```yaml
- name: Some role import
ansible.builtin.import_role:
name: lib-something
rolespec_validate: "{{ validate|default(false)|bool }}" # doesn't work too. it's non-empty string, so for conditions it's True
```
The only form that works is this:
```yaml
- name: Some role import
ansible.builtin.import_role:
name: lib-something
rolespec_validate: false
```
### Expected Results
I expect not to see `lib-something : Validating arguments against arg spec 'main' -- Test` unless I run with `-e validate=1`.
### Actual Results
```console
Not relevant. Bug must be around that line:
https://github.com/ansible/ansible/blob/b3986131207266e682029f361e6c7daa87e1d7eb/lib/ansible/playbook/role_include.py#L164-L165
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80304
|
https://github.com/ansible/ansible/pull/80320
|
a45dd2a01c9b254bfa10fbd119f8ea99cf881992
|
666188892ed0833e87803a3e80c58923e4cd6bca
| 2023-03-25T17:14:39Z |
python
| 2023-03-30T22:20:10Z |
changelogs/fragments/fix-templating-private-role-FA.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,304 |
import_role: rolespec_validate cannot be set dynamically
|
### Summary
Because of slowness I do not want to run argument spec validation on every run, even though it is really helpful during inventory creation.
Also, the automatically inserted validation cannot be skipped by tags, so it behaves like `tags: always`.
Fortunately there is a `rolespec_validate` option, which can turn this behaviour off. The trouble is that it cannot be set with `module_defaults` nor with any variable/fact.
It is also not possible to control this with ansible.cfg.
### Issue Type
Bug Report
### Component Name
import_role
### Ansible Version
```console
ansible [core 2.14.3]
config file = /hidden/ansible.cfg
configured module search path = ['/hidden/plugins/modules']
ansible python module location = /home/vooon/.cache/pypoetry/virtualenvs/test-3I0hE4B9-py3.10/lib/python3.10/site-packages/ansible
python version = 3.10.7 (main, Mar 10 2023, 10:47:39) [GCC 12.2.0] (/home/vooon/.cache/pypoetry/virtualenvs/test-3I0hE4B9-py3.10/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /hidden/ansible.cfg
DEFAULT_ACTION_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/action']
DEFAULT_FILTER_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/filter']
DEFAULT_FORCE_HANDLERS(/hidden/ansible.cfg) = True
DEFAULT_HOST_LIST(/hidden/ansible.cfg) = ['/hidden/hosts']
DEFAULT_MODULE_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/modules']
DEFAULT_MODULE_UTILS_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/module_utils']
DEFAULT_TEST_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/test_plugins']
INVENTORY_ENABLED(/hidden/ansible.cfg) = ['yaml', 'openstack', 'host_list', 'script', 'ini', 'auto']
```
### OS / Environment
Ubuntu 22.10, CentOS 8 Stream, doesn't matter.
### Steps to Reproduce
Example playbook:
```yaml
- name: Some play
module_defaults:
ansible.builtin.import_role:
rolespec_validate: false # doesn't work
```
Or example task:
```yaml
- name: Some role import
ansible.builtin.import_role:
name: lib-something
rolespec_validate: "{{ validate|default(false)|bool }}" # doesn't work too. it's non-empty string, so for conditions it's True
```
The only form that works is this:
```yaml
- name: Some role import
ansible.builtin.import_role:
name: lib-something
rolespec_validate: false
```
### Expected Results
I expect not to see `lib-something : Validating arguments against arg spec 'main' -- Test` unless I run with `-e validate=1`.
### Actual Results
```console
Not relevant. Bug must be around that line:
https://github.com/ansible/ansible/blob/b3986131207266e682029f361e6c7daa87e1d7eb/lib/ansible/playbook/role_include.py#L164-L165
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80304
|
https://github.com/ansible/ansible/pull/80320
|
a45dd2a01c9b254bfa10fbd119f8ea99cf881992
|
666188892ed0833e87803a3e80c58923e4cd6bca
| 2023-03-25T17:14:39Z |
python
| 2023-03-30T22:20:10Z |
lib/ansible/playbook/helpers.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from ansible import constants as C
from ansible.errors import AnsibleParserError, AnsibleUndefinedVariable, AnsibleAssertionError
from ansible.module_utils._text import to_native
from ansible.parsing.mod_args import ModuleArgsParser
from ansible.utils.display import Display
display = Display()
def load_list_of_blocks(ds, play, parent_block=None, role=None, task_include=None, use_handlers=False, variable_manager=None, loader=None):
'''
Given a list of mixed task/block data (parsed from YAML),
return a list of Block() objects, where implicit blocks
are created for each bare Task.
'''
# we import here to prevent a circular dependency with imports
from ansible.playbook.block import Block
if not isinstance(ds, (list, type(None))):
raise AnsibleAssertionError('%s should be a list or None but is %s' % (ds, type(ds)))
block_list = []
if ds:
count = iter(range(len(ds)))
for i in count:
block_ds = ds[i]
# Implicit blocks are created by bare tasks listed in a play without
# an explicit block statement. If we have two implicit blocks in a row,
# squash them down to a single block to save processing time later.
implicit_blocks = []
while block_ds is not None and not Block.is_block(block_ds):
implicit_blocks.append(block_ds)
i += 1
# Advance the iterator, so we don't repeat
next(count, None)
try:
block_ds = ds[i]
except IndexError:
block_ds = None
# Loop both implicit blocks and block_ds as block_ds is the next in the list
for b in (implicit_blocks, block_ds):
if b:
block_list.append(
Block.load(
b,
play=play,
parent_block=parent_block,
role=role,
task_include=task_include,
use_handlers=use_handlers,
variable_manager=variable_manager,
loader=loader,
)
)
return block_list
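# For illustration: given bare tasks followed by an explicit block, e.g.
#   - command: /bin/true
#   - command: /bin/false
#   - block:
#       - debug: msg=done
# the two bare tasks are squashed into a single implicit Block and the explicit
# block becomes its own Block, preserving the original ordering.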
def load_list_of_tasks(ds, play, block=None, role=None, task_include=None, use_handlers=False, variable_manager=None, loader=None):
'''
Given a list of task datastructures (parsed from YAML),
return a list of Task() or TaskInclude() objects.
'''
# we import here to prevent a circular dependency with imports
from ansible.playbook.block import Block
from ansible.playbook.handler import Handler
from ansible.playbook.task import Task
from ansible.playbook.task_include import TaskInclude
from ansible.playbook.role_include import IncludeRole
from ansible.playbook.handler_task_include import HandlerTaskInclude
from ansible.template import Templar
from ansible.utils.plugin_docs import get_versioned_doclink
if not isinstance(ds, list):
raise AnsibleAssertionError('The ds (%s) should be a list but was a %s' % (ds, type(ds)))
task_list = []
for task_ds in ds:
if not isinstance(task_ds, dict):
raise AnsibleAssertionError('The ds (%s) should be a dict but was a %s' % (ds, type(ds)))
if 'block' in task_ds:
if use_handlers:
raise AnsibleParserError("Using a block as a handler is not supported.", obj=task_ds)
t = Block.load(
task_ds,
play=play,
parent_block=block,
role=role,
task_include=task_include,
use_handlers=use_handlers,
variable_manager=variable_manager,
loader=loader,
)
task_list.append(t)
else:
args_parser = ModuleArgsParser(task_ds)
try:
(action, args, delegate_to) = args_parser.parse(skip_action_validation=True)
except AnsibleParserError as e:
# if the raised exception was created with obj=ds args, then it includes the detail
# so we don't need to add it and can just re-raise.
if e.obj:
raise
# But if it wasn't, we can add the yaml object now to get more detail
raise AnsibleParserError(to_native(e), obj=task_ds, orig_exc=e)
if action in C._ACTION_ALL_INCLUDE_IMPORT_TASKS:
if use_handlers:
include_class = HandlerTaskInclude
else:
include_class = TaskInclude
t = include_class.load(
task_ds,
block=block,
role=role,
task_include=None,
variable_manager=variable_manager,
loader=loader
)
all_vars = variable_manager.get_vars(play=play, task=t)
templar = Templar(loader=loader, variables=all_vars)
# check to see if this include is dynamic or static:
# 1. the user has set the 'static' option to false or true
# 2. one of the appropriate config options was set
if action in C._ACTION_INCLUDE_TASKS:
is_static = False
elif action in C._ACTION_IMPORT_TASKS:
is_static = True
else:
include_link = get_versioned_doclink('user_guide/playbooks_reuse_includes.html')
display.deprecated('"include" is deprecated, use include_tasks/import_tasks instead. See %s for details' % include_link, "2.16")
is_static = not templar.is_template(t.args['_raw_params']) and t.all_parents_static() and not t.loop
if is_static:
if t.loop is not None:
if action in C._ACTION_IMPORT_TASKS:
raise AnsibleParserError("You cannot use loops on 'import_tasks' statements. You should use 'include_tasks' instead.", obj=task_ds)
else:
raise AnsibleParserError("You cannot use 'static' on an include with a loop", obj=task_ds)
# we set a flag to indicate this include was static
t.statically_loaded = True
# handle relative includes by walking up the list of parent include
# tasks and checking the relative result to see if it exists
parent_include = block
cumulative_path = None
found = False
subdir = 'tasks'
if use_handlers:
subdir = 'handlers'
while parent_include is not None:
if not isinstance(parent_include, TaskInclude):
parent_include = parent_include._parent
continue
try:
parent_include_dir = os.path.dirname(templar.template(parent_include.args.get('_raw_params')))
except AnsibleUndefinedVariable as e:
if not parent_include.statically_loaded:
raise AnsibleParserError(
"Error when evaluating variable in dynamic parent include path: %s. "
"When using static imports, the parent dynamic include cannot utilize host facts "
"or variables from inventory" % parent_include.args.get('_raw_params'),
obj=task_ds,
suppress_extended_error=True,
orig_exc=e
)
raise
if cumulative_path is None:
cumulative_path = parent_include_dir
elif not os.path.isabs(cumulative_path):
cumulative_path = os.path.join(parent_include_dir, cumulative_path)
include_target = templar.template(t.args['_raw_params'])
if t._role:
new_basedir = os.path.join(t._role._role_path, subdir, cumulative_path)
include_file = loader.path_dwim_relative(new_basedir, subdir, include_target)
else:
include_file = loader.path_dwim_relative(loader.get_basedir(), cumulative_path, include_target)
if os.path.exists(include_file):
found = True
break
else:
parent_include = parent_include._parent
if not found:
try:
include_target = templar.template(t.args['_raw_params'])
except AnsibleUndefinedVariable as e:
raise AnsibleParserError(
"Error when evaluating variable in import path: %s.\n\n"
"When using static imports, ensure that any variables used in their names are defined in vars/vars_files\n"
"or extra-vars passed in from the command line. Static imports cannot use variables from facts or inventory\n"
"sources like group or host vars." % t.args['_raw_params'],
obj=task_ds,
suppress_extended_error=True,
orig_exc=e)
if t._role:
include_file = loader.path_dwim_relative(t._role._role_path, subdir, include_target)
else:
include_file = loader.path_dwim(include_target)
data = loader.load_from_file(include_file)
if not data:
display.warning('file %s is empty and had no tasks to include' % include_file)
continue
elif not isinstance(data, list):
raise AnsibleParserError("included task files must contain a list of tasks", obj=data)
# since we can't send callbacks here, we display a message directly in
# the same fashion used by the on_include callback. We also do it here,
# because the recursive nature of helper methods means we may be loading
# nested includes, and we want the include order printed correctly
display.vv("statically imported: %s" % include_file)
ti_copy = t.copy(exclude_parent=True)
ti_copy._parent = block
included_blocks = load_list_of_blocks(
data,
play=play,
parent_block=None,
task_include=ti_copy,
role=role,
use_handlers=use_handlers,
loader=loader,
variable_manager=variable_manager,
)
tags = ti_copy.tags[:]
# now we extend the tags on each of the included blocks
for b in included_blocks:
b.tags = list(set(b.tags).union(tags))
# END FIXME
# FIXME: handlers shouldn't need this special handling, but do
# right now because they don't iterate blocks correctly
if use_handlers:
for b in included_blocks:
task_list.extend(b.block)
else:
task_list.extend(included_blocks)
else:
t.is_static = False
task_list.append(t)
elif action in C._ACTION_ALL_PROPER_INCLUDE_IMPORT_ROLES:
if use_handlers:
raise AnsibleParserError(f"Using '{action}' as a handler is not supported.", obj=task_ds)
ir = IncludeRole.load(
task_ds,
block=block,
role=role,
task_include=None,
variable_manager=variable_manager,
loader=loader,
)
# 1. the user has set the 'static' option to false or true
# 2. one of the appropriate config options was set
is_static = False
if action in C._ACTION_IMPORT_ROLE:
is_static = True
if is_static:
if ir.loop is not None:
if action in C._ACTION_IMPORT_ROLE:
raise AnsibleParserError("You cannot use loops on 'import_role' statements. You should use 'include_role' instead.", obj=task_ds)
else:
raise AnsibleParserError("You cannot use 'static' on an include_role with a loop", obj=task_ds)
# we set a flag to indicate this include was static
ir.statically_loaded = True
# template the role name now, if needed
all_vars = variable_manager.get_vars(play=play, task=ir)
templar = Templar(loader=loader, variables=all_vars)
ir._role_name = templar.template(ir._role_name)
# uses compiled list from object
blocks, _ = ir.get_block_list(variable_manager=variable_manager, loader=loader)
task_list.extend(blocks)
else:
# passes task object itself for latter generation of list
task_list.append(ir)
else:
if use_handlers:
t = Handler.load(task_ds, block=block, role=role, task_include=task_include, variable_manager=variable_manager, loader=loader)
else:
t = Task.load(task_ds, block=block, role=role, task_include=task_include, variable_manager=variable_manager, loader=loader)
task_list.append(t)
return task_list
def load_list_of_roles(ds, play, current_role_path=None, variable_manager=None, loader=None, collection_search_list=None):
"""
Loads and returns a list of RoleInclude objects from the ds list of role definitions
:param ds: list of roles to load
:param play: calling Play object
:param current_role_path: path of the owning role, if any
:param variable_manager: varmgr to use for templating
:param loader: loader to use for DS parsing/services
:param collection_search_list: list of collections to search for unqualified role names
:return:
"""
# we import here to prevent a circular dependency with imports
from ansible.playbook.role.include import RoleInclude
if not isinstance(ds, list):
raise AnsibleAssertionError('ds (%s) should be a list but was a %s' % (ds, type(ds)))
roles = []
for role_def in ds:
i = RoleInclude.load(role_def, play=play, current_role_path=current_role_path, variable_manager=variable_manager,
loader=loader, collection_list=collection_search_list)
roles.append(i)
return roles
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,304 |
import_role: rolespec_validate cannot be set dynamically
|
### Summary
Because of slowness I do not want to run argument spec validation on every run, even though it is really helpful during inventory creation.
Also, the automatically inserted validation cannot be skipped by tags, so it behaves like `tags: always`.
Fortunately there is a `rolespec_validate` option, which can turn this behaviour off. The trouble is that it cannot be set with `module_defaults` nor with any variable/fact.
It is also not possible to control this with ansible.cfg.
### Issue Type
Bug Report
### Component Name
import_role
### Ansible Version
```console
ansible [core 2.14.3]
config file = /hidden/ansible.cfg
configured module search path = ['/hidden/plugins/modules']
ansible python module location = /home/vooon/.cache/pypoetry/virtualenvs/test-3I0hE4B9-py3.10/lib/python3.10/site-packages/ansible
python version = 3.10.7 (main, Mar 10 2023, 10:47:39) [GCC 12.2.0] (/home/vooon/.cache/pypoetry/virtualenvs/test-3I0hE4B9-py3.10/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /hidden/ansible.cfg
DEFAULT_ACTION_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/action']
DEFAULT_FILTER_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/filter']
DEFAULT_FORCE_HANDLERS(/hidden/ansible.cfg) = True
DEFAULT_HOST_LIST(/hidden/ansible.cfg) = ['/hidden/hosts']
DEFAULT_MODULE_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/modules']
DEFAULT_MODULE_UTILS_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/module_utils']
DEFAULT_TEST_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/test_plugins']
INVENTORY_ENABLED(/hidden/ansible.cfg) = ['yaml', 'openstack', 'host_list', 'script', 'ini', 'auto']
```
### OS / Environment
Ubuntu 22.10, CentOS 8 Stream, doesn't matter.
### Steps to Reproduce
Example playbook:
```yaml
- name: Some play
module_defaults:
ansible.builtin.import_role:
rolespec_validate: false # doesn't work
```
Or example task:
```yaml
- name: Some role import
ansible.builtin.import_role:
name: lib-something
rolespec_validate: "{{ validate|default(false)|bool }}" # doesn't work too. it's non-empty string, so for conditions it's True
```
The only working is that:
```yaml
- name: Some role import
ansible.builtin.import_role:
name: lib-something
rolespec_validate: false
```
### Expected Results
I expect not to see `lib-something : Validating arguments against arg spec 'main' -- Test` unless I run with `-e validate=1`.
### Actual Results
```console
Not relevant. Bug must be around that line:
https://github.com/ansible/ansible/blob/b3986131207266e682029f361e6c7daa87e1d7eb/lib/ansible/playbook/role_include.py#L164-L165
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80304
|
https://github.com/ansible/ansible/pull/80320
|
a45dd2a01c9b254bfa10fbd119f8ea99cf881992
|
666188892ed0833e87803a3e80c58923e4cd6bca
| 2023-03-25T17:14:39Z |
python
| 2023-03-30T22:20:10Z |
lib/ansible/playbook/included_file.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.executor.task_executor import remove_omit
from ansible.module_utils._text import to_text
from ansible.playbook.handler import Handler
from ansible.playbook.task_include import TaskInclude
from ansible.playbook.role_include import IncludeRole
from ansible.template import Templar
from ansible.utils.display import Display
display = Display()
class IncludedFile:
def __init__(self, filename, args, vars, task, is_role=False):
self._filename = filename
self._args = args
self._vars = vars
self._task = task
self._hosts = []
self._is_role = is_role
self._results = []
def add_host(self, host):
if host not in self._hosts:
self._hosts.append(host)
return
raise ValueError()
def __eq__(self, other):
return (other._filename == self._filename and
other._args == self._args and
other._vars == self._vars and
other._task._uuid == self._task._uuid and
other._task._parent._uuid == self._task._parent._uuid)
def __repr__(self):
return "%s (args=%s vars=%s): %s" % (self._filename, self._args, self._vars, self._hosts)
@staticmethod
def process_include_results(results, iterator, loader, variable_manager):
included_files = []
task_vars_cache = {}
for res in results:
original_host = res._host
original_task = res._task
if original_task.action in C._ACTION_ALL_INCLUDES:
if original_task.action in C._ACTION_INCLUDE:
display.deprecated('"include" is deprecated, use include_tasks/import_tasks/import_playbook instead', "2.16")
if original_task.loop:
if 'results' not in res._result:
continue
include_results = res._result['results']
else:
include_results = [res._result]
for include_result in include_results:
# if the task result was skipped or failed, continue
if 'skipped' in include_result and include_result['skipped'] or 'failed' in include_result and include_result['failed']:
continue
cache_key = (iterator._play, original_host, original_task)
try:
task_vars = task_vars_cache[cache_key]
except KeyError:
task_vars = task_vars_cache[cache_key] = variable_manager.get_vars(play=iterator._play, host=original_host, task=original_task)
include_args = include_result.get('include_args', dict())
special_vars = {}
loop_var = include_result.get('ansible_loop_var', 'item')
index_var = include_result.get('ansible_index_var')
if loop_var in include_result:
task_vars[loop_var] = special_vars[loop_var] = include_result[loop_var]
if index_var and index_var in include_result:
task_vars[index_var] = special_vars[index_var] = include_result[index_var]
if '_ansible_item_label' in include_result:
task_vars['_ansible_item_label'] = special_vars['_ansible_item_label'] = include_result['_ansible_item_label']
if 'ansible_loop' in include_result:
task_vars['ansible_loop'] = special_vars['ansible_loop'] = include_result['ansible_loop']
if original_task.no_log and '_ansible_no_log' not in include_args:
task_vars['_ansible_no_log'] = special_vars['_ansible_no_log'] = original_task.no_log
# get search path for this task to pass to lookup plugins that may be used in pathing to
# the included file
task_vars['ansible_search_path'] = original_task.get_search_path()
# ensure basedir is always in (dwim already searches here but we need to display it)
if loader.get_basedir() not in task_vars['ansible_search_path']:
task_vars['ansible_search_path'].append(loader.get_basedir())
templar = Templar(loader=loader, variables=task_vars)
if original_task.action in C._ACTION_ALL_INCLUDE_TASKS:
include_file = None
if original_task._parent:
# handle relative includes by walking up the list of parent include
# tasks and checking the relative result to see if it exists
parent_include = original_task._parent
cumulative_path = None
while parent_include is not None:
if not isinstance(parent_include, TaskInclude):
parent_include = parent_include._parent
continue
if isinstance(parent_include, IncludeRole):
parent_include_dir = parent_include._role_path
else:
try:
parent_include_dir = os.path.dirname(templar.template(parent_include.args.get('_raw_params')))
except AnsibleError as e:
parent_include_dir = ''
display.warning(
'Templating the path of the parent %s failed. The path to the '
'included file may not be found. '
'The error was: %s.' % (original_task.action, to_text(e))
)
if cumulative_path is not None and not os.path.isabs(cumulative_path):
cumulative_path = os.path.join(parent_include_dir, cumulative_path)
else:
cumulative_path = parent_include_dir
include_target = templar.template(include_result['include'])
if original_task._role:
new_basedir = os.path.join(original_task._role._role_path, 'tasks', cumulative_path)
candidates = [loader.path_dwim_relative(original_task._role._role_path, 'tasks', include_target),
loader.path_dwim_relative(new_basedir, 'tasks', include_target)]
for include_file in candidates:
try:
# may throw OSError
os.stat(include_file)
# or select the task file if it exists
break
except OSError:
pass
else:
include_file = loader.path_dwim_relative(loader.get_basedir(), cumulative_path, include_target)
if os.path.exists(include_file):
break
else:
parent_include = parent_include._parent
if include_file is None:
if original_task._role:
include_target = templar.template(include_result['include'])
include_file = loader.path_dwim_relative(
original_task._role._role_path,
'handlers' if isinstance(original_task, Handler) else 'tasks',
include_target,
is_role=True)
else:
include_file = loader.path_dwim(include_result['include'])
include_file = templar.template(include_file)
inc_file = IncludedFile(include_file, include_args, special_vars, original_task)
else:
# template the included role's name here
role_name = include_args.pop('name', include_args.pop('role', None))
if role_name is not None:
role_name = templar.template(role_name)
new_task = original_task.copy()
new_task._role_name = role_name
for from_arg in new_task.FROM_ARGS:
if from_arg in include_args:
from_key = from_arg.removesuffix('_from')
new_task._from_files[from_key] = templar.template(include_args.pop(from_arg))
omit_token = task_vars.get('omit')
if omit_token:
new_task._from_files = remove_omit(new_task._from_files, omit_token)
inc_file = IncludedFile(role_name, include_args, special_vars, new_task, is_role=True)
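# Group this result with an existing equivalent include (same file/args/vars/task) when one
# exists and does not already contain this host; otherwise append a new entry. add_host()
# raising ValueError means "host already present", so we advance past that entry and keep scanning.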
idx = 0
orig_inc_file = inc_file
while 1:
try:
pos = included_files[idx:].index(orig_inc_file)
# pos is relative to idx since we are slicing
# use idx + pos due to relative indexing
inc_file = included_files[idx + pos]
except ValueError:
included_files.append(orig_inc_file)
inc_file = orig_inc_file
try:
inc_file.add_host(original_host)
inc_file._results.append(res)
except ValueError:
# The host already exists for this include, advance forward, this is a new include
idx += pos + 1
else:
break
return included_files
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,304 |
import_role: rolespec_validate cannot be set dynamically
|
### Summary
Because of its slowness I do not want to run argument spec validation on every run; it is really only helpful during inventory creation.
Also, the automatically inserted validation cannot be skipped by tags, so it behaves like `tags: always`.
Fortunately there is a `rolespec_validate` option which can turn this behaviour off. The trouble is that it cannot be set with `module_defaults` nor with any variable/fact.
It is also not possible to control this with ansible.cfg.
### Issue Type
Bug Report
### Component Name
import_role
### Ansible Version
```console
ansible [core 2.14.3]
config file = /hidden/ansible.cfg
configured module search path = ['/hidden/plugins/modules']
ansible python module location = /home/vooon/.cache/pypoetry/virtualenvs/test-3I0hE4B9-py3.10/lib/python3.10/site-packages/ansible
python version = 3.10.7 (main, Mar 10 2023, 10:47:39) [GCC 12.2.0] (/home/vooon/.cache/pypoetry/virtualenvs/test-3I0hE4B9-py3.10/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /hidden/ansible.cfg
DEFAULT_ACTION_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/action']
DEFAULT_FILTER_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/filter']
DEFAULT_FORCE_HANDLERS(/hidden/ansible.cfg) = True
DEFAULT_HOST_LIST(/hidden/ansible.cfg) = ['/hidden/hosts']
DEFAULT_MODULE_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/modules']
DEFAULT_MODULE_UTILS_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/module_utils']
DEFAULT_TEST_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/test_plugins']
INVENTORY_ENABLED(/hidden/ansible.cfg) = ['yaml', 'openstack', 'host_list', 'script', 'ini', 'auto']
```
### OS / Environment
Ubuntu 22.10, CentOS 8 Stream, doesn't matter.
### Steps to Reproduce
Example playbook:
```yaml
- name: Some play
module_defaults:
ansible.builtin.import_role:
rolespec_validate: false # doesn't work
```
Or example task:
```yaml
- name: Some role import
ansible.builtin.import_role:
name: lib-something
rolespec_validate: "{{ validate|default(false)|bool }}" # doesn't work too. it's non-empty string, so for conditions it's True
```
The only working is that:
```yaml
- name: Some role import
ansible.builtin.import_role:
name: lib-something
rolespec_validate: false
```
### Expected Results
I expect not to see `lib-something : Validating arguments against arg spec 'main' -- Test` unless I run with `-e validate=1`.
### Actual Results
```console
Not relevant. Bug must be around that line:
https://github.com/ansible/ansible/blob/b3986131207266e682029f361e6c7daa87e1d7eb/lib/ansible/playbook/role_include.py#L164-L165
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80304
|
https://github.com/ansible/ansible/pull/80320
|
a45dd2a01c9b254bfa10fbd119f8ea99cf881992
|
666188892ed0833e87803a3e80c58923e4cd6bca
| 2023-03-25T17:14:39Z |
python
| 2023-03-30T22:20:10Z |
lib/ansible/playbook/role_include.py
|
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from os.path import basename
import ansible.constants as C
from ansible.errors import AnsibleParserError
from ansible.playbook.attribute import NonInheritableFieldAttribute
from ansible.playbook.task_include import TaskInclude
from ansible.playbook.role import Role
from ansible.playbook.role.include import RoleInclude
from ansible.utils.display import Display
from ansible.module_utils.six import string_types
from ansible.template import Templar
__all__ = ['IncludeRole']
display = Display()
class IncludeRole(TaskInclude):
"""
A Role include is derived from a regular role to handle the special
circumstances related to the `- include_role: ...`
"""
BASE = frozenset(('name', 'role')) # directly assigned
FROM_ARGS = frozenset(('tasks_from', 'vars_from', 'defaults_from', 'handlers_from')) # used to populate from dict in role
OTHER_ARGS = frozenset(('apply', 'public', 'allow_duplicates', 'rolespec_validate')) # assigned to matching property
VALID_ARGS = BASE | FROM_ARGS | OTHER_ARGS # all valid args
# =================================================================================
# ATTRIBUTES
# private as this is a 'module options' vs a task property
allow_duplicates = NonInheritableFieldAttribute(isa='bool', default=True, private=True)
public = NonInheritableFieldAttribute(isa='bool', default=False, private=True)
rolespec_validate = NonInheritableFieldAttribute(isa='bool', default=True)
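# Note: these option-style attributes are assigned straight from the raw task args in load()
# below, so at the time of issue #80304 a templated value such as
# rolespec_validate: "{{ flag }}" reached a static import as a plain (truthy) string instead
# of being evaluated.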
def __init__(self, block=None, role=None, task_include=None):
super(IncludeRole, self).__init__(block=block, role=role, task_include=task_include)
self._from_files = {}
self._parent_role = role
self._role_name = None
self._role_path = None
def get_name(self):
''' return the name of the task '''
return self.name or "%s : %s" % (self.action, self._role_name)
def get_block_list(self, play=None, variable_manager=None, loader=None):
# only need play passed in when dynamic
if play is None:
myplay = self._parent._play
else:
myplay = play
ri = RoleInclude.load(self._role_name, play=myplay, variable_manager=variable_manager, loader=loader, collection_list=self.collections)
ri.vars |= self.vars
if variable_manager is not None:
available_variables = variable_manager.get_vars(play=myplay, task=self)
else:
available_variables = {}
templar = Templar(loader=loader, variables=available_variables)
from_files = templar.template(self._from_files)
# build role
actual_role = Role.load(ri, myplay, parent_role=self._parent_role, from_files=from_files,
from_include=True, validate=self.rolespec_validate, public=self.public)
actual_role._metadata.allow_duplicates = self.allow_duplicates
if self.statically_loaded or self.public:
myplay.roles.append(actual_role)
# save this for later use
self._role_path = actual_role._role_path
# compile role with parent roles as dependencies to ensure they inherit
# variables
dep_chain = actual_role.get_dep_chain()
p_block = self.build_parent_block()
# collections value is not inherited; override with the value we calculated during role setup
p_block.collections = actual_role.collections
blocks = actual_role.compile(play=myplay, dep_chain=dep_chain)
for b in blocks:
b._parent = p_block
# HACK: parent inheritance doesn't seem to have a way to handle this intermediate override until squashed/finalized
b.collections = actual_role.collections
# update available handlers in play
handlers = actual_role.get_handler_blocks(play=myplay)
for h in handlers:
h._parent = p_block
myplay.handlers = myplay.handlers + handlers
return blocks, handlers
@staticmethod
def load(data, block=None, role=None, task_include=None, variable_manager=None, loader=None):
ir = IncludeRole(block, role, task_include=task_include).load_data(data, variable_manager=variable_manager, loader=loader)
# dynamic role!
if ir.action in C._ACTION_INCLUDE_ROLE:
ir.static = False
# Validate options
my_arg_names = frozenset(ir.args.keys())
# name is needed, or use role as alias
ir._role_name = ir.args.get('name', ir.args.get('role'))
if ir._role_name is None:
raise AnsibleParserError("'name' is a required field for %s." % ir.action, obj=data)
# public is only valid argument for includes, imports are always 'public' (after they run)
if 'public' in ir.args and ir.action not in C._ACTION_INCLUDE_ROLE:
raise AnsibleParserError('Invalid options for %s: public' % ir.action, obj=data)
# validate bad args, otherwise we silently ignore
bad_opts = my_arg_names.difference(IncludeRole.VALID_ARGS)
if bad_opts:
raise AnsibleParserError('Invalid options for %s: %s' % (ir.action, ','.join(list(bad_opts))), obj=data)
# build options for role include/import tasks
for key in my_arg_names.intersection(IncludeRole.FROM_ARGS):
from_key = key.removesuffix('_from')
args_value = ir.args.get(key)
if not isinstance(args_value, string_types):
raise AnsibleParserError('Expected a string for %s but got %s instead' % (key, type(args_value)))
ir._from_files[from_key] = basename(args_value)
# apply is only valid for includes, not imports as they inherit directly
apply_attrs = ir.args.get('apply', {})
if apply_attrs and ir.action not in C._ACTION_INCLUDE_ROLE:
raise AnsibleParserError('Invalid options for %s: apply' % ir.action, obj=data)
elif not isinstance(apply_attrs, dict):
raise AnsibleParserError('Expected a dict for apply but got %s instead' % type(apply_attrs), obj=data)
# manual list as otherwise the options would set other task parameters we don't want.
for option in my_arg_names.intersection(IncludeRole.OTHER_ARGS):
setattr(ir, option, ir.args.get(option))
return ir
def copy(self, exclude_parent=False, exclude_tasks=False):
new_me = super(IncludeRole, self).copy(exclude_parent=exclude_parent, exclude_tasks=exclude_tasks)
new_me.statically_loaded = self.statically_loaded
new_me._from_files = self._from_files.copy()
new_me._parent_role = self._parent_role
new_me._role_name = self._role_name
new_me._role_path = self._role_path
return new_me
def get_include_params(self):
v = super(IncludeRole, self).get_include_params()
if self._parent_role:
v |= self._parent_role.get_role_params()
v.setdefault('ansible_parent_role_names', []).insert(0, self._parent_role.get_name())
v.setdefault('ansible_parent_role_paths', []).insert(0, self._parent_role._role_path)
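# insert(0, ...) keeps the most immediate parent first in ansible_parent_role_names and
# ansible_parent_role_paths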
return v
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,304 |
import_role: rolespec_validate cannot be set dynamically
|
### Summary
Because of its slowness I do not want to run argument spec validation on every run; it is really only helpful during inventory creation.
Also, the automatically inserted validation cannot be skipped by tags, so it behaves like `tags: always`.
Fortunately there is a `rolespec_validate` option which can turn this behaviour off. The trouble is that it cannot be set with `module_defaults` nor with any variable/fact.
It is also not possible to control this with ansible.cfg.
### Issue Type
Bug Report
### Component Name
import_role
### Ansible Version
```console
ansible [core 2.14.3]
config file = /hidden/ansible.cfg
configured module search path = ['/hidden/plugins/modules']
ansible python module location = /home/vooon/.cache/pypoetry/virtualenvs/test-3I0hE4B9-py3.10/lib/python3.10/site-packages/ansible
python version = 3.10.7 (main, Mar 10 2023, 10:47:39) [GCC 12.2.0] (/home/vooon/.cache/pypoetry/virtualenvs/test-3I0hE4B9-py3.10/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /hidden/ansible.cfg
DEFAULT_ACTION_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/action']
DEFAULT_FILTER_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/filter']
DEFAULT_FORCE_HANDLERS(/hidden/ansible.cfg) = True
DEFAULT_HOST_LIST(/hidden/ansible.cfg) = ['/hidden/hosts']
DEFAULT_MODULE_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/modules']
DEFAULT_MODULE_UTILS_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/module_utils']
DEFAULT_TEST_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/test_plugins']
INVENTORY_ENABLED(/hidden/ansible.cfg) = ['yaml', 'openstack', 'host_list', 'script', 'ini', 'auto']
```
### OS / Environment
Ubuntu 22.10, CentOS 8 Stream, doesn't matter.
### Steps to Reproduce
Example playbook:
```yaml
- name: Some play
module_defaults:
ansible.builtin.import_role:
rolespec_validate: false # doesn't work
```
Or example task:
```yaml
- name: Some role import
ansible.builtin.import_role:
name: lib-something
rolespec_validate: "{{ validate|default(false)|bool }}" # doesn't work too. it's non-empty string, so for conditions it's True
```
The only working is that:
```yaml
- name: Some role import
ansible.builtin.import_role:
name: lib-something
rolespec_validate: false
```
### Expected Results
I expect not to see `lib-something : Validating arguments against arg spec 'main' -- Test` unless I run with `-e validate=1`.
### Actual Results
```console
Not relevant. Bug must be around that line:
https://github.com/ansible/ansible/blob/b3986131207266e682029f361e6c7daa87e1d7eb/lib/ansible/playbook/role_include.py#L164-L165
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80304
|
https://github.com/ansible/ansible/pull/80320
|
a45dd2a01c9b254bfa10fbd119f8ea99cf881992
|
666188892ed0833e87803a3e80c58923e4cd6bca
| 2023-03-25T17:14:39Z |
python
| 2023-03-30T22:20:10Z |
test/integration/targets/include_import/roles/role_with_argspec/meta/argument_specs.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,304 |
import_role: rolespec_validate cannot be set dynamically
|
### Summary
Because of its slowness I do not want to run argument spec validation on every run; it is really only helpful during inventory creation.
Also, the automatically inserted validation cannot be skipped by tags, so it behaves like `tags: always`.
Fortunately there is a `rolespec_validate` option which can turn this behaviour off. The trouble is that it cannot be set with `module_defaults` nor with any variable/fact.
It is also not possible to control this with ansible.cfg.
### Issue Type
Bug Report
### Component Name
import_role
### Ansible Version
```console
ansible [core 2.14.3]
config file = /hidden/ansible.cfg
configured module search path = ['/hidden/plugins/modules']
ansible python module location = /home/vooon/.cache/pypoetry/virtualenvs/test-3I0hE4B9-py3.10/lib/python3.10/site-packages/ansible
python version = 3.10.7 (main, Mar 10 2023, 10:47:39) [GCC 12.2.0] (/home/vooon/.cache/pypoetry/virtualenvs/test-3I0hE4B9-py3.10/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /hidden/ansible.cfg
DEFAULT_ACTION_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/action']
DEFAULT_FILTER_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/filter']
DEFAULT_FORCE_HANDLERS(/hidden/ansible.cfg) = True
DEFAULT_HOST_LIST(/hidden/ansible.cfg) = ['/hidden/hosts']
DEFAULT_MODULE_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/modules']
DEFAULT_MODULE_UTILS_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/module_utils']
DEFAULT_TEST_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/test_plugins']
INVENTORY_ENABLED(/hidden/ansible.cfg) = ['yaml', 'openstack', 'host_list', 'script', 'ini', 'auto']
```
### OS / Environment
Ubuntu 22.10, CentOS 8 Stream, doesn't matter.
### Steps to Reproduce
Example playbook:
```yaml
- name: Some play
module_defaults:
ansible.builtin.import_role:
rolespec_validate: false # doesn't work
```
Or example task:
```yaml
- name: Some role import
ansible.builtin.import_role:
name: lib-something
rolespec_validate: "{{ validate|default(false)|bool }}" # doesn't work too. it's non-empty string, so for conditions it's True
```
The only working is that:
```yaml
- name: Some role import
ansible.builtin.import_role:
name: lib-something
rolespec_validate: false
```
### Expected Results
I expect not to see `lib-something : Validating arguments against arg spec 'main' -- Test` unless I run with `-e validate=1`.
### Actual Results
```console
Not relevant. Bug must be around that line:
https://github.com/ansible/ansible/blob/b3986131207266e682029f361e6c7daa87e1d7eb/lib/ansible/playbook/role_include.py#L164-L165
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80304
|
https://github.com/ansible/ansible/pull/80320
|
a45dd2a01c9b254bfa10fbd119f8ea99cf881992
|
666188892ed0833e87803a3e80c58923e4cd6bca
| 2023-03-25T17:14:39Z |
python
| 2023-03-30T22:20:10Z |
test/integration/targets/include_import/roles/role_with_argspec/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,304 |
import_role: rolespec_validate cannot be set dynamically
|
### Summary
Because of its slowness I do not want to run argument spec validation on every run; it is really only helpful during inventory creation.
Also, the automatically inserted validation cannot be skipped by tags, so it behaves like `tags: always`.
Fortunately there is a `rolespec_validate` option which can turn this behaviour off. The trouble is that it cannot be set with `module_defaults` nor with any variable/fact.
It is also not possible to control this with ansible.cfg.
### Issue Type
Bug Report
### Component Name
import_role
### Ansible Version
```console
ansible [core 2.14.3]
config file = /hidden/ansible.cfg
configured module search path = ['/hidden/plugins/modules']
ansible python module location = /home/vooon/.cache/pypoetry/virtualenvs/test-3I0hE4B9-py3.10/lib/python3.10/site-packages/ansible
python version = 3.10.7 (main, Mar 10 2023, 10:47:39) [GCC 12.2.0] (/home/vooon/.cache/pypoetry/virtualenvs/test-3I0hE4B9-py3.10/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /hidden/ansible.cfg
DEFAULT_ACTION_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/action']
DEFAULT_FILTER_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/filter']
DEFAULT_FORCE_HANDLERS(/hidden/ansible.cfg) = True
DEFAULT_HOST_LIST(/hidden/ansible.cfg) = ['/hidden/hosts']
DEFAULT_MODULE_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/modules']
DEFAULT_MODULE_UTILS_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/module_utils']
DEFAULT_TEST_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/test_plugins']
INVENTORY_ENABLED(/hidden/ansible.cfg) = ['yaml', 'openstack', 'host_list', 'script', 'ini', 'auto']
```
### OS / Environment
Ubuntu 22.10, CentOS 8 Stream, doesn't matter.
### Steps to Reproduce
Example playbook:
```yaml
- name: Some play
module_defaults:
ansible.builtin.import_role:
rolespec_validate: false # doesn't work
```
Or example task:
```yaml
- name: Some role import
ansible.builtin.import_role:
name: lib-something
rolespec_validate: "{{ validate|default(false)|bool }}" # doesn't work too. it's non-empty string, so for conditions it's True
```
The only working is that:
```yaml
- name: Some role import
ansible.builtin.import_role:
name: lib-something
rolespec_validate: false
```
### Expected Results
I expect not to see `lib-something : Validating arguments against arg spec 'main' -- Test` unless I run with `-e validate=1`.
### Actual Results
```console
Not relevant. Bug must be around that line:
https://github.com/ansible/ansible/blob/b3986131207266e682029f361e6c7daa87e1d7eb/lib/ansible/playbook/role_include.py#L164-L165
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80304
|
https://github.com/ansible/ansible/pull/80320
|
a45dd2a01c9b254bfa10fbd119f8ea99cf881992
|
666188892ed0833e87803a3e80c58923e4cd6bca
| 2023-03-25T17:14:39Z |
python
| 2023-03-30T22:20:10Z |
test/integration/targets/include_import/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_ROLES_PATH=./roles
function gen_task_files() {
for i in $(printf "%03d " {1..39}); do
echo -e "- name: Hello Message\n debug:\n msg: Task file ${i}" > "tasks/hello/tasks-file-${i}.yml"
done
}
## Adhoc
ansible -m include_role -a name=role1 localhost
## Import (static)
# Playbook
ansible-playbook playbook/test_import_playbook.yml -i inventory "$@"
ANSIBLE_STRATEGY='linear' ansible-playbook playbook/test_import_playbook_tags.yml -i inventory "$@" --tags canary1,canary22,validate --skip-tags skipme
# Tasks
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_import_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_import_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_import_tasks_tags.yml -i inventory "$@" --tags tasks1,canary1,validate
# Role
ANSIBLE_STRATEGY='linear' ansible-playbook role/test_import_role.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook role/test_import_role.yml -i inventory "$@"
## Include (dynamic)
# Tasks
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_tasks_tags.yml -i inventory "$@" --tags tasks1,canary1,validate
# Role
ANSIBLE_STRATEGY='linear' ansible-playbook role/test_include_role.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook role/test_include_role.yml -i inventory "$@"
# https://github.com/ansible/ansible/issues/68515
ansible-playbook -v role/test_include_role_vars_from.yml 2>&1 | tee test_include_role_vars_from.out
test "$(grep -E -c 'Expected a string for vars_from but got' test_include_role_vars_from.out)" = 1
## Max Recursion Depth
# https://github.com/ansible/ansible/issues/23609
ANSIBLE_STRATEGY='linear' ansible-playbook test_role_recursion.yml -i inventory "$@"
ANSIBLE_STRATEGY='linear' ansible-playbook test_role_recursion_fqcn.yml -i inventory "$@"
## Nested tasks
# https://github.com/ansible/ansible/issues/34782
ANSIBLE_STRATEGY='linear' ansible-playbook test_nested_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='linear' ansible-playbook test_nested_tasks_fqcn.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_nested_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_nested_tasks_fqcn.yml -i inventory "$@"
## Tons of top level include_tasks
# https://github.com/ansible/ansible/issues/36053
# Fixed by https://github.com/ansible/ansible/pull/36075
gen_task_files
ANSIBLE_STRATEGY='linear' ansible-playbook test_copious_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='linear' ansible-playbook test_copious_include_tasks_fqcn.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_copious_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_copious_include_tasks_fqcn.yml -i inventory "$@"
rm -f tasks/hello/*.yml
# Included tasks should inherit attrs from non-dynamic blocks in parent chain
# https://github.com/ansible/ansible/pull/38827
ANSIBLE_STRATEGY='linear' ansible-playbook test_grandparent_inheritance.yml -i inventory "$@"
ANSIBLE_STRATEGY='linear' ansible-playbook test_grandparent_inheritance_fqcn.yml -i inventory "$@"
# undefined_var
ANSIBLE_STRATEGY='linear' ansible-playbook undefined_var/playbook.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook undefined_var/playbook.yml -i inventory "$@"
# include_ + apply (explicit inheritance)
ANSIBLE_STRATEGY='linear' ansible-playbook apply/include_apply.yml -i inventory "$@" --tags foo
set +e
OUT=$(ANSIBLE_STRATEGY='linear' ansible-playbook apply/import_apply.yml -i inventory "$@" --tags foo 2>&1 | grep 'ERROR! Invalid options for import_tasks: apply')
set -e
if [[ -z "$OUT" ]]; then
echo "apply on import_tasks did not cause error"
exit 1
fi
ANSIBLE_STRATEGY='linear' ANSIBLE_PLAYBOOK_VARS_ROOT=all ansible-playbook apply/include_apply_65710.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ANSIBLE_PLAYBOOK_VARS_ROOT=all ansible-playbook apply/include_apply_65710.yml -i inventory "$@"
# Test that duplicate items in loop are not deduped
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_include_dupe_loop.yml -i inventory "$@" | tee test_include_dupe_loop.out
test "$(grep -c '"item=foo"' test_include_dupe_loop.out)" = 3
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_dupe_loop.yml -i inventory "$@" | tee test_include_dupe_loop.out
test "$(grep -c '"item=foo"' test_include_dupe_loop.out)" = 3
ansible-playbook public_exposure/playbook.yml -i inventory "$@"
ansible-playbook public_exposure/no_bleeding.yml -i inventory "$@"
ansible-playbook public_exposure/no_overwrite_roles.yml -i inventory "$@"
# https://github.com/ansible/ansible/pull/48068
ANSIBLE_HOST_PATTERN_MISMATCH=warning ansible-playbook run_once/playbook.yml "$@"
# https://github.com/ansible/ansible/issues/48936
ansible-playbook -v handler_addressing/playbook.yml 2>&1 | tee test_handler_addressing.out
test "$(grep -E -c 'include handler task|ERROR! The requested handler '"'"'do_import'"'"' was not found' test_handler_addressing.out)" = 2
# https://github.com/ansible/ansible/issues/49969
ansible-playbook -v parent_templating/playbook.yml 2>&1 | tee test_parent_templating.out
test "$(grep -E -c 'Templating the path of the parent include_tasks failed.' test_parent_templating.out)" = 0
# https://github.com/ansible/ansible/issues/54618
ansible-playbook test_loop_var_bleed.yaml "$@"
# https://github.com/ansible/ansible/issues/56580
ansible-playbook valid_include_keywords/playbook.yml "$@"
# https://github.com/ansible/ansible/issues/64902
ansible-playbook tasks/test_allow_single_role_dup.yml 2>&1 | tee test_allow_single_role_dup.out
test "$(grep -c 'ok=3' test_allow_single_role_dup.out)" = 1
# https://github.com/ansible/ansible/issues/66764
ANSIBLE_HOST_PATTERN_MISMATCH=error ansible-playbook empty_group_warning/playbook.yml
ansible-playbook test_include_loop.yml "$@"
ansible-playbook test_include_loop_fqcn.yml "$@"
ansible-playbook include_role_omit/playbook.yml "$@"
# Test templating import_playbook, import_tasks, and import_role files
ansible-playbook playbook/test_templated_filenames.yml -e "pb=validate_templated_playbook.yml tasks=validate_templated_tasks.yml tasks_from=templated.yml" "$@" | tee out.txt
cat out.txt
test "$(grep out.txt -ce 'In imported playbook')" = 2
test "$(grep out.txt -ce 'In imported tasks')" = 3
test "$(grep out.txt -ce 'In imported role')" = 3
# https://github.com/ansible/ansible/issues/73657
ansible-playbook issue73657.yml 2>&1 | tee issue73657.out
test "$(grep -c 'SHOULD_NOT_EXECUTE' issue73657.out)" = 0
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,304 |
import_role: rolespec_validate cannot be set dynamically
|
### Summary
Because of its slowness I do not want to run argument spec validation on every run; it is really only helpful during inventory creation.
Also, the automatically inserted validation cannot be skipped by tags, so it behaves like `tags: always`.
Fortunately there is a `rolespec_validate` option which can turn this behaviour off. The trouble is that it cannot be set with `module_defaults` nor with any variable/fact.
It is also not possible to control this with ansible.cfg.
### Issue Type
Bug Report
### Component Name
import_role
### Ansible Version
```console
ansible [core 2.14.3]
config file = /hidden/ansible.cfg
configured module search path = ['/hidden/plugins/modules']
ansible python module location = /home/vooon/.cache/pypoetry/virtualenvs/test-3I0hE4B9-py3.10/lib/python3.10/site-packages/ansible
python version = 3.10.7 (main, Mar 10 2023, 10:47:39) [GCC 12.2.0] (/home/vooon/.cache/pypoetry/virtualenvs/test-3I0hE4B9-py3.10/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /hidden/ansible.cfg
DEFAULT_ACTION_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/action']
DEFAULT_FILTER_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/filter']
DEFAULT_FORCE_HANDLERS(/hidden/ansible.cfg) = True
DEFAULT_HOST_LIST(/hidden/ansible.cfg) = ['/hidden/hosts']
DEFAULT_MODULE_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/modules']
DEFAULT_MODULE_UTILS_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/module_utils']
DEFAULT_TEST_PLUGIN_PATH(/hidden/ansible.cfg) = ['/hidden/plugins/test_plugins']
INVENTORY_ENABLED(/hidden/ansible.cfg) = ['yaml', 'openstack', 'host_list', 'script', 'ini', 'auto']
```
### OS / Environment
Ubuntu 22.10, CentOS 8 Stream, doesn't matter.
### Steps to Reproduce
Example playbook:
```yaml
- name: Some play
module_defaults:
ansible.builtin.import_role:
rolespec_validate: false # doesn't work
```
Or example task:
```yaml
- name: Some role import
ansible.builtin.import_role:
name: lib-something
rolespec_validate: "{{ validate|default(false)|bool }}" # doesn't work too. it's non-empty string, so for conditions it's True
```
The only working is that:
```yaml
- name: Some role import
ansible.builtin.import_role:
name: lib-something
rolespec_validate: false
```
### Expected Results
I expect not to see `lib-something : Validating arguments against arg spec 'main' -- Test` unless I run with `-e validate=1`.
### Actual Results
```console
Not relevant. Bug must be around that line:
https://github.com/ansible/ansible/blob/b3986131207266e682029f361e6c7daa87e1d7eb/lib/ansible/playbook/role_include.py#L164-L165
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80304
|
https://github.com/ansible/ansible/pull/80320
|
a45dd2a01c9b254bfa10fbd119f8ea99cf881992
|
666188892ed0833e87803a3e80c58923e4cd6bca
| 2023-03-25T17:14:39Z |
python
| 2023-03-30T22:20:10Z |
test/integration/targets/include_import/tasks/test_templating_IncludeRole_FA.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,492 |
Passing the value 'false' to run_once is invalid in loop task
|
### Summary
Here is some description of my issue: https://github.com/kubernetes-sigs/kubespray/issues/9126#issuecomment-1210807498
When `run_once` and `loop` appear in the same task and the value of `run_once` is a variable that evaluates to `false`, the task does not produce the expected result.
### Issue Type
Bug Report
### Component Name
ansible-playbook,loop,run_once,register
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.5]
config file = /root/workspaces/kubespray/ansible.cfg
configured module search path = ['/root/workspaces/kubespray/library']
ansible python module location = /root/workspaces/kubespray/kubespray-venv/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /root/workspaces/kubespray/kubespray-venv/bin/ansible
python version = 3.9.10 (main, Aug 9 2022, 02:24:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CACHE_PLUGIN(/root/workspaces/kubespray/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/root/workspaces/kubespray/ansible.cfg) = /tmp
CACHE_PLUGIN_TIMEOUT(/root/workspaces/kubespray/ansible.cfg) = 86400
CALLBACKS_ENABLED(/root/workspaces/kubespray/ansible.cfg) = ['profile_tasks', 'ara_default']
DEFAULT_GATHERING(/root/workspaces/kubespray/ansible.cfg) = smart
DEFAULT_MODULE_PATH(/root/workspaces/kubespray/ansible.cfg) = ['/root/workspaces/kubespray/library']
DEFAULT_ROLES_PATH(/root/workspaces/kubespray/ansible.cfg) = ['/root/workspaces/kubespray/roles', '/root/workspaces/kubespray/kubespray-venv/usr/local/share/k
DEFAULT_STDOUT_CALLBACK(/root/workspaces/kubespray/ansible.cfg) = default
DEPRECATION_WARNINGS(/root/workspaces/kubespray/ansible.cfg) = False
DISPLAY_SKIPPED_HOSTS(/root/workspaces/kubespray/ansible.cfg) = False
HOST_KEY_CHECKING(/root/workspaces/kubespray/ansible.cfg) = False
INVENTORY_IGNORE_EXTS(/root/workspaces/kubespray/ansible.cfg) = ['~', '.orig', '.bak', '.ini', '.cfg', '.retry', '.pyc', '.pyo', '.creds', '.gpg']
INVENTORY_IGNORE_PATTERNS(/root/workspaces/kubespray/ansible.cfg) = ['artifacts', 'credentials']
TRANSFORM_INVALID_GROUP_CHARS(/root/workspaces/kubespray/ansible.cfg) = ignore
BECOME:
======
CACHE:
=====
jsonfile:
________
_timeout(/root/workspaces/kubespray/ansible.cfg) = 86400
_uri(/root/workspaces/kubespray/ansible.cfg) = /tmp
CALLBACK:
========
default:
_______
display_skipped_hosts(/root/workspaces/kubespray/ansible.cfg) = False
CLICONF:
=======
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/root/workspaces/kubespray/ansible.cfg) = False
ssh:
___
host_key_checking(/root/workspaces/kubespray/ansible.cfg) = False
pipelining(/root/workspaces/kubespray/ansible.cfg) = True
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
| node | arch | os |
| -------- | -------- | -------- |
| master1 | x86_64 | CentOS Linux 7 (Core) |
| worker1 | aarch64 | Kylin Linux Advanced Server V10 (Tercel) |
### Steps to Reproduce
```yaml
---
- name: test run_once
hosts: k8s_cluster
gather_facts: False
tags: always
vars:
test_flag: false
os_release_path: /etc/os-release
tasks:
- name: Fetch /etc/os-release
run_once: "{{ test_flag | bool }}"
raw: "cat {{ path_var }}"
register: os_release
loop: "{{ [os_release_path] }}"
loop_control:
loop_var: path_var
- name: Debug print var
debug:
msg: ">> os_release: {{ os_release }}"
```
### Expected Results
```
TASK [Fetch /etc/os-release] *********************************************************************************************************************************
changed: [worker1]
changed: [master1]
Wednesday 10 August 2022 10:49:58 -0400 (0:00:00.406) 0:00:00.457 ******
TASK [Debug print var] ***************************************************************************************************************************************
ok: [master1] => {
"msg": ">> os_release: {'rc': 0, 'stdout': 'NAME=\"CentOS Linux\"\\r\\nVERSION=\"7 (Core)\"\\r\\nID=\"centos\"\\r\\nID_LIKE=\"rhel fedora\"\\r\\nVERSION_ID=\"7\"\\r\\nPRETTY_NAME=\"CentOS Linux 7 (Core)\"\\r\\nANSI_COLOR=\"0;31\"\\r\\nCPE_NAME=\"cpe:/o:centos:centos:7\"\\r\\nHOME_URL=\"https://www.centos.org/\"\\r\\nBUG_REPORT_URL=\"https://bugs.centos.org/\"\\r\\n\\r\\nCENTOS_MANTISBT_PROJECT=\"CentOS-7\"\\r\\nCENTOS_MANTISBT_PROJECT_VERSION=\"7\"\\r\\nREDHAT_SUPPORT_PRODUCT=\"centos\"\\r\\nREDHAT_SUPPORT_PRODUCT_VERSION=\"7\"\\r\\n\\r\\n', 'stdout_lines': ['NAME=\"CentOS Linux\"', 'VERSION=\"7 (Core)\"', 'ID=\"centos\"', 'ID_LIKE=\"rhel fedora\"', 'VERSION_ID=\"7\"', 'PRETTY_NAME=\"CentOS Linux 7 (Core)\"', 'ANSI_COLOR=\"0;31\"', 'CPE_NAME=\"cpe:/o:centos:centos:7\"', 'HOME_URL=\"https://www.centos.org/\"', 'BUG_REPORT_URL=\"https://bugs.centos.org/\"', '', 'CENTOS_MANTISBT_PROJECT=\"CentOS-7\"', 'CENTOS_MANTISBT_PROJECT_VERSION=\"7\"', 'REDHAT_SUPPORT_PRODUCT=\"centos\"', 'REDHAT_SUPPORT_PRODUCT_VERSION=\"7\"', ''], 'stderr': 'Shared connection to 10.6.170.20 closed.\\r\\n', 'stderr_lines': ['Shared connection to 10.6.170.20 closed.'], 'changed': True, 'failed': False}"
}
ok: [worker1] => {
"msg": ">> os_release: {'rc': 0, 'stdout': 'NAME=\"Kylin Linux Advanced Server\"\\r\\nVERSION=\"V10 (Tercel)\"\\r\\nID=\"kylin\"\\r\\nVERSION_ID=\"V10\"\\r\\nPRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"\\r\\nANSI_COLOR=\"0;31\"\\r\\n\\r\\n', 'stdout_lines': ['NAME=\"Kylin Linux Advanced Server\"', 'VERSION=\"V10 (Tercel)\"', 'ID=\"kylin\"', 'VERSION_ID=\"V10\"', 'PRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"', 'ANSI_COLOR=\"0;31\"', ''], 'stderr': '\\nAuthorized users only. All activities may be monitored and reported.\\nShared connection to 172.30.40.199 closed.\\r\\n', 'stderr_lines': ['', 'Authorized users only. All activities may be monitored and reported.', 'Shared connection to 172.30.40.199 closed.'], 'changed': True, 'failed': False}"
}
```
### Actual Results
```console
TASK [Fetch /etc/os-release] *********************************************************************************************************************************
changed: [master1] => (item=/etc/os-release)
changed: [worker1] => (item=/etc/os-release)
Wednesday 10 August 2022 10:47:57 -0400 (0:00:00.372) 0:00:00.429 ******
TASK [Debug print var] ***************************************************************************************************************************************
ok: [master1] => {
"msg": ">> os_release: {'results': [{'rc': 0, 'stdout': 'NAME=\"Kylin Linux Advanced Server\"\\r\\nVERSION=\"V10 (Tercel)\"\\r\\nID=\"kylin\"\\r\\nVERSION_ID=\"V10\"\\r\\nPRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"\\r\\nANSI_COLOR=\"0;31\"\\r\\n\\r\\n', 'stdout_lines': ['NAME=\"Kylin Linux Advanced Server\"', 'VERSION=\"V10 (Tercel)\"', 'ID=\"kylin\"', 'VERSION_ID=\"V10\"', 'PRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"', 'ANSI_COLOR=\"0;31\"', ''], 'stderr': '\\nAuthorized users only. All activities may be monitored and reported.\\nShared connection to 172.30.40.199 closed.\\r\\n', 'stderr_lines': ['', 'Authorized users only. All activities may be monitored and reported.', 'Shared connection to 172.30.40.199 closed.'], 'changed': True, 'failed': False, 'path_var': '/etc/os-release', 'ansible_loop_var': 'path_var'}], 'skipped': False, 'changed': True, 'msg': 'All items completed'}"
}
ok: [worker1] => {
"msg": ">> os_release: {'results': [{'rc': 0, 'stdout': 'NAME=\"Kylin Linux Advanced Server\"\\r\\nVERSION=\"V10 (Tercel)\"\\r\\nID=\"kylin\"\\r\\nVERSION_ID=\"V10\"\\r\\nPRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"\\r\\nANSI_COLOR=\"0;31\"\\r\\n\\r\\n', 'stdout_lines': ['NAME=\"Kylin Linux Advanced Server\"', 'VERSION=\"V10 (Tercel)\"', 'ID=\"kylin\"', 'VERSION_ID=\"V10\"', 'PRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"', 'ANSI_COLOR=\"0;31\"', ''], 'stderr': '\\nAuthorized users only. All activities may be monitored and reported.\\nShared connection to 172.30.40.199 closed.\\r\\n', 'stderr_lines': ['', 'Authorized users only. All activities may be monitored and reported.', 'Shared connection to 172.30.40.199 closed.'], 'changed': True, 'failed': False, 'path_var': '/etc/os-release', 'ansible_loop_var': 'path_var'}], 'skipped': False, 'changed': True, 'msg': 'All items completed'}"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78492
|
https://github.com/ansible/ansible/pull/80051
|
0e509ecf2572aab5f277a13284e29d6c68d596ab
|
043a0f3ee81c6a56b025f4c2f3e939c5d621fba8
| 2022-08-10T15:23:19Z |
python
| 2023-03-31T15:36:44Z |
changelogs/fragments/78492-fix-invalid-run_once-value.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,492 |
Passing the value 'false' to run_once is invalid in loop task
|
### Summary
Here is some description of my issue: https://github.com/kubernetes-sigs/kubespray/issues/9126#issuecomment-1210807498
When `run_once` and `loop` appear in the same task and the value of `run_once` is a variable that evaluates to `false`, the task does not produce the expected result.
### Issue Type
Bug Report
### Component Name
ansible-playbook,loop,run_once,register
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.5]
config file = /root/workspaces/kubespray/ansible.cfg
configured module search path = ['/root/workspaces/kubespray/library']
ansible python module location = /root/workspaces/kubespray/kubespray-venv/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /root/workspaces/kubespray/kubespray-venv/bin/ansible
python version = 3.9.10 (main, Aug 9 2022, 02:24:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CACHE_PLUGIN(/root/workspaces/kubespray/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/root/workspaces/kubespray/ansible.cfg) = /tmp
CACHE_PLUGIN_TIMEOUT(/root/workspaces/kubespray/ansible.cfg) = 86400
CALLBACKS_ENABLED(/root/workspaces/kubespray/ansible.cfg) = ['profile_tasks', 'ara_default']
DEFAULT_GATHERING(/root/workspaces/kubespray/ansible.cfg) = smart
DEFAULT_MODULE_PATH(/root/workspaces/kubespray/ansible.cfg) = ['/root/workspaces/kubespray/library']
DEFAULT_ROLES_PATH(/root/workspaces/kubespray/ansible.cfg) = ['/root/workspaces/kubespray/roles', '/root/workspaces/kubespray/kubespray-venv/usr/local/share/k
DEFAULT_STDOUT_CALLBACK(/root/workspaces/kubespray/ansible.cfg) = default
DEPRECATION_WARNINGS(/root/workspaces/kubespray/ansible.cfg) = False
DISPLAY_SKIPPED_HOSTS(/root/workspaces/kubespray/ansible.cfg) = False
HOST_KEY_CHECKING(/root/workspaces/kubespray/ansible.cfg) = False
INVENTORY_IGNORE_EXTS(/root/workspaces/kubespray/ansible.cfg) = ['~', '.orig', '.bak', '.ini', '.cfg', '.retry', '.pyc', '.pyo', '.creds', '.gpg']
INVENTORY_IGNORE_PATTERNS(/root/workspaces/kubespray/ansible.cfg) = ['artifacts', 'credentials']
TRANSFORM_INVALID_GROUP_CHARS(/root/workspaces/kubespray/ansible.cfg) = ignore
BECOME:
======
CACHE:
=====
jsonfile:
________
_timeout(/root/workspaces/kubespray/ansible.cfg) = 86400
_uri(/root/workspaces/kubespray/ansible.cfg) = /tmp
CALLBACK:
========
default:
_______
display_skipped_hosts(/root/workspaces/kubespray/ansible.cfg) = False
CLICONF:
=======
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/root/workspaces/kubespray/ansible.cfg) = False
ssh:
___
host_key_checking(/root/workspaces/kubespray/ansible.cfg) = False
pipelining(/root/workspaces/kubespray/ansible.cfg) = True
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
| node | arch | os |
| -------- | -------- | -------- |
| master1 | x86_64 | CentOS Linux 7 (Core) |
| worker1 | aarch64 | Kylin Linux Advanced Server V10 (Tercel) |
### Steps to Reproduce
```yaml
---
- name: test run_once
hosts: k8s_cluster
gather_facts: False
tags: always
vars:
test_flag: false
os_release_path: /etc/os-release
tasks:
- name: Fetch /etc/os-release
run_once: "{{ test_flag | bool }}"
raw: "cat {{ path_var }}"
register: os_release
loop: "{{ [os_release_path] }}"
loop_control:
loop_var: path_var
- name: Debug print var
debug:
msg: ">> os_release: {{ os_release }}"
```
### Expected Results
```
TASK [Fetch /etc/os-release] *********************************************************************************************************************************
changed: [worker1]
changed: [master1]
Wednesday 10 August 2022 10:49:58 -0400 (0:00:00.406) 0:00:00.457 ******
TASK [Debug print var] ***************************************************************************************************************************************
ok: [master1] => {
"msg": ">> os_release: {'rc': 0, 'stdout': 'NAME=\"CentOS Linux\"\\r\\nVERSION=\"7 (Core)\"\\r\\nID=\"centos\"\\r\\nID_LIKE=\"rhel fedora\"\\r\\nVERSION_ID=\"7\"\\r\\nPRETTY_NAME=\"CentOS Linux 7 (Core)\"\\r\\nANSI_COLOR=\"0;31\"\\r\\nCPE_NAME=\"cpe:/o:centos:centos:7\"\\r\\nHOME_URL=\"https://www.centos.org/\"\\r\\nBUG_REPORT_URL=\"https://bugs.centos.org/\"\\r\\n\\r\\nCENTOS_MANTISBT_PROJECT=\"CentOS-7\"\\r\\nCENTOS_MANTISBT_PROJECT_VERSION=\"7\"\\r\\nREDHAT_SUPPORT_PRODUCT=\"centos\"\\r\\nREDHAT_SUPPORT_PRODUCT_VERSION=\"7\"\\r\\n\\r\\n', 'stdout_lines': ['NAME=\"CentOS Linux\"', 'VERSION=\"7 (Core)\"', 'ID=\"centos\"', 'ID_LIKE=\"rhel fedora\"', 'VERSION_ID=\"7\"', 'PRETTY_NAME=\"CentOS Linux 7 (Core)\"', 'ANSI_COLOR=\"0;31\"', 'CPE_NAME=\"cpe:/o:centos:centos:7\"', 'HOME_URL=\"https://www.centos.org/\"', 'BUG_REPORT_URL=\"https://bugs.centos.org/\"', '', 'CENTOS_MANTISBT_PROJECT=\"CentOS-7\"', 'CENTOS_MANTISBT_PROJECT_VERSION=\"7\"', 'REDHAT_SUPPORT_PRODUCT=\"centos\"', 'REDHAT_SUPPORT_PRODUCT_VERSION=\"7\"', ''], 'stderr': 'Shared connection to 10.6.170.20 closed.\\r\\n', 'stderr_lines': ['Shared connection to 10.6.170.20 closed.'], 'changed': True, 'failed': False}"
}
ok: [worker1] => {
"msg": ">> os_release: {'rc': 0, 'stdout': 'NAME=\"Kylin Linux Advanced Server\"\\r\\nVERSION=\"V10 (Tercel)\"\\r\\nID=\"kylin\"\\r\\nVERSION_ID=\"V10\"\\r\\nPRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"\\r\\nANSI_COLOR=\"0;31\"\\r\\n\\r\\n', 'stdout_lines': ['NAME=\"Kylin Linux Advanced Server\"', 'VERSION=\"V10 (Tercel)\"', 'ID=\"kylin\"', 'VERSION_ID=\"V10\"', 'PRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"', 'ANSI_COLOR=\"0;31\"', ''], 'stderr': '\\nAuthorized users only. All activities may be monitored and reported.\\nShared connection to 172.30.40.199 closed.\\r\\n', 'stderr_lines': ['', 'Authorized users only. All activities may be monitored and reported.', 'Shared connection to 172.30.40.199 closed.'], 'changed': True, 'failed': False}"
}
```
### Actual Results
```console
TASK [Fetch /etc/os-release] *********************************************************************************************************************************
changed: [master1] => (item=/etc/os-release)
changed: [worker1] => (item=/etc/os-release)
Wednesday 10 August 2022 10:47:57 -0400 (0:00:00.372) 0:00:00.429 ******
TASK [Debug print var] ***************************************************************************************************************************************
ok: [master1] => {
"msg": ">> os_release: {'results': [{'rc': 0, 'stdout': 'NAME=\"Kylin Linux Advanced Server\"\\r\\nVERSION=\"V10 (Tercel)\"\\r\\nID=\"kylin\"\\r\\nVERSION_ID=\"V10\"\\r\\nPRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"\\r\\nANSI_COLOR=\"0;31\"\\r\\n\\r\\n', 'stdout_lines': ['NAME=\"Kylin Linux Advanced Server\"', 'VERSION=\"V10 (Tercel)\"', 'ID=\"kylin\"', 'VERSION_ID=\"V10\"', 'PRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"', 'ANSI_COLOR=\"0;31\"', ''], 'stderr': '\\nAuthorized users only. All activities may be monitored and reported.\\nShared connection to 172.30.40.199 closed.\\r\\n', 'stderr_lines': ['', 'Authorized users only. All activities may be monitored and reported.', 'Shared connection to 172.30.40.199 closed.'], 'changed': True, 'failed': False, 'path_var': '/etc/os-release', 'ansible_loop_var': 'path_var'}], 'skipped': False, 'changed': True, 'msg': 'All items completed'}"
}
ok: [worker1] => {
"msg": ">> os_release: {'results': [{'rc': 0, 'stdout': 'NAME=\"Kylin Linux Advanced Server\"\\r\\nVERSION=\"V10 (Tercel)\"\\r\\nID=\"kylin\"\\r\\nVERSION_ID=\"V10\"\\r\\nPRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"\\r\\nANSI_COLOR=\"0;31\"\\r\\n\\r\\n', 'stdout_lines': ['NAME=\"Kylin Linux Advanced Server\"', 'VERSION=\"V10 (Tercel)\"', 'ID=\"kylin\"', 'VERSION_ID=\"V10\"', 'PRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"', 'ANSI_COLOR=\"0;31\"', ''], 'stderr': '\\nAuthorized users only. All activities may be monitored and reported.\\nShared connection to 172.30.40.199 closed.\\r\\n', 'stderr_lines': ['', 'Authorized users only. All activities may be monitored and reported.', 'Shared connection to 172.30.40.199 closed.'], 'changed': True, 'failed': False, 'path_var': '/etc/os-release', 'ansible_loop_var': 'path_var'}], 'skipped': False, 'changed': True, 'msg': 'All items completed'}"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78492
|
https://github.com/ansible/ansible/pull/80051
|
0e509ecf2572aab5f277a13284e29d6c68d596ab
|
043a0f3ee81c6a56b025f4c2f3e939c5d621fba8
| 2022-08-10T15:23:19Z |
python
| 2023-03-31T15:36:44Z |
lib/ansible/plugins/strategy/linear.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: linear
short_description: Executes tasks in a linear fashion
description:
- Task execution is in lockstep per host batch as defined by C(serial) (default all).
Up to the fork limit of hosts will execute each task at the same time and then
the next series of hosts until the batch is done, before going on to the next task.
version_added: "2.0"
notes:
- This was the default Ansible behaviour before 'strategy plugins' were introduced in 2.0.
author: Ansible Core Team
'''
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleAssertionError, AnsibleParserError
from ansible.executor.play_iterator import IteratingStates, FailedStates
from ansible.module_utils._text import to_text
from ansible.playbook.handler import Handler
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task import Task
from ansible.plugins.loader import action_loader
from ansible.plugins.strategy import StrategyBase
from ansible.template import Templar
from ansible.utils.display import Display
display = Display()
class StrategyModule(StrategyBase):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# used for the lockstep to indicate to run handlers
self._in_handlers = False
def _get_next_task_lockstep(self, hosts, iterator):
'''
Returns a list of (host, task) tuples, where the task may
be a noop task to keep the iterator in lock step across
all hosts.
'''
noop_task = Task()
noop_task.action = 'meta'
noop_task.args['_raw_params'] = 'noop'
noop_task.implicit = True
noop_task.set_loader(iterator._play._loader)
state_task_per_host = {}
for host in hosts:
state, task = iterator.get_next_task_for_host(host, peek=True)
if task is not None:
state_task_per_host[host] = state, task
if not state_task_per_host:
return [(h, None) for h in hosts]
if self._in_handlers and not any(filter(
lambda rs: rs == IteratingStates.HANDLERS,
(s.run_state for s, _ in state_task_per_host.values()))
):
self._in_handlers = False
if self._in_handlers:
lowest_cur_handler = min(
s.cur_handlers_task for s, t in state_task_per_host.values()
if s.run_state == IteratingStates.HANDLERS
)
else:
task_uuids = [t._uuid for s, t in state_task_per_host.values()]
_loop_cnt = 0
while _loop_cnt <= 1:
try:
cur_task = iterator.all_tasks[iterator.cur_task]
except IndexError:
# pick up any tasks left after clear_host_errors
iterator.cur_task = 0
_loop_cnt += 1
else:
iterator.cur_task += 1
if cur_task._uuid in task_uuids:
break
else:
# prevent infinite loop
raise AnsibleAssertionError(
'BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates.'
)
host_tasks = []
for host, (state, task) in state_task_per_host.items():
if ((self._in_handlers and lowest_cur_handler == state.cur_handlers_task) or
(not self._in_handlers and cur_task._uuid == task._uuid)):
iterator.set_state_for_host(host.name, state)
host_tasks.append((host, task))
else:
host_tasks.append((host, noop_task))
# once hosts synchronize on 'flush_handlers' lockstep enters
# '_in_handlers' phase where handlers are run instead of tasks
# until at least one host is in IteratingStates.HANDLERS
if (not self._in_handlers and cur_task.action in C._ACTION_META and
cur_task.args.get('_raw_params') == 'flush_handlers'):
self._in_handlers = True
return host_tasks
def run(self, iterator, play_context):
'''
The linear strategy is simple - get the next task and queue
it for all hosts, then wait for the queue to drain before
moving on to the next task
'''
# iterate over each task, while there is one left to run
result = self._tqm.RUN_OK
work_to_do = True
self._set_hosts_cache(iterator._play)
while work_to_do and not self._tqm._terminated:
try:
display.debug("getting the remaining hosts for this loop")
hosts_left = self.get_hosts_left(iterator)
display.debug("done getting the remaining hosts for this loop")
# queue up this task for each host in the inventory
callback_sent = False
work_to_do = False
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
# skip control
skip_rest = False
choose_step = True
# flag set if task is set to any_errors_fatal
any_errors_fatal = False
results = []
for (host, task) in host_tasks:
if not task:
continue
if self._tqm._terminated:
break
run_once = False
work_to_do = True
# check to see if this task should be skipped, due to it being a member of a
# role which has already run (and whether that role allows duplicate execution)
if not isinstance(task, Handler) and task._role:
role_obj = self._get_cached_role(task, iterator._play)
if role_obj.has_run(host) and role_obj._metadata.allow_duplicates is False:
display.debug("'%s' skipped because role has already run" % task)
continue
display.debug("getting variables")
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
display.debug("done getting variables")
# test to see if the task across all hosts points to an action plugin which
# sets BYPASS_HOST_LOOP to true, or if it has run_once enabled. If so, we
# will only send this task to the first host in the list.
task_action = templar.template(task.action)
try:
action = action_loader.get(task_action, class_only=True, collection_list=task.collections)
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
action = None
if task_action in C._ACTION_META:
# for the linear strategy, we run meta tasks just once and for
# all hosts currently being iterated over rather than one host
results.extend(self._execute_meta(task, play_context, iterator, host))
if task.args.get('_raw_params', None) not in ('noop', 'reset_connection', 'end_host', 'role_complete', 'flush_handlers'):
run_once = True
if (task.any_errors_fatal or run_once) and not task.ignore_errors:
any_errors_fatal = True
else:
# handle step if needed, skip meta actions as they are used internally
if self._step and choose_step:
if self._take_step(task):
choose_step = False
else:
skip_rest = True
break
run_once = templar.template(task.run_once) or action and getattr(action, 'BYPASS_HOST_LOOP', False)
if (task.any_errors_fatal or run_once) and not task.ignore_errors:
any_errors_fatal = True
if not callback_sent:
display.debug("sending task start callback, copying the task so we can template it temporarily")
saved_name = task.name
display.debug("done copying, going to template now")
try:
task.name = to_text(templar.template(task.name, fail_on_undefined=False), nonstring='empty')
display.debug("done templating")
except Exception:
# just ignore any errors during task name templating,
# we don't care if it just shows the raw name
display.debug("templating failed for some reason")
display.debug("here goes the callback...")
if isinstance(task, Handler):
self._tqm.send_callback('v2_playbook_on_handler_task_start', task)
else:
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
task.name = saved_name
callback_sent = True
display.debug("sending task start callback")
self._blocked_hosts[host.get_name()] = True
self._queue_task(host, task, task_vars, play_context)
del task_vars
# if we're bypassing the host loop, break out now
if run_once:
break
results.extend(self._process_pending_results(iterator, max_passes=max(1, int(len(self._tqm._workers) * 0.1))))
# go to next host/task group
if skip_rest:
continue
display.debug("done queuing things up, now waiting for results queue to drain")
if self._pending_results > 0:
results.extend(self._wait_on_pending_results(iterator))
self.update_active_connections(results)
included_files = IncludedFile.process_include_results(
results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
if len(included_files) > 0:
display.debug("we have included files to process")
display.debug("generating all_blocks data")
all_blocks = dict((host, []) for host in hosts_left)
display.debug("done generating all_blocks data")
included_tasks = []
failed_includes_hosts = set()
for included_file in included_files:
display.debug("processing included file: %s" % included_file._filename)
is_handler = False
try:
if included_file._is_role:
new_ir = self._copy_included_file(included_file)
new_blocks, handler_blocks = new_ir.get_block_list(
play=iterator._play,
variable_manager=self._variable_manager,
loader=self._loader,
)
else:
is_handler = isinstance(included_file._task, Handler)
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=is_handler)
# let PlayIterator know about any new handlers included via include_role or
# import_role within include_role/include_taks
iterator.handlers = [h for b in iterator._play.handlers for h in b.block]
display.debug("iterating over new_blocks loaded from include file")
for new_block in new_blocks:
if is_handler:
for task in new_block.block:
task.notified_hosts = included_file._hosts[:]
final_block = new_block
else:
task_vars = self._variable_manager.get_vars(
play=iterator._play,
task=new_block.get_first_parent_include(),
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all,
)
display.debug("filtering new block on tags")
final_block = new_block.filter_tagged_tasks(task_vars)
display.debug("done filtering new block on tags")
included_tasks.extend(final_block.get_tasks())
for host in hosts_left:
if host in included_file._hosts:
all_blocks[host].append(final_block)
display.debug("done iterating over new_blocks loaded from include file")
except AnsibleParserError:
raise
except AnsibleError as e:
if included_file._is_role:
# include_role does not have on_include callback so display the error
display.error(to_text(e), wrap_text=False)
for r in included_file._results:
r._result['failed'] = True
failed_includes_hosts.add(r._host)
continue
for host in failed_includes_hosts:
self._tqm._failed_hosts[host.name] = True
iterator.mark_host_failed(host)
# finally go through all of the hosts and append the
# accumulated blocks to their list of tasks
display.debug("extending task lists for all hosts with included blocks")
for host in hosts_left:
iterator.add_tasks(host, all_blocks[host])
iterator.all_tasks[iterator.cur_task:iterator.cur_task] = included_tasks
display.debug("done extending task lists")
display.debug("done processing included files")
display.debug("results queue empty")
display.debug("checking for any_errors_fatal")
failed_hosts = []
unreachable_hosts = []
for res in results:
# execute_meta() does not set 'failed' in the TaskResult
# so we skip checking it with the meta tasks and look just at the iterator
if (res.is_failed() or res._task.action in C._ACTION_META) and iterator.is_failed(res._host):
failed_hosts.append(res._host.name)
elif res.is_unreachable():
unreachable_hosts.append(res._host.name)
# if any_errors_fatal and we had an error, mark all hosts as failed
if any_errors_fatal and (len(failed_hosts) > 0 or len(unreachable_hosts) > 0):
dont_fail_states = frozenset([IteratingStates.RESCUE, IteratingStates.ALWAYS])
for host in hosts_left:
(s, _) = iterator.get_next_task_for_host(host, peek=True)
# the state may actually be in a child state, use the get_active_state()
# method in the iterator to figure out the true active state
s = iterator.get_active_state(s)
if s.run_state not in dont_fail_states or \
s.run_state == IteratingStates.RESCUE and s.fail_state & FailedStates.RESCUE != 0:
self._tqm._failed_hosts[host.name] = True
result |= self._tqm.RUN_FAILED_BREAK_PLAY
display.debug("done checking for any_errors_fatal")
display.debug("checking for max_fail_percentage")
if iterator._play.max_fail_percentage is not None and len(results) > 0:
percentage = iterator._play.max_fail_percentage / 100.0
if (len(self._tqm._failed_hosts) / iterator.batch_size) > percentage:
for host in hosts_left:
# don't double-mark hosts, or the iterator will potentially
# fail them out of the rescue/always states
if host.name not in failed_hosts:
self._tqm._failed_hosts[host.name] = True
iterator.mark_host_failed(host)
self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
result |= self._tqm.RUN_FAILED_BREAK_PLAY
display.debug('(%s failed / %s total )> %s max fail' % (len(self._tqm._failed_hosts), iterator.batch_size, percentage))
display.debug("done checking for max_fail_percentage")
display.debug("checking to see if all hosts have failed and the running result is not ok")
if result != self._tqm.RUN_OK and len(self._tqm._failed_hosts) >= len(hosts_left):
display.debug("^ not ok, so returning result now")
self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
return result
display.debug("done checking to see if all hosts have failed")
except (IOError, EOFError) as e:
display.debug("got IOError/EOFError in task loop: %s" % e)
# most likely an abort, return failed
return self._tqm.RUN_UNKNOWN_ERROR
# run the base class run() method, which executes the cleanup function
# and runs any outstanding handlers which have been triggered
return super(StrategyModule, self).run(iterator, play_context, result)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,492 |
Passing the value 'false' to run_once is invalid in loop task
|
### Summary
Here is some description of my issue: https://github.com/kubernetes-sigs/kubespray/issues/9126#issuecomment-1210807498
When `run_once` and `loop` are used in the same task, and the value of `run_once` is a variable that evaluates to `false`, the task does not behave as expected.
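One general pitfall worth keeping in mind here (an illustration only, not necessarily the exact root cause of this report): once a keyword such as `run_once` is templated, the rendered value is a string, and a non-empty string like `"False"` is truthy unless it is explicitly converted back to a boolean. A minimal, self-contained sketch using plain Jinja2, with a stand-in for Ansible's `bool` filter:
```python
# Illustration of how a templated "false" can still look truthy downstream.
# The 'bool' filter below is a stand-in for Ansible's filter; Jinja2 itself does not provide it.
from jinja2 import Environment

env = Environment()
env.filters['bool'] = lambda v: str(v).lower() in ('1', 'true', 'yes', 'on')

rendered = env.from_string("{{ test_flag | bool }}").render(test_flag=False)
print(repr(rendered))  # 'False' -- templating returns a string
print(bool(rendered))  # True   -- truthy unless converted back to a real boolean
```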
### Issue Type
Bug Report
### Component Name
ansible-playbook,loop,run_once,register
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.5]
config file = /root/workspaces/kubespray/ansible.cfg
configured module search path = ['/root/workspaces/kubespray/library']
ansible python module location = /root/workspaces/kubespray/kubespray-venv/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /root/workspaces/kubespray/kubespray-venv/bin/ansible
python version = 3.9.10 (main, Aug 9 2022, 02:24:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CACHE_PLUGIN(/root/workspaces/kubespray/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/root/workspaces/kubespray/ansible.cfg) = /tmp
CACHE_PLUGIN_TIMEOUT(/root/workspaces/kubespray/ansible.cfg) = 86400
CALLBACKS_ENABLED(/root/workspaces/kubespray/ansible.cfg) = ['profile_tasks', 'ara_default']
DEFAULT_GATHERING(/root/workspaces/kubespray/ansible.cfg) = smart
DEFAULT_MODULE_PATH(/root/workspaces/kubespray/ansible.cfg) = ['/root/workspaces/kubespray/library']
DEFAULT_ROLES_PATH(/root/workspaces/kubespray/ansible.cfg) = ['/root/workspaces/kubespray/roles', '/root/workspaces/kubespray/kubespray-venv/usr/local/share/k
DEFAULT_STDOUT_CALLBACK(/root/workspaces/kubespray/ansible.cfg) = default
DEPRECATION_WARNINGS(/root/workspaces/kubespray/ansible.cfg) = False
DISPLAY_SKIPPED_HOSTS(/root/workspaces/kubespray/ansible.cfg) = False
HOST_KEY_CHECKING(/root/workspaces/kubespray/ansible.cfg) = False
INVENTORY_IGNORE_EXTS(/root/workspaces/kubespray/ansible.cfg) = ['~', '.orig', '.bak', '.ini', '.cfg', '.retry', '.pyc', '.pyo', '.creds', '.gpg']
INVENTORY_IGNORE_PATTERNS(/root/workspaces/kubespray/ansible.cfg) = ['artifacts', 'credentials']
TRANSFORM_INVALID_GROUP_CHARS(/root/workspaces/kubespray/ansible.cfg) = ignore
BECOME:
======
CACHE:
=====
jsonfile:
________
_timeout(/root/workspaces/kubespray/ansible.cfg) = 86400
_uri(/root/workspaces/kubespray/ansible.cfg) = /tmp
CALLBACK:
========
default:
_______
display_skipped_hosts(/root/workspaces/kubespray/ansible.cfg) = False
CLICONF:
=======
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/root/workspaces/kubespray/ansible.cfg) = False
ssh:
___
host_key_checking(/root/workspaces/kubespray/ansible.cfg) = False
pipelining(/root/workspaces/kubespray/ansible.cfg) = True
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
| node | arch | os |
| -------- | -------- | -------- |
| master1 | x86_64 | CentOS Linux 7 (Core) |
| worker1 | aarch64 | Kylin Linux Advanced Server V10 (Tercel) |
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- name: test run_once
hosts: k8s_cluster
gather_facts: False
tags: always
vars:
test_flag: false
os_release_path: /etc/os-release
tasks:
- name: Fetch /etc/os-release
run_once: "{{ test_flag | bool }}"
raw: "cat {{ path_var }}"
register: os_release
loop: "{{ [os_release_path] }}"
loop_control:
loop_var: path_var
- name: Debug print var
debug:
msg: ">> os_release: {{ os_release }}"
```
### Expected Results
```
TASK [Fetch /etc/os-release] *********************************************************************************************************************************
changed: [worker1]
changed: [master1]
Wednesday 10 August 2022 10:49:58 -0400 (0:00:00.406) 0:00:00.457 ******
TASK [Debug print var] ***************************************************************************************************************************************
ok: [master1] => {
"msg": ">> os_release: {'rc': 0, 'stdout': 'NAME=\"CentOS Linux\"\\r\\nVERSION=\"7 (Core)\"\\r\\nID=\"centos\"\\r\\nID_LIKE=\"rhel fedora\"\\r\\nVERSION_ID=\"7\"\\r\\nPRETTY_NAME=\"CentOS Linux 7 (Core)\"\\r\\nANSI_COLOR=\"0;31\"\\r\\nCPE_NAME=\"cpe:/o:centos:centos:7\"\\r\\nHOME_URL=\"https://www.centos.org/\"\\r\\nBUG_REPORT_URL=\"https://bugs.centos.org/\"\\r\\n\\r\\nCENTOS_MANTISBT_PROJECT=\"CentOS-7\"\\r\\nCENTOS_MANTISBT_PROJECT_VERSION=\"7\"\\r\\nREDHAT_SUPPORT_PRODUCT=\"centos\"\\r\\nREDHAT_SUPPORT_PRODUCT_VERSION=\"7\"\\r\\n\\r\\n', 'stdout_lines': ['NAME=\"CentOS Linux\"', 'VERSION=\"7 (Core)\"', 'ID=\"centos\"', 'ID_LIKE=\"rhel fedora\"', 'VERSION_ID=\"7\"', 'PRETTY_NAME=\"CentOS Linux 7 (Core)\"', 'ANSI_COLOR=\"0;31\"', 'CPE_NAME=\"cpe:/o:centos:centos:7\"', 'HOME_URL=\"https://www.centos.org/\"', 'BUG_REPORT_URL=\"https://bugs.centos.org/\"', '', 'CENTOS_MANTISBT_PROJECT=\"CentOS-7\"', 'CENTOS_MANTISBT_PROJECT_VERSION=\"7\"', 'REDHAT_SUPPORT_PRODUCT=\"centos\"', 'REDHAT_SUPPORT_PRODUCT_VERSION=\"7\"', ''], 'stderr': 'Shared connection to 10.6.170.20 closed.\\r\\n', 'stderr_lines': ['Shared connection to 10.6.170.20 closed.'], 'changed': True, 'failed': False}"
}
ok: [worker1] => {
"msg": ">> os_release: {'rc': 0, 'stdout': 'NAME=\"Kylin Linux Advanced Server\"\\r\\nVERSION=\"V10 (Tercel)\"\\r\\nID=\"kylin\"\\r\\nVERSION_ID=\"V10\"\\r\\nPRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"\\r\\nANSI_COLOR=\"0;31\"\\r\\n\\r\\n', 'stdout_lines': ['NAME=\"Kylin Linux Advanced Server\"', 'VERSION=\"V10 (Tercel)\"', 'ID=\"kylin\"', 'VERSION_ID=\"V10\"', 'PRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"', 'ANSI_COLOR=\"0;31\"', ''], 'stderr': '\\nAuthorized users only. All activities may be monitored and reported.\\nShared connection to 172.30.40.199 closed.\\r\\n', 'stderr_lines': ['', 'Authorized users only. All activities may be monitored and reported.', 'Shared connection to 172.30.40.199 closed.'], 'changed': True, 'failed': False}"
}
```
### Actual Results
```console
TASK [Fetch /etc/os-release] *********************************************************************************************************************************
changed: [master1] => (item=/etc/os-release)
changed: [worker1] => (item=/etc/os-release)
Wednesday 10 August 2022 10:47:57 -0400 (0:00:00.372) 0:00:00.429 ******
TASK [Debug print var] ***************************************************************************************************************************************
ok: [master1] => {
"msg": ">> os_release: {'results': [{'rc': 0, 'stdout': 'NAME=\"Kylin Linux Advanced Server\"\\r\\nVERSION=\"V10 (Tercel)\"\\r\\nID=\"kylin\"\\r\\nVERSION_ID=\"V10\"\\r\\nPRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"\\r\\nANSI_COLOR=\"0;31\"\\r\\n\\r\\n', 'stdout_lines': ['NAME=\"Kylin Linux Advanced Server\"', 'VERSION=\"V10 (Tercel)\"', 'ID=\"kylin\"', 'VERSION_ID=\"V10\"', 'PRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"', 'ANSI_COLOR=\"0;31\"', ''], 'stderr': '\\nAuthorized users only. All activities may be monitored and reported.\\nShared connection to 172.30.40.199 closed.\\r\\n', 'stderr_lines': ['', 'Authorized users only. All activities may be monitored and reported.', 'Shared connection to 172.30.40.199 closed.'], 'changed': True, 'failed': False, 'path_var': '/etc/os-release', 'ansible_loop_var': 'path_var'}], 'skipped': False, 'changed': True, 'msg': 'All items completed'}"
}
ok: [worker1] => {
"msg": ">> os_release: {'results': [{'rc': 0, 'stdout': 'NAME=\"Kylin Linux Advanced Server\"\\r\\nVERSION=\"V10 (Tercel)\"\\r\\nID=\"kylin\"\\r\\nVERSION_ID=\"V10\"\\r\\nPRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"\\r\\nANSI_COLOR=\"0;31\"\\r\\n\\r\\n', 'stdout_lines': ['NAME=\"Kylin Linux Advanced Server\"', 'VERSION=\"V10 (Tercel)\"', 'ID=\"kylin\"', 'VERSION_ID=\"V10\"', 'PRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"', 'ANSI_COLOR=\"0;31\"', ''], 'stderr': '\\nAuthorized users only. All activities may be monitored and reported.\\nShared connection to 172.30.40.199 closed.\\r\\n', 'stderr_lines': ['', 'Authorized users only. All activities may be monitored and reported.', 'Shared connection to 172.30.40.199 closed.'], 'changed': True, 'failed': False, 'path_var': '/etc/os-release', 'ansible_loop_var': 'path_var'}], 'skipped': False, 'changed': True, 'msg': 'All items completed'}"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78492
|
https://github.com/ansible/ansible/pull/80051
|
0e509ecf2572aab5f277a13284e29d6c68d596ab
|
043a0f3ee81c6a56b025f4c2f3e939c5d621fba8
| 2022-08-10T15:23:19Z |
python
| 2023-03-31T15:36:44Z |
test/integration/targets/strategy_linear/runme.sh
|
#!/usr/bin/env bash
set -eux
ansible-playbook test_include_file_noop.yml -i inventory "$@"
ansible-playbook task_action_templating.yml -i inventory "$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,492 |
Passing the value 'false' to run_once is invalid in loop task
|
### Summary
Here is some description of my issue: https://github.com/kubernetes-sigs/kubespray/issues/9126#issuecomment-1210807498
When `run_once` and `loop` are used in the same task, and the value of `run_once` is a variable that evaluates to `false`, the task does not behave as expected.
### Issue Type
Bug Report
### Component Name
ansible-playbook,loop,run_once,register
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.5]
config file = /root/workspaces/kubespray/ansible.cfg
configured module search path = ['/root/workspaces/kubespray/library']
ansible python module location = /root/workspaces/kubespray/kubespray-venv/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /root/workspaces/kubespray/kubespray-venv/bin/ansible
python version = 3.9.10 (main, Aug 9 2022, 02:24:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CACHE_PLUGIN(/root/workspaces/kubespray/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/root/workspaces/kubespray/ansible.cfg) = /tmp
CACHE_PLUGIN_TIMEOUT(/root/workspaces/kubespray/ansible.cfg) = 86400
CALLBACKS_ENABLED(/root/workspaces/kubespray/ansible.cfg) = ['profile_tasks', 'ara_default']
DEFAULT_GATHERING(/root/workspaces/kubespray/ansible.cfg) = smart
DEFAULT_MODULE_PATH(/root/workspaces/kubespray/ansible.cfg) = ['/root/workspaces/kubespray/library']
DEFAULT_ROLES_PATH(/root/workspaces/kubespray/ansible.cfg) = ['/root/workspaces/kubespray/roles', '/root/workspaces/kubespray/kubespray-venv/usr/local/share/k
DEFAULT_STDOUT_CALLBACK(/root/workspaces/kubespray/ansible.cfg) = default
DEPRECATION_WARNINGS(/root/workspaces/kubespray/ansible.cfg) = False
DISPLAY_SKIPPED_HOSTS(/root/workspaces/kubespray/ansible.cfg) = False
HOST_KEY_CHECKING(/root/workspaces/kubespray/ansible.cfg) = False
INVENTORY_IGNORE_EXTS(/root/workspaces/kubespray/ansible.cfg) = ['~', '.orig', '.bak', '.ini', '.cfg', '.retry', '.pyc', '.pyo', '.creds', '.gpg']
INVENTORY_IGNORE_PATTERNS(/root/workspaces/kubespray/ansible.cfg) = ['artifacts', 'credentials']
TRANSFORM_INVALID_GROUP_CHARS(/root/workspaces/kubespray/ansible.cfg) = ignore
BECOME:
======
CACHE:
=====
jsonfile:
________
_timeout(/root/workspaces/kubespray/ansible.cfg) = 86400
_uri(/root/workspaces/kubespray/ansible.cfg) = /tmp
CALLBACK:
========
default:
_______
display_skipped_hosts(/root/workspaces/kubespray/ansible.cfg) = False
CLICONF:
=======
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/root/workspaces/kubespray/ansible.cfg) = False
ssh:
___
host_key_checking(/root/workspaces/kubespray/ansible.cfg) = False
pipelining(/root/workspaces/kubespray/ansible.cfg) = True
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
| node | arch | os |
| -------- | -------- | -------- |
| master1 | x86_64 | CentOS Linux 7 (Core) |
| worker1 | aarch64 | Kylin Linux Advanced Server V10 (Tercel) |
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- name: test run_once
hosts: k8s_cluster
gather_facts: False
tags: always
vars:
test_flag: false
os_release_path: /etc/os-release
tasks:
- name: Fetch /etc/os-release
run_once: "{{ test_flag | bool }}"
raw: "cat {{ path_var }}"
register: os_release
loop: "{{ [os_release_path] }}"
loop_control:
loop_var: path_var
- name: Debug print var
debug:
msg: ">> os_release: {{ os_release }}"
```
### Expected Results
```
TASK [Fetch /etc/os-release] *********************************************************************************************************************************
changed: [worker1]
changed: [master1]
Wednesday 10 August 2022 10:49:58 -0400 (0:00:00.406) 0:00:00.457 ******
TASK [Debug print var] ***************************************************************************************************************************************
ok: [master1] => {
"msg": ">> os_release: {'rc': 0, 'stdout': 'NAME=\"CentOS Linux\"\\r\\nVERSION=\"7 (Core)\"\\r\\nID=\"centos\"\\r\\nID_LIKE=\"rhel fedora\"\\r\\nVERSION_ID=\"7\"\\r\\nPRETTY_NAME=\"CentOS Linux 7 (Core)\"\\r\\nANSI_COLOR=\"0;31\"\\r\\nCPE_NAME=\"cpe:/o:centos:centos:7\"\\r\\nHOME_URL=\"https://www.centos.org/\"\\r\\nBUG_REPORT_URL=\"https://bugs.centos.org/\"\\r\\n\\r\\nCENTOS_MANTISBT_PROJECT=\"CentOS-7\"\\r\\nCENTOS_MANTISBT_PROJECT_VERSION=\"7\"\\r\\nREDHAT_SUPPORT_PRODUCT=\"centos\"\\r\\nREDHAT_SUPPORT_PRODUCT_VERSION=\"7\"\\r\\n\\r\\n', 'stdout_lines': ['NAME=\"CentOS Linux\"', 'VERSION=\"7 (Core)\"', 'ID=\"centos\"', 'ID_LIKE=\"rhel fedora\"', 'VERSION_ID=\"7\"', 'PRETTY_NAME=\"CentOS Linux 7 (Core)\"', 'ANSI_COLOR=\"0;31\"', 'CPE_NAME=\"cpe:/o:centos:centos:7\"', 'HOME_URL=\"https://www.centos.org/\"', 'BUG_REPORT_URL=\"https://bugs.centos.org/\"', '', 'CENTOS_MANTISBT_PROJECT=\"CentOS-7\"', 'CENTOS_MANTISBT_PROJECT_VERSION=\"7\"', 'REDHAT_SUPPORT_PRODUCT=\"centos\"', 'REDHAT_SUPPORT_PRODUCT_VERSION=\"7\"', ''], 'stderr': 'Shared connection to 10.6.170.20 closed.\\r\\n', 'stderr_lines': ['Shared connection to 10.6.170.20 closed.'], 'changed': True, 'failed': False}"
}
ok: [worker1] => {
"msg": ">> os_release: {'rc': 0, 'stdout': 'NAME=\"Kylin Linux Advanced Server\"\\r\\nVERSION=\"V10 (Tercel)\"\\r\\nID=\"kylin\"\\r\\nVERSION_ID=\"V10\"\\r\\nPRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"\\r\\nANSI_COLOR=\"0;31\"\\r\\n\\r\\n', 'stdout_lines': ['NAME=\"Kylin Linux Advanced Server\"', 'VERSION=\"V10 (Tercel)\"', 'ID=\"kylin\"', 'VERSION_ID=\"V10\"', 'PRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"', 'ANSI_COLOR=\"0;31\"', ''], 'stderr': '\\nAuthorized users only. All activities may be monitored and reported.\\nShared connection to 172.30.40.199 closed.\\r\\n', 'stderr_lines': ['', 'Authorized users only. All activities may be monitored and reported.', 'Shared connection to 172.30.40.199 closed.'], 'changed': True, 'failed': False}"
}
```
### Actual Results
```console
TASK [Fetch /etc/os-release] *********************************************************************************************************************************
changed: [master1] => (item=/etc/os-release)
changed: [worker1] => (item=/etc/os-release)
Wednesday 10 August 2022 10:47:57 -0400 (0:00:00.372) 0:00:00.429 ******
TASK [Debug print var] ***************************************************************************************************************************************
ok: [master1] => {
"msg": ">> os_release: {'results': [{'rc': 0, 'stdout': 'NAME=\"Kylin Linux Advanced Server\"\\r\\nVERSION=\"V10 (Tercel)\"\\r\\nID=\"kylin\"\\r\\nVERSION_ID=\"V10\"\\r\\nPRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"\\r\\nANSI_COLOR=\"0;31\"\\r\\n\\r\\n', 'stdout_lines': ['NAME=\"Kylin Linux Advanced Server\"', 'VERSION=\"V10 (Tercel)\"', 'ID=\"kylin\"', 'VERSION_ID=\"V10\"', 'PRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"', 'ANSI_COLOR=\"0;31\"', ''], 'stderr': '\\nAuthorized users only. All activities may be monitored and reported.\\nShared connection to 172.30.40.199 closed.\\r\\n', 'stderr_lines': ['', 'Authorized users only. All activities may be monitored and reported.', 'Shared connection to 172.30.40.199 closed.'], 'changed': True, 'failed': False, 'path_var': '/etc/os-release', 'ansible_loop_var': 'path_var'}], 'skipped': False, 'changed': True, 'msg': 'All items completed'}"
}
ok: [worker1] => {
"msg": ">> os_release: {'results': [{'rc': 0, 'stdout': 'NAME=\"Kylin Linux Advanced Server\"\\r\\nVERSION=\"V10 (Tercel)\"\\r\\nID=\"kylin\"\\r\\nVERSION_ID=\"V10\"\\r\\nPRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"\\r\\nANSI_COLOR=\"0;31\"\\r\\n\\r\\n', 'stdout_lines': ['NAME=\"Kylin Linux Advanced Server\"', 'VERSION=\"V10 (Tercel)\"', 'ID=\"kylin\"', 'VERSION_ID=\"V10\"', 'PRETTY_NAME=\"Kylin Linux Advanced Server V10 (Tercel)\"', 'ANSI_COLOR=\"0;31\"', ''], 'stderr': '\\nAuthorized users only. All activities may be monitored and reported.\\nShared connection to 172.30.40.199 closed.\\r\\n', 'stderr_lines': ['', 'Authorized users only. All activities may be monitored and reported.', 'Shared connection to 172.30.40.199 closed.'], 'changed': True, 'failed': False, 'path_var': '/etc/os-release', 'ansible_loop_var': 'path_var'}], 'skipped': False, 'changed': True, 'msg': 'All items completed'}"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78492
|
https://github.com/ansible/ansible/pull/80051
|
0e509ecf2572aab5f277a13284e29d6c68d596ab
|
043a0f3ee81c6a56b025f4c2f3e939c5d621fba8
| 2022-08-10T15:23:19Z |
python
| 2023-03-31T15:36:44Z |
test/integration/targets/strategy_linear/task_templated_run_once.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,303 |
ansible-playbook --list-tags option does not show any tag
|
### Summary
Trying to list tags from a playbook with the --list-tags option shows empty lists for both the "all" TAGS and the "TASK TAGS" output.
### Issue Type
Bug Report
### Component Name
ansible-playbook
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/stannum/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/stannum/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Arch Linux
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible-playbook -i inventory.yaml test.yaml --list-tags
yaml file below:
---
- hosts: all
tasks:
- name: Configuring iscsi directory structure
tags:
- never
- raidix_host
when: inventory_hostname in groups['closed_circuit_kvm_hosts'] and item.state == 'directory'
become: yes
ansible.builtin.file:
path: /etc/iscsi/nodes/{{ item.path }}
state: directory
owner: root
group: root
mode: u=rw,g=,o=
with_community.general.filetree: files/etc/iscsi/nodes/
notify:
- restart iscsi service
- name: Configuring iscsi service
tags:
- never
- raidix_host
when: inventory_hostname in groups['closed_circuit_kvm_hosts'] and item.state == 'file'
become: yes
ansible.builtin.copy:
src: '{{ item.src }}'
dest: /etc/iscsi/nodes/{{ item.path }}
owner: root
group: root
mode: u=rw,g=,o=
with_community.general.filetree: files/etc/iscsi/nodes/
notify:
- restart iscsi service
handlers:
- name: Restarting multipathd
become: yes
ansible.builtin.service:
name: multipathd
enabled: yes
state: restarted
listen: restart multipath service
- name: Restarting iscsi service
become: yes
ansible.builtin.service:
name: iscsi
enabled: yes
state: restarted
notify: restart multipath service
listen: restart iscsi service
```
### Expected Results
Expecting to see the list of tags in square brackets
```
ansible-playbook -i inventory.yaml test.yaml --list-tags
playbook: test.yaml
play #1 (all): all TAGS: ["never", "raidix_host"]
TASK TAGS: ["never", "raidix_host"]
```
### Actual Results
```console
$ ansible-playbook -i inventory.yaml test.yaml --list-tags
playbook: test.yaml
play #1 (all): all TAGS: []
TASK TAGS: []
```
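A minimal, self-contained model of one plausible explanation for the empty output, assuming the tag-collection path shown in the `lib/ansible/cli/playbook.py` dump included later in this document (the names below are illustrative, not the real Ansible objects): tags are only collected from tasks that survive tag filtering, and tasks tagged `never` are filtered out unless one of their tags is requested explicitly, so with no `--tags` given nothing ever reaches the collected set.
```python
# Toy model: why 'never'-tagged tasks contribute no tags to --list-tags output.
requested_tags = {'all'}  # default when no --tags option is passed
tasks = [
    {'name': 'Configuring iscsi directory structure', 'tags': {'never', 'raidix_host'}},
    {'name': 'Configuring iscsi service', 'tags': {'never', 'raidix_host'}},
]

def survives_filtering(task):
    # a 'never'-tagged task is kept only if one of its tags was asked for explicitly
    if 'never' in task['tags']:
        return bool(task['tags'] & (requested_tags - {'all'}))
    return True

all_tags = set()
for task in filter(survives_filtering, tasks):
    all_tags.update(task['tags'])

print(sorted(all_tags))  # [] -> matches the empty "TASK TAGS: []" shown above
```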
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80303
|
https://github.com/ansible/ansible/pull/80309
|
043a0f3ee81c6a56b025f4c2f3e939c5d621fba8
|
4b20191c52721930965ad96e9acca02f0227bc96
| 2023-03-25T14:36:42Z |
python
| 2023-03-31T15:37:17Z |
changelogs/fragments/listalltags.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,303 |
ansible-playbook --list-tags option does not show any tag
|
### Summary
Trying to list tags from a playbook with the --list-tags option shows empty lists for both the "all" TAGS and the "TASK TAGS" output.
### Issue Type
Bug Report
### Component Name
ansible-playbook
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.3]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/stannum/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/stannum/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Arch Linux
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible-playbook -i inventory.yaml test.yaml --list-tags
yaml file below:
---
- hosts: all
tasks:
- name: Configuring iscsi directory structure
tags:
- never
- raidix_host
when: inventory_hostname in groups['closed_circuit_kvm_hosts'] and item.state == 'directory'
become: yes
ansible.builtin.file:
path: /etc/iscsi/nodes/{{ item.path }}
state: directory
owner: root
group: root
mode: u=rw,g=,o=
with_community.general.filetree: files/etc/iscsi/nodes/
notify:
- restart iscsi service
- name: Configuring iscsi service
tags:
- never
- raidix_host
when: inventory_hostname in groups['closed_circuit_kvm_hosts'] and item.state == 'file'
become: yes
ansible.builtin.copy:
src: '{{ item.src }}'
dest: /etc/iscsi/nodes/{{ item.path }}
owner: root
group: root
mode: u=rw,g=,o=
with_community.general.filetree: files/etc/iscsi/nodes/
notify:
- restart iscsi service
handlers:
- name: Restarting multipathd
become: yes
ansible.builtin.service:
name: multipathd
enabled: yes
state: restarted
listen: restart multipath service
- name: Restarting iscsi service
become: yes
ansible.builtin.service:
name: iscsi
enabled: yes
state: restarted
notify: restart multipath service
listen: restart iscsi service
```
### Expected Results
Expecting to see the list of tags in square brackets
```
ansible-playbook -i inventory.yaml test.yaml --list-tags
playbook: test.yaml
play #1 (all): all TAGS: ["never", "raidix_host"]
TASK TAGS: ["never", "raidix_host"]
```
### Actual Results
```console
$ ansible-playbook -i inventory.yaml test.yaml --list-tags
playbook: test.yaml
play #1 (all): all TAGS: []
TASK TAGS: []
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80303
|
https://github.com/ansible/ansible/pull/80309
|
043a0f3ee81c6a56b025f4c2f3e939c5d621fba8
|
4b20191c52721930965ad96e9acca02f0227bc96
| 2023-03-25T14:36:42Z |
python
| 2023-03-31T15:37:17Z |
lib/ansible/cli/playbook.py
|
#!/usr/bin/env python
# (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI
import os
import stat
from ansible import constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.module_utils._text import to_bytes
from ansible.playbook.block import Block
from ansible.plugins.loader import add_all_plugin_dirs
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path, _get_collection_playbook_path
from ansible.utils.display import Display
display = Display()
class PlaybookCLI(CLI):
''' the tool to run *Ansible playbooks*, which are a configuration and multinode deployment system.
See the project home page (https://docs.ansible.com) for more information. '''
name = 'ansible-playbook'
def init_parser(self):
# create parser for CLI options
super(PlaybookCLI, self).init_parser(
usage="%prog [options] playbook.yml [playbook2 ...]",
desc="Runs Ansible playbooks, executing the defined tasks on the targeted hosts.")
opt_help.add_connect_options(self.parser)
opt_help.add_meta_options(self.parser)
opt_help.add_runas_options(self.parser)
opt_help.add_subset_options(self.parser)
opt_help.add_check_options(self.parser)
opt_help.add_inventory_options(self.parser)
opt_help.add_runtask_options(self.parser)
opt_help.add_vault_options(self.parser)
opt_help.add_fork_options(self.parser)
opt_help.add_module_options(self.parser)
# ansible playbook specific opts
self.parser.add_argument('--list-tasks', dest='listtasks', action='store_true',
help="list all tasks that would be executed")
self.parser.add_argument('--list-tags', dest='listtags', action='store_true',
help="list all available tags")
self.parser.add_argument('--step', dest='step', action='store_true',
help="one-step-at-a-time: confirm each task before running")
self.parser.add_argument('--start-at-task', dest='start_at_task',
help="start the playbook at the task matching this name")
self.parser.add_argument('args', help='Playbook(s)', metavar='playbook', nargs='+')
def post_process_args(self, options):
options = super(PlaybookCLI, self).post_process_args(options)
display.verbosity = options.verbosity
self.validate_conflicts(options, runas_opts=True, fork_opts=True)
return options
def run(self):
super(PlaybookCLI, self).run()
# Note: slightly wrong, this is written so that implicit localhost
# manages passwords
sshpass = None
becomepass = None
passwords = {}
# initial error check, to make sure all specified playbooks are accessible
# before we start running anything through the playbook executor
# also prep plugin paths
b_playbook_dirs = []
for playbook in context.CLIARGS['args']:
# resolve if it is collection playbook with FQCN notation, if not, leaves unchanged
resource = _get_collection_playbook_path(playbook)
if resource is not None:
playbook_collection = resource[2]
else:
# not an FQCN so must be a file
if not os.path.exists(playbook):
raise AnsibleError("the playbook: %s could not be found" % playbook)
if not (os.path.isfile(playbook) or stat.S_ISFIFO(os.stat(playbook).st_mode)):
raise AnsibleError("the playbook: %s does not appear to be a file" % playbook)
# check if playbook is from collection (path can be passed directly)
playbook_collection = _get_collection_name_from_path(playbook)
# don't add collection playbooks to adjacency search path
if not playbook_collection:
# setup dirs to enable loading plugins from all playbooks in case they add callbacks/inventory/etc
b_playbook_dir = os.path.dirname(os.path.abspath(to_bytes(playbook, errors='surrogate_or_strict')))
add_all_plugin_dirs(b_playbook_dir)
b_playbook_dirs.append(b_playbook_dir)
if b_playbook_dirs:
# allow collections adjacent to these playbooks
# we use list copy to avoid opening up 'adjacency' in the previous loop
AnsibleCollectionConfig.playbook_paths = b_playbook_dirs
# don't deal with privilege escalation or passwords when we don't need to
if not (context.CLIARGS['listhosts'] or context.CLIARGS['listtasks'] or
context.CLIARGS['listtags'] or context.CLIARGS['syntax']):
(sshpass, becomepass) = self.ask_passwords()
passwords = {'conn_pass': sshpass, 'become_pass': becomepass}
# create base objects
loader, inventory, variable_manager = self._play_prereqs()
# (which is not returned in list_hosts()) is taken into account for
# warning if inventory is empty. But it can't be taken into account for
# checking if limit doesn't match any hosts. Instead we don't worry about
# limit if only implicit localhost was in inventory to start with.
#
# Fix this when we rewrite inventory by making localhost a real host (and thus show up in list_hosts())
CLI.get_host_list(inventory, context.CLIARGS['subset'])
# flush fact cache if requested
if context.CLIARGS['flush_cache']:
self._flush_cache(inventory, variable_manager)
# create the playbook executor, which manages running the plays via a task queue manager
pbex = PlaybookExecutor(playbooks=context.CLIARGS['args'], inventory=inventory,
variable_manager=variable_manager, loader=loader,
passwords=passwords)
results = pbex.run()
if isinstance(results, list):
for p in results:
display.display('\nplaybook: %s' % p['playbook'])
for idx, play in enumerate(p['plays']):
if play._included_path is not None:
loader.set_basedir(play._included_path)
else:
pb_dir = os.path.realpath(os.path.dirname(p['playbook']))
loader.set_basedir(pb_dir)
# show host list if we were able to template into a list
try:
host_list = ','.join(play.hosts)
except TypeError:
host_list = ''
msg = "\n play #%d (%s): %s" % (idx + 1, host_list, play.name)
mytags = set(play.tags)
msg += '\tTAGS: [%s]' % (','.join(mytags))
if context.CLIARGS['listhosts']:
playhosts = set(inventory.get_hosts(play.hosts))
msg += "\n pattern: %s\n hosts (%d):" % (play.hosts, len(playhosts))
for host in playhosts:
msg += "\n %s" % host
display.display(msg)
all_tags = set()
if context.CLIARGS['listtags'] or context.CLIARGS['listtasks']:
taskmsg = ''
if context.CLIARGS['listtasks']:
taskmsg = ' tasks:\n'
def _process_block(b):
taskmsg = ''
for task in b.block:
if isinstance(task, Block):
taskmsg += _process_block(task)
else:
if task.action in C._ACTION_META and task.implicit:
continue
all_tags.update(task.tags)
if context.CLIARGS['listtasks']:
cur_tags = list(mytags.union(set(task.tags)))
cur_tags.sort()
if task.name:
taskmsg += " %s" % task.get_name()
else:
taskmsg += " %s" % task.action
taskmsg += "\tTAGS: [%s]\n" % ', '.join(cur_tags)
return taskmsg
all_vars = variable_manager.get_vars(play=play)
for block in play.compile():
block = block.filter_tagged_tasks(all_vars)
if not block.has_tasks():
continue
taskmsg += _process_block(block)
if context.CLIARGS['listtags']:
cur_tags = list(mytags.union(all_tags))
cur_tags.sort()
taskmsg += " TASK TAGS: [%s]\n" % ', '.join(cur_tags)
display.display(taskmsg)
return 0
else:
return results
@staticmethod
def _flush_cache(inventory, variable_manager):
for host in inventory.list_hosts():
hostname = host.get_name()
variable_manager.clear_facts(hostname)
def main(args=None):
PlaybookCLI.cli_executor(args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,844 |
gather_timeout does not take effect for ansible.builtin.setup in get_mount_facts
|
### Summary
With a simple playbook:
```yaml
---
- hosts: all
gather_facts: false
tasks:
- name: Test setup
ansible.builtin.setup:
gather_subset:
- mounts
gather_timeout: 600
```
```gather_timeout``` never takes effect in the function get_mount_facts in ansible/module_utils/facts/hardware/linux.py.
The problem seems to point to this line:
```python
maxtime = globals().get('GATHER_TIMEOUT') or timeout.DEFAULT_GATHER_TIMEOUT
```
```globals().get('GATHER_TIMEOUT')``` is always ```None``` because ```GATHER_TIMEOUT``` is set on the timeout module (assigned from ansible/module_utils/facts/collector.py), so the global is ```timeout.GATHER_TIMEOUT``` and not ```GATHER_TIMEOUT```.
Attached are two screenshots from a vscode debug session of the AnsiballZ_setup.py from the playbook above.
<img width="949" alt="ansible_setup_timeout_linux" src="https://user-images.githubusercontent.com/1870021/215522822-fd1f8541-907d-4f1f-9dba-c8cec299514e.png">
<img width="1008" alt="ansible_setup_timeout_collector" src="https://user-images.githubusercontent.com/1870021/215522828-a9e3ea51-0da3-4497-aee6-c58ce7f7c801.png">
### Issue Type
Bug Report
### Component Name
setup
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = /home/dsg4269/.ansible.cfg
configured module search path = ['/home/dsg4269/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/dsg4269/dev/git/test_timeout_fact_mount/.venv/lib64/python3.11/site-packages/ansible
ansible collection location = /home/dsg4269/.ansible/collections:/usr/share/ansible/collections
executable location = /home/dsg4269/dev/git/test_timeout_fact_mount/.venv/bin/ansible
python version = 3.11.1 (main, Jan 6 2023, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)] (/home/dsg4269/dev/git/test_timeout_fact_mount/.venv/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Reproduced on RHEL 7.9 and WSL Fedora 37
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: all
gather_facts: false
tasks:
- name: Test setup
ansible.builtin.setup:
gather_subset:
- mounts
gather_timeout: 600
```
### Expected Results
I expected gather_timeout to take effect in the setup module for the mounts subset.
### Actual Results
```console
Fact gathering times out on systems with a lot of filesystems:
{
"device": "/dev/mapper/VolGroup00-lv_varlog",
"fstype": "xfs",
"mount": "/var/log",
"note": "Timed out while attempting to get extra information.",
"options": "rw,seclabel,nodev,relatime,attr2,inode64,noquota"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79844
|
https://github.com/ansible/ansible/pull/79847
|
5e131a96c086eda58436429a417c8e7cf256602b
|
c1e19e4bddd13425aba45733372f2a676506256c
| 2023-01-30T15:49:46Z |
python
| 2023-04-04T15:02:41Z |
changelogs/fragments/79844-fix-timeout-mounts-linux.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,844 |
gather_timeout does not take effect for ansible.builtin.setup in get_mount_facts
|
### Summary
With a simple playbook:
```yaml
---
- hosts: all
gather_facts: false
tasks:
- name: Test setup
ansible.builtin.setup:
gather_subset:
- mounts
gather_timeout: 600
```
```gather_timeout``` never takes effect in the function get_mount_facts in ansible/module_utils/facts/hardware/linux.py.
The problem seems to point to this line:
```python
maxtime = globals().get('GATHER_TIMEOUT') or timeout.DEFAULT_GATHER_TIMEOUT
```
```globals().get('GATHER_TIMEOUT')``` is always ```None``` because ```GATHER_TIMEOUT``` is set on the timeout module (assigned from ansible/module_utils/facts/collector.py), so the global is ```timeout.GATHER_TIMEOUT``` and not ```GATHER_TIMEOUT```.
Attached are two screenshots from a vscode debug session of the AnsiballZ_setup.py from the playbook above.
<img width="949" alt="ansible_setup_timeout_linux" src="https://user-images.githubusercontent.com/1870021/215522822-fd1f8541-907d-4f1f-9dba-c8cec299514e.png">
<img width="1008" alt="ansible_setup_timeout_collector" src="https://user-images.githubusercontent.com/1870021/215522828-a9e3ea51-0da3-4497-aee6-c58ce7f7c801.png">
### Issue Type
Bug Report
### Component Name
setup
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = /home/dsg4269/.ansible.cfg
configured module search path = ['/home/dsg4269/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/dsg4269/dev/git/test_timeout_fact_mount/.venv/lib64/python3.11/site-packages/ansible
ansible collection location = /home/dsg4269/.ansible/collections:/usr/share/ansible/collections
executable location = /home/dsg4269/dev/git/test_timeout_fact_mount/.venv/bin/ansible
python version = 3.11.1 (main, Jan 6 2023, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)] (/home/dsg4269/dev/git/test_timeout_fact_mount/.venv/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Reproduced on RHEL 7.9 and WSL Fedora 37
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: all
gather_facts: false
tasks:
- name: Test setup
ansible.builtin.setup:
gather_subset:
- mounts
gather_timeout: 600
```
### Expected Results
I expected gather_timeout to take effect in the setup module for the mounts subset.
### Actual Results
```console
Fact gathering times out on systems with a lot of filesystems:
{
"device": "/dev/mapper/VolGroup00-lv_varlog",
"fstype": "xfs",
"mount": "/var/log",
"note": "Timed out while attempting to get extra information.",
"options": "rw,seclabel,nodev,relatime,attr2,inode64,noquota"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79844
|
https://github.com/ansible/ansible/pull/79847
|
5e131a96c086eda58436429a417c8e7cf256602b
|
c1e19e4bddd13425aba45733372f2a676506256c
| 2023-01-30T15:49:46Z |
python
| 2023-04-04T15:02:41Z |
lib/ansible/module_utils/facts/hardware/linux.py
|
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import collections
import errno
import glob
import json
import os
import re
import sys
import time
from multiprocessing import cpu_count
from multiprocessing.pool import ThreadPool
from ansible.module_utils._text import to_text
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.text.formatters import bytes_to_human
from ansible.module_utils.facts.hardware.base import Hardware, HardwareCollector
from ansible.module_utils.facts.utils import get_file_content, get_file_lines, get_mount_size
from ansible.module_utils.six import iteritems
# import this as a module to ensure we get the same module instance
from ansible.module_utils.facts import timeout
def get_partition_uuid(partname):
try:
uuids = os.listdir("/dev/disk/by-uuid")
except OSError:
return
for uuid in uuids:
dev = os.path.realpath("/dev/disk/by-uuid/" + uuid)
if dev == ("/dev/" + partname):
return uuid
return None
class LinuxHardware(Hardware):
"""
Linux-specific subclass of Hardware. Defines memory and CPU facts:
- memfree_mb
- memtotal_mb
- swapfree_mb
- swaptotal_mb
- processor (a list)
- processor_cores
- processor_count
In addition, it also defines number of DMI facts and device facts.
"""
platform = 'Linux'
# Originally only had these four as toplevelfacts
ORIGINAL_MEMORY_FACTS = frozenset(('MemTotal', 'SwapTotal', 'MemFree', 'SwapFree'))
# Now we have all of these in a dict structure
MEMORY_FACTS = ORIGINAL_MEMORY_FACTS.union(('Buffers', 'Cached', 'SwapCached'))
# regex used against findmnt output to detect bind mounts
BIND_MOUNT_RE = re.compile(r'.*\]')
# regex used against mtab content to find entries that are bind mounts
MTAB_BIND_MOUNT_RE = re.compile(r'.*bind.*"')
# regex used for replacing octal escape sequences
OCTAL_ESCAPE_RE = re.compile(r'\\[0-9]{3}')
def populate(self, collected_facts=None):
hardware_facts = {}
locale = get_best_parsable_locale(self.module)
self.module.run_command_environ_update = {'LANG': locale, 'LC_ALL': locale, 'LC_NUMERIC': locale}
cpu_facts = self.get_cpu_facts(collected_facts=collected_facts)
memory_facts = self.get_memory_facts()
dmi_facts = self.get_dmi_facts()
device_facts = self.get_device_facts()
uptime_facts = self.get_uptime_facts()
lvm_facts = self.get_lvm_facts()
mount_facts = {}
try:
mount_facts = self.get_mount_facts()
except timeout.TimeoutError:
self.module.warn("No mount facts were gathered due to timeout.")
hardware_facts.update(cpu_facts)
hardware_facts.update(memory_facts)
hardware_facts.update(dmi_facts)
hardware_facts.update(device_facts)
hardware_facts.update(uptime_facts)
hardware_facts.update(lvm_facts)
hardware_facts.update(mount_facts)
return hardware_facts
def get_memory_facts(self):
memory_facts = {}
if not os.access("/proc/meminfo", os.R_OK):
return memory_facts
memstats = {}
for line in get_file_lines("/proc/meminfo"):
data = line.split(":", 1)
key = data[0]
if key in self.ORIGINAL_MEMORY_FACTS:
val = data[1].strip().split(' ')[0]
memory_facts["%s_mb" % key.lower()] = int(val) // 1024
if key in self.MEMORY_FACTS:
val = data[1].strip().split(' ')[0]
memstats[key.lower()] = int(val) // 1024
if None not in (memstats.get('memtotal'), memstats.get('memfree')):
memstats['real:used'] = memstats['memtotal'] - memstats['memfree']
if None not in (memstats.get('cached'), memstats.get('memfree'), memstats.get('buffers')):
memstats['nocache:free'] = memstats['cached'] + memstats['memfree'] + memstats['buffers']
if None not in (memstats.get('memtotal'), memstats.get('nocache:free')):
memstats['nocache:used'] = memstats['memtotal'] - memstats['nocache:free']
if None not in (memstats.get('swaptotal'), memstats.get('swapfree')):
memstats['swap:used'] = memstats['swaptotal'] - memstats['swapfree']
memory_facts['memory_mb'] = {
'real': {
'total': memstats.get('memtotal'),
'used': memstats.get('real:used'),
'free': memstats.get('memfree'),
},
'nocache': {
'free': memstats.get('nocache:free'),
'used': memstats.get('nocache:used'),
},
'swap': {
'total': memstats.get('swaptotal'),
'free': memstats.get('swapfree'),
'used': memstats.get('swap:used'),
'cached': memstats.get('swapcached'),
},
}
return memory_facts
def get_cpu_facts(self, collected_facts=None):
cpu_facts = {}
collected_facts = collected_facts or {}
i = 0
vendor_id_occurrence = 0
model_name_occurrence = 0
processor_occurrence = 0
physid = 0
coreid = 0
sockets = {}
cores = {}
zp = 0
zmt = 0
xen = False
xen_paravirt = False
try:
if os.path.exists('/proc/xen'):
xen = True
else:
for line in get_file_lines('/sys/hypervisor/type'):
if line.strip() == 'xen':
xen = True
# Only interested in the first line
break
except IOError:
pass
if not os.access("/proc/cpuinfo", os.R_OK):
return cpu_facts
cpu_facts['processor'] = []
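# /proc/cpuinfo entries are "key : value" pairs, for example "model name : Intel(R) Xeon(R) CPU E5-2680" on x86;
# the exact keys vary by architecture, which the checks below account for.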
for line in get_file_lines('/proc/cpuinfo'):
data = line.split(":", 1)
key = data[0].strip()
try:
val = data[1].strip()
except IndexError:
val = ""
if xen:
if key == 'flags':
# Check for vme cpu flag, Xen paravirt does not expose this.
# Need to detect Xen paravirt because it exposes cpuinfo
# differently than Xen HVM or KVM and causes reporting of
# only a single cpu core.
if 'vme' not in val:
xen_paravirt = True
# model name is for Intel arch, Processor (mind the uppercase P)
# works for some ARM devices, like the Sheevaplug.
if key in ['model name', 'Processor', 'vendor_id', 'cpu', 'Vendor', 'processor']:
if 'processor' not in cpu_facts:
cpu_facts['processor'] = []
cpu_facts['processor'].append(val)
if key == 'vendor_id':
vendor_id_occurrence += 1
if key == 'model name':
model_name_occurrence += 1
if key == 'processor':
processor_occurrence += 1
i += 1
elif key == 'physical id':
physid = val
if physid not in sockets:
sockets[physid] = 1
elif key == 'core id':
coreid = val
if coreid not in sockets:
cores[coreid] = 1
elif key == 'cpu cores':
sockets[physid] = int(val)
elif key == 'siblings':
cores[coreid] = int(val)
# S390x classic cpuinfo
elif key == '# processors':
zp = int(val)
elif key == 'max thread id':
zmt = int(val) + 1
# SPARC
elif key == 'ncpus active':
i = int(val)
# Skip for platforms without vendor_id/model_name in cpuinfo (e.g ppc64le)
if vendor_id_occurrence > 0:
if vendor_id_occurrence == model_name_occurrence:
i = vendor_id_occurrence
# The fields for ARM CPUs do not always include 'vendor_id' or 'model name',
# and sometimes includes both 'processor' and 'Processor'.
# The fields for Power CPUs include 'processor' and 'cpu'.
# Always use 'processor' count for ARM and Power systems
if collected_facts.get('ansible_architecture', '').startswith(('armv', 'aarch', 'ppc')):
i = processor_occurrence
if collected_facts.get('ansible_architecture') == 's390x':
# getting sockets would require 5.7+ with CONFIG_SCHED_TOPOLOGY
cpu_facts['processor_count'] = 1
cpu_facts['processor_cores'] = zp // zmt
cpu_facts['processor_threads_per_core'] = zmt
cpu_facts['processor_vcpus'] = zp
cpu_facts['processor_nproc'] = zp
else:
if xen_paravirt:
cpu_facts['processor_count'] = i
cpu_facts['processor_cores'] = i
cpu_facts['processor_threads_per_core'] = 1
cpu_facts['processor_vcpus'] = i
cpu_facts['processor_nproc'] = i
else:
if sockets:
cpu_facts['processor_count'] = len(sockets)
else:
cpu_facts['processor_count'] = i
socket_values = list(sockets.values())
if socket_values and socket_values[0]:
cpu_facts['processor_cores'] = socket_values[0]
else:
cpu_facts['processor_cores'] = 1
core_values = list(cores.values())
if core_values:
cpu_facts['processor_threads_per_core'] = core_values[0] // cpu_facts['processor_cores']
else:
cpu_facts['processor_threads_per_core'] = 1 // cpu_facts['processor_cores']
cpu_facts['processor_vcpus'] = (cpu_facts['processor_threads_per_core'] *
cpu_facts['processor_count'] * cpu_facts['processor_cores'])
cpu_facts['processor_nproc'] = processor_occurrence
# if the number of processors available to the module's
# thread cannot be determined, the processor count
# reported by /proc will be the default (as previously defined)
try:
cpu_facts['processor_nproc'] = len(
os.sched_getaffinity(0)
)
except AttributeError:
# In Python < 3.3, os.sched_getaffinity() is not available
try:
cmd = get_bin_path('nproc')
except ValueError:
pass
else:
rc, out, _err = self.module.run_command(cmd)
if rc == 0:
cpu_facts['processor_nproc'] = int(out)
return cpu_facts
def get_dmi_facts(self):
''' learn dmi facts from system
Try /sys first for dmi related facts.
If that is not available, fall back to dmidecode executable '''
dmi_facts = {}
if os.path.exists('/sys/devices/virtual/dmi/id/product_name'):
# Use kernel DMI info, if available
# DMI SPEC -- https://www.dmtf.org/sites/default/files/standards/documents/DSP0134_3.2.0.pdf
FORM_FACTOR = ["Unknown", "Other", "Unknown", "Desktop",
"Low Profile Desktop", "Pizza Box", "Mini Tower", "Tower",
"Portable", "Laptop", "Notebook", "Hand Held", "Docking Station",
"All In One", "Sub Notebook", "Space-saving", "Lunch Box",
"Main Server Chassis", "Expansion Chassis", "Sub Chassis",
"Bus Expansion Chassis", "Peripheral Chassis", "RAID Chassis",
"Rack Mount Chassis", "Sealed-case PC", "Multi-system",
"CompactPCI", "AdvancedTCA", "Blade", "Blade Enclosure",
"Tablet", "Convertible", "Detachable", "IoT Gateway",
"Embedded PC", "Mini PC", "Stick PC"]
DMI_DICT = {
'bios_date': '/sys/devices/virtual/dmi/id/bios_date',
'bios_vendor': '/sys/devices/virtual/dmi/id/bios_vendor',
'bios_version': '/sys/devices/virtual/dmi/id/bios_version',
'board_asset_tag': '/sys/devices/virtual/dmi/id/board_asset_tag',
'board_name': '/sys/devices/virtual/dmi/id/board_name',
'board_serial': '/sys/devices/virtual/dmi/id/board_serial',
'board_vendor': '/sys/devices/virtual/dmi/id/board_vendor',
'board_version': '/sys/devices/virtual/dmi/id/board_version',
'chassis_asset_tag': '/sys/devices/virtual/dmi/id/chassis_asset_tag',
'chassis_serial': '/sys/devices/virtual/dmi/id/chassis_serial',
'chassis_vendor': '/sys/devices/virtual/dmi/id/chassis_vendor',
'chassis_version': '/sys/devices/virtual/dmi/id/chassis_version',
'form_factor': '/sys/devices/virtual/dmi/id/chassis_type',
'product_name': '/sys/devices/virtual/dmi/id/product_name',
'product_serial': '/sys/devices/virtual/dmi/id/product_serial',
'product_uuid': '/sys/devices/virtual/dmi/id/product_uuid',
'product_version': '/sys/devices/virtual/dmi/id/product_version',
'system_vendor': '/sys/devices/virtual/dmi/id/sys_vendor',
}
for (key, path) in DMI_DICT.items():
data = get_file_content(path)
if data is not None:
if key == 'form_factor':
try:
dmi_facts['form_factor'] = FORM_FACTOR[int(data)]
except IndexError:
dmi_facts['form_factor'] = 'unknown (%s)' % data
else:
dmi_facts[key] = data
else:
dmi_facts[key] = 'NA'
else:
# Fall back to using dmidecode, if available
dmi_bin = self.module.get_bin_path('dmidecode')
DMI_DICT = {
'bios_date': 'bios-release-date',
'bios_vendor': 'bios-vendor',
'bios_version': 'bios-version',
'board_asset_tag': 'baseboard-asset-tag',
'board_name': 'baseboard-product-name',
'board_serial': 'baseboard-serial-number',
'board_vendor': 'baseboard-manufacturer',
'board_version': 'baseboard-version',
'chassis_asset_tag': 'chassis-asset-tag',
'chassis_serial': 'chassis-serial-number',
'chassis_vendor': 'chassis-manufacturer',
'chassis_version': 'chassis-version',
'form_factor': 'chassis-type',
'product_name': 'system-product-name',
'product_serial': 'system-serial-number',
'product_uuid': 'system-uuid',
'product_version': 'system-version',
'system_vendor': 'system-manufacturer',
}
for (k, v) in DMI_DICT.items():
if dmi_bin is not None:
(rc, out, err) = self.module.run_command('%s -s %s' % (dmi_bin, v))
if rc == 0:
# Strip out commented lines (specific dmidecode output)
thisvalue = ''.join([line for line in out.splitlines() if not line.startswith('#')])
try:
json.dumps(thisvalue)
except UnicodeDecodeError:
thisvalue = "NA"
dmi_facts[k] = thisvalue
else:
dmi_facts[k] = 'NA'
else:
dmi_facts[k] = 'NA'
return dmi_facts
def _run_lsblk(self, lsblk_path):
# call lsblk and collect all uuids
# --exclude 2 makes lsblk ignore floppy disks, which are slower to answer than typical timeouts
# this uses the linux major device number
# for details see https://www.kernel.org/doc/Documentation/devices.txt
args = ['--list', '--noheadings', '--paths', '--output', 'NAME,UUID', '--exclude', '2']
cmd = [lsblk_path] + args
rc, out, err = self.module.run_command(cmd)
return rc, out, err
def _lsblk_uuid(self):
uuids = {}
lsblk_path = self.module.get_bin_path("lsblk")
if not lsblk_path:
return uuids
rc, out, err = self._run_lsblk(lsblk_path)
if rc != 0:
return uuids
# each line will be in format:
# <devicename><some whitespace><uuid>
# /dev/sda1 32caaec3-ef40-4691-a3b6-438c3f9bc1c0
for lsblk_line in out.splitlines():
if not lsblk_line:
continue
line = lsblk_line.strip()
fields = line.rsplit(None, 1)
if len(fields) < 2:
continue
device_name, uuid = fields[0].strip(), fields[1].strip()
if device_name in uuids:
continue
uuids[device_name] = uuid
return uuids
def _udevadm_uuid(self, device):
# fallback for versions of lsblk <= 2.23 that don't have --paths, see _run_lsblk() above
uuid = 'N/A'
udevadm_path = self.module.get_bin_path('udevadm')
if not udevadm_path:
return uuid
cmd = [udevadm_path, 'info', '--query', 'property', '--name', device]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
return uuid
# a snippet of the output of the udevadm command below will be:
# ...
# ID_FS_TYPE=ext4
# ID_FS_USAGE=filesystem
# ID_FS_UUID=57b1a3e7-9019-4747-9809-7ec52bba9179
# ...
m = re.search('ID_FS_UUID=(.*)\n', out)
if m:
uuid = m.group(1)
return uuid
def _run_findmnt(self, findmnt_path):
args = ['--list', '--noheadings', '--notruncate']
cmd = [findmnt_path] + args
rc, out, err = self.module.run_command(cmd, errors='surrogate_then_replace')
return rc, out, err
def _find_bind_mounts(self):
bind_mounts = set()
findmnt_path = self.module.get_bin_path("findmnt")
if not findmnt_path:
return bind_mounts
rc, out, err = self._run_findmnt(findmnt_path)
if rc != 0:
return bind_mounts
# find bind mounts, in case /etc/mtab is a symlink to /proc/mounts
for line in out.splitlines():
fields = line.split()
# fields[0] is the TARGET, fields[1] is the SOURCE
if len(fields) < 2:
continue
# bind mounts will have a [/directory_name] in the SOURCE column
if self.BIND_MOUNT_RE.match(fields[1]):
bind_mounts.add(fields[0])
return bind_mounts
def _mtab_entries(self):
mtab_file = '/etc/mtab'
if not os.path.exists(mtab_file):
mtab_file = '/proc/mounts'
mtab = get_file_content(mtab_file, '')
mtab_entries = []
for line in mtab.splitlines():
fields = line.split()
if len(fields) < 4:
continue
mtab_entries.append(fields)
return mtab_entries
@staticmethod
def _replace_octal_escapes_helper(match):
# Convert to integer using base8 and then convert to character
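# e.g. a mount point containing a space appears in /etc/mtab as "\040" and is restored to " " here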
return chr(int(match.group()[1:], 8))
def _replace_octal_escapes(self, value):
return self.OCTAL_ESCAPE_RE.sub(self._replace_octal_escapes_helper, value)
def get_mount_info(self, mount, device, uuids):
mount_size = get_mount_size(mount)
# _udevadm_uuid is a fallback for versions of lsblk <= 2.23 that don't have --paths
# see _run_lsblk() above
# https://github.com/ansible/ansible/issues/36077
uuid = uuids.get(device, self._udevadm_uuid(device))
return mount_size, uuid
def get_mount_facts(self):
mounts = []
# gather system lists
bind_mounts = self._find_bind_mounts()
uuids = self._lsblk_uuid()
mtab_entries = self._mtab_entries()
# start threads to query each mount
results = {}
pool = ThreadPool(processes=min(len(mtab_entries), cpu_count()))
maxtime = globals().get('GATHER_TIMEOUT') or timeout.DEFAULT_GATHER_TIMEOUT
for fields in mtab_entries:
# Transform octal escape sequences
fields = [self._replace_octal_escapes(field) for field in fields]
device, mount, fstype, options = fields[0], fields[1], fields[2], fields[3]
if not device.startswith(('/', '\\')) and ':/' not in device or fstype == 'none':
continue
mount_info = {'mount': mount,
'device': device,
'fstype': fstype,
'options': options}
if mount in bind_mounts:
# only add if not already there, we might have a plain /etc/mtab
if not self.MTAB_BIND_MOUNT_RE.match(options):
mount_info['options'] += ",bind"
results[mount] = {'info': mount_info,
'extra': pool.apply_async(self.get_mount_info, (mount, device, uuids)),
'timelimit': time.time() + maxtime}
pool.close() # done with new workers, start gc
# wait for workers and get results
while results:
for mount in list(results):
done = False
res = results[mount]['extra']
try:
if res.ready():
done = True
if res.successful():
mount_size, uuid = res.get()
if mount_size:
results[mount]['info'].update(mount_size)
results[mount]['info']['uuid'] = uuid or 'N/A'
else:
# failed, try to find out why, if 'res.successful' we know there are no exceptions
results[mount]['info']['note'] = 'Could not get extra information: %s.' % (to_text(res.get()))
elif time.time() > results[mount]['timelimit']:
done = True
self.module.warn("Timeout exceeded when getting mount info for %s" % mount)
results[mount]['info']['note'] = 'Could not get extra information due to timeout'
except Exception as e:
import traceback
done = True
results[mount]['info'] = 'N/A'
self.module.warn("Error prevented getting extra info for mount %s: [%s] %s." % (mount, type(e), to_text(e)))
self.module.debug(traceback.format_exc())
if done:
# move results outside and make loop only handle pending
mounts.append(results[mount]['info'])
del results[mount]
# avoid cpu churn, sleep between retrying for loop with remaining mounts
time.sleep(0.1)
return {'mounts': mounts}
def get_device_links(self, link_dir):
if not os.path.exists(link_dir):
return {}
try:
retval = collections.defaultdict(set)
for entry in os.listdir(link_dir):
try:
target = os.path.basename(os.readlink(os.path.join(link_dir, entry)))
retval[target].add(entry)
except OSError:
continue
return dict((k, list(sorted(v))) for (k, v) in iteritems(retval))
except OSError:
return {}
def get_all_device_owners(self):
try:
retval = collections.defaultdict(set)
for path in glob.glob('/sys/block/*/slaves/*'):
elements = path.split('/')
device = elements[3]
target = elements[5]
retval[target].add(device)
return dict((k, list(sorted(v))) for (k, v) in iteritems(retval))
except OSError:
return {}
def get_all_device_links(self):
return {
'ids': self.get_device_links('/dev/disk/by-id'),
'uuids': self.get_device_links('/dev/disk/by-uuid'),
'labels': self.get_device_links('/dev/disk/by-label'),
'masters': self.get_all_device_owners(),
}
def get_holders(self, block_dev_dict, sysdir):
block_dev_dict['holders'] = []
if os.path.isdir(sysdir + "/holders"):
for folder in os.listdir(sysdir + "/holders"):
if not folder.startswith("dm-"):
continue
name = get_file_content(sysdir + "/holders/" + folder + "/dm/name")
if name:
block_dev_dict['holders'].append(name)
else:
block_dev_dict['holders'].append(folder)
def _get_sg_inq_serial(self, sg_inq, block):
device = "/dev/%s" % (block)
rc, drivedata, err = self.module.run_command([sg_inq, device])
if rc == 0:
serial = re.search(r"(?:Unit serial|Serial) number:\s+(\w+)", drivedata)
if serial:
return serial.group(1)
def get_device_facts(self):
device_facts = {}
device_facts['devices'] = {}
lspci = self.module.get_bin_path('lspci')
if lspci:
rc, pcidata, err = self.module.run_command([lspci, '-D'], errors='surrogate_then_replace')
else:
pcidata = None
try:
block_devs = os.listdir("/sys/block")
except OSError:
return device_facts
devs_wwn = {}
try:
devs_by_id = os.listdir("/dev/disk/by-id")
except OSError:
pass
else:
for link_name in devs_by_id:
if link_name.startswith("wwn-"):
try:
wwn_link = os.readlink(os.path.join("/dev/disk/by-id", link_name))
except OSError:
continue
devs_wwn[os.path.basename(wwn_link)] = link_name[4:]
links = self.get_all_device_links()
device_facts['device_links'] = links
for block in block_devs:
virtual = 1
sysfs_no_links = 0
try:
path = os.readlink(os.path.join("/sys/block/", block))
except OSError:
e = sys.exc_info()[1]
if e.errno == errno.EINVAL:
path = block
sysfs_no_links = 1
else:
continue
sysdir = os.path.join("/sys/block", path)
if sysfs_no_links == 1:
for folder in os.listdir(sysdir):
if "device" in folder:
virtual = 0
break
d = {}
d['virtual'] = virtual
d['links'] = {}
for (link_type, link_values) in iteritems(links):
d['links'][link_type] = link_values.get(block, [])
diskname = os.path.basename(sysdir)
for key in ['vendor', 'model', 'sas_address', 'sas_device_handle']:
d[key] = get_file_content(sysdir + "/device/" + key)
sg_inq = self.module.get_bin_path('sg_inq')
# we can get NVMe device's serial number from /sys/block/<name>/device/serial
serial_path = "/sys/block/%s/device/serial" % (block)
if sg_inq:
serial = self._get_sg_inq_serial(sg_inq, block)
if serial:
d['serial'] = serial
else:
serial = get_file_content(serial_path)
if serial:
d['serial'] = serial
for key, test in [('removable', '/removable'),
('support_discard', '/queue/discard_granularity'),
]:
d[key] = get_file_content(sysdir + test)
if diskname in devs_wwn:
d['wwn'] = devs_wwn[diskname]
d['partitions'] = {}
for folder in os.listdir(sysdir):
m = re.search("(" + diskname + r"[p]?\d+)", folder)
if m:
part = {}
partname = m.group(1)
part_sysdir = sysdir + "/" + partname
part['links'] = {}
for (link_type, link_values) in iteritems(links):
part['links'][link_type] = link_values.get(partname, [])
part['start'] = get_file_content(part_sysdir + "/start", 0)
part['sectors'] = get_file_content(part_sysdir + "/size", 0)
part['sectorsize'] = get_file_content(part_sysdir + "/queue/logical_block_size")
if not part['sectorsize']:
part['sectorsize'] = get_file_content(part_sysdir + "/queue/hw_sector_size", 512)
part['size'] = bytes_to_human((float(part['sectors']) * 512.0))
part['uuid'] = get_partition_uuid(partname)
self.get_holders(part, part_sysdir)
d['partitions'][partname] = part
d['rotational'] = get_file_content(sysdir + "/queue/rotational")
d['scheduler_mode'] = ""
scheduler = get_file_content(sysdir + "/queue/scheduler")
if scheduler is not None:
m = re.match(r".*?(\[(.*)\])", scheduler)
if m:
d['scheduler_mode'] = m.group(2)
d['sectors'] = get_file_content(sysdir + "/size")
if not d['sectors']:
d['sectors'] = 0
d['sectorsize'] = get_file_content(sysdir + "/queue/logical_block_size")
if not d['sectorsize']:
d['sectorsize'] = get_file_content(sysdir + "/queue/hw_sector_size", 512)
d['size'] = bytes_to_human(float(d['sectors']) * 512.0)
d['host'] = ""
# domains are numbered (0 to ffff), bus (0 to ff), slot (0 to 1f), and function (0 to 7).
m = re.match(r".+/([a-f0-9]{4}:[a-f0-9]{2}:[0|1][a-f0-9]\.[0-7])/", sysdir)
if m and pcidata:
pciid = m.group(1)
did = re.escape(pciid)
m = re.search("^" + did + r"\s(.*)$", pcidata, re.MULTILINE)
if m:
d['host'] = m.group(1)
self.get_holders(d, sysdir)
device_facts['devices'][diskname] = d
return device_facts
def get_uptime_facts(self):
uptime_facts = {}
uptime_file_content = get_file_content('/proc/uptime')
if uptime_file_content:
uptime_seconds_string = uptime_file_content.split(' ')[0]
uptime_facts['uptime_seconds'] = int(float(uptime_seconds_string))
return uptime_facts
def _find_mapper_device_name(self, dm_device):
dm_prefix = '/dev/dm-'
mapper_device = dm_device
if dm_device.startswith(dm_prefix):
dmsetup_cmd = self.module.get_bin_path('dmsetup', True)
mapper_prefix = '/dev/mapper/'
rc, dm_name, err = self.module.run_command("%s info -C --noheadings -o name %s" % (dmsetup_cmd, dm_device))
if rc == 0:
mapper_device = mapper_prefix + dm_name.rstrip()
return mapper_device
def get_lvm_facts(self):
""" Get LVM Facts if running as root and lvm utils are available """
lvm_facts = {'lvm': 'N/A'}
if os.getuid() == 0 and self.module.get_bin_path('vgs'):
lvm_util_options = '--noheadings --nosuffix --units g --separator ,'
vgs_path = self.module.get_bin_path('vgs')
# vgs fields: VG #PV #LV #SN Attr VSize VFree
vgs = {}
if vgs_path:
rc, vg_lines, err = self.module.run_command('%s %s' % (vgs_path, lvm_util_options))
for vg_line in vg_lines.splitlines():
items = vg_line.strip().split(',')
vgs[items[0]] = {'size_g': items[-2],
'free_g': items[-1],
'num_lvs': items[2],
'num_pvs': items[1]}
lvs_path = self.module.get_bin_path('lvs')
# lvs fields:
# LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
lvs = {}
if lvs_path:
rc, lv_lines, err = self.module.run_command('%s %s' % (lvs_path, lvm_util_options))
for lv_line in lv_lines.splitlines():
items = lv_line.strip().split(',')
lvs[items[0]] = {'size_g': items[3], 'vg': items[1]}
pvs_path = self.module.get_bin_path('pvs')
# pvs fields: PV VG #Fmt #Attr PSize PFree
pvs = {}
if pvs_path:
rc, pv_lines, err = self.module.run_command('%s %s' % (pvs_path, lvm_util_options))
for pv_line in pv_lines.splitlines():
items = pv_line.strip().split(',')
pvs[self._find_mapper_device_name(items[0])] = {
'size_g': items[4],
'free_g': items[5],
'vg': items[1]}
lvm_facts['lvm'] = {'lvs': lvs, 'vgs': vgs, 'pvs': pvs}
return lvm_facts
class LinuxHardwareCollector(HardwareCollector):
_platform = 'Linux'
_fact_class = LinuxHardware
required_facts = set(['platform'])
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,054 |
Semantic markup for module documentation
|
### Summary
This issue tracks the various discussions and PRs related to semantic markup for module documentation. The topic has been under discussion for six months and is on the DaWGs [agenda for 22 June 2021](https://github.com/ansible/community/issues/579#issuecomment-861720413).
Semantic markup would control the look of standard documentation elements with macros that refer to the names of those elements (for example, O(.) for options and V(.) for values) instead of macros that refer to the formatting directly (for example, C(.) for `code` or B(.) for **bold**).
We can implement semantic markup in our module documentation with support for legacy formatting. However, moving to semantic markup would ideally mean updating all existing documentation to use the new standard. Moving forward, semantic markup would be easier to remember and use correctly. It would also allow us to change the look of those elements by changing the publication process without changing the documentation strings in the module code (again).
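For illustration only (the exact macro names are one of the open questions below), the difference would look roughly like this:
```text
# formatting-oriented markup (today)
- Required if I(state=present). The default is C(auto).
# semantic markup (proposed)
- Required if O(state=present). The default is V(auto).
```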
Work involved includes:
- [x] agreeing on the macro names/syntax (for example, is it `O(.)` for options or `P(.)` for parameters, and so on)
- [x] agreeing on the desired formatting/output for each macro (should options be displayed in italics, in bold, in code, and so on)
- [x] updating the publication pipeline to support semantic markup macros
- [x] documenting the new system
- [ ] updating the existing docs to use semantic markup macros
### Current related PRs:
Antsibull code update: https://github.com/ansible-community/antsibull/pull/281
ansible-doc code update: https://github.com/ansible/ansible/pull/74937
Documentation update: https://github.com/ansible/ansible/pull/73137
Collection docs update: https://github.com/ansible-collections/community.dns/pull/23
### History:
- The topic originated in a [comment on a PR](https://github.com/ansible/ansible/pull/72737#pullrequestreview-542046601) from early December, 2020
- [Set of options](https://github.com/ansible/community/issues/521#issuecomment-739263018) from the DaWGs agenda 2020
- Discussion in the [DaWGs meeting on 15 December 2020](https://meetbot.fedoraproject.org/ansible-docs/2020-12-15/dawgs_aka_docs_working_group.2020-12-15-16.01.log.html) (topic starts at `16:32:16`) ended without a vote
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev_guide/developing_modules_documenting.html#linking-and-other-format-macros-within-module-documentation
### Ansible Version
```console
N/A
```
### Configuration
```console
N/A
```
### OS / Environment
N/A
### Additional Information
N/A
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75054
|
https://github.com/ansible/ansible/pull/80240
|
92c694372bd3b3f68644b27cae51270259c04e56
|
4029da9a9f64dd7511575d0c61f64cbbc21eed71
| 2021-06-18T16:16:11Z |
python
| 2023-04-06T15:46:03Z |
docs/docsite/rst/dev_guide/developing_modules_documenting.rst
|
.. _developing_modules_documenting:
.. _module_documenting:
*******************************
Module format and documentation
*******************************
If you want to contribute your module to most Ansible collections, you must write your module in Python and follow the standard format described below. (Unless you're writing a Windows module, in which case the :ref:`Windows guidelines <developing_modules_general_windows>` apply.) In addition to following this format, you should review our :ref:`submission checklist <developing_modules_checklist>`, :ref:`programming tips <developing_modules_best_practices>`, and :ref:`strategy for maintaining Python 2 and Python 3 compatibility <developing_python_3>`, as well as information about :ref:`testing <developing_testing>` before you open a pull request.
Every Ansible module written in Python must begin with seven standard sections in a particular order, followed by the code. The sections in order are:
.. contents::
:depth: 1
:local:
.. note:: Why don't the imports go first?
Keen Python programmers may notice that contrary to PEP 8's advice we don't put ``imports`` at the top of the file. This is because the ``DOCUMENTATION`` through ``RETURN`` sections are not used by the module code itself; they are essentially extra docstrings for the file. The imports are placed after these special variables for the same reason as PEP 8 puts the imports after the introductory comments and docstrings. This keeps the active parts of the code together and the pieces which are purely informational apart. The decision to exclude E402 is based on readability (which is what PEP 8 is about). Documentation strings in a module are much more similar to module level docstrings, than code, and are never utilized by the module itself. Placing the imports below this documentation and closer to the code, consolidates and groups all related code in a congruent manner to improve readability, debugging and understanding.
.. warning:: **Copy old modules with care!**
Some older Ansible modules have ``imports`` at the bottom of the file, ``Copyright`` notices with the full GPL prefix, and/or ``DOCUMENTATION`` fields in the wrong order. These are legacy files that need updating - do not copy them into new modules. Over time we are updating and correcting older modules. Please follow the guidelines on this page!
.. note:: For non-Python modules you still create a ``.py`` file for documentation purposes. Starting at ansible-core 2.14 you can instead choose to create a ``.yml`` file that has the same data structure, but in pure YAML.
With YAML files, the examples below are easy to adapt by removing the Python quoting and replacing ``=`` with ``:``, for example turning ``DOCUMENTATION = r''' ... '''`` into ``DOCUMENTATION: ...`` and removing the closing quotes. See :ref:`adjacent_yaml_doc` for details.
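For instance, a sidecar ``my_module.yml`` could contain something along these lines (an illustrative sketch only; the ``EXAMPLES`` and ``RETURN`` sections follow the same pattern as top-level keys):
.. code-block:: yaml+jinja
DOCUMENTATION:
  module: my_module
  short_description: Do something useful
  description:
    - Longer description of what the module does.
  options:
    name:
      description: Name of the thing to manage.
      type: str
      required: true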
.. _shebang:
Python shebang & UTF-8 coding
===============================
Begin your Ansible module with ``#!/usr/bin/python`` - this "shebang" allows ``ansible_python_interpreter`` to work. Follow the shebang immediately with ``# -*- coding: utf-8 -*-`` to clarify that the file is UTF-8 encoded.
.. note:: Using ``#!/usr/bin/env`` makes ``env`` the interpreter and bypasses the ``ansible_<interpreter>_interpreter`` logic.
.. note:: If you develop the module using a different scripting language, adjust the interpreter accordingly (``#!/usr/bin/<interpreter>``) so ``ansible_<interpreter>_interpreter`` can work for that specific language.
.. note:: Binary modules do not require a shebang or an interpreter.
.. _copyright:
Copyright and license
=====================
After the shebang and UTF-8 coding, add a `copyright line <https://www.linuxfoundation.org/blog/copyright-notices-in-open-source-software-projects/>`_ with the original copyright holder and a license declaration. The license declaration should be ONLY one line, not the full GPL prefix:
.. code-block:: python
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: Contributors to the Ansible project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
Additions to the module (for instance, rewrites) must not add further copyright lines; only add the default copyright statement if one is missing:
.. code-block:: python
# Copyright: Contributors to the Ansible project
Any legal review will include the source control history, so an exhaustive copyright header is not necessary.
Please do not include a copyright year. If the existing copyright statement includes a year, do not edit the existing copyright year. Any existing copyright header should not be modified without permission from the copyright author.
.. _ansible_metadata_block:
ANSIBLE_METADATA block
======================
Since we moved to collections we have deprecated the METADATA functionality, it is no longer required for modules, but it will not break anything if present.
.. _documentation_block:
DOCUMENTATION block
===================
After the shebang, the UTF-8 coding, the copyright line, and the license section comes the ``DOCUMENTATION`` block. Ansible's online module documentation is generated from the ``DOCUMENTATION`` blocks in each module's source code. The ``DOCUMENTATION`` block must be valid YAML. You may find it easier to start writing your ``DOCUMENTATION`` string in an :ref:`editor with YAML syntax highlighting <other_tools_and_programs>` before you include it in your Python file. You can start by copying our `example documentation string <https://github.com/ansible/ansible/blob/devel/examples/DOCUMENTATION.yml>`_ into your module file and modifying it. If you run into syntax issues in your YAML, you can validate it on the `YAML Lint <http://www.yamllint.com/>`_ website.
Module documentation should briefly and accurately define what each module and option does, and how it works with others in the underlying system. Documentation should be written for a broad audience--readable both by experts and non-experts.
* Descriptions should always start with a capital letter and end with a full stop. Consistency always helps.
* Verify that arguments in doc and module spec dict are identical.
* For password / secret arguments ``no_log=True`` should be set.
* For arguments that seem to contain sensitive information but **do not** contain secrets, such as "password_length", set ``no_log=False`` to disable the warning message.
* If an option is only sometimes required, describe the conditions. For example, "Required when I(state=present)."
* If your module allows ``check_mode``, reflect this fact in the documentation.
To create clear, concise, consistent, and useful documentation, follow the :ref:`style guide <style_guide>`.
Each documentation field is described below. Before committing your module documentation, please test it at the command line and as HTML:
* As long as your module file is :ref:`available locally <local_modules>`, you can use ``ansible-doc -t module my_module_name`` to view your module documentation at the command line. Any parsing errors will be obvious - you can view details by adding ``-vvv`` to the command.
* You should also :ref:`test the HTML output <testing_module_documentation>` of your module documentation.
Documentation fields
--------------------
All fields in the ``DOCUMENTATION`` block are lower-case. All fields are required unless specified otherwise:
:module:
* The name of the module.
* Must be the same as the filename, without the ``.py`` extension.
:short_description:
* A short description which is displayed on the :ref:`list_of_collections` page and ``ansible-doc -l``.
* The ``short_description`` is displayed by ``ansible-doc -l`` without any category grouping,
so it needs enough detail to explain the module's purpose without the context of the directory structure in which it lives.
* Unlike ``description:``, ``short_description`` should not have a trailing period/full stop.
:description:
* A detailed description (generally two or more sentences).
* Must be written in full sentences, in other words, with capital letters and periods/full stops.
* Shouldn't mention the module name.
* Make use of multiple entries rather than using one long paragraph.
* Don't quote complete values unless it is required by YAML.
:version_added:
* The version of Ansible when the module was added.
* This is a string, and not a float, for example, ``version_added: '2.1'``.
* In collections, this must be the collection version the module was added to, not the Ansible version. For example, ``version_added: 1.0.0``.
:author:
* Name of the module author in the form ``First Last (@GitHubID)``.
* Use a multi-line list if there is more than one author.
* Don't use quotes as it should not be required by YAML.
:deprecated:
* Marks modules that will be removed in future releases. See also :ref:`module_lifecycle`.
:options:
* Options are often called `parameters` or `arguments`. Because the documentation field is called `options`, we will use that term.
* If the module has no options (for example, it's a ``_facts`` module), all you need is one line: ``options: {}``.
* If your module has options (in other words, accepts arguments), each option should be documented thoroughly. For each module option, include:
:option-name:
* Declarative operation (not CRUD), to focus on the final state, for example `online:`, rather than `is_online:`.
* The name of the option should be consistent with the rest of the module, as well as other modules in the same category.
* When in doubt, look for other modules to find option names that are used for the same purpose, we like to offer consistency to our users.
:description:
* Detailed explanation of what this option does. It should be written in full sentences.
* The first entry is a description of the option itself; subsequent entries detail its use, dependencies, or format of possible values.
* Should not list the possible values (that's what ``choices:`` is for, though it should explain what the values do if they aren't obvious).
* If an option is only sometimes required, describe the conditions. For example, "Required when I(state=present)."
* Mutually exclusive options must be documented as the final sentence on each of the options.
:required:
* Only needed if ``true``.
* If missing, we assume the option is not required.
:default:
* If ``required`` is false/missing, ``default`` may be specified (assumed 'null' if missing).
* Ensure that the default value in the docs matches the default value in the code.
* The default field must not be listed as part of the description, unless it requires additional information or conditions.
* If the option is a boolean value, you can use any of the boolean values recognized by Ansible
(such as ``true``/``false`` or ``yes``/``no``). Document booleans as ``true``/``false`` for consistency and compatibility with ansible-lint.
:choices:
* List of option values.
* Should be absent if empty.
:type:
* Specifies the data type that option accepts, must match the ``argspec``.
* If an argument is ``type='bool'``, this field should be set to ``type: bool`` and no ``choices`` should be specified.
* If an argument is ``type='list'``, ``elements`` should be specified.
:elements:
* Specifies the data type for list elements in case ``type='list'``.
:aliases:
* List of optional name aliases.
* Generally not needed.
:version_added:
* Only needed if this option was extended after initial Ansible release, in other words, this is greater than the top level `version_added` field.
* This is a string, and not a float, for example, ``version_added: '2.3'``.
* In collections, this must be the collection version the option was added to, not the Ansible version. For example, ``version_added: 1.0.0``.
:suboptions:
* If this option takes a dict or list of dicts, you can define the structure here.
* See :ref:`ansible_collections.azure.azcollection.azure_rm_securitygroup_module`, :ref:`ansible_collections.azure.azcollection.azure_rm_azurefirewall_module`, and :ref:`ansible_collections.openstack.cloud.baremetal_node_action_module` for examples.
:requirements:
* List of requirements (if applicable).
* Include minimum versions.
:seealso:
* A list of references to other modules, documentation or Internet resources
* In Ansible 2.10 and later, references to modules must use the FQCN or ``ansible.builtin`` for modules in ``ansible-core``.
* A reference can be one of the following formats:
.. code-block:: yaml+jinja
seealso:
# Reference by module name
- module: cisco.aci.aci_tenant
# Reference by module name, including description
- module: cisco.aci.aci_tenant
description: ACI module to create tenants on a Cisco ACI fabric.
# Reference by rST documentation anchor
- ref: aci_guide
description: Detailed information on how to manage your ACI infrastructure using Ansible.
# Reference by rST documentation anchor (with custom title)
- ref: The official Ansible ACI guide <aci_guide>
description: Detailed information on how to manage your ACI infrastructure using Ansible.
# Reference by Internet resource
- name: APIC Management Information Model reference
description: Complete reference of the APIC object model.
link: https://developer.cisco.com/docs/apic-mim-ref/
* If you use ``ref:`` to link to an anchor that is not associated with a title, you must add a title to the ref for the link to work correctly.
* You can link to non-module plugins with ``ref:`` using the rST anchor, but plugin and module anchors are never associated with a title, so you must supply a title when you link to them. For example ``ref: namespace.collection.plugin_name lookup plugin <ansible_collections.namespace.collection.plugin_name_lookup>``.
:notes:
* Details of any important information that doesn't fit in one of the above sections.
* For example, whether ``check_mode`` is or is not supported.
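Putting the most common fields together, a minimal ``DOCUMENTATION`` block might look like the following sketch (the module and option names here are made up; see the example documentation string linked above for a complete reference):
.. code-block:: text
DOCUMENTATION = r'''
module: my_sample_module
short_description: Manage widgets on remote hosts
description:
  - Creates, updates, or removes widgets on the target host.
version_added: 1.0.0
author:
  - First Last (@githubid)
options:
  name:
    description:
      - Name of the widget to manage.
    type: str
    required: true
  state:
    description:
      - Whether the widget should exist.
    type: str
    choices: [absent, present]
    default: present
'''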
.. _module_documents_linking:
Linking and other format macros within module documentation
-----------------------------------------------------------
You can link from your module documentation to other module docs, other resources on docs.ansible.com, and resources elsewhere on the internet with the help of some pre-defined macros. The correct formats for these macros are:
* ``L()`` for links with a heading. For example: ``See L(Ansible Automation Platform,https://www.ansible.com/products/automation-platform).`` As of Ansible 2.10, do not use ``L()`` for relative links between Ansible documentation and collection documentation.
* ``U()`` for URLs. For example: ``See U(https://www.ansible.com/products/automation-platform) for an overview.``
* ``R()`` for cross-references with a heading (added in Ansible 2.10). For example: ``See R(Cisco IOS Platform Guide,ios_platform_options)``. Use the RST anchor for the cross-reference. See :ref:`adding_anchors_rst` for details.
* ``M()`` for module names. For example: ``See also M(ansible.builtin.yum) or M(community.general.apt_rpm)``.
There are also some macros which do not create links but we use them to display certain types of
content in a uniform way:
* ``I()`` for option names. For example: ``Required if I(state=present).`` This is italicized in
the documentation.
* ``C()`` for files, option values, and inline code. For example: ``If not set the environment variable C(ACME_PASSWORD) will be used.`` or ``Use C(var | foo.bar.my_filter) to transform C(var) into the required format.`` This displays with a mono-space font in the documentation.
* ``B()`` currently has no standardized usage. It is displayed in boldface in the documentation.
* ``HORIZONTALLINE`` is used sparingly as a separator in long descriptions. It becomes a horizontal rule (the ``<hr>`` html tag) in the documentation.
.. note::
For links between modules and documentation within a collection, you can use any of the options above. For links outside of your collection, use ``R()`` if available. Otherwise, use ``U()`` or ``L()`` with full URLs (not relative links). For modules, use ``M()`` with the FQCN or ``ansible.builtin`` as shown in the example. If you are creating your own documentation site, you will need to use the `intersphinx extension <https://www.sphinx-doc.org/en/master/usage/extensions/intersphinx.html>`_ to convert ``R()`` and ``M()`` to the correct links.
.. note::
To refer to a group of modules in a collection, use ``R()``. When a collection is not the right granularity, use ``C(..)``:
- ``Refer to the R(kubernetes.core collection, plugins_in_kubernetes.core) for information on managing kubernetes clusters.``
- ``The C(win_*) modules (spread across several collections) allow you to manage various aspects of windows hosts.``
.. note::
Because it stands out better, use ``seealso`` for general references over the use of notes or adding links to the description.
.. _module_docs_fragments:
Documentation fragments
-----------------------
If you are writing multiple related modules, they may share common documentation, such as authentication details, file mode settings, ``notes:`` or ``seealso:`` entries. Rather than duplicate that information in each module's ``DOCUMENTATION`` block, you can save it once as a doc_fragment plugin and use it in each module's documentation. In Ansible, shared documentation fragments are contained in a ``ModuleDocFragment`` class in `lib/ansible/plugins/doc_fragments/ <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/doc_fragments>`_ or the equivalent directory in a collection. To include a documentation fragment, add ``extends_documentation_fragment: FRAGMENT_NAME`` in your module documentation. Use the fully qualified collection name for the FRAGMENT_NAME (for example, ``kubernetes.core.k8s_auth_options``).
Modules should only use items from a doc fragment if the module will implement all of the interface documented there in a manner that behaves the same as the existing modules which import that fragment. The goal is that items imported from the doc fragment will behave identically when used in another module that imports the doc fragment.
By default, only the ``DOCUMENTATION`` property from a doc fragment is inserted into the module documentation. It is possible to define additional properties in the doc fragment in order to import only certain parts of a doc fragment or mix and match as appropriate. If a property is defined in both the doc fragment and the module, the module value overrides the doc fragment.
Here is an example doc fragment named ``example_fragment.py``:
.. code-block:: python
class ModuleDocFragment(object):
# Standard documentation
DOCUMENTATION = r'''
options:
# options here
'''
# Additional section
OTHER = r'''
options:
# other options here
'''
To insert the contents of ``OTHER`` in a module:
.. code-block:: yaml+jinja
extends_documentation_fragment: example_fragment.other
Or use both:
.. code-block:: yaml+jinja
extends_documentation_fragment:
- example_fragment
- example_fragment.other
.. _note:
* Prior to Ansible 2.8, documentation fragments were kept in ``lib/ansible/utils/module_docs_fragments``.
.. versionadded:: 2.8
Since Ansible 2.8, you can have user-supplied doc_fragments by using a ``doc_fragments`` directory adjacent to play or role, just like any other plugin.
For example, all AWS modules should include:
.. code-block:: yaml+jinja
extends_documentation_fragment:
- aws
- ec2
:ref:`docfragments_collections` describes how to incorporate documentation fragments in a collection.
.. _examples_block:
EXAMPLES block
==============
After the shebang, the UTF-8 coding, the copyright line, the license section, and the ``DOCUMENTATION`` block comes the ``EXAMPLES`` block. Here you show users how your module works with real-world examples in multi-line plain-text YAML format. The best examples are ready for the user to copy and paste into a playbook. Review and update your examples with every change to your module.
Per playbook best practices, each example should include a ``name:`` line:
.. code-block:: text
EXAMPLES = r'''
- name: Ensure foo is installed
namespace.collection.modulename:
name: foo
state: present
'''
The ``name:`` line should be capitalized and not include a trailing dot.
Use a fully qualified collection name (FQCN) as a part of the module's name like in the example above. For modules in ``ansible-core``, use the ``ansible.builtin.`` identifier, for example ``ansible.builtin.debug``.
If your examples use boolean options, use yes/no values. Since the documentation generates boolean values as yes/no, having the examples use these values as well makes the module documentation more consistent.
If your module returns facts that are often needed, an example of how to use them can be helpful.
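For instance, a returned value could be consumed in a follow-up task like this (a hypothetical sketch; the module name and return value are made up):
.. code-block:: yaml+jinja
- name: Check package requirements
  namespace.collection.modulename:
    name: foo
    state: present
  register: result
- name: Show packages reported as missing
  ansible.builtin.debug:
    msg: "{{ result.packages.missing }}"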
.. _return_block:
RETURN block
============
After the shebang, the UTF-8 coding, the copyright line, the license section, ``DOCUMENTATION`` and ``EXAMPLES`` blocks comes the ``RETURN`` block. This section documents the information the module returns for use by other modules.
If your module doesn't return anything (apart from the standard returns), this section of your module should read: ``RETURN = r''' # '''``
Otherwise, for each value returned, provide the following fields. All fields are required unless specified otherwise.
:return name:
Name of the returned field.
:description:
Detailed description of what this value represents. Capitalized and with trailing dot.
:returned:
When this value is returned, such as ``always``, ``changed`` or ``success``. This is a string and can contain any human-readable content.
:type:
Data type.
:elements:
If ``type='list'``, specifies the data type of the list's elements.
:sample:
One or more examples.
:version_added:
Only needed if this return was extended after initial Ansible release, in other words, this is greater than the top level `version_added` field.
This is a string, and not a float, for example, ``version_added: '2.3'``.
:contains:
Optional. To describe nested return values, set ``type: dict``, or ``type: list``/``elements: dict``, or if you really have to, ``type: complex``, and repeat the elements above for each sub-field.
Here are two example ``RETURN`` sections, one with three simple fields and one with a complex nested field:
.. code-block:: text
RETURN = r'''
dest:
description: Destination file/path.
returned: success
type: str
sample: /path/to/file.txt
src:
description: Source file used for the copy on the target machine.
returned: changed
type: str
sample: /home/httpd/.ansible/tmp/ansible-tmp-1423796390.97-147729857856000/source
md5sum:
description: MD5 checksum of the file after running copy.
returned: when supported
type: str
sample: 2a5aeecc61dc98c4d780b14b330e3282
'''
RETURN = r'''
packages:
description: Information about package requirements.
returned: success
type: dict
contains:
missing:
description: Packages that are missing from the system.
returned: success
type: list
elements: str
sample:
- libmysqlclient-dev
- libxml2-dev
badversion:
description: Packages that are installed but at bad versions.
returned: success
type: list
elements: dict
sample:
- package: libxml2-dev
version: 2.9.4+dfsg1-2
constraint: ">= 3.0"
'''
.. _python_imports:
Python imports
==============
After the shebang, the UTF-8 coding, the copyright line, the license, and the sections for ``DOCUMENTATION``, ``EXAMPLES``, and ``RETURN``, you can finally add the python imports. All modules must use Python imports in the form:
.. code-block:: python
from ansible.module_utils.basic import AnsibleModule
The use of "wildcard" imports such as ``from ansible.module_utils.basic import *`` is no longer allowed.
.. _dev_testing_module_documentation:
Testing module documentation
============================
To test Ansible documentation locally please :ref:`follow instruction<testing_module_documentation>`. To test documentation in collections, please see :ref:`build_collection_docsite`.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,715 |
ansible.builtin.file: incorrect permissions being set when looping
|
### Summary
Setting file permissions for a list of files like this
```
my_items:
- path: dir1
mode: 0770
- ... further entries ..
```
using `ansible.builtin.file` results in incorrect permissions. For details see steps to reproduce below.
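Note that quoting the mode in the variable file appears to avoid the problem, since YAML then passes it to the module as a string rather than converting the octal literal to an integer:
```yaml
my_items:
  - path: dir1
    mode: "0770"
```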
### Issue Type
Documentation Report
### Component Name
file
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.6]
config file = /home/yannik/projects/xxx/ansible.cfg
configured module search path = ['/home/yannik/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.10/site-packages/ansible
ansible collection location = /home/yannik/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.6 (main, Aug 2 2022, 00:00:00) [GCC 11.3.1 20220421 (Red Hat 11.3.1-2)]
jinja version = 3.0.1
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/yannik/projects/xxx/ansible.cfg) = True
```
### OS / Environment
fedora 35
### Steps to Reproduce
Create a host_vars file with this:
```yaml
my_items:
- path: dir1
mode: 0770
```
and run the following playbook:
```yaml
- file:
path: "/tmp/{{ item.path }}"
state: directory
mode: "{{ item.mode }}"
loop: "{{ my_items }}"
```
### Expected Results
The resulting permissions of `/tmp/dir1` should be equal to the permissions being set when not using a loop:
### Specifying `mode` directly in the playbook task
```
- file:
path: "/tmp/dir1"
state: directory
mode: 0770
```
**Result: permissions set to 0770**
### Specifying `mode` using a variable
`hosts_vars`:
```
my_permissions_variable: 0770
```
playbook:
```
- file:
path: "/tmp/dir"
state: directory
mode: "{{ my_permissions_variable }}"
```
**Result: permissions set to 0770**
### Actual Results
```console
File permissions are set to `0504` instead of `0770` when using a loop:
$ ansible-playbook test.yml -v
Using /home/yannik/projects/xxx/ansible.cfg as config file
PLAY [app0] *************************************************************************************************
TASK [Gathering Facts] **************************************************************************************
ok: [app0]
TASK [file] *************************************************************************************************
ok: [app0] => (item={'path': 'dir1', 'mode': 504}) => changed=false
ansible_loop_var: item
gid: 0
group: root
item:
mode: 504
path: dir1
mode: '0504'
owner: root
path: /tmp/dir1
size: 4096
state: directory
uid: 0
PLAY RECAP **************************************************************************************************
app0 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$ stat /tmp/dir1
File: /tmp/dir1
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: fe01h/65025d Inode: 2126076 Links: 2
Access: (0504/dr-x---r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2022-09-06 14:28:46.635247941 +0200
Modify: 2022-09-06 14:28:46.635247941 +0200
Change: 2022-09-06 14:28:46.635247941 +0200
Birth: 2022-09-06 14:28:46.635247941 +0200
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78715
|
https://github.com/ansible/ansible/pull/80112
|
af6d75e31363591921808f7f351185d11b7b429b
|
032881e4f1cbad1ca66b2fc40c8c56b17b33d965
| 2022-09-06T12:30:28Z |
python
| 2023-04-06T19:39:25Z |
lib/ansible/plugins/doc_fragments/files.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2014, Matt Martz <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
# Standard files documentation fragment
# Note: mode is overridden by the copy and template modules so if you change the description
# here, you should also change it there.
DOCUMENTATION = r'''
options:
mode:
description:
- The permissions the resulting filesystem object should have.
- For those used to I(/usr/bin/chmod) remember that modes are actually octal numbers.
You must either add a leading zero so that Ansible's YAML parser knows it is an octal number
(like C(0644) or C(01777)) or quote it (like C('644') or C('1777')) so Ansible receives
a string and can do its own conversion from string into number.
- Giving Ansible a number without following one of these rules will end up with a decimal
number which will have unexpected results.
- As of Ansible 1.8, the mode may be specified as a symbolic mode (for example, C(u+rwx) or
C(u=rw,g=r,o=r)).
- If C(mode) is not specified and the destination filesystem object B(does not) exist, the default C(umask) on the system will be used
when setting the mode for the newly created filesystem object.
- If C(mode) is not specified and the destination filesystem object B(does) exist, the mode of the existing filesystem object will be used.
- Specifying C(mode) is the best way to ensure filesystem objects are created with the correct permissions.
See CVE-2020-1736 for further details.
type: raw
owner:
description:
- Name of the user that should own the filesystem object, as would be fed to I(chown).
- When left unspecified, it uses the current user unless you are root, in which
case it can preserve the previous ownership.
- Specifying a numeric username will be assumed to be a user ID and not a username. Avoid numeric usernames to avoid this confusion.
type: str
group:
description:
- Name of the group that should own the filesystem object, as would be fed to I(chown).
- When left unspecified, it uses the current group of the current user unless you are root,
in which case it can preserve the previous ownership.
type: str
seuser:
description:
- The user part of the SELinux filesystem object context.
- By default it uses the C(system) policy, where applicable.
- When set to C(_default), it will use the C(user) portion of the policy if available.
type: str
serole:
description:
- The role part of the SELinux filesystem object context.
- When set to C(_default), it will use the C(role) portion of the policy if available.
type: str
setype:
description:
- The type part of the SELinux filesystem object context.
- When set to C(_default), it will use the C(type) portion of the policy if available.
type: str
selevel:
description:
- The level part of the SELinux filesystem object context.
- This is the MLS/MCS attribute, sometimes known as the C(range).
- When set to C(_default), it will use the C(level) portion of the policy if available.
type: str
unsafe_writes:
description:
- Influence when to use atomic operation to prevent data corruption or inconsistent reads from the target filesystem object.
- By default this module uses atomic operations to prevent data corruption or inconsistent reads from the target filesystem objects,
but sometimes systems are configured or just broken in ways that prevent this. One example is docker mounted filesystem objects,
which cannot be updated atomically from inside the container and can only be written in an unsafe manner.
- This option allows Ansible to fall back to unsafe methods of updating filesystem objects when atomic operations fail
(however, it doesn't force Ansible to perform unsafe writes).
- IMPORTANT! Unsafe writes are subject to race conditions and can lead to data corruption.
type: bool
default: no
version_added: '2.2'
attributes:
description:
- The attributes the resulting filesystem object should have.
- To get supported flags look at the man page for I(chattr) on the target system.
- This string should contain the attributes in the same order as the one displayed by I(lsattr).
- The C(=) operator is assumed as default, otherwise C(+) or C(-) operators need to be included in the string.
type: str
aliases: [ attr ]
version_added: '2.3'
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,447 |
Update P() semantic markup example
|
### Summary
The new semantic markup examples for P() mistakenly still use the M() markup :-)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev_guide/developing_modules_documenting.rst
### Ansible Version
```console
$ ansible --version
2.15
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80447
|
https://github.com/ansible/ansible/pull/80464
|
a84b3a4e7277084466e43236fa78fc99592c641a
|
5a44acc7049f709a4608f945ab3fe2ac4bbdff36
| 2023-04-06T19:42:43Z |
python
| 2023-04-10T18:30:11Z |
docs/docsite/rst/dev_guide/developing_modules_documenting.rst
|
.. _developing_modules_documenting:
.. _module_documenting:
*******************************
Module format and documentation
*******************************
If you want to contribute your module to most Ansible collections, you must write your module in Python and follow the standard format described below. (Unless you're writing a Windows module, in which case the :ref:`Windows guidelines <developing_modules_general_windows>` apply.) In addition to following this format, you should review our :ref:`submission checklist <developing_modules_checklist>`, :ref:`programming tips <developing_modules_best_practices>`, and :ref:`strategy for maintaining Python 2 and Python 3 compatibility <developing_python_3>`, as well as information about :ref:`testing <developing_testing>` before you open a pull request.
Every Ansible module written in Python must begin with seven standard sections in a particular order, followed by the code. The sections in order are:
.. contents::
:depth: 1
:local:
.. note:: Why don't the imports go first?
Keen Python programmers may notice that contrary to PEP 8's advice we don't put ``imports`` at the top of the file. This is because the ``DOCUMENTATION`` through ``RETURN`` sections are not used by the module code itself; they are essentially extra docstrings for the file. The imports are placed after these special variables for the same reason as PEP 8 puts the imports after the introductory comments and docstrings. This keeps the active parts of the code together and the pieces which are purely informational apart. The decision to exclude E402 is based on readability (which is what PEP 8 is about). Documentation strings in a module are much more similar to module level docstrings, than code, and are never utilized by the module itself. Placing the imports below this documentation and closer to the code, consolidates and groups all related code in a congruent manner to improve readability, debugging and understanding.
.. warning:: **Copy old modules with care!**
Some older Ansible modules have ``imports`` at the bottom of the file, ``Copyright`` notices with the full GPL prefix, and/or ``DOCUMENTATION`` fields in the wrong order. These are legacy files that need updating - do not copy them into new modules. Over time we are updating and correcting older modules. Please follow the guidelines on this page!
.. note:: For non-Python modules you still create a ``.py`` file for documentation purposes. Starting at ansible-core 2.14 you can instead choose to create a ``.yml`` file that has the same data structure, but in pure YAML.
With YAML files, the examples below are easy to adapt by removing the Python quoting, substituting ``:`` for ``=`` (for example, ``DOCUMENTATION = r''' ... '''`` becomes ``DOCUMENTATION: ...``), and removing the closing quotes. See :ref:`adjacent_yaml_doc` for details.
.. _shebang:
Python shebang & UTF-8 coding
===============================
Begin your Ansible module with ``#!/usr/bin/python`` - this "shebang" allows ``ansible_python_interpreter`` to work. Follow the shebang immediately with ``# -*- coding: utf-8 -*-`` to clarify that the file is UTF-8 encoded.
.. note:: Using ``#!/usr/bin/env``, makes ``env`` the interpreter and bypasses ``ansible_<interpreter>_interpreter`` logic.
.. note:: If you develop the module using a different scripting language, adjust the interpreter accordingly (``#!/usr/bin/<interpreter>``) so ``ansible_<interpreter>_interpreter`` can work for that specific language.
.. note:: Binary modules do not require a shebang or an interpreter.
.. _copyright:
Copyright and license
=====================
After the shebang and UTF-8 coding, add a `copyright line <https://www.linuxfoundation.org/blog/copyright-notices-in-open-source-software-projects/>`_ with the original copyright holder and a license declaration. The license declaration should be ONLY one line, not the full GPL prefix:
.. code-block:: python
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: Contributors to the Ansible project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
Additions to the module (for instance, rewrites) must not introduce additional copyright lines; only add the default copyright statement below if one is missing:
.. code-block:: python
# Copyright: Contributors to the Ansible project
Any legal review will include the source control history, so an exhaustive copyright header is not necessary.
Please do not include a copyright year. If the existing copyright statement includes a year, do not edit the existing copyright year. Any existing copyright header should not be modified without permission from the copyright author.
.. _ansible_metadata_block:
ANSIBLE_METADATA block
======================
Since we moved to collections we have deprecated the METADATA functionality, it is no longer required for modules, but it will not break anything if present.
.. _documentation_block:
DOCUMENTATION block
===================
After the shebang, the UTF-8 coding, the copyright line, and the license section comes the ``DOCUMENTATION`` block. Ansible's online module documentation is generated from the ``DOCUMENTATION`` blocks in each module's source code. The ``DOCUMENTATION`` block must be valid YAML. You may find it easier to start writing your ``DOCUMENTATION`` string in an :ref:`editor with YAML syntax highlighting <other_tools_and_programs>` before you include it in your Python file. You can start by copying our `example documentation string <https://github.com/ansible/ansible/blob/devel/examples/DOCUMENTATION.yml>`_ into your module file and modifying it. If you run into syntax issues in your YAML, you can validate it on the `YAML Lint <http://www.yamllint.com/>`_ website.
Module documentation should briefly and accurately define what each module and option does, and how it works with others in the underlying system. Documentation should be written for a broad audience, readable by both experts and non-experts.
* Descriptions should always start with a capital letter and end with a full stop. Consistency always helps.
* Verify that arguments in doc and module spec dict are identical.
* For password / secret arguments ``no_log=True`` should be set.
* For arguments that seem to contain sensitive information but **do not** contain secrets, such as "password_length", set ``no_log=False`` to disable the warning message.
* If an option is only sometimes required, describe the conditions. For example, "Required when I(state=present)."
* If your module allows ``check_mode``, reflect this fact in the documentation.
To create clear, concise, consistent, and useful documentation, follow the :ref:`style guide <style_guide>`.
Each documentation field is described below. Before committing your module documentation, please test it at the command line and as HTML:
* As long as your module file is :ref:`available locally <local_modules>`, you can use ``ansible-doc -t module my_module_name`` to view your module documentation at the command line. Any parsing errors will be obvious - you can view details by adding ``-vvv`` to the command.
* You should also :ref:`test the HTML output <testing_module_documentation>` of your module documentation.
Documentation fields
--------------------
All fields in the ``DOCUMENTATION`` block are lower-case. All fields are required unless specified otherwise:
:module:
* The name of the module.
* Must be the same as the filename, without the ``.py`` extension.
:short_description:
* A short description which is displayed on the :ref:`list_of_collections` page and ``ansible-doc -l``.
* The ``short_description`` is displayed by ``ansible-doc -l`` without any category grouping,
so it needs enough detail to explain the module's purpose without the context of the directory structure in which it lives.
* Unlike ``description:``, ``short_description`` should not have a trailing period/full stop.
:description:
* A detailed description (generally two or more sentences).
* Must be written in full sentences, in other words, with capital letters and periods/full stops.
* Shouldn't mention the module name.
* Make use of multiple entries rather than using one long paragraph.
* Don't quote complete values unless it is required by YAML.
:version_added:
* The version of Ansible when the module was added.
* This is a string, and not a float, for example, ``version_added: '2.1'``.
* In collections, this must be the collection version the module was added to, not the Ansible version. For example, ``version_added: 1.0.0``.
:author:
* Name of the module author in the form ``First Last (@GitHubID)``.
* Use a multi-line list if there is more than one author.
* Don't use quotes as it should not be required by YAML.
:deprecated:
* Marks modules that will be removed in future releases. See also :ref:`module_lifecycle`.
:options:
* Options are often called `parameters` or `arguments`. Because the documentation field is called `options`, we will use that term.
* If the module has no options (for example, it's a ``_facts`` module), all you need is one line: ``options: {}``.
* If your module has options (in other words, accepts arguments), each option should be documented thoroughly. For each module option, include:
:option-name:
* Declarative operation (not CRUD), to focus on the final state, for example `online:`, rather than `is_online:`.
* The name of the option should be consistent with the rest of the module, as well as other modules in the same category.
* When in doubt, look for other modules to find option names that are used for the same purpose, we like to offer consistency to our users.
:description:
* Detailed explanation of what this option does. It should be written in full sentences.
* The first entry is a description of the option itself; subsequent entries detail its use, dependencies, or format of possible values.
* Should not list the possible values (that's what ``choices:`` is for, though it should explain what the values do if they aren't obvious).
* If an option is only sometimes required, describe the conditions. For example, "Required when I(state=present)."
* Mutually exclusive options must be documented as the final sentence on each of the options.
:required:
* Only needed if ``true``.
* If missing, we assume the option is not required.
:default:
* If ``required`` is false/missing, ``default`` may be specified (assumed 'null' if missing).
* Ensure that the default value in the docs matches the default value in the code.
* The default field must not be listed as part of the description, unless it requires additional information or conditions.
* If the option is a boolean value, you can use any of the boolean values recognized by Ansible
(such as ``true``/``false`` or ``yes``/``no``). Document booleans as ``true``/``false`` for consistency and compatibility with ansible-lint.
:choices:
* List of option values.
* Should be absent if empty.
:type:
* Specifies the data type that option accepts, must match the ``argspec``.
* If an argument is ``type='bool'``, this field should be set to ``type: bool`` and no ``choices`` should be specified.
* If an argument is ``type='list'``, ``elements`` should be specified.
:elements:
* Specifies the data type for list elements in case ``type='list'``.
:aliases:
* List of optional name aliases.
* Generally not needed.
:version_added:
* Only needed if this option was extended after initial Ansible release, in other words, this is greater than the top level `version_added` field.
* This is a string, and not a float, for example, ``version_added: '2.3'``.
* In collections, this must be the collection version the option was added to, not the Ansible version. For example, ``version_added: 1.0.0``.
:suboptions:
* If this option takes a dict or list of dicts, you can define the structure here.
* See :ref:`ansible_collections.azure.azcollection.azure_rm_securitygroup_module`, :ref:`ansible_collections.azure.azcollection.azure_rm_azurefirewall_module`, and :ref:`ansible_collections.openstack.cloud.baremetal_node_action_module` for examples.
:requirements:
* List of requirements (if applicable).
* Include minimum versions.
:seealso:
* A list of references to other modules, documentation or Internet resources
* In Ansible 2.10 and later, references to modules must use the FQCN or ``ansible.builtin`` for modules in ``ansible-core``.
* Plugin references are supported since ansible-core 2.15.
* A reference can be one of the following formats:
.. code-block:: yaml+jinja
seealso:
# Reference by module name
- module: cisco.aci.aci_tenant
# Reference by module name, including description
- module: cisco.aci.aci_tenant
description: ACI module to create tenants on a Cisco ACI fabric.
# Reference by plugin name
- plugin: ansible.builtin.file
plugin_type: lookup
# Reference by plugin name, including description
- plugin: ansible.builtin.file
plugin_type: lookup
description: You can use the ansible.builtin.file lookup to read files on the controller.
# Reference by rST documentation anchor
- ref: aci_guide
description: Detailed information on how to manage your ACI infrastructure using Ansible.
# Reference by rST documentation anchor (with custom title)
- ref: The official Ansible ACI guide <aci_guide>
description: Detailed information on how to manage your ACI infrastructure using Ansible.
# Reference by Internet resource
- name: APIC Management Information Model reference
description: Complete reference of the APIC object model.
link: https://developer.cisco.com/docs/apic-mim-ref/
* If you use ``ref:`` to link to an anchor that is not associated with a title, you must add a title to the ref for the link to work correctly.
* You can link to non-module plugins with ``ref:`` using the rST anchor, but plugin and module anchors are never associated with a title, so you must supply a title when you link to them. For example ``ref: namespace.collection.plugin_name lookup plugin <ansible_collections.namespace.collection.plugin_name_lookup>``.
:notes:
* Details of any important information that doesn't fit in one of the above sections.
* For example, whether ``check_mode`` is or is not supported.
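Taken together, the fields above combine into a complete ``DOCUMENTATION`` block. The following minimal sketch uses made-up module and option names purely for illustration; refer to the example documentation string linked above for a full template:

.. code-block:: yaml+jinja

   module: my_sample_module
   short_description: Manage sample resources on a host
   description:
     - Creates, updates, or removes a sample resource on the target host.
   version_added: 1.0.0
   author:
     - First Last (@GitHubID)
   options:
     name:
       description:
         - Name of the resource to manage.
       required: true
       type: str
     state:
       description:
         - Desired state of the resource.
       type: str
       choices: [absent, present]
       default: present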
.. _module_documents_linking:
Linking within module documentation
-----------------------------------
You can link from your module documentation to other module docs, other resources on docs.ansible.com, and resources elsewhere on the internet with the help of some pre-defined macros. The correct formats for these macros are:
* ``L()`` for links with a heading. For example: ``See L(Ansible Automation Platform,https://www.ansible.com/products/automation-platform).`` As of Ansible 2.10, do not use ``L()`` for relative links between Ansible documentation and collection documentation.
* ``U()`` for URLs. For example: ``See U(https://www.ansible.com/products/automation-platform) for an overview.``
* ``R()`` for cross-references with a heading (added in Ansible 2.10). For example: ``See R(Cisco IOS Platform Guide,ios_platform_options)``. Use the RST anchor for the cross-reference. See :ref:`adding_anchors_rst` for details.
* ``M()`` for module names. For example: ``See also M(ansible.builtin.yum) or M(community.general.apt_rpm)``. A FQCN **must** be used, short names will create broken links; use ``ansible.builtin`` for modules in ansible-core.
* ``P()`` for plugin names. For example: ``See also P(ansible.builtin.file#lookup) or P(community.general.json_query#filter)``. This is supported since ansible-core 2.15. FQCNs must be used; use ``ansible.builtin`` for plugins in ansible-core.
.. note::
For links between modules and documentation within a collection, you can use any of the options above. For links outside of your collection, use ``R()`` if available. Otherwise, use ``U()`` or ``L()`` with full URLs (not relative links). For modules, use ``M()`` with the FQCN or ``ansible.builtin`` as shown in the example. If you are creating your own documentation site, you will need to use the `intersphinx extension <https://www.sphinx-doc.org/en/master/usage/extensions/intersphinx.html>`_ to convert ``R()`` and ``M()`` to the correct links.
.. note::
To refer to a group of modules in a collection, use ``R()``. When a collection is not the right granularity, use ``C(..)``:
- ``Refer to the R(kubernetes.core collection, plugins_in_kubernetes.core) for information on managing kubernetes clusters.``
- ``The C(win_*) modules (spread across several collections) allow you to manage various aspects of windows hosts.``
.. note::
Because it stands out better, use ``seealso`` for general references over the use of notes or adding links to the description.
.. _semantic_markup:
Semantic markup within module documentation
-------------------------------------------
You can use semantic markup to highlight option names, option values, and environment variables. The markup processor formats these highlighted terms in a uniform way. With semantic markup, we can modify how the output looks without changing underlying code.
The correct formats for semantic markup are as follows:
* ``O()`` for option names, whether mentioned alone or with values. For example: ``Required if O(state=present).`` and ``Use with O(force) to require secure access.``
* ``V()`` for option values when mentioned alone. For example: ``Possible values include V(monospace) and V(pretty).``
* ``RV()`` for return value names, whether mentioned alone or with values. For example: ``The module returns RV(changed=true) in case of changes.`` and ``Use the RV(stdout) return value for standard output.``
* ``E()`` for environment variables. For example: ``If not set, the environment variable E(ACME_PASSWORD) will be used.``
The parameters for these formatting functions can use escaping with backslashes: ``V(foo(bar="a\\b"\), baz)`` results in the formatted value ``foo(bar="a\b"), baz``.
Rules for using ``O()`` and ``RV()`` are very strict. You must follow syntax rules so that documentation renderers can create hyperlinks for the options and return values, respectively.
The allowed syntaxes are as follows:
- To reference an option for the current plugin/module, or the entrypoint of the current role (inside role entrypoint documentation), use ``O(option)`` and ``O(option=name)``.
- To reference an option for another entrypoint ``entrypoint`` from inside role documentation, use ``O(entrypoint:option)`` and ``O(entrypoint:option=name)``. The entrypoint information can be ignored by the documentation renderer, turned into a link to that entrypoint, or even directly to the option of that entrypoint.
- To reference an option for *another* plugin/module ``plugin.fqcn.name`` of type ``type``, use ``O(plugin.fqcn.name#type:option)`` and ``O(plugin.fqcn.name#type:option=name)``. For modules, use ``type=module``. The FQCN and plugin type can be ignored by the documentation renderer, turned into a link to that plugin, or even directly to the option of that plugin.
- To reference an option for entrypoint ``entrypoint`` of *another* role ``role.fqcn.name``, use ``O(role.fqcn.name#role:entrypoint:option)`` and ``O(role.fqcn.name#role:entrypoint:option=name)``. The FQCN and entrypoint information can be ignored by the documentation renderer, turned into a link to that entrypoint, or even directly to the option of that entrypoint.
- To reference options that do not exist (for example, options that were removed in an earlier version), use ``O(ignore:option)`` and ``O(ignore:option=name)``. The ``ignore:`` part will not be shown to the user by documentation rendering.
Option names can refer to suboptions by listing the path to the option separated by dots. For example, if you have an option ``foo`` with suboption ``bar``, then you must use ``O(foo.bar)`` to reference that suboption. You can add array indications like ``O(foo[].bar)`` or even ``O(foo[-1].bar)`` to indicate specific list elements. Everything between ``[`` and ``]`` pairs will be ignored to determine the real name of the option. For example, ``O(foo[foo | length - 1].bar[])`` results in the same link as ``O(foo.bar)``, but the text ``foo[foo | length - 1].bar[]`` displays instead of ``foo.bar``.
The same syntaxes can be used for ``RV()``, except that these will refer to return value names instead of option names; for example ``RV(ansible.builtin.service_facts#module:ansible_facts.services)`` refers to the :ref:`ansible_facts.services fact <ansible_collections.ansible.builtin.service_facts_module__return-ansible_facts/services>` returned by the :ref:`ansible.builtin.service_facts module <ansible_collections.ansible.builtin.service_facts_module>`.
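As an illustration, the following description lines combine these macros. The option, value, environment variable, and return value names are invented for the example and do not refer to a real module:

.. code-block:: yaml+jinja

   description:
     - Required if O(state=present).
     - Possible values include V(monospace) and V(pretty).
     - If not set, the environment variable E(ACME_PASSWORD) will be used.
     - The module returns RV(changed=true) when the resource is modified; the resulting path is reported in RV(dest).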
Format macros within module documentation
-----------------------------------------
While it is possible to use standard Ansible formatting macros to control the look of other terms in module documentation, you should do so sparingly.
Possible macros include the following:
* ``C()`` for ``monospace`` (code) text. For example: ``This module functions like the unix command C(foo).``
* ``B()`` for bold text.
* ``I()`` for italic text.
* ``HORIZONTALLINE`` for a horizontal rule (the ``<hr>`` html tag) to separate long descriptions.
Note that ``C()``, ``B()``, and ``I()`` do **not allow escaping**, and thus cannot contain the value ``)`` as it always ends the formatting sequence. If you need to use ``)`` inside ``C()``, we recommend to use ``V()`` instead; see the above section on semantic markup.
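For example, a hypothetical description entry mixing these macros might read as follows; note the use of ``V()`` with backslash escaping where a closing parenthesis is needed, since ``C()`` cannot escape it:

.. code-block:: yaml+jinja

   description:
     - This module behaves like the unix command C(foo).
     - B(Note:) matching is I(case-sensitive).
     - To show a literal closing parenthesis inside monospace text, write V(foo(bar\)) because C() cannot escape the character.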
.. _module_docs_fragments:
Documentation fragments
-----------------------
If you are writing multiple related modules, they may share common documentation, such as authentication details, file mode settings, ``notes:`` or ``seealso:`` entries. Rather than duplicate that information in each module's ``DOCUMENTATION`` block, you can save it once as a doc_fragment plugin and use it in each module's documentation. In Ansible, shared documentation fragments are contained in a ``ModuleDocFragment`` class in `lib/ansible/plugins/doc_fragments/ <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/doc_fragments>`_ or the equivalent directory in a collection. To include a documentation fragment, add ``extends_documentation_fragment: FRAGMENT_NAME`` in your module documentation. Use the fully qualified collection name for the FRAGMENT_NAME (for example, ``kubernetes.core.k8s_auth_options``).
Modules should only use items from a doc fragment if the module will implement all of the interface documented there in a manner that behaves the same as the existing modules which import that fragment. The goal is that items imported from the doc fragment will behave identically when used in another module that imports the doc fragment.
By default, only the ``DOCUMENTATION`` property from a doc fragment is inserted into the module documentation. It is possible to define additional properties in the doc fragment in order to import only certain parts of a doc fragment or mix and match as appropriate. If a property is defined in both the doc fragment and the module, the module value overrides the doc fragment.
Here is an example doc fragment named ``example_fragment.py``:
.. code-block:: python
class ModuleDocFragment(object):
# Standard documentation
DOCUMENTATION = r'''
options:
# options here
'''
# Additional section
OTHER = r'''
options:
# other options here
'''
To insert the contents of ``OTHER`` in a module:
.. code-block:: yaml+jinja
extends_documentation_fragment: example_fragment.other
Or use both:
.. code-block:: yaml+jinja
extends_documentation_fragment:
- example_fragment
- example_fragment.other
.. _note:
* Prior to Ansible 2.8, documentation fragments were kept in ``lib/ansible/utils/module_docs_fragments``.
.. versionadded:: 2.8
Since Ansible 2.8, you can have user-supplied doc_fragments by using a ``doc_fragments`` directory adjacent to play or role, just like any other plugin.
For example, all AWS modules should include:
.. code-block:: yaml+jinja
extends_documentation_fragment:
- aws
- ec2
:ref:`docfragments_collections` describes how to incorporate documentation fragments in a collection.
.. _examples_block:
EXAMPLES block
==============
After the shebang, the UTF-8 coding, the copyright line, the license section, and the ``DOCUMENTATION`` block comes the ``EXAMPLES`` block. Here you show users how your module works with real-world examples in multi-line plain-text YAML format. The best examples are ready for the user to copy and paste into a playbook. Review and update your examples with every change to your module.
Per playbook best practices, each example should include a ``name:`` line:
.. code-block:: text
EXAMPLES = r'''
- name: Ensure foo is installed
namespace.collection.modulename:
name: foo
state: present
'''
The ``name:`` line should be capitalized and not include a trailing dot.
Use a fully qualified collection name (FQCN) as a part of the module's name like in the example above. For modules in ``ansible-core``, use the ``ansible.builtin.`` identifier, for example ``ansible.builtin.debug``.
If your examples use boolean options, use ``true``/``false`` values. Since the documentation renders boolean values as ``true``/``false``, having the examples use these values as well makes the module documentation more consistent.
If your module returns facts that are often needed, an example of how to use them can be helpful.
.. _return_block:
RETURN block
============
After the shebang, the UTF-8 coding, the copyright line, the license section, ``DOCUMENTATION`` and ``EXAMPLES`` blocks comes the ``RETURN`` block. This section documents the information the module returns for use by other modules.
If your module doesn't return anything (apart from the standard returns), this section of your module should read: ``RETURN = r''' # '''``
Otherwise, for each value returned, provide the following fields. All fields are required unless specified otherwise.
:return name:
Name of the returned field.
:description:
Detailed description of what this value represents. Capitalized and with trailing dot.
:returned:
When this value is returned, such as ``always``, ``changed`` or ``success``. This is a string and can contain any human-readable content.
:type:
Data type.
:elements:
If ``type='list'``, specifies the data type of the list's elements.
:sample:
One or more examples.
:version_added:
Only needed if this return was extended after initial Ansible release, in other words, this is greater than the top level `version_added` field.
This is a string, and not a float, for example, ``version_added: '2.3'``.
:contains:
Optional. To describe nested return values, set ``type: dict``, or ``type: list``/``elements: dict``, or if you really have to, ``type: complex``, and repeat the elements above for each sub-field.
Here are two example ``RETURN`` sections, one with three simple fields and one with a complex nested field:
.. code-block:: text
RETURN = r'''
dest:
description: Destination file/path.
returned: success
type: str
sample: /path/to/file.txt
src:
description: Source file used for the copy on the target machine.
returned: changed
type: str
sample: /home/httpd/.ansible/tmp/ansible-tmp-1423796390.97-147729857856000/source
md5sum:
description: MD5 checksum of the file after running copy.
returned: when supported
type: str
sample: 2a5aeecc61dc98c4d780b14b330e3282
'''
RETURN = r'''
packages:
description: Information about package requirements.
returned: success
type: dict
contains:
missing:
description: Packages that are missing from the system.
returned: success
type: list
elements: str
sample:
- libmysqlclient-dev
- libxml2-dev
badversion:
description: Packages that are installed but at bad versions.
returned: success
type: list
elements: dict
sample:
- package: libxml2-dev
version: 2.9.4+dfsg1-2
constraint: ">= 3.0"
'''
.. _python_imports:
Python imports
==============
After the shebang, the UTF-8 coding, the copyright line, the license, and the sections for ``DOCUMENTATION``, ``EXAMPLES``, and ``RETURN``, you can finally add the python imports. All modules must use Python imports in the form:
.. code-block:: python
from ansible.module_utils.basic import AnsibleModule
The use of "wildcard" imports such as ``from module_utils.basic import *`` is no longer allowed.
.. _dev_testing_module_documentation:
Testing module documentation
============================
To test Ansible documentation locally please :ref:`follow instruction<testing_module_documentation>`. To test documentation in collections, please see :ref:`build_collection_docsite`.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,128 |
File module "a=,u=rX", X improperly preserves "x" on files.
|
### Summary
The file module "mode: a=,ug=rX" seems to behave differently than "chmod" for the same mode.
If you have a file that is mode 755 and you tell the file module "mode: a=,ug=rX", the mode will be set to 550. If, however, that file starts as mode 644, the mode will be set to 440. Chmod with that same mode string in both cases results in 440. It seems to be preserving the previous "x" status.
It seems to be related to the "a=", because without that file and chmod work the same.
The workaround is to use "ug=rX,o=" rather than "a=,ug=rX", but Ansible probably wants to match chmod in this.
### Issue Type
Bug Report
### Component Name
file
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0.dev0] (devel cc8e6d06d0) last updated 2023/03/02 15:23:16 (GMT -600)
config file = None
configured module search path = ['/home/sean/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tmp/ansible/lib/ansible
ansible collection location = /home/sean/.ansible/collections:/usr/share/ansible/collections
executable location = /tmp/ansible/bin//ansible
python version = 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
EDITOR(env: EDITOR) = lvim
```
### OS / Environment
Ubuntu 22.04
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- hosts: 127.0.0.1
connection: local
gather_facts: no
tasks:
- name: Setup the directory and file
shell: "rm -rf test_directory; mkdir test_directory; date >test_directory/test_file"
- name: Change mode of file to 755
command: "chmod 755 test_directory/test_file"
- name: Starting permissions
shell: "ls -la test_directory/test_file >/dev/tty"
- name: File module sets mode a=,ug=rX
file:
path: test_directory/test_file
mode: a=,ug=rX
- name: After file module sets mode to a=,ug=rX
shell: "ls -la test_directory/test_file >/dev/tty"
- name: Use chmod to do the same chmod as the file module just did
command: "chmod -R a=,ug=rX test_directory/test_file"
- name: The permissions should be the same here (same mode specified).
shell: "ls -la test_directory/test_file >/dev/tty"
- name: Run file module again, does it produce same results as last time?
file:
path: test_directory/test_file
mode: a=,ug=rX
- name: 'This should be the same as "File module sets mode" above, same permissions were given'
shell: "ls -la test_directory/test_file >/dev/tty"
```
### Expected Results
"file" set test_file to mode 550, "chmod" with same mode sets it to 440, "file" run again with the same mode argument leaves the mode at 440, unlike the first pass.
I would expect "a=,..." to set a specific mode, which is what chmod does.
### Actual Results
```console
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'
PLAY [127.0.0.1] ****************************************************************************
TASK [Setup the directory and file] *********************************************************
[WARNING]: Consider using the file module with state=absent rather than running 'rm'. If
you need to use command because file is insufficient you can add 'warn: false' to this
command task or set 'command_warnings=False' in ansible.cfg to get rid of this message.
changed: [127.0.0.1]
TASK [Change mode of file to 755] ***********************************************************
[WARNING]: Consider using the file module with mode rather than running 'chmod'. If you
need to use command because file is insufficient you can add 'warn: false' to this command
task or set 'command_warnings=False' in ansible.cfg to get rid of this message.
changed: [127.0.0.1]
TASK [Starting permissions] *****************************************************************
-rwxr-xr-x 1 sean sean 32 Mar 2 15:25 test_directory/test_file
changed: [127.0.0.1]
TASK [File module sets mode a=,ug=rX] *******************************************************
changed: [127.0.0.1]
TASK [After file module sets mode to a=,ug=rX] **********************************************
-r-xr-x--- 1 sean sean 32 Mar 2 15:25 test_directory/test_file
changed: [127.0.0.1]
TASK [Use chmod to do the same chmod as the file module just did] ***************************
changed: [127.0.0.1]
TASK [The permissions should be the same here (same mode specified).] ***********************
-r--r----- 1 sean sean 32 Mar 2 15:25 test_directory/test_file
changed: [127.0.0.1]
TASK [Run file module again, does it produce same results as last time?] ********************
ok: [127.0.0.1]
TASK [This should be the same as "File module sets mode" above, same permissions were given] ***
-r--r----- 1 sean sean 32 Mar 2 15:25 test_directory/test_file
changed: [127.0.0.1]
PLAY RECAP **********************************************************************************
127.0.0.1 : ok=9 changed=8 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80128
|
https://github.com/ansible/ansible/pull/80132
|
f9534fd7b7e8c7f3314d68f62025ebc9499a72f5
|
243aea45cea543fc1ef7c43d380a68aa1c7b338a
| 2023-03-02T22:30:26Z |
python
| 2023-04-10T22:29:10Z |
changelogs/fragments/80128-symbolic-modes-X-use-computed.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,128 |
File module "a=,u=rX", X improperly preserves "x" on files.
|
### Summary
The file module "mode: a=,ug=rX" seems to behave differently than "chmod" for the same mode.
If you have a file that is mode 755 and you tell the file module "mode: a=,ug=rX", the mode will be set to 550. If, however, that file starts as mode 644, the mode will be set to 440. Chmod with that same mode string in both cases results in 440. It seems to be preserving the previous "x" status.
It seems to be related to the "a=", because without that file and chmod work the same.
The workaround is to use "ug=rX,o=" rather than "a=,ug=rX", but Ansible probably wants to match chmod in this.
### Issue Type
Bug Report
### Component Name
file
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0.dev0] (devel cc8e6d06d0) last updated 2023/03/02 15:23:16 (GMT -600)
config file = None
configured module search path = ['/home/sean/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tmp/ansible/lib/ansible
ansible collection location = /home/sean/.ansible/collections:/usr/share/ansible/collections
executable location = /tmp/ansible/bin//ansible
python version = 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
EDITOR(env: EDITOR) = lvim
```
### OS / Environment
Ubuntu 22.04
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- hosts: 127.0.0.1
connection: local
gather_facts: no
tasks:
- name: Setup the directory and file
shell: "rm -rf test_directory; mkdir test_directory; date >test_directory/test_file"
- name: Change mode of file to 755
command: "chmod 755 test_directory/test_file"
- name: Starting permissions
shell: "ls -la test_directory/test_file >/dev/tty"
- name: File module sets mode a=,ug=rX
file:
path: test_directory/test_file
mode: a=,ug=rX
- name: After file module sets mode to a=,ug=rX
shell: "ls -la test_directory/test_file >/dev/tty"
- name: Use chmod to do the same chmod as the file module just did
command: "chmod -R a=,ug=rX test_directory/test_file"
- name: The permissions should be the same here (same mode specified).
shell: "ls -la test_directory/test_file >/dev/tty"
- name: Run file module again, does it produce same results as last time?
file:
path: test_directory/test_file
mode: a=,ug=rX
- name: 'This should be the same as "File module sets mode" above, same permissions were given'
shell: "ls -la test_directory/test_file >/dev/tty"
```
### Expected Results
"file" set test_file to mode 550, "chmod" with same mode sets it to 440, "file" run again with the same mode argument leaves the mode at 440, unlike the first pass.
I would expect "a=,..." to set a specific mode, which is what chmod does.
### Actual Results
```console
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'
PLAY [127.0.0.1] ****************************************************************************
TASK [Setup the directory and file] *********************************************************
[WARNING]: Consider using the file module with state=absent rather than running 'rm'. If
you need to use command because file is insufficient you can add 'warn: false' to this
command task or set 'command_warnings=False' in ansible.cfg to get rid of this message.
changed: [127.0.0.1]
TASK [Change mode of file to 755] ***********************************************************
[WARNING]: Consider using the file module with mode rather than running 'chmod'. If you
need to use command because file is insufficient you can add 'warn: false' to this command
task or set 'command_warnings=False' in ansible.cfg to get rid of this message.
changed: [127.0.0.1]
TASK [Starting permissions] *****************************************************************
-rwxr-xr-x 1 sean sean 32 Mar 2 15:25 test_directory/test_file
changed: [127.0.0.1]
TASK [File module sets mode a=,ug=rX] *******************************************************
changed: [127.0.0.1]
TASK [After file module sets mode to a=,ug=rX] **********************************************
-r-xr-x--- 1 sean sean 32 Mar 2 15:25 test_directory/test_file
changed: [127.0.0.1]
TASK [Use chmod to do the same chmod as the file module just did] ***************************
changed: [127.0.0.1]
TASK [The permissions should be the same here (same mode specified).] ***********************
-r--r----- 1 sean sean 32 Mar 2 15:25 test_directory/test_file
changed: [127.0.0.1]
TASK [Run file module again, does it produce same results as last time?] ********************
ok: [127.0.0.1]
TASK [This should be the same as "File module sets mode" above, same permissions were given] ***
-r--r----- 1 sean sean 32 Mar 2 15:25 test_directory/test_file
changed: [127.0.0.1]
PLAY RECAP **********************************************************************************
127.0.0.1 : ok=9 changed=8 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80128
|
https://github.com/ansible/ansible/pull/80132
|
f9534fd7b7e8c7f3314d68f62025ebc9499a72f5
|
243aea45cea543fc1ef7c43d380a68aa1c7b338a
| 2023-03-02T22:30:26Z |
python
| 2023-04-10T22:29:10Z |
lib/ansible/module_utils/basic.py
|
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013
# Copyright (c), Toshio Kuratomi <[email protected]> 2016
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import sys
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
_PY3_MIN = sys.version_info >= (3, 5)
_PY2_MIN = (2, 7) <= sys.version_info < (3,)
_PY_MIN = _PY3_MIN or _PY2_MIN
if not _PY_MIN:
print(
'\n{"failed": true, '
'"msg": "ansible-core requires a minimum of Python2 version 2.7 or Python3 version 3.5. Current version: %s"}' % ''.join(sys.version.splitlines())
)
sys.exit(1)
FILE_ATTRIBUTES = {
'A': 'noatime',
'a': 'append',
'c': 'compressed',
'C': 'nocow',
'd': 'nodump',
'D': 'dirsync',
'e': 'extents',
'E': 'encrypted',
'h': 'blocksize',
'i': 'immutable',
'I': 'indexed',
'j': 'journalled',
'N': 'inline',
's': 'zero',
'S': 'synchronous',
't': 'notail',
'T': 'blockroot',
'u': 'undelete',
'X': 'compressedraw',
'Z': 'compresseddirty',
}
# Ansible modules can be written in any language.
# The functions available here can be used to do many common tasks,
# to simplify development of Python modules.
import __main__
import atexit
import errno
import datetime
import grp
import fcntl
import locale
import os
import pwd
import platform
import re
import select
import shlex
import shutil
import signal
import stat
import subprocess
import tempfile
import time
import traceback
import types
from itertools import chain, repeat
try:
import syslog
HAS_SYSLOG = True
except ImportError:
HAS_SYSLOG = False
try:
from systemd import journal, daemon as systemd_daemon
# Makes sure that systemd.journal has method sendv()
# Double check that journal has method sendv (some packages don't)
# check if the system is running under systemd
has_journal = hasattr(journal, 'sendv') and systemd_daemon.booted()
except (ImportError, AttributeError):
# AttributeError would be caused from use of .booted() if wrong systemd
has_journal = False
HAVE_SELINUX = False
try:
from ansible.module_utils.compat import selinux
HAVE_SELINUX = True
except ImportError:
pass
# Python2 & 3 way to get NoneType
NoneType = type(None)
from ansible.module_utils.compat import selectors
from ._text import to_native, to_bytes, to_text
from ansible.module_utils.common.text.converters import (
jsonify,
container_to_bytes as json_dict_unicode_to_bytes,
container_to_text as json_dict_bytes_to_unicode,
)
from ansible.module_utils.common.arg_spec import ModuleArgumentSpecValidator
from ansible.module_utils.common.text.formatters import (
lenient_lowercase,
bytes_to_human,
human_to_bytes,
SIZE_RANGES,
)
import hashlib
def _get_available_hash_algorithms():
"""Return a dictionary of available hash function names and their associated function."""
try:
# Algorithms available in Python 2.7.9+ and Python 3.2+
# https://docs.python.org/2.7/library/hashlib.html#hashlib.algorithms_available
# https://docs.python.org/3.2/library/hashlib.html#hashlib.algorithms_available
algorithm_names = hashlib.algorithms_available
except AttributeError:
# Algorithms in Python 2.7.x (used only for Python 2.7.0 through 2.7.8)
# https://docs.python.org/2.7/library/hashlib.html#hashlib.hashlib.algorithms
algorithm_names = set(hashlib.algorithms)
algorithms = {}
for algorithm_name in algorithm_names:
algorithm_func = getattr(hashlib, algorithm_name, None)
if algorithm_func:
try:
# Make sure the algorithm is actually available for use.
# Not all algorithms listed as available are actually usable.
# For example, md5 is not available in FIPS mode.
algorithm_func()
except Exception:
pass
else:
algorithms[algorithm_name] = algorithm_func
return algorithms
AVAILABLE_HASH_ALGORITHMS = _get_available_hash_algorithms()
try:
from ansible.module_utils.common._json_compat import json
except ImportError as e:
print('\n{{"msg": "Error: ansible requires the stdlib json: {0}", "failed": true}}'.format(to_native(e)))
sys.exit(1)
from ansible.module_utils.six.moves.collections_abc import (
KeysView,
Mapping, MutableMapping,
Sequence, MutableSequence,
Set, MutableSet,
)
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.file import (
_PERM_BITS as PERM_BITS,
_EXEC_PERM_BITS as EXEC_PERM_BITS,
_DEFAULT_PERM as DEFAULT_PERM,
is_executable,
format_attributes,
get_flags_from_attributes,
)
from ansible.module_utils.common.sys_info import (
get_distribution,
get_distribution_version,
get_platform_subclass,
)
from ansible.module_utils.pycompat24 import get_exception, literal_eval
from ansible.module_utils.common.parameters import (
env_fallback,
remove_values,
sanitize_keys,
DEFAULT_TYPE_VALIDATORS,
PASS_VARS,
PASS_BOOLS,
)
from ansible.module_utils.errors import AnsibleFallbackNotFound, AnsibleValidationErrorMultiple, UnsupportedError
from ansible.module_utils.six import (
PY2,
PY3,
b,
binary_type,
integer_types,
iteritems,
string_types,
text_type,
)
from ansible.module_utils.six.moves import map, reduce, shlex_quote
from ansible.module_utils.common.validation import (
check_missing_parameters,
safe_eval,
)
from ansible.module_utils.common._utils import get_all_subclasses as _get_all_subclasses
from ansible.module_utils.parsing.convert_bool import BOOLEANS, BOOLEANS_FALSE, BOOLEANS_TRUE, boolean
from ansible.module_utils.common.warnings import (
deprecate,
get_deprecation_messages,
get_warning_messages,
warn,
)
# Note: When getting Sequence from collections, it matches with strings. If
# this matters, make sure to check for strings before checking for sequencetype
SEQUENCETYPE = frozenset, KeysView, Sequence
PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wrd|wd)?)(?:[-_\s].+)?$', re.I)
imap = map
try:
# Python 2
unicode # type: ignore[used-before-def] # pylint: disable=used-before-assignment
except NameError:
# Python 3
unicode = text_type
try:
# Python 2
basestring # type: ignore[used-before-def,has-type] # pylint: disable=used-before-assignment
except NameError:
# Python 3
basestring = string_types
_literal_eval = literal_eval
# End of deprecated names
# Internal global holding passed in params. This is consulted in case
# multiple AnsibleModules are created. Otherwise each AnsibleModule would
# attempt to read from stdin. Other code should not use this directly as it
# is an internal implementation detail
_ANSIBLE_ARGS = None
FILE_COMMON_ARGUMENTS = dict(
# These are things we want. About setting metadata (mode, ownership, permissions in general) on
# created files (these are used by set_fs_attributes_if_different and included in
# load_file_common_arguments)
mode=dict(type='raw'),
owner=dict(type='str'),
group=dict(type='str'),
seuser=dict(type='str'),
serole=dict(type='str'),
selevel=dict(type='str'),
setype=dict(type='str'),
attributes=dict(type='str', aliases=['attr']),
unsafe_writes=dict(type='bool', default=False, fallback=(env_fallback, ['ANSIBLE_UNSAFE_WRITES'])), # should be available to any module using atomic_move
)
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
# Used for parsing symbolic file perms
MODE_OPERATOR_RE = re.compile(r'[+=-]')
USERS_RE = re.compile(r'[^ugo]')
PERMS_RE = re.compile(r'[^rwxXstugo]')
#
# Deprecated functions
#
def get_platform():
'''
**Deprecated** Use :py:func:`platform.system` directly.
:returns: Name of the platform the module is running on in a native string
Returns a native string that labels the platform ("Linux", "Solaris", etc). Currently, this is
the result of calling :py:func:`platform.system`.
'''
return platform.system()
# End deprecated functions
#
# Compat shims
#
def load_platform_subclass(cls, *args, **kwargs):
"""**Deprecated**: Use ansible.module_utils.common.sys_info.get_platform_subclass instead"""
platform_cls = get_platform_subclass(cls)
return super(cls, platform_cls).__new__(platform_cls)
def get_all_subclasses(cls):
"""**Deprecated**: Use ansible.module_utils.common._utils.get_all_subclasses instead"""
return list(_get_all_subclasses(cls))
# End compat shims
def heuristic_log_sanitize(data, no_log_values=None):
''' Remove strings that look like passwords from log messages '''
# Currently filters:
# user:pass@foo/whatever and http://username:pass@wherever/foo
# This code has false positives and consumes parts of logs that are
# not passwds
# begin: start of a passwd containing string
# end: end of a passwd containing string
# sep: char between user and passwd
# prev_begin: where in the overall string to start a search for
# a passwd
# sep_search_end: where in the string to end a search for the sep
data = to_native(data)
output = []
begin = len(data)
prev_begin = begin
sep = 1
while sep:
# Find the potential end of a passwd
try:
end = data.rindex('@', 0, begin)
except ValueError:
# No passwd in the rest of the data
output.insert(0, data[0:begin])
break
# Search for the beginning of a passwd
sep = None
sep_search_end = end
while not sep:
# URL-style username+password
try:
begin = data.rindex('://', 0, sep_search_end)
except ValueError:
# No url style in the data, check for ssh style in the
# rest of the string
begin = 0
# Search for separator
try:
sep = data.index(':', begin + 3, end)
except ValueError:
# No separator; choices:
if begin == 0:
# Searched the whole string so there's no password
# here. Return the remaining data
output.insert(0, data[0:prev_begin])
break
# Search for a different beginning of the password field.
sep_search_end = begin
continue
if sep:
# Password was found; remove it.
output.insert(0, data[end:prev_begin])
output.insert(0, '********')
output.insert(0, data[begin:sep + 1])
prev_begin = begin
output = ''.join(output)
if no_log_values:
output = remove_values(output, no_log_values)
return output
def _load_params():
''' read the modules parameters and store them globally.
This function may be needed for certain very dynamic custom modules which
want to process the parameters that are being handed the module. Since
this is so closely tied to the implementation of modules we cannot
guarantee API stability for it (it may change between versions) however we
will try not to break it gratuitously. It is certainly more future-proof
to call this function and consume its outputs than to implement the logic
inside it as a copy in your own code.
'''
global _ANSIBLE_ARGS
if _ANSIBLE_ARGS is not None:
buffer = _ANSIBLE_ARGS
else:
# debug overrides to read args from file or cmdline
# Avoid tracebacks when locale is non-utf8
# We control the args and we pass them as utf8
if len(sys.argv) > 1:
if os.path.isfile(sys.argv[1]):
fd = open(sys.argv[1], 'rb')
buffer = fd.read()
fd.close()
else:
buffer = sys.argv[1]
if PY3:
buffer = buffer.encode('utf-8', errors='surrogateescape')
# default case, read from stdin
else:
if PY2:
buffer = sys.stdin.read()
else:
buffer = sys.stdin.buffer.read()
_ANSIBLE_ARGS = buffer
try:
params = json.loads(buffer.decode('utf-8'))
except ValueError:
# This helper used too early for fail_json to work.
print('\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}')
sys.exit(1)
if PY2:
params = json_dict_unicode_to_bytes(params)
try:
return params['ANSIBLE_MODULE_ARGS']
except KeyError:
# This helper does not have access to fail_json so we have to print
# json output on our own.
print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS in json data from stdin. Unable to figure out what parameters were passed", '
'"failed": true}')
sys.exit(1)
def missing_required_lib(library, reason=None, url=None):
hostname = platform.node()
msg = "Failed to import the required Python library (%s) on %s's Python %s." % (library, hostname, sys.executable)
if reason:
msg += " This is required %s." % reason
if url:
msg += " See %s for more info." % url
msg += (" Please read the module documentation and install it in the appropriate location."
" If the required library is installed, but Ansible is using the wrong Python interpreter,"
" please consult the documentation on ansible_python_interpreter")
return msg
class AnsibleModule(object):
def __init__(self, argument_spec, bypass_checks=False, no_log=False,
mutually_exclusive=None, required_together=None,
required_one_of=None, add_file_common_args=False,
supports_check_mode=False, required_if=None, required_by=None):
'''
Common code for quickly building an ansible module in Python
(although you can write modules with anything that can return JSON).
See :ref:`developing_modules_general` for a general introduction
and :ref:`developing_program_flow_modules` for more detailed explanation.
'''
self._name = os.path.basename(__file__) # initialize name until we can parse from options
self.argument_spec = argument_spec
self.supports_check_mode = supports_check_mode
self.check_mode = False
self.bypass_checks = bypass_checks
self.no_log = no_log
self.mutually_exclusive = mutually_exclusive
self.required_together = required_together
self.required_one_of = required_one_of
self.required_if = required_if
self.required_by = required_by
self.cleanup_files = []
self._debug = False
self._diff = False
self._socket_path = None
self._shell = None
self._syslog_facility = 'LOG_USER'
self._verbosity = 0
# May be used to set modifications to the environment for any
# run_command invocation
self.run_command_environ_update = {}
self._clean = {}
self._string_conversion_action = ''
self.aliases = {}
self._legal_inputs = []
self._options_context = list()
self._tmpdir = None
if add_file_common_args:
for k, v in FILE_COMMON_ARGUMENTS.items():
if k not in self.argument_spec:
self.argument_spec[k] = v
# Save parameter values that should never be logged
self.no_log_values = set()
# check the locale as set by the current environment, and reset to
# a known valid (LANG=C) if it's an invalid/unavailable locale
self._check_locale()
self._load_params()
self._set_internal_properties()
self.validator = ModuleArgumentSpecValidator(self.argument_spec,
self.mutually_exclusive,
self.required_together,
self.required_one_of,
self.required_if,
self.required_by,
)
self.validation_result = self.validator.validate(self.params)
self.params.update(self.validation_result.validated_parameters)
self.no_log_values.update(self.validation_result._no_log_values)
self.aliases.update(self.validation_result._aliases)
try:
error = self.validation_result.errors[0]
except IndexError:
error = None
# Fail for validation errors, even in check mode
if error:
msg = self.validation_result.errors.msg
if isinstance(error, UnsupportedError):
msg = "Unsupported parameters for ({name}) {kind}: {msg}".format(name=self._name, kind='module', msg=msg)
self.fail_json(msg=msg)
if self.check_mode and not self.supports_check_mode:
self.exit_json(skipped=True, msg="remote module (%s) does not support check mode" % self._name)
# This is for backwards compatibility only.
self._CHECK_ARGUMENT_TYPES_DISPATCHER = DEFAULT_TYPE_VALIDATORS
if not self.no_log:
self._log_invocation()
# selinux state caching
self._selinux_enabled = None
self._selinux_mls_enabled = None
self._selinux_initial_context = None
# finally, make sure we're in a sane working dir
self._set_cwd()
@property
def tmpdir(self):
# if _ansible_tmpdir was not set and we have a remote_tmp,
# the module needs to create it and clean it up once finished.
# otherwise we create our own module tmp dir from the system defaults
if self._tmpdir is None:
basedir = None
if self._remote_tmp is not None:
basedir = os.path.expanduser(os.path.expandvars(self._remote_tmp))
if basedir is not None and not os.path.exists(basedir):
try:
os.makedirs(basedir, mode=0o700)
except (OSError, IOError) as e:
self.warn("Unable to use %s as temporary directory, "
"failing back to system: %s" % (basedir, to_native(e)))
basedir = None
else:
self.warn("Module remote_tmp %s did not exist and was "
"created with a mode of 0700, this may cause"
" issues when running as another user. To "
"avoid this, create the remote_tmp dir with "
"the correct permissions manually" % basedir)
basefile = "ansible-moduletmp-%s-" % time.time()
try:
tmpdir = tempfile.mkdtemp(prefix=basefile, dir=basedir)
except (OSError, IOError) as e:
self.fail_json(
msg="Failed to create remote module tmp path at dir %s "
"with prefix %s: %s" % (basedir, basefile, to_native(e))
)
if not self._keep_remote_files:
atexit.register(shutil.rmtree, tmpdir)
self._tmpdir = tmpdir
return self._tmpdir
def warn(self, warning):
warn(warning)
self.log('[WARNING] %s' % warning)
def deprecate(self, msg, version=None, date=None, collection_name=None):
if version is not None and date is not None:
raise AssertionError("implementation error -- version and date must not both be set")
deprecate(msg, version=version, date=date, collection_name=collection_name)
# For compatibility, we accept that neither version nor date is set,
# and treat that the same as if version had been set
if date is not None:
self.log('[DEPRECATION WARNING] %s %s' % (msg, date))
else:
self.log('[DEPRECATION WARNING] %s %s' % (msg, version))
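# Illustrative usage only (not part of the original source); assuming `module`
# is an AnsibleModule instance, a module author would typically call either of:
#   module.deprecate('The foo option is deprecated, use bar instead', version='2.14')
#   module.deprecate('The foo option is deprecated, use bar instead', date='2023-12-01')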
def load_file_common_arguments(self, params, path=None):
'''
Many modules deal with files; this encapsulates the common
options that the file module accepts so that they are directly
available to all modules and the modules can share code.
Allows the path/dest module argument to be overridden by providing path.
'''
if path is None:
path = params.get('path', params.get('dest', None))
if path is None:
return {}
else:
path = os.path.expanduser(os.path.expandvars(path))
b_path = to_bytes(path, errors='surrogate_or_strict')
# if the path is a symlink, and we're following links, get
# the target of the link instead for testing
if params.get('follow', False) and os.path.islink(b_path):
b_path = os.path.realpath(b_path)
path = to_native(b_path)
mode = params.get('mode', None)
owner = params.get('owner', None)
group = params.get('group', None)
# selinux related options
seuser = params.get('seuser', None)
serole = params.get('serole', None)
setype = params.get('setype', None)
selevel = params.get('selevel', None)
secontext = [seuser, serole, setype]
if self.selinux_mls_enabled():
secontext.append(selevel)
default_secontext = self.selinux_default_context(path)
for i in range(len(default_secontext)):
if i is not None and secontext[i] == '_default':
secontext[i] = default_secontext[i]
attributes = params.get('attributes', None)
return dict(
path=path, mode=mode, owner=owner, group=group,
seuser=seuser, serole=serole, setype=setype,
selevel=selevel, secontext=secontext, attributes=attributes,
)
# Detect whether using selinux that is MLS-aware.
# While this means you can set the level/range with
# selinux.lsetfilecon(), it may or may not mean that you
# will get the selevel as part of the context returned
# by selinux.lgetfilecon().
def selinux_mls_enabled(self):
if self._selinux_mls_enabled is None:
self._selinux_mls_enabled = HAVE_SELINUX and selinux.is_selinux_mls_enabled() == 1
return self._selinux_mls_enabled
def selinux_enabled(self):
if self._selinux_enabled is None:
self._selinux_enabled = HAVE_SELINUX and selinux.is_selinux_enabled() == 1
return self._selinux_enabled
# Determine whether we need a placeholder for selevel/mls
def selinux_initial_context(self):
if self._selinux_initial_context is None:
self._selinux_initial_context = [None, None, None]
if self.selinux_mls_enabled():
self._selinux_initial_context.append(None)
return self._selinux_initial_context
# If selinux fails to find a default, return an array of None
def selinux_default_context(self, path, mode=0):
context = self.selinux_initial_context()
if not self.selinux_enabled():
return context
try:
ret = selinux.matchpathcon(to_native(path, errors='surrogate_or_strict'), mode)
except OSError:
return context
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def selinux_context(self, path):
context = self.selinux_initial_context()
if not self.selinux_enabled():
return context
try:
ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict'))
except OSError as e:
if e.errno == errno.ENOENT:
self.fail_json(path=path, msg='path %s does not exist' % path)
else:
self.fail_json(path=path, msg='failed to retrieve selinux context')
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def user_and_group(self, path, expand=True):
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
st = os.lstat(b_path)
uid = st.st_uid
gid = st.st_gid
return (uid, gid)
def find_mount_point(self, path):
'''
Takes a path and returns its mount point
:param path: a string type with a filesystem path
:returns: the path to the mount point as a text type
'''
b_path = os.path.realpath(to_bytes(os.path.expanduser(os.path.expandvars(path)), errors='surrogate_or_strict'))
while not os.path.ismount(b_path):
b_path = os.path.dirname(b_path)
return to_text(b_path, errors='surrogate_or_strict')
def is_special_selinux_path(self, path):
"""
Returns a tuple containing (True, selinux_context) if the given path is on a
NFS or other 'special' fs mount point, otherwise the return will be (False, None).
"""
try:
f = open('/proc/mounts', 'r')
mount_data = f.readlines()
f.close()
except Exception:
return (False, None)
path_mount_point = self.find_mount_point(path)
for line in mount_data:
(device, mount_point, fstype, options, rest) = line.split(' ', 4)
if to_bytes(path_mount_point) == to_bytes(mount_point):
for fs in self._selinux_special_fs:
if fs in fstype:
special_context = self.selinux_context(path_mount_point)
return (True, special_context)
return (False, None)
def set_default_selinux_context(self, path, changed):
if not self.selinux_enabled():
return changed
context = self.selinux_default_context(path)
return self.set_context_if_different(path, context, False)
def set_context_if_different(self, path, context, changed, diff=None):
if not self.selinux_enabled():
return changed
if self.check_file_absent_if_check_mode(path):
return True
cur_context = self.selinux_context(path)
new_context = list(cur_context)
# Iterate over the current context instead of the
# argument context, which may have selevel.
(is_special_se, sp_context) = self.is_special_selinux_path(path)
if is_special_se:
new_context = sp_context
else:
for i in range(len(cur_context)):
if len(context) > i:
if context[i] is not None and context[i] != cur_context[i]:
new_context[i] = context[i]
elif context[i] is None:
new_context[i] = cur_context[i]
if cur_context != new_context:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['secontext'] = cur_context
if 'after' not in diff:
diff['after'] = {}
diff['after']['secontext'] = new_context
try:
if self.check_mode:
return True
rc = selinux.lsetfilecon(to_native(path), ':'.join(new_context))
except OSError as e:
self.fail_json(path=path, msg='invalid selinux context: %s' % to_native(e),
new_context=new_context, cur_context=cur_context, input_was=context)
if rc != 0:
self.fail_json(path=path, msg='set selinux context failed')
changed = True
return changed
def set_owner_if_different(self, path, owner, changed, diff=None, expand=True):
if owner is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
uid = int(owner)
except ValueError:
try:
uid = pwd.getpwnam(owner).pw_uid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner)
if orig_uid != uid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['owner'] = orig_uid
if 'after' not in diff:
diff['after'] = {}
diff['after']['owner'] = uid
if self.check_mode:
return True
try:
os.lchown(b_path, uid, -1)
except (IOError, OSError) as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: %s' % (to_text(e)))
changed = True
return changed
def set_group_if_different(self, path, group, changed, diff=None, expand=True):
if group is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
gid = int(group)
except ValueError:
try:
gid = grp.getgrnam(group).gr_gid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group)
if orig_gid != gid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['group'] = orig_gid
if 'after' not in diff:
diff['after'] = {}
diff['after']['group'] = gid
if self.check_mode:
return True
try:
os.lchown(b_path, -1, gid)
except OSError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed')
changed = True
return changed
def set_mode_if_different(self, path, mode, changed, diff=None, expand=True):
if mode is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
path_stat = os.lstat(b_path)
if not isinstance(mode, int):
try:
mode = int(mode, 8)
except Exception:
try:
mode = self._symbolic_mode_to_octal(path_stat, mode)
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path,
msg="mode must be in octal or symbolic form",
details=to_native(e))
if mode != stat.S_IMODE(mode):
# prevent mode from having extra info or being an invalid long number
path = to_text(b_path)
self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode)
prev_mode = stat.S_IMODE(path_stat.st_mode)
if prev_mode != mode:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['mode'] = '0%03o' % prev_mode
if 'after' not in diff:
diff['after'] = {}
diff['after']['mode'] = '0%03o' % mode
if self.check_mode:
return True
# FIXME: comparison against string above will cause this to be executed
# every time
try:
if hasattr(os, 'lchmod'):
os.lchmod(b_path, mode)
else:
if not os.path.islink(b_path):
os.chmod(b_path, mode)
else:
# Attempt to set the perms of the symlink but be
# careful not to change the perms of the underlying
# file while trying
underlying_stat = os.stat(b_path)
os.chmod(b_path, mode)
new_underlying_stat = os.stat(b_path)
if underlying_stat.st_mode != new_underlying_stat.st_mode:
os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))
except OSError as e:
if os.path.islink(b_path) and e.errno in (
errno.EACCES, # can't access symlink in sticky directory (stat)
errno.EPERM, # can't set mode on symbolic links (chmod)
errno.EROFS, # can't set mode on read-only filesystem
):
pass
elif e.errno in (errno.ENOENT, errno.ELOOP): # Can't set mode on broken symbolic links
pass
else:
raise
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chmod failed', details=to_native(e),
exception=traceback.format_exc())
path_stat = os.lstat(b_path)
new_mode = stat.S_IMODE(path_stat.st_mode)
if new_mode != prev_mode:
changed = True
return changed
def set_attributes_if_different(self, path, attributes, changed, diff=None, expand=True):
if attributes is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
existing = self.get_file_attributes(b_path, include_version=False)
attr_mod = '='
if attributes.startswith(('-', '+')):
attr_mod = attributes[0]
attributes = attributes[1:]
if existing.get('attr_flags', '') != attributes or attr_mod == '-':
attrcmd = self.get_bin_path('chattr')
if attrcmd:
attrcmd = [attrcmd, '%s%s' % (attr_mod, attributes), b_path]
changed = True
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['attributes'] = existing.get('attr_flags')
if 'after' not in diff:
diff['after'] = {}
diff['after']['attributes'] = '%s%s' % (attr_mod, attributes)
if not self.check_mode:
try:
rc, out, err = self.run_command(attrcmd)
if rc != 0 or err:
raise Exception("Error while setting attributes: %s" % (out + err))
except Exception as e:
self.fail_json(path=to_text(b_path), msg='chattr failed',
details=to_native(e), exception=traceback.format_exc())
return changed
def get_file_attributes(self, path, include_version=True):
output = {}
attrcmd = self.get_bin_path('lsattr', False)
if attrcmd:
flags = '-vd' if include_version else '-d'
attrcmd = [attrcmd, flags, path]
try:
rc, out, err = self.run_command(attrcmd)
if rc == 0:
res = out.split()
attr_flags_idx = 0
if include_version:
attr_flags_idx = 1
output['version'] = res[0].strip()
output['attr_flags'] = res[attr_flags_idx].replace('-', '').strip()
output['attributes'] = format_attributes(output['attr_flags'])
except Exception:
pass
return output
@classmethod
def _symbolic_mode_to_octal(cls, path_stat, symbolic_mode):
"""
This enables symbolic chmod string parsing as stated in the chmod man-page
This includes things like: "u=rw-x+X,g=r-x+X,o=r-x+X"
"""
new_mode = stat.S_IMODE(path_stat.st_mode)
# Now parse all symbolic modes
for mode in symbolic_mode.split(','):
# Per single mode. This always contains a '+', '-' or '='
# Split it on that
permlist = MODE_OPERATOR_RE.split(mode)
# And find all the operators
opers = MODE_OPERATOR_RE.findall(mode)
# The user(s) the mode applies to is the first element in the
# 'permlist' list. Take that and remove it from the list.
# An empty user or 'a' means 'all'.
users = permlist.pop(0)
use_umask = (users == '')
if users == 'a' or users == '':
users = 'ugo'
# Check if there are illegal characters in the user list
# They can end up in 'users' because they are not split
if USERS_RE.match(users):
raise ValueError("bad symbolic permission for mode: %s" % mode)
# Now we have two lists of equal length: one contains the requested
# permissions and the other the corresponding operators.
for idx, perms in enumerate(permlist):
# Check if there are illegal characters in the permissions
if PERMS_RE.match(perms):
raise ValueError("bad symbolic permission for mode: %s" % mode)
for user in users:
mode_to_apply = cls._get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask)
new_mode = cls._apply_operation_to_mode(user, opers[idx], mode_to_apply, new_mode)
return new_mode
@staticmethod
def _apply_operation_to_mode(user, operator, mode_to_apply, current_mode):
if operator == '=':
if user == 'u':
mask = stat.S_IRWXU | stat.S_ISUID
elif user == 'g':
mask = stat.S_IRWXG | stat.S_ISGID
elif user == 'o':
mask = stat.S_IRWXO | stat.S_ISVTX
# mask out u, g, or o permissions from current_mode and apply new permissions
inverse_mask = mask ^ PERM_BITS
new_mode = (current_mode & inverse_mask) | mode_to_apply
elif operator == '+':
new_mode = current_mode | mode_to_apply
elif operator == '-':
new_mode = current_mode - (current_mode & mode_to_apply)
return new_mode
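# Illustrative arithmetic only (not part of the original source): with operator '-',
# current_mode 0o777 and mode_to_apply 0o022 the result is
# 0o777 - (0o777 & 0o022) == 0o755; with '+' the same inputs give 0o777 | 0o022 == 0o777.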
@staticmethod
def _get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask):
prev_mode = stat.S_IMODE(path_stat.st_mode)
is_directory = stat.S_ISDIR(path_stat.st_mode)
has_x_permissions = (prev_mode & EXEC_PERM_BITS) > 0
apply_X_permission = is_directory or has_x_permissions
# Get the umask, if the 'user' part is empty, the effect is as if (a) were
# given, but bits that are set in the umask are not affected.
# We also need the "reversed umask" for masking
umask = os.umask(0)
os.umask(umask)
rev_umask = umask ^ PERM_BITS
# Permission bits constants documented at:
# https://docs.python.org/3/library/stat.html#stat.S_ISUID
if apply_X_permission:
X_perms = {
'u': {'X': stat.S_IXUSR},
'g': {'X': stat.S_IXGRP},
'o': {'X': stat.S_IXOTH},
}
else:
X_perms = {
'u': {'X': 0},
'g': {'X': 0},
'o': {'X': 0},
}
user_perms_to_modes = {
'u': {
'r': rev_umask & stat.S_IRUSR if use_umask else stat.S_IRUSR,
'w': rev_umask & stat.S_IWUSR if use_umask else stat.S_IWUSR,
'x': rev_umask & stat.S_IXUSR if use_umask else stat.S_IXUSR,
's': stat.S_ISUID,
't': 0,
'u': prev_mode & stat.S_IRWXU,
'g': (prev_mode & stat.S_IRWXG) << 3,
'o': (prev_mode & stat.S_IRWXO) << 6},
'g': {
'r': rev_umask & stat.S_IRGRP if use_umask else stat.S_IRGRP,
'w': rev_umask & stat.S_IWGRP if use_umask else stat.S_IWGRP,
'x': rev_umask & stat.S_IXGRP if use_umask else stat.S_IXGRP,
's': stat.S_ISGID,
't': 0,
'u': (prev_mode & stat.S_IRWXU) >> 3,
'g': prev_mode & stat.S_IRWXG,
'o': (prev_mode & stat.S_IRWXO) << 3},
'o': {
'r': rev_umask & stat.S_IROTH if use_umask else stat.S_IROTH,
'w': rev_umask & stat.S_IWOTH if use_umask else stat.S_IWOTH,
'x': rev_umask & stat.S_IXOTH if use_umask else stat.S_IXOTH,
's': 0,
't': stat.S_ISVTX,
'u': (prev_mode & stat.S_IRWXU) >> 6,
'g': (prev_mode & stat.S_IRWXG) >> 3,
'o': prev_mode & stat.S_IRWXO},
}
# Insert X_perms into user_perms_to_modes
for key, value in X_perms.items():
user_perms_to_modes[key].update(value)
def or_reduce(mode, perm):
return mode | user_perms_to_modes[user][perm]
return reduce(or_reduce, perms, 0)
def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True):
# set modes owners and context as needed
changed = self.set_context_if_different(
file_args['path'], file_args['secontext'], changed, diff
)
changed = self.set_owner_if_different(
file_args['path'], file_args['owner'], changed, diff, expand
)
changed = self.set_group_if_different(
file_args['path'], file_args['group'], changed, diff, expand
)
changed = self.set_mode_if_different(
file_args['path'], file_args['mode'], changed, diff, expand
)
changed = self.set_attributes_if_different(
file_args['path'], file_args['attributes'], changed, diff, expand
)
return changed
def check_file_absent_if_check_mode(self, file_path):
return self.check_mode and not os.path.exists(file_path)
def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def set_file_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def add_path_info(self, kwargs):
'''
for results that are files, supplement the info about the file
in the return path with stats about the file path.
'''
path = kwargs.get('path', kwargs.get('dest', None))
if path is None:
return kwargs
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.exists(b_path):
(uid, gid) = self.user_and_group(path)
kwargs['uid'] = uid
kwargs['gid'] = gid
try:
user = pwd.getpwuid(uid)[0]
except KeyError:
user = str(uid)
try:
group = grp.getgrgid(gid)[0]
except KeyError:
group = str(gid)
kwargs['owner'] = user
kwargs['group'] = group
st = os.lstat(b_path)
kwargs['mode'] = '0%03o' % stat.S_IMODE(st[stat.ST_MODE])
# secontext not yet supported
if os.path.islink(b_path):
kwargs['state'] = 'link'
elif os.path.isdir(b_path):
kwargs['state'] = 'directory'
elif os.stat(b_path).st_nlink > 1:
kwargs['state'] = 'hard'
else:
kwargs['state'] = 'file'
if self.selinux_enabled():
kwargs['secontext'] = ':'.join(self.selinux_context(path))
kwargs['size'] = st[stat.ST_SIZE]
return kwargs
def _check_locale(self):
'''
Uses the locale module to test the currently set locale
(per the LANG and LC_CTYPE environment settings)
'''
try:
# setting the locale to '' uses the default locale
# as it would be returned by locale.getdefaultlocale()
locale.setlocale(locale.LC_ALL, '')
except locale.Error:
# fallback to the 'best' locale, per the function
# final fallback is 'C', which may cause unicode issues
# but is preferable to simply failing on unknown locale
best_locale = get_best_parsable_locale(self)
# need to set several since many tools choose to ignore documented precedence and scope
locale.setlocale(locale.LC_ALL, best_locale)
os.environ['LANG'] = best_locale
os.environ['LC_ALL'] = best_locale
os.environ['LC_MESSAGES'] = best_locale
except Exception as e:
self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" %
to_native(e), exception=traceback.format_exc())
def _set_internal_properties(self, argument_spec=None, module_parameters=None):
if argument_spec is None:
argument_spec = self.argument_spec
if module_parameters is None:
module_parameters = self.params
for k in PASS_VARS:
# handle setting internal properties from internal ansible vars
param_key = '_ansible_%s' % k
if param_key in module_parameters:
if k in PASS_BOOLS:
setattr(self, PASS_VARS[k][0], self.boolean(module_parameters[param_key]))
else:
setattr(self, PASS_VARS[k][0], module_parameters[param_key])
# clean up internal top level params:
if param_key in self.params:
del self.params[param_key]
else:
# use defaults if not already set
if not hasattr(self, PASS_VARS[k][0]):
setattr(self, PASS_VARS[k][0], PASS_VARS[k][1])
def safe_eval(self, value, locals=None, include_exceptions=False):
return safe_eval(value, locals, include_exceptions)
def _load_params(self):
''' read the input and set the params attribute.
This method is for backwards compatibility. The guts of the function
were moved out in 2.1 so that custom modules could read the parameters.
'''
# debug overrides to read args from file or cmdline
self.params = _load_params()
def _log_to_syslog(self, msg):
if HAS_SYSLOG:
try:
module = 'ansible-%s' % self._name
facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER)
syslog.openlog(str(module), 0, facility)
syslog.syslog(syslog.LOG_INFO, msg)
except TypeError as e:
self.fail_json(
msg='Failed to log to syslog (%s). To proceed anyway, '
'disable syslog logging by setting no_target_syslog '
'to True in your Ansible config.' % to_native(e),
exception=traceback.format_exc(),
msg_to_log=msg,
)
def debug(self, msg):
if self._debug:
self.log('[debug] %s' % msg)
def log(self, msg, log_args=None):
if not self.no_log:
if log_args is None:
log_args = dict()
module = 'ansible-%s' % self._name
if isinstance(module, binary_type):
module = module.decode('utf-8', 'replace')
# 6655 - allow for accented characters
if not isinstance(msg, (binary_type, text_type)):
raise TypeError("msg should be a string (got %s)" % type(msg))
# We want journal to always take text type
# syslog takes bytes on py2, text type on py3
if isinstance(msg, binary_type):
journal_msg = remove_values(msg.decode('utf-8', 'replace'), self.no_log_values)
else:
# TODO: surrogateescape is a danger here on Py3
journal_msg = remove_values(msg, self.no_log_values)
if PY3:
syslog_msg = journal_msg
else:
syslog_msg = journal_msg.encode('utf-8', 'replace')
if has_journal:
journal_args = [("MODULE", os.path.basename(__file__))]
for arg in log_args:
name, value = (arg.upper(), str(log_args[arg]))
if name in (
'PRIORITY', 'MESSAGE', 'MESSAGE_ID',
'CODE_FILE', 'CODE_LINE', 'CODE_FUNC',
'SYSLOG_FACILITY', 'SYSLOG_IDENTIFIER',
'SYSLOG_PID',
):
name = "_%s" % name
journal_args.append((name, value))
try:
if HAS_SYSLOG:
# If syslog_facility specified, it needs to convert
# from the facility name to the facility code, and
# set it as SYSLOG_FACILITY argument of journal.send()
facility = getattr(syslog,
self._syslog_facility,
syslog.LOG_USER) >> 3
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
SYSLOG_FACILITY=facility,
**dict(journal_args))
else:
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
**dict(journal_args))
except IOError:
# fall back to syslog since logging to journal failed
self._log_to_syslog(syslog_msg)
else:
self._log_to_syslog(syslog_msg)
def _log_invocation(self):
''' log that ansible ran the module '''
# TODO: generalize a separate log function and make log_invocation use it
# Sanitize possible password argument when logging.
log_args = dict()
for param in self.params:
canon = self.aliases.get(param, param)
arg_opts = self.argument_spec.get(canon, {})
no_log = arg_opts.get('no_log', None)
# try to proactively capture password/passphrase fields
if no_log is None and PASSWORD_MATCH.search(param):
log_args[param] = 'NOT_LOGGING_PASSWORD'
self.warn('Module did not set no_log for %s' % param)
elif self.boolean(no_log):
log_args[param] = 'NOT_LOGGING_PARAMETER'
else:
param_val = self.params[param]
if not isinstance(param_val, (text_type, binary_type)):
param_val = str(param_val)
elif isinstance(param_val, text_type):
param_val = param_val.encode('utf-8')
log_args[param] = heuristic_log_sanitize(param_val, self.no_log_values)
msg = ['%s=%s' % (to_native(arg), to_native(val)) for arg, val in log_args.items()]
if msg:
msg = 'Invoked with %s' % ' '.join(msg)
else:
msg = 'Invoked'
self.log(msg, log_args=log_args)
def _set_cwd(self):
try:
cwd = os.getcwd()
if not os.access(cwd, os.F_OK | os.R_OK):
raise Exception()
return cwd
except Exception:
# we don't have access to the cwd, probably because of sudo.
# Try and move to a neutral location to prevent errors
for cwd in [self.tmpdir, os.path.expandvars('$HOME'), tempfile.gettempdir()]:
try:
if os.access(cwd, os.F_OK | os.R_OK):
os.chdir(cwd)
return cwd
except Exception:
pass
# we won't error here, as it may *not* be a problem,
# and we don't want to break modules unnecessarily
return None
def get_bin_path(self, arg, required=False, opt_dirs=None):
'''
Find system executable in PATH.
:param arg: The executable to find.
:param required: if executable is not found and required is ``True``, fail_json
:param opt_dirs: optional list of directories to search in addition to ``PATH``
:returns: if found return full path; otherwise return None
'''
bin_path = None
try:
bin_path = get_bin_path(arg=arg, opt_dirs=opt_dirs)
except ValueError as e:
if required:
self.fail_json(msg=to_text(e))
else:
return bin_path
return bin_path
def boolean(self, arg):
'''Convert the argument to a boolean'''
if arg is None:
return arg
try:
return boolean(arg)
except TypeError as e:
self.fail_json(msg=to_native(e))
def jsonify(self, data):
try:
return jsonify(data)
except UnicodeError as e:
self.fail_json(msg=to_text(e))
def from_json(self, data):
return json.loads(data)
def add_cleanup_file(self, path):
if path not in self.cleanup_files:
self.cleanup_files.append(path)
def do_cleanup_files(self):
for path in self.cleanup_files:
self.cleanup(path)
def _return_formatted(self, kwargs):
self.add_path_info(kwargs)
if 'invocation' not in kwargs:
kwargs['invocation'] = {'module_args': self.params}
if 'warnings' in kwargs:
if isinstance(kwargs['warnings'], list):
for w in kwargs['warnings']:
self.warn(w)
else:
self.warn(kwargs['warnings'])
warnings = get_warning_messages()
if warnings:
kwargs['warnings'] = warnings
if 'deprecations' in kwargs:
if isinstance(kwargs['deprecations'], list):
for d in kwargs['deprecations']:
if isinstance(d, SEQUENCETYPE) and len(d) == 2:
self.deprecate(d[0], version=d[1])
elif isinstance(d, Mapping):
self.deprecate(d['msg'], version=d.get('version'), date=d.get('date'),
collection_name=d.get('collection_name'))
else:
self.deprecate(d) # pylint: disable=ansible-deprecated-no-version
else:
self.deprecate(kwargs['deprecations']) # pylint: disable=ansible-deprecated-no-version
deprecations = get_deprecation_messages()
if deprecations:
kwargs['deprecations'] = deprecations
kwargs = remove_values(kwargs, self.no_log_values)
print('\n%s' % self.jsonify(kwargs))
def exit_json(self, **kwargs):
''' return from the module, without error '''
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(0)
def fail_json(self, msg, **kwargs):
''' return from the module, with an error message '''
kwargs['failed'] = True
kwargs['msg'] = msg
# Add traceback if debug or high verbosity and it is missing
# NOTE: Badly named as exception, it really always has been a traceback
if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3):
if PY2:
# On Python 2 this is the last (stack frame) exception and as such may be unrelated to the failure
kwargs['exception'] = 'WARNING: The below traceback may *not* be related to the actual failure.\n' +\
''.join(traceback.format_tb(sys.exc_info()[2]))
else:
kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2]))
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(1)
def fail_on_missing_params(self, required_params=None):
if not required_params:
return
try:
check_missing_parameters(self.params, required_params)
except TypeError as e:
self.fail_json(msg=to_native(e))
def digest_from_file(self, filename, algorithm):
''' Return hex digest of local file for a digest_method specified by name, or None if file is not present. '''
b_filename = to_bytes(filename, errors='surrogate_or_strict')
if not os.path.exists(b_filename):
return None
if os.path.isdir(b_filename):
self.fail_json(msg="attempted to take checksum of directory: %s" % filename)
# preserve old behaviour where the third parameter was a hash algorithm object
if hasattr(algorithm, 'hexdigest'):
digest_method = algorithm
else:
try:
digest_method = AVAILABLE_HASH_ALGORITHMS[algorithm]()
except KeyError:
self.fail_json(msg="Could not hash file '%s' with algorithm '%s'. Available algorithms: %s" %
(filename, algorithm, ', '.join(AVAILABLE_HASH_ALGORITHMS)))
blocksize = 64 * 1024
infile = open(os.path.realpath(b_filename), 'rb')
block = infile.read(blocksize)
while block:
digest_method.update(block)
block = infile.read(blocksize)
infile.close()
return digest_method.hexdigest()
def md5(self, filename):
''' Return MD5 hex digest of local file using digest_from_file().
Do not use this function unless you have no other choice for:
1) Optional backwards compatibility
2) Compatibility with a third party protocol
This function will not work on systems complying with FIPS-140-2.
Most uses of this function can use the module.sha1 function instead.
'''
if 'md5' not in AVAILABLE_HASH_ALGORITHMS:
raise ValueError('MD5 not available. Possibly running in FIPS mode')
return self.digest_from_file(filename, 'md5')
def sha1(self, filename):
''' Return SHA1 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha1')
def sha256(self, filename):
''' Return SHA-256 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha256')
def backup_local(self, fn):
'''make a date-marked backup of the specified file; return the path of the backup, or an empty string if the file does not exist'''
backupdest = ''
if os.path.exists(fn):
# backups named basename.PID.YYYY-MM-DD@HH:MM:SS~
ext = time.strftime("%Y-%m-%d@%H:%M:%S~", time.localtime(time.time()))
backupdest = '%s.%s.%s' % (fn, os.getpid(), ext)
try:
self.preserved_copy(fn, backupdest)
except (shutil.Error, IOError) as e:
self.fail_json(msg='Could not make backup of %s to %s: %s' % (fn, backupdest, to_native(e)))
return backupdest
def cleanup(self, tmpfile):
if os.path.exists(tmpfile):
try:
os.unlink(tmpfile)
except OSError as e:
sys.stderr.write("could not cleanup %s: %s" % (tmpfile, to_native(e)))
def preserved_copy(self, src, dest):
"""Copy a file with preserved ownership, permissions and context"""
# shutil.copy2(src, dst)
# Similar to shutil.copy(), but metadata is copied as well - in fact,
# this is just shutil.copy() followed by copystat(). This is similar
# to the Unix command cp -p.
#
# shutil.copystat(src, dst)
# Copy the permission bits, last access time, last modification time,
# and flags from src to dst. The file contents, owner, and group are
# unaffected. src and dst are path names given as strings.
shutil.copy2(src, dest)
# Set the context
if self.selinux_enabled():
context = self.selinux_context(src)
self.set_context_if_different(dest, context, False)
# chown it
try:
dest_stat = os.stat(src)
tmp_stat = os.stat(dest)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(dest, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
# Set the attributes
current_attribs = self.get_file_attributes(src, include_version=False)
current_attribs = current_attribs.get('attr_flags', '')
self.set_attributes_if_different(dest, current_attribs, True)
def atomic_move(self, src, dest, unsafe_writes=False):
'''atomically move src to dest, copying attributes from dest; returns true on success.
It uses os.rename to ensure this, as os.rename is an atomic operation; the rest of the function
works around limitations and corner cases, and ensures the selinux context is saved if possible'''
context = None
dest_stat = None
b_src = to_bytes(src, errors='surrogate_or_strict')
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
try:
dest_stat = os.stat(b_dest)
# copy mode and ownership
os.chmod(b_src, dest_stat.st_mode & PERM_BITS)
os.chown(b_src, dest_stat.st_uid, dest_stat.st_gid)
# try to copy flags if possible
if hasattr(os, 'chflags') and hasattr(dest_stat, 'st_flags'):
try:
os.chflags(b_src, dest_stat.st_flags)
except OSError as e:
for err in 'EOPNOTSUPP', 'ENOTSUP':
if hasattr(errno, err) and e.errno == getattr(errno, err):
break
else:
raise
except OSError as e:
if e.errno != errno.EPERM:
raise
if self.selinux_enabled():
context = self.selinux_context(dest)
else:
if self.selinux_enabled():
context = self.selinux_default_context(dest)
creating = not os.path.exists(b_dest)
try:
# Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
os.rename(b_src, b_dest)
except (IOError, OSError) as e:
if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY, errno.EBUSY]:
# only try workarounds for errno 18 (cross device), 1 (not permitted), 13 (permission denied),
# 16 (device or resource busy) and 26 (text file busy), the latter happening on vagrant synced folders and other 'exotic' non posix file systems
self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, to_native(e)), exception=traceback.format_exc())
else:
# Use bytes here. In the shippable CI, this fails with
# a UnicodeError with surrogateescape'd strings for an unknown
# reason (doesn't happen in a local Ubuntu16.04 VM)
b_dest_dir = os.path.dirname(b_dest)
b_suffix = os.path.basename(b_dest)
error_msg = None
tmp_dest_name = None
try:
tmp_dest_fd, tmp_dest_name = tempfile.mkstemp(prefix=b'.ansible_tmp', dir=b_dest_dir, suffix=b_suffix)
except (OSError, IOError) as e:
error_msg = 'The destination directory (%s) is not writable by the current user. Error was: %s' % (os.path.dirname(dest), to_native(e))
except TypeError:
# We expect that this is happening because python3.4.x and
# below can't handle byte strings in mkstemp().
# Traceback would end in something like:
# file = _os.path.join(dir, pre + name + suf)
# TypeError: can't concat bytes to str
error_msg = ('Failed creating tmp file for atomic move. This usually happens when using Python3 less than Python3.5. '
'Please use Python2.x or Python3.5 or greater.')
finally:
if error_msg:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg=error_msg, exception=traceback.format_exc())
if tmp_dest_name:
b_tmp_dest_name = to_bytes(tmp_dest_name, errors='surrogate_or_strict')
try:
try:
# close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host)
os.close(tmp_dest_fd)
# leaves tmp file behind when sudo and not root
try:
shutil.move(b_src, b_tmp_dest_name)
except OSError:
# cleanup will happen by 'rm' of tmpdir
# copy2 will preserve some metadata
shutil.copy2(b_src, b_tmp_dest_name)
if self.selinux_enabled():
self.set_context_if_different(
b_tmp_dest_name, context, False)
try:
tmp_stat = os.stat(b_tmp_dest_name)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(b_tmp_dest_name, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
try:
os.rename(b_tmp_dest_name, b_dest)
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes and e.errno == errno.EBUSY:
self._unsafe_writes(b_tmp_dest_name, b_dest)
else:
self.fail_json(msg='Unable to make %s into %s, failed final rename from %s: %s' %
(src, dest, b_tmp_dest_name, to_native(e)), exception=traceback.format_exc())
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg='Failed to replace file: %s to %s: %s' % (src, dest, to_native(e)), exception=traceback.format_exc())
finally:
self.cleanup(b_tmp_dest_name)
if creating:
# make sure the file has the correct permissions
# based on the current value of umask
umask = os.umask(0)
os.umask(umask)
os.chmod(b_dest, DEFAULT_PERM & ~umask)
try:
os.chown(b_dest, os.geteuid(), os.getegid())
except OSError:
# We're okay with trying our best here. If the user is not
# root (or old Unices) they won't be able to chown.
pass
if self.selinux_enabled():
# rename might not preserve context
self.set_context_if_different(dest, context, False)
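# Illustrative usage only (not part of the original source); assuming `module` is an
# AnsibleModule instance and `tmpfile` holds content written beforehand:
#   module.atomic_move(tmpfile, '/etc/example.conf')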
def _unsafe_writes(self, src, dest):
# sadly there are some situations where we cannot ensure atomicity, but only if
# the user insists and we get the appropriate error we update the file unsafely
try:
out_dest = in_src = None
try:
out_dest = open(dest, 'wb')
in_src = open(src, 'rb')
shutil.copyfileobj(in_src, out_dest)
finally: # ensure files are closed in a Python 2.4 compatible way
if out_dest:
out_dest.close()
if in_src:
in_src.close()
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Could not write data to file (%s) from (%s): %s' % (dest, src, to_native(e)),
exception=traceback.format_exc())
def _clean_args(self, args):
if not self._clean:
# create a printable version of the command for use in reporting later,
# which strips out things like passwords from the args list
to_clean_args = args
if PY2:
if isinstance(args, text_type):
to_clean_args = to_bytes(args)
else:
if isinstance(args, binary_type):
to_clean_args = to_text(args)
if isinstance(args, (text_type, binary_type)):
to_clean_args = shlex.split(to_clean_args)
clean_args = []
is_passwd = False
for arg in (to_native(a) for a in to_clean_args):
if is_passwd:
is_passwd = False
clean_args.append('********')
continue
if PASSWD_ARG_RE.match(arg):
sep_idx = arg.find('=')
if sep_idx > -1:
clean_args.append('%s=********' % arg[:sep_idx])
continue
else:
is_passwd = True
arg = heuristic_log_sanitize(arg, self.no_log_values)
clean_args.append(arg)
self._clean = ' '.join(shlex_quote(arg) for arg in clean_args)
return self._clean
def _restore_signal_handlers(self):
# Reset SIGPIPE to SIG_DFL, otherwise in Python2.7 it gets ignored in subprocesses.
if PY2 and sys.platform != 'win32':
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None,
use_unsafe_shell=False, prompt_regex=None, environ_update=None, umask=None, encoding='utf-8', errors='surrogate_or_strict',
expand_user_and_vars=True, pass_fds=None, before_communicate_callback=None, ignore_invalid_cwd=True, handle_exceptions=True):
'''
Execute a command, returns rc, stdout, and stderr.
:arg args: is the command to run
* If args is a list, the command will be run with shell=False.
* If args is a string and use_unsafe_shell=False it will split args to a list and run with shell=False
* If args is a string and use_unsafe_shell=True it runs with shell=True.
:kw check_rc: Whether to call fail_json in case of non zero RC.
Default False
:kw close_fds: See documentation for subprocess.Popen(). Default True
:kw executable: See documentation for subprocess.Popen(). Default None
:kw data: If given, information to write to the stdin of the command
:kw binary_data: If False, append a newline to the data. Default False
:kw path_prefix: If given, additional path to find the command in.
This adds to the PATH environment variable so helper commands in
the same directory can also be found
:kw cwd: If given, working directory to run the command inside
:kw use_unsafe_shell: See `args` parameter. Default False
:kw prompt_regex: Regex string (not a compiled regex) which can be
used to detect prompts in the stdout which would otherwise cause
the execution to hang (especially if no input data is specified)
:kw environ_update: dictionary to *update* environ variables with
:kw umask: Umask to be used when running the command. Default None
:kw encoding: Since we return native strings, on python3 we need to
know the encoding to use to transform from bytes to text. If you
want to always get bytes back, use encoding=None. The default is
"utf-8". This does not affect transformation of strings given as
args.
:kw errors: Since we return native strings, on python3 we need to
transform stdout and stderr from bytes to text. If the bytes are
undecodable in the ``encoding`` specified, then use this error
handler to deal with them. The default is ``surrogate_or_strict``
which means that the bytes will be decoded using the
surrogateescape error handler if available (available on all
python3 versions we support) otherwise a UnicodeError traceback
will be raised. This does not affect transformations of strings
given as args.
:kw expand_user_and_vars: When ``use_unsafe_shell=False`` this argument
dictates whether ``~`` is expanded in paths and environment variables
are expanded before running the command. When ``True`` a string such as
``$SHELL`` will be expanded regardless of escaping. When ``False`` and
``use_unsafe_shell=False`` no path or variable expansion will be done.
:kw pass_fds: When running on Python 3 this argument
dictates which file descriptors should be passed
to an underlying ``Popen`` constructor. On Python 2, this will
set ``close_fds`` to False.
:kw before_communicate_callback: This function will be called
after ``Popen`` object will be created
but before communicating to the process.
(``Popen`` object will be passed to callback as a first argument)
:kw ignore_invalid_cwd: This flag indicates whether an invalid ``cwd``
(non-existent or not a directory) should be ignored or should raise
an exception.
:kw handle_exceptions: This flag indicates whether an exception will
be handled inline and issue a failed_json or if the caller should
handle it.
:returns: A 3-tuple of return code (integer), stdout (native string),
and stderr (native string). On python2, stdout and stderr are both
byte strings. On python3, stdout and stderr are text strings converted
according to the encoding and errors parameters. If you want byte
strings on python3, use encoding=None to turn decoding to text off.
'''
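# Illustrative usage only (not part of the original source); assuming `module`
# is an AnsibleModule instance:
#   rc, out, err = module.run_command(['/bin/ls', '-l', '/tmp'], check_rc=True)
#   rc, out, err = module.run_command('echo $HOME', use_unsafe_shell=True)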
# used by clean args later on
self._clean = None
if not isinstance(args, (list, binary_type, text_type)):
msg = "Argument 'args' to run_command must be list or string"
self.fail_json(rc=257, cmd=args, msg=msg)
shell = False
if use_unsafe_shell:
# stringify args for unsafe/direct shell usage
if isinstance(args, list):
args = b" ".join([to_bytes(shlex_quote(x), errors='surrogate_or_strict') for x in args])
else:
args = to_bytes(args, errors='surrogate_or_strict')
# not set explicitly, check if set by controller
if executable:
executable = to_bytes(executable, errors='surrogate_or_strict')
args = [executable, b'-c', args]
elif self._shell not in (None, '/bin/sh'):
args = [to_bytes(self._shell, errors='surrogate_or_strict'), b'-c', args]
else:
shell = True
else:
# ensure args are a list
if isinstance(args, (binary_type, text_type)):
# On python2.6 and below, shlex has problems with text type
# On python3, shlex needs a text type.
if PY2:
args = to_bytes(args, errors='surrogate_or_strict')
elif PY3:
args = to_text(args, errors='surrogateescape')
args = shlex.split(args)
# expand ``~`` in paths, and all environment vars
if expand_user_and_vars:
args = [to_bytes(os.path.expanduser(os.path.expandvars(x)), errors='surrogate_or_strict') for x in args if x is not None]
else:
args = [to_bytes(x, errors='surrogate_or_strict') for x in args if x is not None]
prompt_re = None
if prompt_regex:
if isinstance(prompt_regex, text_type):
if PY3:
prompt_regex = to_bytes(prompt_regex, errors='surrogateescape')
elif PY2:
prompt_regex = to_bytes(prompt_regex, errors='surrogate_or_strict')
try:
prompt_re = re.compile(prompt_regex, re.MULTILINE)
except re.error:
self.fail_json(msg="invalid prompt regular expression given to run_command")
rc = 0
msg = None
st_in = None
env = os.environ.copy()
# We can set this from both an attribute and per call
env.update(self.run_command_environ_update or {})
env.update(environ_update or {})
if path_prefix:
path = env.get('PATH', '')
if path:
env['PATH'] = "%s:%s" % (path_prefix, path)
else:
env['PATH'] = path_prefix
# If using test-module.py and explode, the remote lib path will resemble:
# /tmp/test_module_scratch/debug_dir/ansible/module_utils/basic.py
# If using ansible or ansible-playbook with a remote system:
# /tmp/ansible_vmweLQ/ansible_modlib.zip/ansible/module_utils/basic.py
# Clean out python paths set by ansiballz
if 'PYTHONPATH' in env:
pypaths = [x for x in env['PYTHONPATH'].split(':')
if x and
not x.endswith('/ansible_modlib.zip') and
not x.endswith('/debug_dir')]
if pypaths and any(pypaths):
env['PYTHONPATH'] = ':'.join(pypaths)
if data:
st_in = subprocess.PIPE
def preexec():
self._restore_signal_handlers()
if umask:
os.umask(umask)
kwargs = dict(
executable=executable,
shell=shell,
close_fds=close_fds,
stdin=st_in,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=preexec,
env=env,
)
if PY3 and pass_fds:
kwargs["pass_fds"] = pass_fds
elif PY2 and pass_fds:
kwargs['close_fds'] = False
# make sure we're in the right working directory
if cwd:
cwd = to_bytes(os.path.abspath(os.path.expanduser(cwd)), errors='surrogate_or_strict')
if os.path.isdir(cwd):
kwargs['cwd'] = cwd
elif not ignore_invalid_cwd:
self.fail_json(msg="Provided cwd is not a valid directory: %s" % cwd)
try:
if self._debug:
self.log('Executing: ' + self._clean_args(args))
cmd = subprocess.Popen(args, **kwargs)
if before_communicate_callback:
before_communicate_callback(cmd)
# the communication logic here is essentially taken from that
# of the _communicate() function in ssh.py
stdout = b''
stderr = b''
try:
selector = selectors.DefaultSelector()
except (IOError, OSError):
# Failed to detect default selector for the given platform
# Select PollSelector which is supported by major platforms
selector = selectors.PollSelector()
if data:
if not binary_data:
data += '\n'
if isinstance(data, text_type):
data = to_bytes(data)
if not prompt_re:
stdout, stderr = cmd.communicate(input=data)
else:
# We only need this to look for a prompt, to abort instead of hanging
selector.register(cmd.stdout, selectors.EVENT_READ)
selector.register(cmd.stderr, selectors.EVENT_READ)
if os.name == 'posix':
fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
if data:
cmd.stdin.write(data)
cmd.stdin.close()
while True:
events = selector.select(1)
for key, event in events:
b_chunk = key.fileobj.read()
if b_chunk == b(''):
selector.unregister(key.fileobj)
if key.fileobj == cmd.stdout:
stdout += b_chunk
elif key.fileobj == cmd.stderr:
stderr += b_chunk
# if we're checking for prompts, do it now
if prompt_re:
if prompt_re.search(stdout) and not data:
if encoding:
stdout = to_native(stdout, encoding=encoding, errors=errors)
return (257, stdout, "A prompt was encountered while running a command, but no input data was specified")
# only break out if no pipes are left to read or
# the pipes are completely read and
# the process is terminated
if (not events or not selector.get_map()) and cmd.poll() is not None:
break
# No pipes are left to read but process is not yet terminated
# Only then it is safe to wait for the process to be finished
# NOTE: Actually cmd.poll() is always None here if no selectors are left
elif not selector.get_map() and cmd.poll() is None:
cmd.wait()
# The process is terminated. Since no pipes to read from are
# left, there is no need to call select() again.
break
cmd.stdout.close()
cmd.stderr.close()
selector.close()
rc = cmd.returncode
except (OSError, IOError) as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(e)))
if handle_exceptions:
self.fail_json(rc=e.errno, stdout=b'', stderr=b'', msg=to_native(e), cmd=self._clean_args(args))
else:
raise e
except Exception as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(traceback.format_exc())))
if handle_exceptions:
self.fail_json(rc=257, stdout=b'', stderr=b'', msg=to_native(e), exception=traceback.format_exc(), cmd=self._clean_args(args))
else:
raise e
if rc != 0 and check_rc:
msg = heuristic_log_sanitize(stderr.rstrip(), self.no_log_values)
self.fail_json(cmd=self._clean_args(args), rc=rc, stdout=stdout, stderr=stderr, msg=msg)
if encoding is not None:
return (rc, to_native(stdout, encoding=encoding, errors=errors),
to_native(stderr, encoding=encoding, errors=errors))
return (rc, stdout, stderr)
def append_to_file(self, filename, str):
filename = os.path.expandvars(os.path.expanduser(filename))
fh = open(filename, 'a')
fh.write(str)
fh.close()
def bytes_to_human(self, size):
return bytes_to_human(size)
# for backwards compatibility
pretty_bytes = bytes_to_human
def human_to_bytes(self, number, isbits=False):
return human_to_bytes(number, isbits)
#
# Backwards compat
#
# In 2.0, moved from inside the module to the toplevel
is_executable = is_executable
@staticmethod
def get_buffer_size(fd):
try:
# 1032 == F_GETPIPE_SZ
buffer_size = fcntl.fcntl(fd, 1032)
except Exception:
try:
# not as exact as above, but should be good enough for most platforms that fail the previous call
buffer_size = select.PIPE_BUF
except Exception:
buffer_size = 9000 # use sane default JIC
return buffer_size
def get_module_path():
return os.path.dirname(os.path.realpath(__file__))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,128 |
File module "a=,u=rX", X improperly preserves "x" on files.
|
### Summary
The file module "mode: a=,ug=rX" seems to behave differently than "chmod" for the same mode.
If you have a file that is mode 755 and you tell the file module "mode: a=,ug=rX", the mode will be set to 550. If, however, that file starts as mode 644, the mode will be set to 440. Chmod with that same mode string in both cases results in 440. It seems to be preserving the previous "x" status.
It seems to be related to the "a=", because without that file and chmod work the same.
The workaround is to use "ug=rX,o=" rather than "a=,ug=rX", but Ansible probably wants to match chmod in this.
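A minimal sketch of the workaround task (illustrative only; the path matches the reproduction steps below):
```yaml
- name: Avoid the "a=" prefix so X behaves like chmod
  file:
    path: test_directory/test_file
    mode: ug=rX,o=
```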
### Issue Type
Bug Report
### Component Name
file
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0.dev0] (devel cc8e6d06d0) last updated 2023/03/02 15:23:16 (GMT -600)
config file = None
configured module search path = ['/home/sean/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /tmp/ansible/lib/ansible
ansible collection location = /home/sean/.ansible/collections:/usr/share/ansible/collections
executable location = /tmp/ansible/bin//ansible
python version = 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
EDITOR(env: EDITOR) = lvim
```
### OS / Environment
Ubuntu 22.04
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- hosts: 127.0.0.1
connection: local
gather_facts: no
tasks:
- name: Setup the directory and file
shell: "rm -rf test_directory; mkdir test_directory; date >test_directory/test_file"
- name: Change mode of file to 755
command: "chmod 755 test_directory/test_file"
- name: Starting permissions
shell: "ls -la test_directory/test_file >/dev/tty"
- name: File module sets mode a=,ug=rX
file:
path: test_directory/test_file
mode: a=,ug=rX
- name: After file module sets mode to a=,ug=rX
shell: "ls -la test_directory/test_file >/dev/tty"
- name: Use chmod to do the same chmod as the file module just did
command: "chmod -R a=,ug=rX test_directory/test_file"
- name: The permissions should be the same here (same mode specified).
shell: "ls -la test_directory/test_file >/dev/tty"
- name: Run file module again, does it produce same results as last time?
file:
path: test_directory/test_file
mode: a=,ug=rX
- name: 'This should be the same as "File module sets mode" above, same permissions were given'
shell: "ls -la test_directory/test_file >/dev/tty"
```
### Expected Results
"file" set test_file to mode 550, "chmod" with same mode sets it to 440, "file" run again with the same mode argument leaves the mode at 440, unlike the first pass.
I would expect "a=,..." to set a specific mode, which is what chmod does.
### Actual Results
```console
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'
PLAY [127.0.0.1] ****************************************************************************
TASK [Setup the directory and file] *********************************************************
[WARNING]: Consider using the file module with state=absent rather than running 'rm'. If
you need to use command because file is insufficient you can add 'warn: false' to this
command task or set 'command_warnings=False' in ansible.cfg to get rid of this message.
changed: [127.0.0.1]
TASK [Change mode of file to 755] ***********************************************************
[WARNING]: Consider using the file module with mode rather than running 'chmod'. If you
need to use command because file is insufficient you can add 'warn: false' to this command
task or set 'command_warnings=False' in ansible.cfg to get rid of this message.
changed: [127.0.0.1]
TASK [Starting permissions] *****************************************************************
-rwxr-xr-x 1 sean sean 32 Mar 2 15:25 test_directory/test_file
changed: [127.0.0.1]
TASK [File module sets mode a=,ug=rX] *******************************************************
changed: [127.0.0.1]
TASK [After file module sets mode to a=,ug=rX] **********************************************
-r-xr-x--- 1 sean sean 32 Mar 2 15:25 test_directory/test_file
changed: [127.0.0.1]
TASK [Use chmod to do the same chmod as the file module just did] ***************************
changed: [127.0.0.1]
TASK [The permissions should be the same here (same mode specified).] ***********************
-r--r----- 1 sean sean 32 Mar 2 15:25 test_directory/test_file
changed: [127.0.0.1]
TASK [Run file module again, does it produce same results as last time?] ********************
ok: [127.0.0.1]
TASK [This should be the same as "File module sets mode" above, same permissions were given] ***
-r--r----- 1 sean sean 32 Mar 2 15:25 test_directory/test_file
changed: [127.0.0.1]
PLAY RECAP **********************************************************************************
127.0.0.1 : ok=9 changed=8 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80128
|
https://github.com/ansible/ansible/pull/80132
|
f9534fd7b7e8c7f3314d68f62025ebc9499a72f5
|
243aea45cea543fc1ef7c43d380a68aa1c7b338a
| 2023-03-02T22:30:26Z |
python
| 2023-04-10T22:29:10Z |
test/units/modules/test_copy.py
|
# -*- coding: utf-8 -*-
# Copyright:
# (c) 2018 Ansible Project
# License: GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import pytest
from ansible.modules.copy import AnsibleModuleError, split_pre_existing_dir
from ansible.module_utils.basic import AnsibleModule
THREE_DIRS_DATA = (('/dir1/dir2',
# 0 existing dirs: error (because / should always exist)
None,
# 1 existing dir:
('/', ['dir1', 'dir2']),
# 2 existing dirs:
('/dir1', ['dir2']),
# 3 existing dirs:
('/dir1/dir2', [])
),
('/dir1/dir2/',
# 0 existing dirs: error (because / should always exist)
None,
# 1 existing dir:
('/', ['dir1', 'dir2']),
# 2 existing dirs:
('/dir1', ['dir2']),
# 3 existing dirs:
('/dir1/dir2', [])
),
)
TWO_DIRS_DATA = (('dir1/dir2',
# 0 existing dirs:
('.', ['dir1', 'dir2']),
# 1 existing dir:
('dir1', ['dir2']),
# 2 existing dirs:
('dir1/dir2', []),
# 3 existing dirs: Same as 2 because we never get to the third
),
('dir1/dir2/',
# 0 existing dirs:
('.', ['dir1', 'dir2']),
# 1 existing dir:
('dir1', ['dir2']),
# 2 existing dirs:
('dir1/dir2', []),
# 3 existing dirs: Same as 2 because we never get to the third
),
('/dir1',
# 0 existing dirs: error (because / should always exist)
None,
# 1 existing dir:
('/', ['dir1']),
# 2 existing dirs:
('/dir1', []),
# 3 existing dirs: Same as 2 because we never get to the third
),
('/dir1/',
# 0 existing dirs: error (because / should always exist)
None,
# 1 existing dir:
('/', ['dir1']),
# 2 existing dirs:
('/dir1', []),
# 3 existing dirs: Same as 2 because we never get to the third
),
) + THREE_DIRS_DATA
ONE_DIR_DATA = (('dir1',
# 0 existing dirs:
('.', ['dir1']),
# 1 existing dir:
('dir1', []),
# 2 existing dirs: Same as 1 because we never get to the third
),
('dir1/',
# 0 existing dirs:
('.', ['dir1']),
# 1 existing dir:
('dir1', []),
# 2 existing dirs: Same as 1 because we never get to the third
),
) + TWO_DIRS_DATA
@pytest.mark.parametrize('directory, expected', ((d[0], d[4]) for d in THREE_DIRS_DATA))
def test_split_pre_existing_dir_three_levels_exist(directory, expected, mocker):
mocker.patch('os.path.exists', side_effect=[True, True, True])
split_pre_existing_dir(directory) == expected
@pytest.mark.parametrize('directory, expected', ((d[0], d[3]) for d in TWO_DIRS_DATA))
def test_split_pre_existing_dir_two_levels_exist(directory, expected, mocker):
mocker.patch('os.path.exists', side_effect=[True, True, False])
split_pre_existing_dir(directory) == expected
@pytest.mark.parametrize('directory, expected', ((d[0], d[2]) for d in ONE_DIR_DATA))
def test_split_pre_existing_dir_one_level_exists(directory, expected, mocker):
mocker.patch('os.path.exists', side_effect=[True, False, False])
split_pre_existing_dir(directory) == expected
@pytest.mark.parametrize('directory', (d[0] for d in ONE_DIR_DATA if d[1] is None))
def test_split_pre_existing_dir_root_does_not_exist(directory, mocker):
mocker.patch('os.path.exists', return_value=False)
with pytest.raises(AnsibleModuleError) as excinfo:
split_pre_existing_dir(directory)
assert excinfo.value.results['msg'].startswith("The '/' directory doesn't exist on this machine.")
@pytest.mark.parametrize('directory, expected', ((d[0], d[1]) for d in ONE_DIR_DATA if not d[0].startswith('/')))
def test_split_pre_existing_dir_working_dir_exists(directory, expected, mocker):
mocker.patch('os.path.exists', return_value=False)
split_pre_existing_dir(directory) == expected
#
# Info helpful for making new test cases:
#
# base_mode = {'dir no perms': 0o040000,
# 'file no perms': 0o100000,
# 'dir all perms': 0o400000 | 0o777,
# 'file all perms': 0o100000, | 0o777}
#
# perm_bits = {'x': 0b001,
# 'w': 0b010,
# 'r': 0b100}
#
# role_shift = {'u': 6,
# 'g': 3,
# 'o': 0}
DATA = ( # Going from no permissions to setting all for user, group, and/or other
(0o040000, u'a+rwx', 0o0777),
(0o040000, u'u+rwx,g+rwx,o+rwx', 0o0777),
(0o040000, u'o+rwx', 0o0007),
(0o040000, u'g+rwx', 0o0070),
(0o040000, u'u+rwx', 0o0700),
# Going from all permissions to none for user, group, and/or other
(0o040777, u'a-rwx', 0o0000),
(0o040777, u'u-rwx,g-rwx,o-rwx', 0o0000),
(0o040777, u'o-rwx', 0o0770),
(0o040777, u'g-rwx', 0o0707),
(0o040777, u'u-rwx', 0o0077),
# now using absolute assignment from None to a set of perms
(0o040000, u'a=rwx', 0o0777),
(0o040000, u'u=rwx,g=rwx,o=rwx', 0o0777),
(0o040000, u'o=rwx', 0o0007),
(0o040000, u'g=rwx', 0o0070),
(0o040000, u'u=rwx', 0o0700),
# X effect on files and dirs
(0o040000, u'a+X', 0o0111),
(0o100000, u'a+X', 0),
(0o040000, u'a=X', 0o0111),
(0o100000, u'a=X', 0),
(0o040777, u'a-X', 0o0666),
# Same as chmod but is it a bug?
# chmod a-X statfile <== removes execute from statfile
(0o100777, u'a-X', 0o0666),
# Multiple permissions
(0o040000, u'u=rw-x+X,g=r-x+X,o=r-x+X', 0o0755),
(0o100000, u'u=rw-x+X,g=r-x+X,o=r-x+X', 0o0644),
)
UMASK_DATA = (
(0o100000, '+rwx', 0o770),
(0o100777, '-rwx', 0o007),
)
INVALID_DATA = (
(0o040000, u'a=foo', "bad symbolic permission for mode: a=foo"),
(0o040000, u'f=rwx', "bad symbolic permission for mode: f=rwx"),
)
@pytest.mark.parametrize('stat_info, mode_string, expected', DATA)
def test_good_symbolic_modes(mocker, stat_info, mode_string, expected):
mock_stat = mocker.MagicMock()
mock_stat.st_mode = stat_info
assert AnsibleModule._symbolic_mode_to_octal(mock_stat, mode_string) == expected
@pytest.mark.parametrize('stat_info, mode_string, expected', UMASK_DATA)
def test_umask_with_symbolic_modes(mocker, stat_info, mode_string, expected):
mock_umask = mocker.patch('os.umask')
mock_umask.return_value = 0o7
mock_stat = mocker.MagicMock()
mock_stat.st_mode = stat_info
assert AnsibleModule._symbolic_mode_to_octal(mock_stat, mode_string) == expected
@pytest.mark.parametrize('stat_info, mode_string, expected', INVALID_DATA)
def test_invalid_symbolic_modes(mocker, stat_info, mode_string, expected):
mock_stat = mocker.MagicMock()
mock_stat.st_mode = stat_info
with pytest.raises(ValueError) as exc:
assert AnsibleModule._symbolic_mode_to_octal(mock_stat, mode_string) == 'blah'
assert exc.match(expected)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,436 |
unit test TestSELinuxMU fails on OpenIndiana
|
### Summary
I'm trying to package ansible version 2.14.4 for OpenIndiana, and when I run the tests some TestSELinuxMU tests fail. Please note that OpenIndiana (a Solaris clone) has no SELinux support.
### Issue Type
Bug Report
### Component Name
test_selinux.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/marcel/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/vendor-packages/ansible
ansible collection location = /home/marcel/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.16 (main, Feb 19 2023, 15:42:40) [GCC 10.4.0] (/usr/bin/python3.9)
jinja version = 3.0.3
libyaml = True
$
```
### Configuration
```console
NA. Tests are run directly in the source directory after the source tarball is unpacked.
```
### OS / Environment
OpenIndiana
### Steps to Reproduce
```
$ bin/ansible-test units --python 3.9 --python-interpreter /usr/bin/python3.9 --local --color no --verbose
```
### Expected Results
All tests pass; the SELinux tests either pass or are skipped.
### Actual Results
```console
The following tests fail:
FAILED test/units/module_utils/basic/test_selinux.py::TestSELinuxMU::test_selinux_mls_enabled
FAILED test/units/module_utils/basic/test_selinux.py::TestSELinuxMU::test_selinux_context
FAILED test/units/module_utils/basic/test_selinux.py::TestSELinuxMU::test_selinux_enabled
FAILED test/units/module_utils/basic/test_selinux.py::TestSELinuxMU::test_set_context_if_different
FAILED test/units/module_utils/basic/test_selinux.py::TestSELinuxMU::test_selinux_default_context
```
All failures are similar:
```
_________________ TestSELinuxMU.test_set_context_if_different __________________
[gw0] sunos5 -- Python 3.9.16 /usr/bin/python3.9
thing = <module 'ansible.module_utils.compat' from '/tmp/ansible-test-0qd9w7ab/ansible/module_utils/compat/__init__.py'>
comp = 'selinux', import_path = 'ansible.module_utils.compat.selinux'
def _dot_lookup(thing, comp, import_path):
try:
> return getattr(thing, comp)
E AttributeError: module 'ansible.module_utils.compat' has no attribute 'selinux'
/usr/lib/python3.9/unittest/mock.py:1226: AttributeError
During handling of the above exception, another exception occurred:
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import sys
from ansible.module_utils.common.text.converters import to_native, to_bytes
from ctypes import CDLL, c_char_p, c_int, byref, POINTER, get_errno
try:
> _selinux_lib = CDLL('libselinux.so.1', use_errno=True)
/tmp/ansible-test-0qd9w7ab/ansible/module_utils/compat/selinux.py:14:
...
...
```
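One possible way to make these unit tests tolerant of platforms without SELinux would be to skip them when libselinux cannot be found at all. This is only an illustrative sketch, not necessarily the approach taken in the eventual fix:
```python
# Illustrative sketch only: module-level skip for the SELinux wrapper tests
# on platforms (such as OpenIndiana) that do not ship libselinux.
import ctypes.util

import pytest

pytestmark = pytest.mark.skipif(
    ctypes.util.find_library('selinux') is None,
    reason='libselinux is not available on this platform',
)
```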
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80436
|
https://github.com/ansible/ansible/pull/80448
|
5ddd530d1dc55db5e7d584d27abed4d3f96ead34
|
2a795e5747791ef9f39655790dfe7575a7d7f1b9
| 2023-04-06T10:42:35Z |
python
| 2023-04-11T15:02:20Z |
test/units/module_utils/basic/test_selinux.py
|
# -*- coding: utf-8 -*-
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2016 Toshio Kuratomi <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import errno
import json
import pytest
from units.compat.mock import mock_open, patch
from ansible.module_utils import basic
from ansible.module_utils.common.text.converters import to_bytes
from ansible.module_utils.six.moves import builtins
@pytest.fixture
def no_args_module_exec():
with patch.object(basic, '_ANSIBLE_ARGS', b'{"ANSIBLE_MODULE_ARGS": {}}'):
yield # we're patching the global module object, so nothing to yield
def no_args_module(selinux_enabled=None, selinux_mls_enabled=None):
am = basic.AnsibleModule(argument_spec={})
# just dirty-patch the wrappers on the object instance since it's short-lived
if isinstance(selinux_enabled, bool):
patch.object(am, 'selinux_enabled', return_value=selinux_enabled).start()
if isinstance(selinux_mls_enabled, bool):
patch.object(am, 'selinux_mls_enabled', return_value=selinux_mls_enabled).start()
return am
# test AnsibleModule selinux wrapper methods
@pytest.mark.usefixtures('no_args_module_exec')
class TestSELinuxMU:
def test_selinux_enabled(self):
# test selinux unavailable
# selinux unavailable, should return false
with patch.object(basic, 'HAVE_SELINUX', False):
assert no_args_module().selinux_enabled() is False
# test selinux present/not-enabled
disabled_mod = no_args_module()
with patch('ansible.module_utils.compat.selinux.is_selinux_enabled', return_value=0):
assert disabled_mod.selinux_enabled() is False
# ensure value is cached (same answer after unpatching)
assert disabled_mod.selinux_enabled() is False
# and present / enabled
enabled_mod = no_args_module()
with patch('ansible.module_utils.compat.selinux.is_selinux_enabled', return_value=1):
assert enabled_mod.selinux_enabled() is True
# ensure value is cached (same answer after unpatching)
assert enabled_mod.selinux_enabled() is True
def test_selinux_mls_enabled(self):
# selinux unavailable, should return false
with patch.object(basic, 'HAVE_SELINUX', False):
assert no_args_module().selinux_mls_enabled() is False
# selinux disabled, should return false
with patch('ansible.module_utils.compat.selinux.is_selinux_mls_enabled', return_value=0):
assert no_args_module(selinux_enabled=False).selinux_mls_enabled() is False
# selinux enabled, should pass through the value of is_selinux_mls_enabled
with patch('ansible.module_utils.compat.selinux.is_selinux_mls_enabled', return_value=1):
assert no_args_module(selinux_enabled=True).selinux_mls_enabled() is True
def test_selinux_initial_context(self):
# selinux missing/disabled/enabled sans MLS is 3-element None
assert no_args_module(selinux_enabled=False, selinux_mls_enabled=False).selinux_initial_context() == [None, None, None]
assert no_args_module(selinux_enabled=True, selinux_mls_enabled=False).selinux_initial_context() == [None, None, None]
# selinux enabled with MLS is 4-element None
assert no_args_module(selinux_enabled=True, selinux_mls_enabled=True).selinux_initial_context() == [None, None, None, None]
def test_selinux_default_context(self):
# selinux unavailable
with patch.object(basic, 'HAVE_SELINUX', False):
assert no_args_module().selinux_default_context(path='/foo/bar') == [None, None, None]
am = no_args_module(selinux_enabled=True, selinux_mls_enabled=True)
# matchpathcon success
with patch('ansible.module_utils.compat.selinux.matchpathcon', return_value=[0, 'unconfined_u:object_r:default_t:s0']):
assert am.selinux_default_context(path='/foo/bar') == ['unconfined_u', 'object_r', 'default_t', 's0']
# matchpathcon fail (return initial context value)
with patch('ansible.module_utils.compat.selinux.matchpathcon', return_value=[-1, '']):
assert am.selinux_default_context(path='/foo/bar') == [None, None, None, None]
# matchpathcon OSError
with patch('ansible.module_utils.compat.selinux.matchpathcon', side_effect=OSError):
assert am.selinux_default_context(path='/foo/bar') == [None, None, None, None]
def test_selinux_context(self):
# selinux unavailable
with patch.object(basic, 'HAVE_SELINUX', False):
assert no_args_module().selinux_context(path='/foo/bar') == [None, None, None]
am = no_args_module(selinux_enabled=True, selinux_mls_enabled=True)
# lgetfilecon_raw passthru
with patch('ansible.module_utils.compat.selinux.lgetfilecon_raw', return_value=[0, 'unconfined_u:object_r:default_t:s0']):
assert am.selinux_context(path='/foo/bar') == ['unconfined_u', 'object_r', 'default_t', 's0']
# lgetfilecon_raw returned a failure
with patch('ansible.module_utils.compat.selinux.lgetfilecon_raw', return_value=[-1, '']):
assert am.selinux_context(path='/foo/bar') == [None, None, None, None]
# lgetfilecon_raw OSError (should bomb the module)
with patch('ansible.module_utils.compat.selinux.lgetfilecon_raw', side_effect=OSError(errno.ENOENT, 'NotFound')):
with pytest.raises(SystemExit):
am.selinux_context(path='/foo/bar')
with patch('ansible.module_utils.compat.selinux.lgetfilecon_raw', side_effect=OSError()):
with pytest.raises(SystemExit):
am.selinux_context(path='/foo/bar')
def test_is_special_selinux_path(self):
args = to_bytes(json.dumps(dict(ANSIBLE_MODULE_ARGS={'_ansible_selinux_special_fs': "nfs,nfsd,foos",
'_ansible_remote_tmp': "/tmp",
'_ansible_keep_remote_files': False})))
with patch.object(basic, '_ANSIBLE_ARGS', args):
am = basic.AnsibleModule(
argument_spec=dict(),
)
def _mock_find_mount_point(path):
if path.startswith('/some/path'):
return '/some/path'
elif path.startswith('/weird/random/fstype'):
return '/weird/random/fstype'
return '/'
am.find_mount_point = _mock_find_mount_point
am.selinux_context = lambda path: ['foo_u', 'foo_r', 'foo_t', 's0']
m = mock_open()
m.side_effect = OSError
with patch.object(builtins, 'open', m, create=True):
assert am.is_special_selinux_path('/some/path/that/should/be/nfs') == (False, None)
mount_data = [
'/dev/disk1 / ext4 rw,seclabel,relatime,data=ordered 0 0\n',
'10.1.1.1:/path/to/nfs /some/path nfs ro 0 0\n',
'whatever /weird/random/fstype foos rw 0 0\n',
]
# mock_open has a broken readlines() implementation apparently...
# this should work by default but doesn't, so we fix it
m = mock_open(read_data=''.join(mount_data))
m.return_value.readlines.return_value = mount_data
with patch.object(builtins, 'open', m, create=True):
assert am.is_special_selinux_path('/some/random/path') == (False, None)
assert am.is_special_selinux_path('/some/path/that/should/be/nfs') == (True, ['foo_u', 'foo_r', 'foo_t', 's0'])
assert am.is_special_selinux_path('/weird/random/fstype/path') == (True, ['foo_u', 'foo_r', 'foo_t', 's0'])
def test_set_context_if_different(self):
am = no_args_module(selinux_enabled=False)
assert am.set_context_if_different('/path/to/file', ['foo_u', 'foo_r', 'foo_t', 's0'], True) is True
assert am.set_context_if_different('/path/to/file', ['foo_u', 'foo_r', 'foo_t', 's0'], False) is False
am = no_args_module(selinux_enabled=True, selinux_mls_enabled=True)
am.selinux_context = lambda path: ['bar_u', 'bar_r', None, None]
am.is_special_selinux_path = lambda path: (False, None)
with patch('ansible.module_utils.compat.selinux.lsetfilecon', return_value=0) as m:
assert am.set_context_if_different('/path/to/file', ['foo_u', 'foo_r', 'foo_t', 's0'], False) is True
m.assert_called_with('/path/to/file', 'foo_u:foo_r:foo_t:s0')
m.reset_mock()
am.check_mode = True
assert am.set_context_if_different('/path/to/file', ['foo_u', 'foo_r', 'foo_t', 's0'], False) is True
assert not m.called
am.check_mode = False
with patch('ansible.module_utils.compat.selinux.lsetfilecon', return_value=1):
with pytest.raises(SystemExit):
am.set_context_if_different('/path/to/file', ['foo_u', 'foo_r', 'foo_t', 's0'], True)
with patch('ansible.module_utils.compat.selinux.lsetfilecon', side_effect=OSError):
with pytest.raises(SystemExit):
am.set_context_if_different('/path/to/file', ['foo_u', 'foo_r', 'foo_t', 's0'], True)
am.is_special_selinux_path = lambda path: (True, ['sp_u', 'sp_r', 'sp_t', 's0'])
with patch('ansible.module_utils.compat.selinux.lsetfilecon', return_value=0) as m:
assert am.set_context_if_different('/path/to/file', ['foo_u', 'foo_r', 'foo_t', 's0'], False) is True
m.assert_called_with('/path/to/file', 'sp_u:sp_r:sp_t:s0')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,457 |
"test_copy" has misleading document comments
|
### Summary
The "test_copy.py" module has a bit shifted in the "dir all perms" example under "Info helpful for making new test cases", and also has a misplaced comma.
### Issue Type
Documentation Report
### Component Name
test/units/modules/test_copy.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.16.0.dev0] (fix_test_doc a84b3a4e72) last updated 2023/04/08 10:56:13 (GMT -600)
config file = None
configured module search path = ['/home/sean/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/sean/software/ansible-realgo/lib/ansible
ansible collection location = /home/sean/.ansible/collections:/usr/share/ansible/collections
executable location = bin/ansible
python version = 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
EDITOR(env: EDITOR) = lvim
```
### OS / Environment
N/A
### Additional Information
PR incoming, diff goes into the details.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80457
|
https://github.com/ansible/ansible/pull/80458
|
a7d6fdda663be821e1ee1cd508f24209f5a458d8
|
d5f35783695d94937a0ffca1dc1843df06e5680f
| 2023-04-08T17:08:10Z |
python
| 2023-04-11T16:13:36Z |
test/units/modules/test_copy.py
|
# -*- coding: utf-8 -*-
# Copyright:
# (c) 2018 Ansible Project
# License: GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import pytest
from ansible.modules.copy import AnsibleModuleError, split_pre_existing_dir
from ansible.module_utils.basic import AnsibleModule
THREE_DIRS_DATA = (('/dir1/dir2',
# 0 existing dirs: error (because / should always exist)
None,
# 1 existing dir:
('/', ['dir1', 'dir2']),
# 2 existing dirs:
('/dir1', ['dir2']),
# 3 existing dirs:
('/dir1/dir2', [])
),
('/dir1/dir2/',
# 0 existing dirs: error (because / should always exist)
None,
# 1 existing dir:
('/', ['dir1', 'dir2']),
# 2 existing dirs:
('/dir1', ['dir2']),
# 3 existing dirs:
('/dir1/dir2', [])
),
)
TWO_DIRS_DATA = (('dir1/dir2',
# 0 existing dirs:
('.', ['dir1', 'dir2']),
# 1 existing dir:
('dir1', ['dir2']),
# 2 existing dirs:
('dir1/dir2', []),
# 3 existing dirs: Same as 2 because we never get to the third
),
('dir1/dir2/',
# 0 existing dirs:
('.', ['dir1', 'dir2']),
# 1 existing dir:
('dir1', ['dir2']),
# 2 existing dirs:
('dir1/dir2', []),
# 3 existing dirs: Same as 2 because we never get to the third
),
('/dir1',
# 0 existing dirs: error (because / should always exist)
None,
# 1 existing dir:
('/', ['dir1']),
# 2 existing dirs:
('/dir1', []),
# 3 existing dirs: Same as 2 because we never get to the third
),
('/dir1/',
# 0 existing dirs: error (because / should always exist)
None,
# 1 existing dir:
('/', ['dir1']),
# 2 existing dirs:
('/dir1', []),
# 3 existing dirs: Same as 2 because we never get to the third
),
) + THREE_DIRS_DATA
ONE_DIR_DATA = (('dir1',
# 0 existing dirs:
('.', ['dir1']),
# 1 existing dir:
('dir1', []),
# 2 existing dirs: Same as 1 because we never get to the third
),
('dir1/',
# 0 existing dirs:
('.', ['dir1']),
# 1 existing dir:
('dir1', []),
# 2 existing dirs: Same as 1 because we never get to the third
),
) + TWO_DIRS_DATA
@pytest.mark.parametrize('directory, expected', ((d[0], d[4]) for d in THREE_DIRS_DATA))
def test_split_pre_existing_dir_three_levels_exist(directory, expected, mocker):
mocker.patch('os.path.exists', side_effect=[True, True, True])
split_pre_existing_dir(directory) == expected
@pytest.mark.parametrize('directory, expected', ((d[0], d[3]) for d in TWO_DIRS_DATA))
def test_split_pre_existing_dir_two_levels_exist(directory, expected, mocker):
mocker.patch('os.path.exists', side_effect=[True, True, False])
split_pre_existing_dir(directory) == expected
@pytest.mark.parametrize('directory, expected', ((d[0], d[2]) for d in ONE_DIR_DATA))
def test_split_pre_existing_dir_one_level_exists(directory, expected, mocker):
mocker.patch('os.path.exists', side_effect=[True, False, False])
split_pre_existing_dir(directory) == expected
@pytest.mark.parametrize('directory', (d[0] for d in ONE_DIR_DATA if d[1] is None))
def test_split_pre_existing_dir_root_does_not_exist(directory, mocker):
mocker.patch('os.path.exists', return_value=False)
with pytest.raises(AnsibleModuleError) as excinfo:
split_pre_existing_dir(directory)
assert excinfo.value.results['msg'].startswith("The '/' directory doesn't exist on this machine.")
@pytest.mark.parametrize('directory, expected', ((d[0], d[1]) for d in ONE_DIR_DATA if not d[0].startswith('/')))
def test_split_pre_existing_dir_working_dir_exists(directory, expected, mocker):
mocker.patch('os.path.exists', return_value=False)
split_pre_existing_dir(directory) == expected
#
# Info helpful for making new test cases:
#
# base_mode = {
# 'dir no perms': 0o040000,
# 'file no perms': 0o100000,
# 'dir all perms': 0o040000 | 0o777,
# 'file all perms': 0o100000 | 0o777}
#
# perm_bits = {
# 'x': 0b001,
# 'w': 0b010,
# 'r': 0b100}
#
# role_shift = {
# 'u': 6,
# 'g': 3,
# 'o': 0}
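# Illustrative worked example (not from the original file): for the DATA entry
# (0o040000, u'g+rwx', 0o0070) below, the expected value is
#   (perm_bits['r'] | perm_bits['w'] | perm_bits['x']) << role_shift['g']
#   == 0b111 << 3 == 0o070
# while the 0o040000 base only marks the stat result as a directory.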
DATA = ( # Going from no permissions to setting all for user, group, and/or other
(0o040000, u'a+rwx', 0o0777),
(0o040000, u'u+rwx,g+rwx,o+rwx', 0o0777),
(0o040000, u'o+rwx', 0o0007),
(0o040000, u'g+rwx', 0o0070),
(0o040000, u'u+rwx', 0o0700),
# Going from all permissions to none for user, group, and/or other
(0o040777, u'a-rwx', 0o0000),
(0o040777, u'u-rwx,g-rwx,o-rwx', 0o0000),
(0o040777, u'o-rwx', 0o0770),
(0o040777, u'g-rwx', 0o0707),
(0o040777, u'u-rwx', 0o0077),
# now using absolute assignment from None to a set of perms
(0o040000, u'a=rwx', 0o0777),
(0o040000, u'u=rwx,g=rwx,o=rwx', 0o0777),
(0o040000, u'o=rwx', 0o0007),
(0o040000, u'g=rwx', 0o0070),
(0o040000, u'u=rwx', 0o0700),
# X effect on files and dirs
(0o040000, u'a+X', 0o0111),
(0o100000, u'a+X', 0),
(0o040000, u'a=X', 0o0111),
(0o100000, u'a=X', 0),
(0o040777, u'a-X', 0o0666),
# Same as chmod but is it a bug?
# chmod a-X statfile <== removes execute from statfile
(0o100777, u'a-X', 0o0666),
# Verify X uses computed not original mode
(0o100777, u'a=,u=rX', 0o0400),
(0o040777, u'a=,u=rX', 0o0500),
# Multiple permissions
(0o040000, u'u=rw-x+X,g=r-x+X,o=r-x+X', 0o0755),
(0o100000, u'u=rw-x+X,g=r-x+X,o=r-x+X', 0o0644),
)
UMASK_DATA = (
(0o100000, '+rwx', 0o770),
(0o100777, '-rwx', 0o007),
)
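# Clarifying note (not from the original file): the UMASK_DATA cases above are run
# with os.umask() patched to return 0o007, so a bare '+rwx'/'-rwx' with no explicit
# role only affects the bits left open by the umask -- hence 0o770 and 0o007.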
INVALID_DATA = (
(0o040000, u'a=foo', "bad symbolic permission for mode: a=foo"),
(0o040000, u'f=rwx', "bad symbolic permission for mode: f=rwx"),
)
@pytest.mark.parametrize('stat_info, mode_string, expected', DATA)
def test_good_symbolic_modes(mocker, stat_info, mode_string, expected):
mock_stat = mocker.MagicMock()
mock_stat.st_mode = stat_info
assert AnsibleModule._symbolic_mode_to_octal(mock_stat, mode_string) == expected
@pytest.mark.parametrize('stat_info, mode_string, expected', UMASK_DATA)
def test_umask_with_symbolic_modes(mocker, stat_info, mode_string, expected):
mock_umask = mocker.patch('os.umask')
mock_umask.return_value = 0o7
mock_stat = mocker.MagicMock()
mock_stat.st_mode = stat_info
assert AnsibleModule._symbolic_mode_to_octal(mock_stat, mode_string) == expected
@pytest.mark.parametrize('stat_info, mode_string, expected', INVALID_DATA)
def test_invalid_symbolic_modes(mocker, stat_info, mode_string, expected):
mock_stat = mocker.MagicMock()
mock_stat.st_mode = stat_info
with pytest.raises(ValueError) as exc:
assert AnsibleModule._symbolic_mode_to_octal(mock_stat, mode_string) == 'blah'
assert exc.match(expected)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,506 |
Ansible 2.14.4 unexpectedly executes shell command in when using --syntax-check
|
### Summary
I suspect a regression in 2.14.4 because in our project the ansible-lint GitHub action suddenly started failing this week.
Digging further:
`ansible localhost --syntax-check --module-name=include_role --args name=./ansible_collections/ds389/ansible_ds/playbooks/roles/ds389_backup`
now fails with an error generated by the dsctl command because the 389ds 'localhost' instance does not exist.
In other words, it is the output of a command run by ansible.builtin.shell, even though ansible --help says that commands should not be executed.
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/progier/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/progier/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/progier/.ansible/collections:/usr/share/ansible/collections
executable location = /home/progier/.local/bin/ansible
python version = 3.10.9 (main, Dec 7 2022, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)] (/usr/bin/python)
jinja version = 3.0.3
libyaml = True
$ansible-lint 6.14.5 using ansible 2.14.4
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Fedora release 36 (Thirty Six)
### Steps to Reproduce
git clone https://github.com/389ds/ansible-ds
cd ansible_ds
ansible localhost --syntax-check --module-name=include_role --args name=./ansible_collections/ds389/ansible_ds/playbooks/roles/ds389_backup
### Expected Results
Either no error or an error that is not a shell command error message.
FYI: The error message and return code value are clearly generated by the 389ds administration CLI,
which should not be executed in --syntax-check mode:
```
dsctl localhost status ; echo return code is: $?
No such instance 'localhost'
Unable to access instance information. Are you running as the correct user? (usually dirsrv or root)
return code is: 1
```
### Actual Results
```console
a quite long list of 'localhost | SKIPPED' and 'localhost | SUCCESS => {' then:
localhost | FAILED | rc=1 >>
No such instance 'localhost'
Unable to access instance information. Are you running as the correct user? (usually dirsrv or root)non-zero return code
```
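As a purely illustrative sketch (not the actual Ansible code and not necessarily the eventual fix), an ad-hoc style CLI could simply refuse to execute anything when a syntax check is requested, since checking syntax should never run modules on the targets:
```python
# Illustrative sketch only; option handling in the real ansible CLI differs.
import argparse
import sys

parser = argparse.ArgumentParser(prog='adhoc-demo')
parser.add_argument('--syntax-check', dest='syntax', action='store_true')
parser.add_argument('--module-name', '-m', dest='module_name', default='command')
args = parser.parse_args()

if args.syntax:
    # a syntax check only makes sense for playbooks; bail out before any execution
    print('--syntax-check applies to playbooks; refusing to run the module.', file=sys.stderr)
    sys.exit(4)

# ... module execution would only happen past this point ...
```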
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80506
|
https://github.com/ansible/ansible/pull/80507
|
362c949622b637fb1a5e80b1b0bf780c1ac7e3b8
|
f3774ae7d4d5ca2c2d6b58adc4f2e03b724d2a6c
| 2023-04-12T17:59:30Z |
python
| 2023-04-12T19:24:34Z |
changelogs/fragments/80506-syntax-check-playbook-only.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,506 |
Ansible 2.14.4 unexpectedly executes shell command in when using --syntax-check
|
### Summary
I suspect a regression in 2.14.4 because in our project the ansible-lint GitHub action suddenly started failing this week.
Digging further:
`ansible localhost --syntax-check --module-name=include_role --args name=./ansible_collections/ds389/ansible_ds/playbooks/roles/ds389_backup`
now fails with an error generated by the dsctl command because the 389ds 'localhost' instance does not exist.
In other words, it is the output of a command run by ansible.builtin.shell, even though ansible --help says that commands should not be executed.
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/progier/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/progier/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/progier/.ansible/collections:/usr/share/ansible/collections
executable location = /home/progier/.local/bin/ansible
python version = 3.10.9 (main, Dec 7 2022, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)] (/usr/bin/python)
jinja version = 3.0.3
libyaml = True
$ansible-lint 6.14.5 using ansible 2.14.4
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Fedora release 36 (Thirty Six)
### Steps to Reproduce
git clone https://github.com/389ds/ansible-ds
cd ansible_ds
ansible localhost --syntax-check --module-name=include_role --args name=./ansible_collections/ds389/ansible_ds/playbooks/roles/ds389_backup
### Expected Results
Either no error or an error that is not a shell command error message.
FYI: The error message and return code value are clearly generated by the 389ds administration CLI,
which should not be executed in --syntax-check mode:
```
dsctl localhost status ; echo return code is: $?
No such instance 'localhost'
Unable to access instance information. Are you running as the correct user? (usually dirsrv or root)
return code is: 1
```
### Actual Results
```console
a quite long list of 'localhost | SKIPPED' and 'localhost | SUCCESS => {' then:
localhost | FAILED | rc=1 >>
No such instance 'localhost'
Unable to access instance information. Are you running as the correct user? (usually dirsrv or root)non-zero return code
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80506
|
https://github.com/ansible/ansible/pull/80507
|
362c949622b637fb1a5e80b1b0bf780c1ac7e3b8
|
f3774ae7d4d5ca2c2d6b58adc4f2e03b724d2a6c
| 2023-04-12T17:59:30Z |
python
| 2023-04-12T19:24:34Z |
lib/ansible/cli/arguments/option_helpers.py
|
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import copy
import operator
import argparse
import os
import os.path
import sys
import time
from jinja2 import __version__ as j2_version
import ansible
from ansible import constants as C
from ansible.module_utils._text import to_native
from ansible.module_utils.common.yaml import HAS_LIBYAML, yaml_load
from ansible.release import __version__
from ansible.utils.path import unfrackpath
#
# Special purpose OptionParsers
#
class SortingHelpFormatter(argparse.HelpFormatter):
def add_arguments(self, actions):
actions = sorted(actions, key=operator.attrgetter('option_strings'))
super(SortingHelpFormatter, self).add_arguments(actions)
class AnsibleVersion(argparse.Action):
def __call__(self, parser, namespace, values, option_string=None):
ansible_version = to_native(version(getattr(parser, 'prog')))
print(ansible_version)
parser.exit()
class UnrecognizedArgument(argparse.Action):
def __init__(self, option_strings, dest, const=True, default=None, required=False, help=None, metavar=None, nargs=0):
super(UnrecognizedArgument, self).__init__(option_strings=option_strings, dest=dest, nargs=nargs, const=const,
default=default, required=required, help=help)
def __call__(self, parser, namespace, values, option_string=None):
parser.error('unrecognized arguments: %s' % option_string)
class PrependListAction(argparse.Action):
"""A near clone of ``argparse._AppendAction``, but designed to prepend list values
instead of appending.
"""
def __init__(self, option_strings, dest, nargs=None, const=None, default=None, type=None,
choices=None, required=False, help=None, metavar=None):
if nargs == 0:
raise ValueError('nargs for append actions must be > 0; if arg '
'strings are not supplying the value to append, '
'the append const action may be more appropriate')
if const is not None and nargs != argparse.OPTIONAL:
raise ValueError('nargs must be %r to supply const' % argparse.OPTIONAL)
super(PrependListAction, self).__init__(
option_strings=option_strings,
dest=dest,
nargs=nargs,
const=const,
default=default,
type=type,
choices=choices,
required=required,
help=help,
metavar=metavar
)
def __call__(self, parser, namespace, values, option_string=None):
items = copy.copy(ensure_value(namespace, self.dest, []))
items[0:0] = values
setattr(namespace, self.dest, items)
def ensure_value(namespace, name, value):
if getattr(namespace, name, None) is None:
setattr(namespace, name, value)
return getattr(namespace, name)
#
# Callbacks to validate and normalize Options
#
def unfrack_path(pathsep=False, follow=True):
"""Turn an Option's data into a single path in Ansible locations"""
def inner(value):
if pathsep:
return [unfrackpath(x, follow=follow) for x in value.split(os.pathsep) if x]
if value == '-':
return value
return unfrackpath(value, follow=follow)
return inner
def maybe_unfrack_path(beacon):
def inner(value):
if value.startswith(beacon):
return beacon + unfrackpath(value[1:])
return value
return inner
def _git_repo_info(repo_path):
""" returns a string containing git branch, commit id and commit date """
result = None
if os.path.exists(repo_path):
# Check if the .git is a file. If it is a file, it means that we are in a submodule structure.
if os.path.isfile(repo_path):
try:
with open(repo_path) as f:
gitdir = yaml_load(f).get('gitdir')
# There is a possibility the .git file to have an absolute path.
if os.path.isabs(gitdir):
repo_path = gitdir
else:
repo_path = os.path.join(repo_path[:-4], gitdir)
except (IOError, AttributeError):
return ''
with open(os.path.join(repo_path, "HEAD")) as f:
line = f.readline().rstrip("\n")
if line.startswith("ref:"):
branch_path = os.path.join(repo_path, line[5:])
else:
branch_path = None
if branch_path and os.path.exists(branch_path):
branch = '/'.join(line.split('/')[2:])
with open(branch_path) as f:
commit = f.readline()[:10]
else:
# detached HEAD
commit = line[:10]
branch = 'detached HEAD'
branch_path = os.path.join(repo_path, "HEAD")
date = time.localtime(os.stat(branch_path).st_mtime)
if time.daylight == 0:
offset = time.timezone
else:
offset = time.altzone
result = "({0} {1}) last updated {2} (GMT {3:+04d})".format(branch, commit, time.strftime("%Y/%m/%d %H:%M:%S", date), int(offset / -36))
else:
result = ''
return result
def _gitinfo():
basedir = os.path.normpath(os.path.join(os.path.dirname(__file__), '..', '..', '..', '..'))
repo_path = os.path.join(basedir, '.git')
return _git_repo_info(repo_path)
def version(prog=None):
""" return ansible version """
if prog:
result = ["{0} [core {1}]".format(prog, __version__)]
else:
result = [__version__]
gitinfo = _gitinfo()
if gitinfo:
result[0] = "{0} {1}".format(result[0], gitinfo)
result.append(" config file = %s" % C.CONFIG_FILE)
if C.DEFAULT_MODULE_PATH is None:
cpath = "Default w/o overrides"
else:
cpath = C.DEFAULT_MODULE_PATH
result.append(" configured module search path = %s" % cpath)
result.append(" ansible python module location = %s" % ':'.join(ansible.__path__))
result.append(" ansible collection location = %s" % ':'.join(C.COLLECTIONS_PATHS))
result.append(" executable location = %s" % sys.argv[0])
result.append(" python version = %s (%s)" % (''.join(sys.version.splitlines()), to_native(sys.executable)))
result.append(" jinja version = %s" % j2_version)
result.append(" libyaml = %s" % HAS_LIBYAML)
return "\n".join(result)
#
# Functions to add pre-canned options to an OptionParser
#
def create_base_parser(prog, usage="", desc=None, epilog=None):
"""
Create an options parser for all ansible scripts
"""
# base opts
parser = argparse.ArgumentParser(
prog=prog,
formatter_class=SortingHelpFormatter,
epilog=epilog,
description=desc,
conflict_handler='resolve',
)
version_help = "show program's version number, config file location, configured module search path," \
" module location, executable location and exit"
parser.add_argument('--version', action=AnsibleVersion, nargs=0, help=version_help)
add_verbosity_options(parser)
return parser
def add_verbosity_options(parser):
"""Add options for verbosity"""
parser.add_argument('-v', '--verbose', dest='verbosity', default=C.DEFAULT_VERBOSITY, action="count",
help="Causes Ansible to print more debug messages. Adding multiple -v will increase the verbosity, "
"the builtin plugins currently evaluate up to -vvvvvv. A reasonable level to start is -vvv, "
"connection debugging might require -vvvv.")
def add_async_options(parser):
"""Add options for commands which can launch async tasks"""
parser.add_argument('-P', '--poll', default=C.DEFAULT_POLL_INTERVAL, type=int, dest='poll_interval',
help="set the poll interval if using -B (default=%s)" % C.DEFAULT_POLL_INTERVAL)
parser.add_argument('-B', '--background', dest='seconds', type=int, default=0,
help='run asynchronously, failing after X seconds (default=N/A)')
def add_basedir_options(parser):
"""Add options for commands which can set a playbook basedir"""
parser.add_argument('--playbook-dir', default=C.PLAYBOOK_DIR, dest='basedir', action='store',
help="Since this tool does not use playbooks, use this as a substitute playbook directory. "
"This sets the relative path for many features including roles/ group_vars/ etc.",
type=unfrack_path())
def add_check_options(parser):
"""Add options for commands which can run with diagnostic information of tasks"""
parser.add_argument("-C", "--check", default=False, dest='check', action='store_true',
help="don't make any changes; instead, try to predict some of the changes that may occur")
parser.add_argument('--syntax-check', dest='syntax', action='store_true',
help="perform a syntax check on the playbook, but do not execute it")
parser.add_argument("-D", "--diff", default=C.DIFF_ALWAYS, dest='diff', action='store_true',
help="when changing (small) files and templates, show the differences in those"
" files; works great with --check")
def add_connect_options(parser):
"""Add options for commands which need to connection to other hosts"""
connect_group = parser.add_argument_group("Connection Options", "control as whom and how to connect to hosts")
connect_group.add_argument('--private-key', '--key-file', default=C.DEFAULT_PRIVATE_KEY_FILE, dest='private_key_file',
help='use this file to authenticate the connection', type=unfrack_path())
connect_group.add_argument('-u', '--user', default=C.DEFAULT_REMOTE_USER, dest='remote_user',
help='connect as this user (default=%s)' % C.DEFAULT_REMOTE_USER)
connect_group.add_argument('-c', '--connection', dest='connection', default=C.DEFAULT_TRANSPORT,
help="connection type to use (default=%s)" % C.DEFAULT_TRANSPORT)
connect_group.add_argument('-T', '--timeout', default=C.DEFAULT_TIMEOUT, type=int, dest='timeout',
help="override the connection timeout in seconds (default=%s)" % C.DEFAULT_TIMEOUT)
# ssh only
connect_group.add_argument('--ssh-common-args', default=None, dest='ssh_common_args',
help="specify common arguments to pass to sftp/scp/ssh (e.g. ProxyCommand)")
connect_group.add_argument('--sftp-extra-args', default=None, dest='sftp_extra_args',
help="specify extra arguments to pass to sftp only (e.g. -f, -l)")
connect_group.add_argument('--scp-extra-args', default=None, dest='scp_extra_args',
help="specify extra arguments to pass to scp only (e.g. -l)")
connect_group.add_argument('--ssh-extra-args', default=None, dest='ssh_extra_args',
help="specify extra arguments to pass to ssh only (e.g. -R)")
parser.add_argument_group(connect_group)
connect_password_group = parser.add_mutually_exclusive_group()
connect_password_group.add_argument('-k', '--ask-pass', default=C.DEFAULT_ASK_PASS, dest='ask_pass', action='store_true',
help='ask for connection password')
connect_password_group.add_argument('--connection-password-file', '--conn-pass-file', default=C.CONNECTION_PASSWORD_FILE, dest='connection_password_file',
help="Connection password file", type=unfrack_path(), action='store')
parser.add_argument_group(connect_password_group)
def add_fork_options(parser):
"""Add options for commands that can fork worker processes"""
parser.add_argument('-f', '--forks', dest='forks', default=C.DEFAULT_FORKS, type=int,
help="specify number of parallel processes to use (default=%s)" % C.DEFAULT_FORKS)
def add_inventory_options(parser):
"""Add options for commands that utilize inventory"""
parser.add_argument('-i', '--inventory', '--inventory-file', dest='inventory', action="append",
help="specify inventory host path or comma separated host list. --inventory-file is deprecated")
parser.add_argument('--list-hosts', dest='listhosts', action='store_true',
help='outputs a list of matching hosts; does not execute anything else')
parser.add_argument('-l', '--limit', default=C.DEFAULT_SUBSET, dest='subset',
help='further limit selected hosts to an additional pattern')
def add_meta_options(parser):
"""Add options for commands which can launch meta tasks from the command line"""
parser.add_argument('--force-handlers', default=C.DEFAULT_FORCE_HANDLERS, dest='force_handlers', action='store_true',
help="run handlers even if a task fails")
parser.add_argument('--flush-cache', dest='flush_cache', action='store_true',
help="clear the fact cache for every host in inventory")
def add_module_options(parser):
"""Add options for commands that load modules"""
module_path = C.config.get_configuration_definition('DEFAULT_MODULE_PATH').get('default', '')
parser.add_argument('-M', '--module-path', dest='module_path', default=None,
help="prepend colon-separated path(s) to module library (default=%s)" % module_path,
type=unfrack_path(pathsep=True), action=PrependListAction)
def add_output_options(parser):
"""Add options for commands which can change their output"""
parser.add_argument('-o', '--one-line', dest='one_line', action='store_true',
help='condense output')
parser.add_argument('-t', '--tree', dest='tree', default=None,
help='log output to this directory')
def add_runas_options(parser):
"""
Add options for commands which can run tasks as another user
Note that this includes the options from add_runas_prompt_options(). Only one of these
functions should be used.
"""
runas_group = parser.add_argument_group("Privilege Escalation Options", "control how and which user you become as on target hosts")
# consolidated privilege escalation (become)
runas_group.add_argument("-b", "--become", default=C.DEFAULT_BECOME, action="store_true", dest='become',
help="run operations with become (does not imply password prompting)")
runas_group.add_argument('--become-method', dest='become_method', default=C.DEFAULT_BECOME_METHOD,
help='privilege escalation method to use (default=%s)' % C.DEFAULT_BECOME_METHOD +
', use `ansible-doc -t become -l` to list valid choices.')
runas_group.add_argument('--become-user', default=None, dest='become_user', type=str,
help='run operations as this user (default=%s)' % C.DEFAULT_BECOME_USER)
parser.add_argument_group(runas_group)
add_runas_prompt_options(parser)
def add_runas_prompt_options(parser, runas_group=None):
"""
Add options for commands which need to prompt for privilege escalation credentials
Note that add_runas_options() includes these options already. Only one of the two functions
should be used.
"""
if runas_group is not None:
parser.add_argument_group(runas_group)
runas_pass_group = parser.add_mutually_exclusive_group()
runas_pass_group.add_argument('-K', '--ask-become-pass', dest='become_ask_pass', action='store_true',
default=C.DEFAULT_BECOME_ASK_PASS,
help='ask for privilege escalation password')
runas_pass_group.add_argument('--become-password-file', '--become-pass-file', default=C.BECOME_PASSWORD_FILE, dest='become_password_file',
help="Become password file", type=unfrack_path(), action='store')
parser.add_argument_group(runas_pass_group)
def add_runtask_options(parser):
"""Add options for commands that run a task"""
parser.add_argument('-e', '--extra-vars', dest="extra_vars", action="append", type=maybe_unfrack_path('@'),
help="set additional variables as key=value or YAML/JSON, if filename prepend with @", default=[])
def add_tasknoplay_options(parser):
"""Add options for commands that run a task w/o a defined play"""
parser.add_argument('--task-timeout', type=int, dest="task_timeout", action="store", default=C.TASK_TIMEOUT,
help="set task timeout limit in seconds, must be positive integer.")
def add_subset_options(parser):
"""Add options for commands which can run a subset of tasks"""
parser.add_argument('-t', '--tags', dest='tags', default=C.TAGS_RUN, action='append',
help="only run plays and tasks tagged with these values")
parser.add_argument('--skip-tags', dest='skip_tags', default=C.TAGS_SKIP, action='append',
help="only run plays and tasks whose tags do not match these values")
def add_vault_options(parser):
"""Add options for loading vault files"""
parser.add_argument('--vault-id', default=[], dest='vault_ids', action='append', type=str,
help='the vault identity to use')
base_group = parser.add_mutually_exclusive_group()
base_group.add_argument('--ask-vault-password', '--ask-vault-pass', default=C.DEFAULT_ASK_VAULT_PASS, dest='ask_vault_pass', action='store_true',
help='ask for vault password')
base_group.add_argument('--vault-password-file', '--vault-pass-file', default=[], dest='vault_password_files',
help="vault password file", type=unfrack_path(follow=False), action='append')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,506 |
Ansible 2.14.4 unexpectedly executes shell command in when using --syntax-check
|
### Summary
I suspect a regression in 2.14.4 because in our project the ansible-lint GitHub action suddenly started failing this week.
Digging further:
`ansible localhost --syntax-check --module-name=include_role --args name=./ansible_collections/ds389/ansible_ds/playbooks/roles/ds389_backup`
now fails with an error generated by the dsctl command because the 389ds 'localhost' instance does not exist.
In other words, it is the output of a command run by ansible.builtin.shell, even though ansible --help says that commands should not be executed.
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/progier/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/progier/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/progier/.ansible/collections:/usr/share/ansible/collections
executable location = /home/progier/.local/bin/ansible
python version = 3.10.9 (main, Dec 7 2022, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)] (/usr/bin/python)
jinja version = 3.0.3
libyaml = True
$ansible-lint 6.14.5 using ansible 2.14.4
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Fedora release 36 (Thirty Six)
### Steps to Reproduce
git clone https://github.com/389ds/ansible-ds
cd ansible_ds
ansible localhost --syntax-check --module-name=include_role --args name=./ansible_collections/ds389/ansible_ds/playbooks/roles/ds389_backup
### Expected Results
Either no error or an error that is not a shell command error message.
FYI: The error message and return code value are clearly generated by the 389ds administration CLI,
which should not be executed in --syntax-check mode:
```
dsctl localhost status ; echo return code is: $?
No such instance 'localhost'
Unable to access instance information. Are you running as the correct user? (usually dirsrv or root)
return code is: 1
```
### Actual Results
```console
a quite long list of 'localhost | SKIPPED' and 'localhost | SUCCESS => {' then:
localhost | FAILED | rc=1 >>
No such instance 'localhost'
Unable to access instance information. Are you running as the correct user? (usually dirsrv or root)non-zero return code
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80506
|
https://github.com/ansible/ansible/pull/80507
|
362c949622b637fb1a5e80b1b0bf780c1ac7e3b8
|
f3774ae7d4d5ca2c2d6b58adc4f2e03b724d2a6c
| 2023-04-12T17:59:30Z |
python
| 2023-04-12T19:24:34Z |
lib/ansible/cli/playbook.py
|
#!/usr/bin/env python
# (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI
import os
import stat
from ansible import constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.module_utils._text import to_bytes
from ansible.playbook.block import Block
from ansible.plugins.loader import add_all_plugin_dirs
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path, _get_collection_playbook_path
from ansible.utils.display import Display
display = Display()
class PlaybookCLI(CLI):
''' the tool to run *Ansible playbooks*, which are a configuration and multinode deployment system.
See the project home page (https://docs.ansible.com) for more information. '''
name = 'ansible-playbook'
def init_parser(self):
# create parser for CLI options
super(PlaybookCLI, self).init_parser(
usage="%prog [options] playbook.yml [playbook2 ...]",
desc="Runs Ansible playbooks, executing the defined tasks on the targeted hosts.")
opt_help.add_connect_options(self.parser)
opt_help.add_meta_options(self.parser)
opt_help.add_runas_options(self.parser)
opt_help.add_subset_options(self.parser)
opt_help.add_check_options(self.parser)
opt_help.add_inventory_options(self.parser)
opt_help.add_runtask_options(self.parser)
opt_help.add_vault_options(self.parser)
opt_help.add_fork_options(self.parser)
opt_help.add_module_options(self.parser)
# ansible playbook specific opts
self.parser.add_argument('--list-tasks', dest='listtasks', action='store_true',
help="list all tasks that would be executed")
self.parser.add_argument('--list-tags', dest='listtags', action='store_true',
help="list all available tags")
self.parser.add_argument('--step', dest='step', action='store_true',
help="one-step-at-a-time: confirm each task before running")
self.parser.add_argument('--start-at-task', dest='start_at_task',
help="start the playbook at the task matching this name")
self.parser.add_argument('args', help='Playbook(s)', metavar='playbook', nargs='+')
def post_process_args(self, options):
# for listing, we need to know if user had tag input
# capture here as parent function sets defaults for tags
havetags = bool(options.tags or options.skip_tags)
options = super(PlaybookCLI, self).post_process_args(options)
if options.listtags:
# default to all tags (including never), when listing tags
# unless user specified tags
if not havetags:
options.tags = ['never', 'all']
display.verbosity = options.verbosity
self.validate_conflicts(options, runas_opts=True, fork_opts=True)
return options
def run(self):
super(PlaybookCLI, self).run()
# Note: slightly wrong, this is written so that implicit localhost
# manages passwords
sshpass = None
becomepass = None
passwords = {}
# initial error check, to make sure all specified playbooks are accessible
# before we start running anything through the playbook executor
# also prep plugin paths
b_playbook_dirs = []
for playbook in context.CLIARGS['args']:
# resolve if it is collection playbook with FQCN notation, if not, leaves unchanged
resource = _get_collection_playbook_path(playbook)
if resource is not None:
playbook_collection = resource[2]
else:
# not an FQCN so must be a file
if not os.path.exists(playbook):
raise AnsibleError("the playbook: %s could not be found" % playbook)
if not (os.path.isfile(playbook) or stat.S_ISFIFO(os.stat(playbook).st_mode)):
raise AnsibleError("the playbook: %s does not appear to be a file" % playbook)
# check if playbook is from collection (path can be passed directly)
playbook_collection = _get_collection_name_from_path(playbook)
# don't add collection playbooks to adjacency search path
if not playbook_collection:
# setup dirs to enable loading plugins from all playbooks in case they add callbacks/inventory/etc
b_playbook_dir = os.path.dirname(os.path.abspath(to_bytes(playbook, errors='surrogate_or_strict')))
add_all_plugin_dirs(b_playbook_dir)
b_playbook_dirs.append(b_playbook_dir)
if b_playbook_dirs:
# allow collections adjacent to these playbooks
# we use list copy to avoid opening up 'adjacency' in the previous loop
AnsibleCollectionConfig.playbook_paths = b_playbook_dirs
# don't deal with privilege escalation or passwords when we don't need to
if not (context.CLIARGS['listhosts'] or context.CLIARGS['listtasks'] or
context.CLIARGS['listtags'] or context.CLIARGS['syntax']):
(sshpass, becomepass) = self.ask_passwords()
passwords = {'conn_pass': sshpass, 'become_pass': becomepass}
# create base objects
loader, inventory, variable_manager = self._play_prereqs()
# (which is not returned in list_hosts()) is taken into account for
# warning if inventory is empty. But it can't be taken into account for
# checking if limit doesn't match any hosts. Instead we don't worry about
# limit if only implicit localhost was in inventory to start with.
#
# Fix this when we rewrite inventory by making localhost a real host (and thus show up in list_hosts())
CLI.get_host_list(inventory, context.CLIARGS['subset'])
# flush fact cache if requested
if context.CLIARGS['flush_cache']:
self._flush_cache(inventory, variable_manager)
# create the playbook executor, which manages running the plays via a task queue manager
pbex = PlaybookExecutor(playbooks=context.CLIARGS['args'], inventory=inventory,
variable_manager=variable_manager, loader=loader,
passwords=passwords)
results = pbex.run()
if isinstance(results, list):
for p in results:
display.display('\nplaybook: %s' % p['playbook'])
for idx, play in enumerate(p['plays']):
if play._included_path is not None:
loader.set_basedir(play._included_path)
else:
pb_dir = os.path.realpath(os.path.dirname(p['playbook']))
loader.set_basedir(pb_dir)
# show host list if we were able to template into a list
try:
host_list = ','.join(play.hosts)
except TypeError:
host_list = ''
msg = "\n play #%d (%s): %s" % (idx + 1, host_list, play.name)
mytags = set(play.tags)
msg += '\tTAGS: [%s]' % (','.join(mytags))
if context.CLIARGS['listhosts']:
playhosts = set(inventory.get_hosts(play.hosts))
msg += "\n pattern: %s\n hosts (%d):" % (play.hosts, len(playhosts))
for host in playhosts:
msg += "\n %s" % host
display.display(msg)
all_tags = set()
if context.CLIARGS['listtags'] or context.CLIARGS['listtasks']:
taskmsg = ''
if context.CLIARGS['listtasks']:
taskmsg = ' tasks:\n'
def _process_block(b):
taskmsg = ''
for task in b.block:
if isinstance(task, Block):
taskmsg += _process_block(task)
else:
if task.action in C._ACTION_META and task.implicit:
continue
all_tags.update(task.tags)
if context.CLIARGS['listtasks']:
cur_tags = list(mytags.union(set(task.tags)))
cur_tags.sort()
if task.name:
taskmsg += " %s" % task.get_name()
else:
taskmsg += " %s" % task.action
taskmsg += "\tTAGS: [%s]\n" % ', '.join(cur_tags)
return taskmsg
all_vars = variable_manager.get_vars(play=play)
for block in play.compile():
block = block.filter_tagged_tasks(all_vars)
if not block.has_tasks():
continue
taskmsg += _process_block(block)
if context.CLIARGS['listtags']:
cur_tags = list(mytags.union(all_tags))
cur_tags.sort()
taskmsg += " TASK TAGS: [%s]\n" % ', '.join(cur_tags)
display.display(taskmsg)
return 0
else:
return results
@staticmethod
def _flush_cache(inventory, variable_manager):
for host in inventory.list_hosts():
hostname = host.get_name()
variable_manager.clear_facts(hostname)
def main(args=None):
PlaybookCLI.cli_executor(args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,922 |
include_vars.py - Add support for symbolic links when passing "dir"
|
### Summary
https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/include_vars.py
Currently, when using the following, symbolic links are ignored:
```
- name:
include_vars_vault:
dir: "{{ mypath }}"
```
It would be nice if the walk here could take a conditional, passed in from the module arguments, to either follow symlinks or not (line 185); a rough sketch follows the snippet below.
```
sorted_walk = list(walk(self.source_dir, onerror=self._log_walk))
```
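For illustration, here is a minimal, self-contained sketch of what such a conditional could look like. The `follow_symlinks` option name is hypothetical (it is not part of the module at this commit); `os.walk()` itself already supports this via its `followlinks` keyword:
```python
from os import walk

def iter_var_files(source_dir, follow_symlinks=False):
    # followlinks=False matches the current behaviour, where directories
    # reached through symlinks are skipped while walking.
    for root, dirs, files in walk(source_dir, followlinks=follow_symlinks):
        for name in sorted(files):
            yield root, name
```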
Currently we're having to maintain our own version of include_vars to support this.
Thanks.
### Issue Type
Feature Idea
### Component Name
include_vars
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```
- name:
include_vars_vault:
dir: "{{ mypath }}"
follow_symlinks: True
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79922
|
https://github.com/ansible/ansible/pull/80460
|
bd6feeb6e7b334d5da572cbb5add7594be7fc61e
|
2e62724a8a8f801af35943d266dd906e029e20d6
| 2023-02-04T12:41:45Z |
python
| 2023-04-14T12:07:08Z |
changelogs/fragments/80460-add-symbolic-links-with-dir.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,922 |
include_vars.py - Add support for symbolic links when passing "dir"
|
### Summary
https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/include_vars.py
Currently, when using the following, symbolic links are ignored:
```
- name:
include_vars_vault:
dir: "{{ mypath }}"
```
It would be nice if the walk here could take a conditional, passed in from the module arguments, to either follow symlinks or not (line 185).
```
sorted_walk = list(walk(self.source_dir, onerror=self._log_walk))
```
Currently we're having to maintain our own version of include_vars to support this.
Thanks.
### Issue Type
Feature Idea
### Component Name
include_vars
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```
- name:
include_vars_vault:
dir: "{{ mypath }}"
follow_symlinks: True
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79922
|
https://github.com/ansible/ansible/pull/80460
|
bd6feeb6e7b334d5da572cbb5add7594be7fc61e
|
2e62724a8a8f801af35943d266dd906e029e20d6
| 2023-02-04T12:41:45Z |
python
| 2023-04-14T12:07:08Z |
lib/ansible/plugins/action/include_vars.py
|
# Copyright: (c) 2016, Allen Sanabria <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from os import path, walk
import re
import ansible.constants as C
from ansible.errors import AnsibleError
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_native, to_text
from ansible.plugins.action import ActionBase
from ansible.utils.vars import combine_vars
class ActionModule(ActionBase):
TRANSFERS_FILES = False
VALID_FILE_EXTENSIONS = ['yaml', 'yml', 'json']
VALID_DIR_ARGUMENTS = ['dir', 'depth', 'files_matching', 'ignore_files', 'extensions', 'ignore_unknown_extensions']
VALID_FILE_ARGUMENTS = ['file', '_raw_params']
VALID_ALL = ['name', 'hash_behaviour']
def _set_dir_defaults(self):
if not self.depth:
self.depth = 0
if self.files_matching:
self.matcher = re.compile(r'{0}'.format(self.files_matching))
else:
self.matcher = None
if not self.ignore_files:
self.ignore_files = list()
if isinstance(self.ignore_files, string_types):
self.ignore_files = self.ignore_files.split()
elif isinstance(self.ignore_files, dict):
return {
'failed': True,
'message': '{0} must be a list'.format(self.ignore_files)
}
def _set_args(self):
""" Set instance variables based on the arguments that were passed """
self.hash_behaviour = self._task.args.get('hash_behaviour', None)
self.return_results_as_name = self._task.args.get('name', None)
self.source_dir = self._task.args.get('dir', None)
self.source_file = self._task.args.get('file', None)
if not self.source_dir and not self.source_file:
self.source_file = self._task.args.get('_raw_params')
if self.source_file:
self.source_file = self.source_file.rstrip('\n')
self.depth = self._task.args.get('depth', None)
self.files_matching = self._task.args.get('files_matching', None)
self.ignore_unknown_extensions = self._task.args.get('ignore_unknown_extensions', False)
self.ignore_files = self._task.args.get('ignore_files', None)
self.valid_extensions = self._task.args.get('extensions', self.VALID_FILE_EXTENSIONS)
# convert/validate extensions list
if isinstance(self.valid_extensions, string_types):
self.valid_extensions = list(self.valid_extensions)
if not isinstance(self.valid_extensions, list):
raise AnsibleError('Invalid type for "extensions" option, it must be a list')
def run(self, tmp=None, task_vars=None):
""" Load yml files recursively from a directory.
"""
del tmp # tmp no longer has any effect
if task_vars is None:
task_vars = dict()
self.show_content = True
self.included_files = []
# Validate arguments
dirs = 0
files = 0
for arg in self._task.args:
if arg in self.VALID_DIR_ARGUMENTS:
dirs += 1
elif arg in self.VALID_FILE_ARGUMENTS:
files += 1
elif arg in self.VALID_ALL:
pass
else:
raise AnsibleError('{0} is not a valid option in include_vars'.format(to_native(arg)))
if dirs and files:
raise AnsibleError("You are mixing file only and dir only arguments, these are incompatible")
# set internal vars from args
self._set_args()
results = dict()
failed = False
if self.source_dir:
self._set_dir_defaults()
self._set_root_dir()
if not path.exists(self.source_dir):
failed = True
err_msg = ('{0} directory does not exist'.format(to_native(self.source_dir)))
elif not path.isdir(self.source_dir):
failed = True
err_msg = ('{0} is not a directory'.format(to_native(self.source_dir)))
else:
for root_dir, filenames in self._traverse_dir_depth():
failed, err_msg, updated_results = (self._load_files_in_dir(root_dir, filenames))
if failed:
break
results.update(updated_results)
else:
try:
self.source_file = self._find_needle('vars', self.source_file)
failed, err_msg, updated_results = (
self._load_files(self.source_file)
)
if not failed:
results.update(updated_results)
except AnsibleError as e:
failed = True
err_msg = to_native(e)
if self.return_results_as_name:
scope = dict()
scope[self.return_results_as_name] = results
results = scope
result = super(ActionModule, self).run(task_vars=task_vars)
if failed:
result['failed'] = failed
result['message'] = err_msg
elif self.hash_behaviour is not None and self.hash_behaviour != C.DEFAULT_HASH_BEHAVIOUR:
merge_hashes = self.hash_behaviour == 'merge'
for key, value in results.items():
old_value = task_vars.get(key, None)
results[key] = combine_vars(old_value, value, merge=merge_hashes)
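# Illustration (comment only, not part of the original file): with hash_behaviour=merge,
# nested dicts are combined recursively, e.g. an existing {'cfg': {'a': 1}} merged with a
# newly loaded {'cfg': {'b': 2}} yields {'cfg': {'a': 1, 'b': 2}}; with 'replace' the new
# value simply overwrites the old one.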
result['ansible_included_var_files'] = self.included_files
result['ansible_facts'] = results
result['_ansible_no_log'] = not self.show_content
return result
def _set_root_dir(self):
if self._task._role:
if self.source_dir.split('/')[0] == 'vars':
path_to_use = (
path.join(self._task._role._role_path, self.source_dir)
)
if path.exists(path_to_use):
self.source_dir = path_to_use
else:
path_to_use = (
path.join(
self._task._role._role_path, 'vars', self.source_dir
)
)
self.source_dir = path_to_use
else:
if hasattr(self._task._ds, '_data_source'):
current_dir = (
"/".join(self._task._ds._data_source.split('/')[:-1])
)
self.source_dir = path.join(current_dir, self.source_dir)
def _log_walk(self, error):
self._display.vvv('Issue with walking through "%s": %s' % (to_native(error.filename), to_native(error)))
def _traverse_dir_depth(self):
""" Recursively iterate over a directory and sort the files in
alphabetical order. Do not iterate past the set depth.
The default depth is unlimited.
"""
current_depth = 0
sorted_walk = list(walk(self.source_dir, onerror=self._log_walk))
sorted_walk.sort(key=lambda x: x[0])
for current_root, current_dir, current_files in sorted_walk:
current_depth += 1
if current_depth <= self.depth or self.depth == 0:
current_files.sort()
yield (current_root, current_files)
else:
break
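# Note: illustration only, not part of this file at the referenced commit. The issue
# above asks for symlinked directories to be followed here; os.walk() already exposes
# this via its followlinks keyword, which could be driven by a (hypothetical) new
# follow_symlinks task argument read in _set_args().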
def _ignore_file(self, filename):
""" Return True if a file matches the list of ignore_files.
Args:
filename (str): The filename that is being matched against.
Returns:
Boolean
"""
for file_type in self.ignore_files:
try:
if re.search(r'{0}$'.format(file_type), filename):
return True
except Exception:
err_msg = 'Invalid regular expression: {0}'.format(file_type)
raise AnsibleError(err_msg)
return False
def _is_valid_file_ext(self, source_file):
""" Verify if source file has a valid extension
Args:
source_file (str): The full path of source file or source file.
Returns:
Bool
"""
file_ext = path.splitext(source_file)
return bool(len(file_ext) > 1 and file_ext[-1][1:] in self.valid_extensions)
def _load_files(self, filename, validate_extensions=False):
""" Loads a file and converts the output into a valid Python dict.
Args:
filename (str): The source file.
Returns:
Tuple (bool, str, dict)
"""
results = dict()
failed = False
err_msg = ''
if validate_extensions and not self._is_valid_file_ext(filename):
failed = True
err_msg = ('{0} does not have a valid extension: {1}'.format(to_native(filename), ', '.join(self.valid_extensions)))
else:
b_data, show_content = self._loader._get_file_contents(filename)
data = to_text(b_data, errors='surrogate_or_strict')
self.show_content = show_content
data = self._loader.load(data, file_name=filename, show_content=show_content)
if not data:
data = dict()
if not isinstance(data, dict):
failed = True
err_msg = ('{0} must be stored as a dictionary/hash'.format(to_native(filename)))
else:
self.included_files.append(filename)
results.update(data)
return failed, err_msg, results
def _load_files_in_dir(self, root_dir, var_files):
""" Load the found yml files and update/overwrite the dictionary.
Args:
root_dir (str): The base directory of the list of files that is being passed.
var_files: (list): List of files to iterate over and load into a dictionary.
Returns:
Tuple (bool, str, dict)
"""
results = dict()
failed = False
err_msg = ''
for filename in var_files:
stop_iter = False
# Never include main.yml from a role, as that is the default included by the role
if self._task._role:
if path.join(self._task._role._role_path, filename) == path.join(root_dir, 'vars', 'main.yml'):
stop_iter = True
continue
filepath = path.join(root_dir, filename)
if self.files_matching:
if not self.matcher.search(filename):
stop_iter = True
if not stop_iter and not failed:
if self.ignore_unknown_extensions:
if path.exists(filepath) and not self._ignore_file(filename) and self._is_valid_file_ext(filename):
failed, err_msg, loaded_data = self._load_files(filepath, validate_extensions=True)
if not failed:
results.update(loaded_data)
else:
if path.exists(filepath) and not self._ignore_file(filename):
failed, err_msg, loaded_data = self._load_files(filepath, validate_extensions=True)
if not failed:
results.update(loaded_data)
return failed, err_msg, results
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,922 |
include_vars.py - Add support for symbolic links when passing "dir"
|
### Summary
https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/include_vars.py
Currently, when using the following, symbolic links are ignored:
```
- name:
include_vars_vault:
dir: "{{ mypath }}"
```
It would be nice if the walk here could take a conditional, passed in from the module arguments, to either follow symlinks or not (line 185).
```
sorted_walk = list(walk(self.source_dir, onerror=self._log_walk))
```
Currently we're having to maintain our own version of include_vars to support this.
Thanks.
### Issue Type
Feature Idea
### Component Name
include_vars
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```
- name:
include_vars_vault:
dir: "{{ mypath }}"
follow_symlinks: True
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79922
|
https://github.com/ansible/ansible/pull/80460
|
bd6feeb6e7b334d5da572cbb5add7594be7fc61e
|
2e62724a8a8f801af35943d266dd906e029e20d6
| 2023-02-04T12:41:45Z |
python
| 2023-04-14T12:07:08Z |
test/integration/targets/include_vars/tasks/main.yml
|
---
- name: verify that the default value is indeed 1
assert:
that:
- "testing == 1"
- "base_dir == 'defaults'"
- name: include the vars/environments/development/all.yml
include_vars:
file: environments/development/all.yml
register: included_one_file
- name: verify that the correct file has been loaded and default value is indeed 789
assert:
that:
- "testing == 789"
- "base_dir == 'environments/development'"
- "{{ included_one_file.ansible_included_var_filesΒ | length }} == 1"
- "'vars/environments/development/all.yml' in included_one_file.ansible_included_var_files[0]"
- name: include the vars/environments/development/all.yml and save results in all
include_vars:
file: environments/development/all.yml
name: all
- name: verify that the values are stored in the all variable
assert:
that:
- "all['testing'] == 789"
- "all['base_dir'] == 'environments/development'"
- name: include the all directory in vars
include_vars:
dir: all
depth: 1
- name: verify that the default value is indeed 123
assert:
that:
- "testing == 123"
- "base_dir == 'all'"
- name: include var files with extension only
include_vars:
dir: webapp
ignore_unknown_extensions: True
extensions: ['', 'yaml', 'yml', 'json']
register: include_without_file_extension
- name: verify that only files with valid extensions are loaded
assert:
that:
- webapp_version is defined
- "'file_without_extension' in '{{ include_without_file_extension.ansible_included_var_files | join(' ') }}'"
- name: include every directory in vars
include_vars:
dir: vars
extensions: ['', 'yaml', 'yml', 'json']
ignore_files:
- no_auto_unsafe.yml
register: include_every_dir
- name: verify that the correct files have been loaded and overwrite based on alphabetical order
assert:
that:
- "testing == 456"
- "base_dir == 'services'"
- "webapp_containers == 10"
- "{{ include_every_dir.ansible_included_var_filesΒ | length }} == 7"
- "'vars/all/all.yml' in include_every_dir.ansible_included_var_files[0]"
- "'vars/environments/development/all.yml' in include_every_dir.ansible_included_var_files[1]"
- "'vars/environments/development/services/webapp.yml' in include_every_dir.ansible_included_var_files[2]"
- "'vars/services/webapp.yml' in include_every_dir.ansible_included_var_files[5]"
- "'vars/webapp/file_without_extension' in include_every_dir.ansible_included_var_files[6]"
- name: include every directory in vars except files matching webapp.yml
include_vars:
dir: vars
ignore_files:
- webapp.yml
- file_without_extension
- no_auto_unsafe.yml
register: include_without_webapp
- name: verify that the webapp.yml file was not included
assert:
that:
- "testing == 789"
- "base_dir == 'environments/development'"
- "{{ include_without_webapp.ansible_included_var_filesΒ | length }} == 4"
- "'webapp.yml' not in '{{ include_without_webapp.ansible_included_var_files | join(' ') }}'"
- "'file_without_extension' not in '{{ include_without_webapp.ansible_included_var_files | join(' ') }}'"
- name: include only files matching webapp.yml
include_vars:
dir: environments
files_matching: webapp.yml
register: include_match_webapp
- name: verify that only files matching webapp.yml and in the environments directory get loaded.
assert:
that:
- "testing == 101112"
- "base_dir == 'development/services'"
- "webapp_containers == 20"
- "{{ include_match_webapp.ansible_included_var_filesΒ | length }} == 1"
- "'vars/environments/development/services/webapp.yml' in include_match_webapp.ansible_included_var_files[0]"
- "'all.yml' not in '{{ include_match_webapp.ansible_included_var_files | join(' ') }}'"
- name: include only files matching webapp.yml and store results in webapp
include_vars:
dir: environments
files_matching: webapp.yml
name: webapp
- name: verify that only files matching webapp.yml and in the environments directory get loaded into stored variable webapp.
assert:
that:
- "webapp['testing'] == 101112"
- "webapp['base_dir'] == 'development/services'"
- "webapp['webapp_containers'] == 20"
- name: include var files without extension
include_vars:
dir: webapp
ignore_unknown_extensions: False
register: include_with_unknown_file_extension
ignore_errors: True
- name: verify that including files with unknown extensions fails
assert:
that:
- "'a valid extension' in include_with_unknown_file_extension.message"
- name: include var with raw params
include_vars: >
services/service_vars.yml
- name: Verify that a file given via raw params is included without the trailing newline character
assert:
that:
- "service_name == 'my_custom_service'"
- name: Check NoneType for raw params and file
include_vars:
file: "{{ lookup('first_found', possible_files, errors='ignore') }}"
vars:
possible_files:
- "does_not_exist.yml"
ignore_errors: True
register: include_with_non_existent_file
- name: Verify that file and raw_params provide correct error message to user
assert:
that:
- "'Could not find file' in include_with_non_existent_file.message"
- name: include var (FQCN) with raw params
ansible.builtin.include_vars: >
services/service_vars_fqcn.yml
- name: Verify that FQCN of include_vars works
assert:
that:
- "'my_custom_service' == service_name_fqcn"
- "'my_custom_service' == service_name_tmpl_fqcn"
- name: Include a vars file with a hash variable
include_vars:
file: vars2/hashes/hash1.yml
- name: Verify the hash variable
assert:
that:
- "{{ config | length }} == 3"
- "config.key0 == 0"
- "config.key1 == 0"
- "{{ config.key2 | length }} == 1"
- "config.key2.a == 21"
- name: Include the second file to merge the hash variable
include_vars:
file: vars2/hashes/hash2.yml
hash_behaviour: merge
- name: Verify that the hash is merged
assert:
that:
- "{{ config | length }} == 4"
- "config.key0 == 0"
- "config.key1 == 1"
- "{{ config.key2 | length }} == 2"
- "config.key2.a == 21"
- "config.key2.b == 22"
- "config.key3 == 3"
- name: Include the second file again without hash_behaviour option
include_vars:
file: vars2/hashes/hash2.yml
- name: Verify that the properties from the first file are cleared
assert:
that:
- "{{ config | length }} == 3"
- "config.key1 == 1"
- "{{ config.key2 | length }} == 1"
- "config.key2.b == 22"
- "config.key3 == 3"
- name: Include a vars dir with hash variables
include_vars:
dir: "{{ role_path }}/vars2/hashes/"
hash_behaviour: merge
- name: Verify that the hash is merged after vars files are accumulated
assert:
that:
- "{{ config | length }} == 3"
- "config.key0 is undefined"
- "config.key1 == 1"
- "{{ config.key2 | length }} == 1"
- "config.key2.b == 22"
- "config.key3 == 3"
- include_vars:
file: no_auto_unsafe.yml
register: baz
- assert:
that:
- baz.ansible_facts.foo|type_debug != "AnsibleUnsafeText"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,520 |
ansible.builtin.hostname does not update current hostname on OpenBSD
|
### Summary
Using the `ansible.builtin.hostname` module with an OpenBSD host only updates the permanent hostname, not the current hostname. In my opinion, it should update both, similar to the behaviour on other platforms.
Looking at the code, I suspect this is because the `OpenBSDStrategy` class in `modules/hostname.py` simply does not invoke the `hostname` command. Adding that would fix the issue.
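For illustration, a minimal sketch of the kind of change meant here (not the merged fix); it assumes the standard `hostname(1)` binary is available on the target:
```python
import subprocess

def get_current_hostname():
    # The running hostname, as reported by hostname(1).
    return subprocess.run(["hostname"], check=True, capture_output=True,
                          text=True).stdout.strip()

def set_current_hostname(name):
    # /etc/myname only takes effect at the next boot; hostname(1) changes
    # the hostname of the running system immediately.
    subprocess.run(["hostname", name], check=True)
```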
### Issue Type
Bug Report
### Component Name
hostname
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = None
configured module search path = ['/Users/rk/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/rk/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.3 (main, Apr 7 2023, 20:13:31) [Clang 14.0.0 (clang-1400.0.29.202)] (/opt/homebrew/Cellar/ansible/7.4.0/libexec/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_PIPELINING(/Users/rk/stack/vnode/vnode-infra/ansible.cfg) = True
COLLECTIONS_PATHS(/Users/rk/stack/vnode/vnode-infra/ansible.cfg) = ['/Users/rk/stack/vnode/vnode-infra']
CONFIG_FILE() = /Users/rk/stack/vnode/vnode-infra/ansible.cfg
DEFAULT_HOST_LIST(/Users/rk/stack/vnode/vnode-infra/ansible.cfg) = ['/Users/rk/stack/vnode/vnode-infra/inventory']
DEFAULT_MANAGED_STR(/Users/rk/stack/vnode/vnode-infra/ansible.cfg) = This file is managed via Ansible.%n
Any manual changes will be overwritten.
DEFAULT_ROLES_PATH(/Users/rk/stack/vnode/vnode-infra/ansible.cfg) = ['/Users/rk/stack/vnode/vnode-infra/roles']
CONNECTION:
==========
local:
_____
pipelining(/Users/rk/stack/vnode/vnode-infra/ansible.cfg) = True
psrp:
____
pipelining(/Users/rk/stack/vnode/vnode-infra/ansible.cfg) = True
ssh:
___
pipelining(/Users/rk/stack/vnode/vnode-infra/ansible.cfg) = True
winrm:
_____
pipelining(/Users/rk/stack/vnode/vnode-infra/ansible.cfg) = True
```
### OS / Environment
Ansible host: MacOS Ventura 13.3.1 (Macbook Pro M1 Max)
Target host: OpenBSD 7.2, python 3.9
### Steps to Reproduce
On the target host, starting with a non-conforming hostname.
```bash
$ cat /etc/myname
nloih0-test
$ hostname
nloih0-test
```
Using the following playbook:
```yaml
---
- name: Hostname is up-to-date
hosts: nloih0.vnode.net
become: True
become_method: doas
tasks:
- name: Hostname is up-to-date
ansible.builtin.hostname:
name: "{{ inventory_hostname }}"
```
```bash
$ ansible-playbook -v ./test.yml
```
### Expected Results
Post-run, I'd expect both the permanent and current hostname to be updated.
```bash
$ cat /etc/myname
nloih0.vnode.net
$ hostname
nloih0.vnode.net
```
### Actual Results
Results in following system state after ansible run:
```console
$ cat /etc/myname
nloih0.vnode.net
$ hostname
nloih0-test
```
Despite the `ansible.builtin.hostname` module reporting that it made changes:
```
changed: [nloih0.vnode.net] => {"ansible_facts": {"ansible_domain": "", "ansible_fqdn": "nloih0-test", "ansible_hostname": "nloih0", "ansible_nodename": "nloih0.vnode.net"}, "changed": true, "name": "nloih0.vnode.net"}
```
For completeness, I included the run output below. Initially, I put in the output for '-vvvv' but that seemed somewhat excessive. If you'd like further detail, please let me know.
```
% ansible-playbook -vv ./test.yml
ansible-playbook [core 2.14.4]
config file = /Users/rk/stack/vnode/vnode-infra/ansible.cfg
configured module search path = ['/Users/rk/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/rk/stack/vnode/vnode-infra
executable location = /opt/homebrew/bin/ansible-playbook
python version = 3.11.3 (main, Apr 7 2023, 20:13:31) [Clang 14.0.0 (clang-1400.0.29.202)] (/opt/homebrew/Cellar/ansible/7.4.0/libexec/bin/python3.11)
jinja version = 3.1.2
libyaml = True
Using /Users/rk/stack/vnode/vnode-infra/ansible.cfg as config file
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: test.yml ******************************************************************************************************************
1 plays in ./test.yml
PLAY [Hostname is up-to-date] *******************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************
task path: /Users/rk/stack/vnode/vnode-infra/test.yml:2
redirecting (type: become) ansible.builtin.doas to community.general.doas
[WARNING]: Platform openbsd on host nloih0.vnode.net is using the discovered Python interpreter at /usr/local/bin/python3.9, but
future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible-
core/2.14/reference_appendices/interpreter_discovery.html for more information.
ok: [nloih0.vnode.net]
TASK [Hostname is up-to-date] *******************************************************************************************************
task path: /Users/rk/stack/vnode/vnode-infra/test.yml:7
redirecting (type: become) ansible.builtin.doas to community.general.doas
changed: [nloih0.vnode.net] => {"ansible_facts": {"ansible_domain": "", "ansible_fqdn": "nloih0-test", "ansible_hostname": "nloih0", "ansible_nodename": "nloih0.vnode.net"}, "changed": true, "name": "nloih0.vnode.net"}
PLAY RECAP **************************************************************************************************************************
nloih0.vnode.net : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80520
|
https://github.com/ansible/ansible/pull/80521
|
2e62724a8a8f801af35943d266dd906e029e20d6
|
6aac0e2460985daac132541f643cf1256430e572
| 2023-04-13T21:16:25Z |
python
| 2023-04-14T14:41:44Z |
changelogs/fragments/80520-fix-current-hostname-openbsd.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,520 |
ansible.builtin.hostname does not update current hostname on OpenBSD
|
### Summary
Using the `ansible.builtin.hostname` module with an OpenBSD host only updates the permanent hostname, not the current hostname. In my opinion, it should update both, similar to the behaviour on other platforms.
Looking at the code, I suspect this is because the `OpenBSDStrategy` class in `modules/hostname.py` simply does not invoke the `hostname` command. Adding that would fix the issue.
### Issue Type
Bug Report
### Component Name
hostname
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = None
configured module search path = ['/Users/rk/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/rk/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.3 (main, Apr 7 2023, 20:13:31) [Clang 14.0.0 (clang-1400.0.29.202)] (/opt/homebrew/Cellar/ansible/7.4.0/libexec/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_PIPELINING(/Users/rk/stack/vnode/vnode-infra/ansible.cfg) = True
COLLECTIONS_PATHS(/Users/rk/stack/vnode/vnode-infra/ansible.cfg) = ['/Users/rk/stack/vnode/vnode-infra']
CONFIG_FILE() = /Users/rk/stack/vnode/vnode-infra/ansible.cfg
DEFAULT_HOST_LIST(/Users/rk/stack/vnode/vnode-infra/ansible.cfg) = ['/Users/rk/stack/vnode/vnode-infra/inventory']
DEFAULT_MANAGED_STR(/Users/rk/stack/vnode/vnode-infra/ansible.cfg) = This file is managed via Ansible.%n
Any manual changes will be overwritten.
DEFAULT_ROLES_PATH(/Users/rk/stack/vnode/vnode-infra/ansible.cfg) = ['/Users/rk/stack/vnode/vnode-infra/roles']
CONNECTION:
==========
local:
_____
pipelining(/Users/rk/stack/vnode/vnode-infra/ansible.cfg) = True
psrp:
____
pipelining(/Users/rk/stack/vnode/vnode-infra/ansible.cfg) = True
ssh:
___
pipelining(/Users/rk/stack/vnode/vnode-infra/ansible.cfg) = True
winrm:
_____
pipelining(/Users/rk/stack/vnode/vnode-infra/ansible.cfg) = True
```
### OS / Environment
Ansible host: MacOS Ventura 13.3.1 (Macbook Pro M1 Max)
Target host: OpenBSD 7.2, python 3.9
### Steps to Reproduce
On the target host, starting with a non-conforming hostname.
```bash
$ cat /etc/myname
nloih0-test
$ hostname
nloih0-test
```
Using the following playbook:
```yaml
---
- name: Hostname is up-to-date
hosts: nloih0.vnode.net
become: True
become_method: doas
tasks:
- name: Hostname is up-to-date
ansible.builtin.hostname:
name: "{{ inventory_hostname }}"
```
```bash
$ ansible-playbook -v ./test.yml
```
### Expected Results
Post-run, I'd expect both the permanent and current hostname to be updated.
```bash
$ cat /etc/myname
nloih0.vnode.net
$ hostname
nloih0.vnode.net
```
### Actual Results
Results in following system state after ansible run:
```console
$ cat /etc/myname
nloih0.vnode.net
$ hostname
nloih0-test
```
Despite the `ansible.builtin.hostname` module reporting that it made changes:
```
changed: [nloih0.vnode.net] => {"ansible_facts": {"ansible_domain": "", "ansible_fqdn": "nloih0-test", "ansible_hostname": "nloih0", "ansible_nodename": "nloih0.vnode.net"}, "changed": true, "name": "nloih0.vnode.net"}
```
For completeness, I included the run output below. Initially, I put in the output for '-vvvv' but that seemed somewhat excessive. If you'd like further detail, please let me know.
```
% ansible-playbook -vv ./test.yml
ansible-playbook [core 2.14.4]
config file = /Users/rk/stack/vnode/vnode-infra/ansible.cfg
configured module search path = ['/Users/rk/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/rk/stack/vnode/vnode-infra
executable location = /opt/homebrew/bin/ansible-playbook
python version = 3.11.3 (main, Apr 7 2023, 20:13:31) [Clang 14.0.0 (clang-1400.0.29.202)] (/opt/homebrew/Cellar/ansible/7.4.0/libexec/bin/python3.11)
jinja version = 3.1.2
libyaml = True
Using /Users/rk/stack/vnode/vnode-infra/ansible.cfg as config file
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: test.yml ******************************************************************************************************************
1 plays in ./test.yml
PLAY [Hostname is up-to-date] *******************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************
task path: /Users/rk/stack/vnode/vnode-infra/test.yml:2
redirecting (type: become) ansible.builtin.doas to community.general.doas
[WARNING]: Platform openbsd on host nloih0.vnode.net is using the discovered Python interpreter at /usr/local/bin/python3.9, but
future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible-
core/2.14/reference_appendices/interpreter_discovery.html for more information.
ok: [nloih0.vnode.net]
TASK [Hostname is up-to-date] *******************************************************************************************************
task path: /Users/rk/stack/vnode/vnode-infra/test.yml:7
redirecting (type: become) ansible.builtin.doas to community.general.doas
changed: [nloih0.vnode.net] => {"ansible_facts": {"ansible_domain": "", "ansible_fqdn": "nloih0-test", "ansible_hostname": "nloih0", "ansible_nodename": "nloih0.vnode.net"}, "changed": true, "name": "nloih0.vnode.net"}
PLAY RECAP **************************************************************************************************************************
nloih0.vnode.net : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80520
|
https://github.com/ansible/ansible/pull/80521
|
2e62724a8a8f801af35943d266dd906e029e20d6
|
6aac0e2460985daac132541f643cf1256430e572
| 2023-04-13T21:16:25Z |
python
| 2023-04-14T14:41:44Z |
lib/ansible/modules/hostname.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2013, Hiroaki Nakamura <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: hostname
author:
- Adrian Likins (@alikins)
- Hideki Saito (@saito-hideki)
version_added: "1.4"
short_description: Manage hostname
requirements: [ hostname ]
description:
- Set system's hostname. Supports most OSs/Distributions including those using C(systemd).
- Windows, HP-UX, and AIX are not currently supported.
notes:
- This module does B(NOT) modify C(/etc/hosts). You need to modify it yourself using other modules such as M(ansible.builtin.template)
or M(ansible.builtin.replace).
- On macOS, this module uses C(scutil) to set C(HostName), C(ComputerName), and C(LocalHostName). Since C(LocalHostName)
cannot contain spaces or most special characters, this module will replace characters when setting C(LocalHostName).
options:
name:
description:
- Name of the host.
- If the value is a fully qualified domain name that does not resolve from the given host,
this will cause the module to hang for a few seconds while waiting for the name resolution attempt to timeout.
type: str
required: true
use:
description:
- Which strategy to use to update the hostname.
- If not set we try to autodetect, but this can be problematic, particularly with containers as they can present misleading information.
- Note that 'systemd' should be specified for RHEL/EL/CentOS 7+. Older distributions should use 'redhat'.
choices: ['alpine', 'debian', 'freebsd', 'generic', 'macos', 'macosx', 'darwin', 'openbsd', 'openrc', 'redhat', 'sles', 'solaris', 'systemd']
type: str
version_added: '2.9'
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.facts
attributes:
check_mode:
support: full
diff_mode:
support: full
facts:
support: full
platform:
platforms: posix
'''
EXAMPLES = '''
- name: Set a hostname
ansible.builtin.hostname:
name: web01
- name: Set a hostname specifying strategy
ansible.builtin.hostname:
name: web01
use: systemd
'''
import os
import platform
import socket
import traceback
import ansible.module_utils.compat.typing as t
from ansible.module_utils.basic import (
AnsibleModule,
get_distribution,
get_distribution_version,
)
from ansible.module_utils.common.sys_info import get_platform_subclass
from ansible.module_utils.facts.system.service_mgr import ServiceMgrFactCollector
from ansible.module_utils.facts.utils import get_file_lines, get_file_content
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.six import PY3, text_type
STRATS = {
'alpine': 'Alpine',
'debian': 'Systemd',
'freebsd': 'FreeBSD',
'generic': 'Base',
'macos': 'Darwin',
'macosx': 'Darwin',
'darwin': 'Darwin',
'openbsd': 'OpenBSD',
'openrc': 'OpenRC',
'redhat': 'RedHat',
'sles': 'SLES',
'solaris': 'Solaris',
'systemd': 'Systemd',
}
class BaseStrategy(object):
def __init__(self, module):
self.module = module
self.changed = False
def update_current_and_permanent_hostname(self):
self.update_current_hostname()
self.update_permanent_hostname()
return self.changed
def update_current_hostname(self):
name = self.module.params['name']
current_name = self.get_current_hostname()
if current_name != name:
if not self.module.check_mode:
self.set_current_hostname(name)
self.changed = True
def update_permanent_hostname(self):
name = self.module.params['name']
permanent_name = self.get_permanent_hostname()
if permanent_name != name:
if not self.module.check_mode:
self.set_permanent_hostname(name)
self.changed = True
def get_current_hostname(self):
return self.get_permanent_hostname()
def set_current_hostname(self, name):
pass
def get_permanent_hostname(self):
raise NotImplementedError
def set_permanent_hostname(self, name):
raise NotImplementedError
class UnimplementedStrategy(BaseStrategy):
def update_current_and_permanent_hostname(self):
self.unimplemented_error()
def update_current_hostname(self):
self.unimplemented_error()
def update_permanent_hostname(self):
self.unimplemented_error()
def get_current_hostname(self):
self.unimplemented_error()
def set_current_hostname(self, name):
self.unimplemented_error()
def get_permanent_hostname(self):
self.unimplemented_error()
def set_permanent_hostname(self, name):
self.unimplemented_error()
def unimplemented_error(self):
system = platform.system()
distribution = get_distribution()
if distribution is not None:
msg_platform = '%s (%s)' % (system, distribution)
else:
msg_platform = system
self.module.fail_json(
msg='hostname module cannot be used on platform %s' % msg_platform)
class CommandStrategy(BaseStrategy):
COMMAND = 'hostname'
def __init__(self, module):
super(CommandStrategy, self).__init__(module)
self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
def get_current_hostname(self):
cmd = [self.hostname_cmd]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
return 'UNKNOWN'
def set_permanent_hostname(self, name):
pass
class FileStrategy(BaseStrategy):
FILE = '/etc/hostname'
def get_permanent_hostname(self):
if not os.path.isfile(self.FILE):
return ''
try:
return get_file_content(self.FILE, default='', strip=True)
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
with open(self.FILE, 'w+') as f:
f.write("%s\n" % name)
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class SLESStrategy(FileStrategy):
"""
This is a SLES Hostname strategy class - it edits the
/etc/HOSTNAME file.
"""
FILE = '/etc/HOSTNAME'
class RedHatStrategy(BaseStrategy):
"""
This is a Redhat Hostname strategy class - it edits the
/etc/sysconfig/network file.
"""
NETWORK_FILE = '/etc/sysconfig/network'
def get_permanent_hostname(self):
try:
for line in get_file_lines(self.NETWORK_FILE):
line = to_native(line).strip()
if line.startswith('HOSTNAME'):
k, v = line.split('=')
return v.strip()
self.module.fail_json(
"Unable to locate HOSTNAME entry in %s" % self.NETWORK_FILE
)
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
lines = []
found = False
content = get_file_content(self.NETWORK_FILE, strip=False) or ""
for line in content.splitlines(True):
line = to_native(line)
if line.strip().startswith('HOSTNAME'):
lines.append("HOSTNAME=%s\n" % name)
found = True
else:
lines.append(line)
if not found:
lines.append("HOSTNAME=%s\n" % name)
with open(self.NETWORK_FILE, 'w+') as f:
f.writelines(lines)
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class AlpineStrategy(FileStrategy):
"""
This is an Alpine Linux Hostname manipulation strategy class - it edits
the /etc/hostname file, then runs hostname -F /etc/hostname.
"""
FILE = '/etc/hostname'
COMMAND = 'hostname'
def set_current_hostname(self, name):
super(AlpineStrategy, self).set_current_hostname(name)
hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
cmd = [hostname_cmd, '-F', self.FILE]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class SystemdStrategy(BaseStrategy):
"""
This is a Systemd hostname manipulation strategy class - it uses
the hostnamectl command.
"""
COMMAND = "hostnamectl"
def __init__(self, module):
super(SystemdStrategy, self).__init__(module)
self.hostnamectl_cmd = self.module.get_bin_path(self.COMMAND, True)
def get_current_hostname(self):
cmd = [self.hostnamectl_cmd, '--transient', 'status']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = [self.hostnamectl_cmd, '--transient', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
cmd = [self.hostnamectl_cmd, '--static', 'status']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = [self.hostnamectl_cmd, '--pretty', '--static', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def update_current_and_permanent_hostname(self):
# Must set the permanent hostname prior to current to avoid NetworkManager complaints
# about setting the hostname outside of NetworkManager
self.update_permanent_hostname()
self.update_current_hostname()
return self.changed
class OpenRCStrategy(BaseStrategy):
"""
This is a Gentoo (OpenRC) Hostname manipulation strategy class - it edits
the /etc/conf.d/hostname file.
"""
FILE = '/etc/conf.d/hostname'
def get_permanent_hostname(self):
if not os.path.isfile(self.FILE):
return ''
try:
for line in get_file_lines(self.FILE):
line = line.strip()
if line.startswith('hostname='):
return line[10:].strip('"')
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
lines = [x.strip() for x in get_file_lines(self.FILE)]
for i, line in enumerate(lines):
if line.startswith('hostname='):
lines[i] = 'hostname="%s"' % name
break
with open(self.FILE, 'w') as f:
f.write('\n'.join(lines) + '\n')
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class OpenBSDStrategy(FileStrategy):
"""
This is an OpenBSD family Hostname manipulation strategy class - it edits
the /etc/myname file.
"""
FILE = '/etc/myname'
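# NOTE: illustration only, not part of this file at the referenced commit. Per the
# issue above, the running hostname could also be updated by overriding
# set_current_hostname() here to call the hostname(1) binary, e.g. via
# self.module.get_bin_path('hostname', True) and
# self.module.run_command([hostname_cmd, name]), mirroring CommandStrategy above.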
class SolarisStrategy(BaseStrategy):
"""
This is a Solaris 11 or later Hostname manipulation strategy class - it
executes the hostname command.
"""
COMMAND = "hostname"
def __init__(self, module):
super(SolarisStrategy, self).__init__(module)
self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
def set_current_hostname(self, name):
cmd_option = '-t'
cmd = [self.hostname_cmd, cmd_option, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
fmri = 'svc:/system/identity:node'
pattern = 'config/nodename'
cmd = '/usr/sbin/svccfg -s %s listprop -o value %s' % (fmri, pattern)
rc, out, err = self.module.run_command(cmd, use_unsafe_shell=True)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class FreeBSDStrategy(BaseStrategy):
"""
This is a FreeBSD hostname manipulation strategy class - it edits
the /etc/rc.conf.d/hostname file.
"""
FILE = '/etc/rc.conf.d/hostname'
COMMAND = "hostname"
def __init__(self, module):
super(FreeBSDStrategy, self).__init__(module)
self.hostname_cmd = self.module.get_bin_path(self.COMMAND, True)
def get_current_hostname(self):
cmd = [self.hostname_cmd]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
if not os.path.isfile(self.FILE):
return ''
try:
for line in get_file_lines(self.FILE):
line = line.strip()
if line.startswith('hostname='):
return line[10:].strip('"')
except Exception as e:
self.module.fail_json(
msg="failed to read hostname: %s" % to_native(e),
exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
if os.path.isfile(self.FILE):
lines = [x.strip() for x in get_file_lines(self.FILE)]
for i, line in enumerate(lines):
if line.startswith('hostname='):
lines[i] = 'hostname="%s"' % name
break
else:
lines = ['hostname="%s"' % name]
with open(self.FILE, 'w') as f:
f.write('\n'.join(lines) + '\n')
except Exception as e:
self.module.fail_json(
msg="failed to update hostname: %s" % to_native(e),
exception=traceback.format_exc())
class DarwinStrategy(BaseStrategy):
"""
This is a macOS hostname manipulation strategy class. It uses
/usr/sbin/scutil to set ComputerName, HostName, and LocalHostName.
HostName corresponds to what most platforms consider to be hostname.
It controls the name used on the command line and SSH.
However, macOS also has LocalHostName and ComputerName settings.
LocalHostName controls the Bonjour/ZeroConf name, used by services
like AirDrop. This class implements a method, _scrub_hostname(), that mimics
the transformations macOS makes on hostnames when entered in the Sharing
preference pane. It replaces spaces with dashes and removes all special
characters.
ComputerName is the name used for user-facing GUI services, like the
System Preferences/Sharing pane and when users connect to the Mac over the network.
"""
def __init__(self, module):
super(DarwinStrategy, self).__init__(module)
self.scutil = self.module.get_bin_path('scutil', True)
self.name_types = ('HostName', 'ComputerName', 'LocalHostName')
self.scrubbed_name = self._scrub_hostname(self.module.params['name'])
def _make_translation(self, replace_chars, replacement_chars, delete_chars):
if PY3:
return str.maketrans(replace_chars, replacement_chars, delete_chars)
if not isinstance(replace_chars, text_type) or not isinstance(replacement_chars, text_type):
raise ValueError('replace_chars and replacement_chars must both be strings')
if len(replace_chars) != len(replacement_chars):
raise ValueError('replacement_chars must be the same length as replace_chars')
table = dict(zip((ord(c) for c in replace_chars), replacement_chars))
for char in delete_chars:
table[ord(char)] = None
return table
def _scrub_hostname(self, name):
"""
LocalHostName only accepts valid DNS characters while HostName and ComputerName
accept a much wider range of characters. This function aims to mimic how macOS
translates a friendly name to the LocalHostName.
"""
# Replace all these characters with a single dash
name = to_text(name)
replace_chars = u'\'"~`!@#$%^&*(){}[]/=?+\\|-_ '
delete_chars = u".'"
table = self._make_translation(replace_chars, u'-' * len(replace_chars), delete_chars)
name = name.translate(table)
# Replace multiple dashes with a single dash
while '-' * 2 in name:
name = name.replace('-' * 2, '')
name = name.rstrip('-')
return name
def get_current_hostname(self):
cmd = [self.scutil, '--get', 'HostName']
rc, out, err = self.module.run_command(cmd)
if rc != 0 and 'HostName: not set' not in err:
self.module.fail_json(msg="Failed to get current hostname rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def get_permanent_hostname(self):
cmd = [self.scutil, '--get', 'ComputerName']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Failed to get permanent hostname rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
for hostname_type in self.name_types:
cmd = [self.scutil, '--set', hostname_type]
if hostname_type == 'LocalHostName':
cmd.append(to_native(self.scrubbed_name))
else:
cmd.append(to_native(name))
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Failed to set {3} to '{2}': {0} {1}".format(to_native(out), to_native(err), to_native(name), hostname_type))
def set_current_hostname(self, name):
pass
def update_current_hostname(self):
pass
def update_permanent_hostname(self):
name = self.module.params['name']
# Get all the current host name values in the order of self.name_types
all_names = tuple(self.module.run_command([self.scutil, '--get', name_type])[1].strip() for name_type in self.name_types)
# Get the expected host name values based on the order in self.name_types
expected_names = tuple(self.scrubbed_name if n == 'LocalHostName' else name for n in self.name_types)
# Ensure all three names are updated
if all_names != expected_names:
if not self.module.check_mode:
self.set_permanent_hostname(name)
self.changed = True
class Hostname(object):
"""
This is a generic Hostname manipulation class that is subclassed
based on platform.
A subclass may wish to set different strategy instance to self.strategy.
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None # type: str | None
strategy_class = UnimplementedStrategy # type: t.Type[BaseStrategy]
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(Hostname)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.name = module.params['name']
self.use = module.params['use']
if self.use is not None:
strat = globals()['%sStrategy' % STRATS[self.use]]
self.strategy = strat(module)
elif platform.system() == 'Linux' and ServiceMgrFactCollector.is_systemd_managed(module):
# This is Linux and systemd is active
self.strategy = SystemdStrategy(module)
else:
self.strategy = self.strategy_class(module)
def update_current_and_permanent_hostname(self):
return self.strategy.update_current_and_permanent_hostname()
def get_current_hostname(self):
return self.strategy.get_current_hostname()
def set_current_hostname(self, name):
self.strategy.set_current_hostname(name)
def get_permanent_hostname(self):
return self.strategy.get_permanent_hostname()
def set_permanent_hostname(self, name):
self.strategy.set_permanent_hostname(name)
class SLESHostname(Hostname):
platform = 'Linux'
distribution = 'Sles'
try:
distribution_version = get_distribution_version()
# cast to float may raise ValueError on non SLES, we use float for a little more safety over int
if distribution_version and 10 <= float(distribution_version) <= 12:
strategy_class = SLESStrategy # type: t.Type[BaseStrategy]
else:
raise ValueError()
except ValueError:
strategy_class = UnimplementedStrategy
class RHELHostname(Hostname):
platform = 'Linux'
distribution = 'Redhat'
strategy_class = RedHatStrategy
class CentOSHostname(Hostname):
platform = 'Linux'
distribution = 'Centos'
strategy_class = RedHatStrategy
class AnolisOSHostname(Hostname):
platform = 'Linux'
distribution = 'Anolis'
strategy_class = RedHatStrategy
class CloudlinuxserverHostname(Hostname):
platform = 'Linux'
distribution = 'Cloudlinuxserver'
strategy_class = RedHatStrategy
class CloudlinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Cloudlinux'
strategy_class = RedHatStrategy
class AlinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Alinux'
strategy_class = RedHatStrategy
class ScientificHostname(Hostname):
platform = 'Linux'
distribution = 'Scientific'
strategy_class = RedHatStrategy
class OracleLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Oracle'
strategy_class = RedHatStrategy
class VirtuozzoLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Virtuozzo'
strategy_class = RedHatStrategy
class AmazonLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Amazon'
strategy_class = RedHatStrategy
class DebianHostname(Hostname):
platform = 'Linux'
distribution = 'Debian'
strategy_class = FileStrategy
class KylinHostname(Hostname):
platform = 'Linux'
distribution = 'Kylin'
strategy_class = FileStrategy
class CumulusHostname(Hostname):
platform = 'Linux'
distribution = 'Cumulus-linux'
strategy_class = FileStrategy
class KaliHostname(Hostname):
platform = 'Linux'
distribution = 'Kali'
strategy_class = FileStrategy
class ParrotHostname(Hostname):
platform = 'Linux'
distribution = 'Parrot'
strategy_class = FileStrategy
class UbuntuHostname(Hostname):
platform = 'Linux'
distribution = 'Ubuntu'
strategy_class = FileStrategy
class LinuxmintHostname(Hostname):
platform = 'Linux'
distribution = 'Linuxmint'
strategy_class = FileStrategy
class LinaroHostname(Hostname):
platform = 'Linux'
distribution = 'Linaro'
strategy_class = FileStrategy
class DevuanHostname(Hostname):
platform = 'Linux'
distribution = 'Devuan'
strategy_class = FileStrategy
class RaspbianHostname(Hostname):
platform = 'Linux'
distribution = 'Raspbian'
strategy_class = FileStrategy
class UosHostname(Hostname):
platform = 'Linux'
distribution = 'Uos'
strategy_class = FileStrategy
class DeepinHostname(Hostname):
platform = 'Linux'
distribution = 'Deepin'
strategy_class = FileStrategy
class GentooHostname(Hostname):
platform = 'Linux'
distribution = 'Gentoo'
strategy_class = OpenRCStrategy
class ALTLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Altlinux'
strategy_class = RedHatStrategy
class AlpineLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Alpine'
strategy_class = AlpineStrategy
class OpenBSDHostname(Hostname):
platform = 'OpenBSD'
distribution = None
strategy_class = OpenBSDStrategy
class SolarisHostname(Hostname):
platform = 'SunOS'
distribution = None
strategy_class = SolarisStrategy
class FreeBSDHostname(Hostname):
platform = 'FreeBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NetBSDHostname(Hostname):
platform = 'NetBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NeonHostname(Hostname):
platform = 'Linux'
distribution = 'Neon'
strategy_class = FileStrategy
class DarwinHostname(Hostname):
platform = 'Darwin'
distribution = None
strategy_class = DarwinStrategy
class VoidLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Void'
strategy_class = FileStrategy
class PopHostname(Hostname):
platform = 'Linux'
distribution = 'Pop'
strategy_class = FileStrategy
class EurolinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Eurolinux'
strategy_class = RedHatStrategy
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str', required=True),
use=dict(type='str', choices=list(STRATS.keys()))
),
supports_check_mode=True,
)
hostname = Hostname(module)
name = module.params['name']
current_hostname = hostname.get_current_hostname()
permanent_hostname = hostname.get_permanent_hostname()
changed = hostname.update_current_and_permanent_hostname()
if name != current_hostname:
name_before = current_hostname
elif name != permanent_hostname:
name_before = permanent_hostname
else:
name_before = permanent_hostname
# NOTE: socket.getfqdn() calls gethostbyaddr(socket.gethostname()), which can be
# slow to return if the name does not resolve correctly.
kw = dict(changed=changed, name=name,
ansible_facts=dict(ansible_hostname=name.split('.')[0],
ansible_nodename=name,
ansible_fqdn=socket.getfqdn(),
ansible_domain='.'.join(socket.getfqdn().split('.')[1:])))
if changed:
kw['diff'] = {'after': 'hostname = ' + name + '\n',
'before': 'hostname = ' + name_before + '\n'}
module.exit_json(**kw)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,348 |
ansible-galaxy collection install fails with `dir` type: "No such file or directory"
|
### Summary
```sh
ansible-galaxy install -r requirements.yml
```
doesn't work when pointing at a collection root directory
```yaml
collections:
- name: widespot.group_yaml_inventory
source: ../
type: dir
```
I get an error
> ERROR! Unexpected Exception, this is probably a bug: [Errno 2] No such file or directory: b'../ansible-group-yaml-inventory/est/README.md'
but it works with a build phase, and later pointing at the tarball
```
ansible-galaxy collection build --output-path ../build ../
```
```yaml
collections:
- name: widespot.group_yaml_inventory
source: ../build/widespot-group_yaml_inventory-0.1.1.tar.gz
type: file
```
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.1]
config file = /Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/ansible.cfg
configured module search path = ['/Users/raphaeljoie/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/6.0.0/libexec/lib/python3.10/site-packages/ansible
ansible collection location = /Users/raphaeljoie/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.10.5 (main, Jun 23 2022, 17:14:57) [Clang 13.1.6 (clang-1316.0.21.2.5)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
DEFAULT_HOST_LIST(/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/ansible.cfg) = ['/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/inventory.yml']
INVENTORY_ENABLED(/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/ansible.cfg) = ['widespot.group_yaml_inventory.group_yaml']
```
### OS / Environment
OSX 12.0.1, on Macbook Silicon M1
### Steps to Reproduce
1. clone [this repo](https://github.com/widespot/ansible-group-yaml-inventory/tree/cfbbd80d276a887027b49d4b0807b116e23b4d92) (mind the commit)
2. `cd test`
3. execute the commands listed in `test/README.md`
```sh
mkdir ../build
ansible-galaxy collection build --force --output-path ../build ../
ansible-galaxy install -r --force requirements.yml
ansible-inventory --list
```
=> working
4. change requirements.yml: uncomment the lines related to directory import
```yaml
#source: ../
#type: dir
```
5. re-try force install
```sh
ansible-galaxy install -r --force requirements.yml
```
=> fail
## Important investigation note
`ansible-galaxy` collection build and install both seem to generate a `FILES.json` file.
* When installing via tarball, the paths in that file are OK
* When installing via a relative directory path, all the paths in that file are truncated (the first character is missing)
```
$ cat $HOME/.ansible/collections/ansible_collections/widespot/group_yaml_inventory/FILES.json
...
{
"name": "lugins/README.md",
"ftype": "file",
"chksum_type": "sha256",
"chksum_sha256": "23e9939164cad964c2338b8059e4d3def72eef9523e32594503efd50960fcae4",
"format": 1
},
...
```
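The truncated names look like an off-by-one in how the collection root prefix is stripped from each member path when the source directory is given with a trailing separator (as `../` is). The snippet below is only a minimal sketch of that suspected behaviour; the helper name and paths are made up for illustration and are not the actual ansible-galaxy code:

```python
def naive_rel_name(file_path, collection_root):
    # Hypothetical prefix stripping: drop the root plus one separator character.
    # If collection_root already ends with a separator (as '../' does), this
    # removes one character too many from the remaining name.
    return file_path[len(collection_root) + 1:]

member = '/work/ansible-group-yaml-inventory/plugins/README.md'
print(naive_rel_name(member, '/work/ansible-group-yaml-inventory'))   # plugins/README.md
print(naive_rel_name(member, '/work/ansible-group-yaml-inventory/'))  # lugins/README.md
```

The tarball flow never hands the code a root with a trailing separator, which would explain why only the `dir` install produces the mangled `FILES.json` entries.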
### Expected Results
I expect installing from a `dir` source to work the same as installing from a `file` (tarball) source.
### Actual Results
```console
ansible-galaxy [core 2.13.2]
config file = /Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/ansible.cfg
configured module search path = ['/Users/raphaeljoie/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible
ansible collection location = /Users/raphaeljoie/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/bin/ansible-galaxy
python version = 3.9.12 (main, May 8 2022, 17:57:49) [Clang 13.1.6 (clang-1316.0.21.2)]
jinja version = 3.1.2
libyaml = True
Using /Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/ansible.cfg as config file
Reading requirement file at '/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/requirements.yml'
Starting galaxy collection install process
Found installed collection widespot.group_yaml_inventory:0.1.1 at '/Users/raphaeljoie/.ansible/collections/ansible_collections/widespot/group_yaml_inventory'
Process install dependency map
Starting collection install process
Installing 'widespot.group_yaml_inventory:0.1.1' to '/Users/raphaeljoie/.ansible/collections/ansible_collections/widespot/group_yaml_inventory'
Skipping '../venv' for collection build
Skipping '../.git' for collection build
Skipping '../galaxy.yml' for collection build
ERROR! Unexpected Exception, this is probably a bug: [Errno 2] No such file or directory: b'../est/README.md'
the full traceback was:
Traceback (most recent call last):
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/cli/__init__.py", line 601, in cli_executor
exit_code = cli.run()
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 647, in run
return context.CLIARGS['func']()
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 102, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 1297, in execute_install
self._execute_install_collection(
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 1325, in _execute_install_collection
install_collections(
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 745, in install_collections
install(concrete_coll_pin, output_path, artifacts_manager)
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1308, in install
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1446, in install_src
collection_output_path = _build_collection_dir(
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1234, in _build_collection_dir
existing_is_exec = os.stat(src_file).st_mode & stat.S_IXUSR
FileNotFoundError: [Errno 2] No such file or directory: b'../est/README.md'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78348
|
https://github.com/ansible/ansible/pull/79110
|
676b731e6f7d60ce6fd48c0d1c883fc85f5c6537
|
964e678a7fa3b0745f9302e7a3682851089d09d2
| 2022-07-25T22:25:47Z |
python
| 2023-04-17T19:24:55Z |
changelogs/fragments/a-g-col-install-directory-with-trailing-sep.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,348 |
ansible-galaxy collection install fails with `dir` type: "No such file or directory"
|
### Summary
```sh
ansible-galaxy install -r requirements.yml
```
doesn't work when pointing at a collection root directory
```yaml
collections:
- name: widespot.group_yaml_inventory
source: ../
type: dir
```
I get an error
> ERROR! Unexpected Exception, this is probably a bug: [Errno 2] No such file or directory: b'../ansible-group-yaml-inventory/est/README.md'
but it works if I run a build phase first and then point at the resulting tarball
```
ansible-galaxy collection build --output-path ../build ../
```
```yaml
collections:
- name: widespot.group_yaml_inventory
source: ../build/widespot-group_yaml_inventory-0.1.1.tar.gz
type: file
```
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.1]
config file = /Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/ansible.cfg
configured module search path = ['/Users/raphaeljoie/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/6.0.0/libexec/lib/python3.10/site-packages/ansible
ansible collection location = /Users/raphaeljoie/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.10.5 (main, Jun 23 2022, 17:14:57) [Clang 13.1.6 (clang-1316.0.21.2.5)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
DEFAULT_HOST_LIST(/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/ansible.cfg) = ['/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/inventory.yml']
INVENTORY_ENABLED(/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/ansible.cfg) = ['widespot.group_yaml_inventory.group_yaml']
```
### OS / Environment
OSX 12.0.1, on Macbook Silicon M1
### Steps to Reproduce
1. clone [this repo](https://github.com/widespot/ansible-group-yaml-inventory/tree/cfbbd80d276a887027b49d4b0807b116e23b4d92) (mind the commit)
2. `cd test`
3. execute the commands listed in `test/README.md`
```sh
mkdir ../build
ansible-galaxy collection build --force --output-path ../build ../
ansible-galaxy install -r --force requirements.yml
ansible-inventory --list
```
=> working
4. change requirements.yml: uncomment the lines related to directory import
```yaml
#source: ../
#type: dir
```
5. re-try force install
```sh
ansible-galaxy install -r --force requirements.yml
```
=> fail
## Important investigation note
`ansible-galaxy` collection build and install both seem to generate a `FILES.json` file.
* When installing via tarball, the paths in that file are OK
* When installing via a relative directory path, all the paths in that file are truncated (the first character is missing)
```
$ cat $HOME/.ansible/collections/ansible_collections/widespot/group_yaml_inventory/FILES.json
...
{
"name": "lugins/README.md",
"ftype": "file",
"chksum_type": "sha256",
"chksum_sha256": "23e9939164cad964c2338b8059e4d3def72eef9523e32594503efd50960fcae4",
"format": 1
},
...
```
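As a point of comparison, prefix stripping can be made insensitive to a trailing separator by normalising the root first. This is just an illustration of the idea, not the actual fix applied to ansible-galaxy:

```python
import os

def rel_name(file_path, collection_root):
    # normpath() removes any trailing separator, so the relative name keeps
    # its first character whether or not the root ends with '/'.
    return os.path.relpath(file_path, os.path.normpath(collection_root))

member = '/work/ansible-group-yaml-inventory/plugins/README.md'
for root in ('/work/ansible-group-yaml-inventory', '/work/ansible-group-yaml-inventory/'):
    print(rel_name(member, root))  # plugins/README.md in both cases
```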
### Expected Results
I expect installing from a `dir` source to work the same as installing from a `file` (tarball) source.
### Actual Results
```console
ansible-galaxy [core 2.13.2]
config file = /Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/ansible.cfg
configured module search path = ['/Users/raphaeljoie/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible
ansible collection location = /Users/raphaeljoie/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/bin/ansible-galaxy
python version = 3.9.12 (main, May 8 2022, 17:57:49) [Clang 13.1.6 (clang-1316.0.21.2)]
jinja version = 3.1.2
libyaml = True
Using /Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/ansible.cfg as config file
Reading requirement file at '/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/requirements.yml'
Starting galaxy collection install process
Found installed collection widespot.group_yaml_inventory:0.1.1 at '/Users/raphaeljoie/.ansible/collections/ansible_collections/widespot/group_yaml_inventory'
Process install dependency map
Starting collection install process
Installing 'widespot.group_yaml_inventory:0.1.1' to '/Users/raphaeljoie/.ansible/collections/ansible_collections/widespot/group_yaml_inventory'
Skipping '../venv' for collection build
Skipping '../.git' for collection build
Skipping '../galaxy.yml' for collection build
ERROR! Unexpected Exception, this is probably a bug: [Errno 2] No such file or directory: b'../est/README.md'
the full traceback was:
Traceback (most recent call last):
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/cli/__init__.py", line 601, in cli_executor
exit_code = cli.run()
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 647, in run
return context.CLIARGS['func']()
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 102, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 1297, in execute_install
self._execute_install_collection(
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 1325, in _execute_install_collection
install_collections(
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 745, in install_collections
install(concrete_coll_pin, output_path, artifacts_manager)
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1308, in install
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1446, in install_src
collection_output_path = _build_collection_dir(
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1234, in _build_collection_dir
existing_is_exec = os.stat(src_file).st_mode & stat.S_IXUSR
FileNotFoundError: [Errno 2] No such file or directory: b'../est/README.md'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78348
|
https://github.com/ansible/ansible/pull/79110
|
676b731e6f7d60ce6fd48c0d1c883fc85f5c6537
|
964e678a7fa3b0745f9302e7a3682851089d09d2
| 2022-07-25T22:25:47Z |
python
| 2023-04-17T19:24:55Z |
lib/ansible/galaxy/dependency_resolution/dataclasses.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2020-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Dependency structs."""
# FIXME: add caching all over the place
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import typing as t
from collections import namedtuple
from collections.abc import MutableSequence, MutableMapping
from glob import iglob
from urllib.parse import urlparse
from yaml import safe_load
if t.TYPE_CHECKING:
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
Collection = t.TypeVar(
'Collection',
'Candidate', 'Requirement',
'_ComputedReqKindsMixin',
)
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection import HAS_PACKAGING, PkgReq
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.arg_spec import ArgumentSpecValidator
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
_ALLOW_CONCRETE_POINTER_IN_SOURCE = False # NOTE: This is a feature flag
_GALAXY_YAML = b'galaxy.yml'
_MANIFEST_JSON = b'MANIFEST.json'
_SOURCE_METADATA_FILE = b'GALAXY.yml'
display = Display()
def get_validated_source_info(b_source_info_path, namespace, name, version):
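    # Read the GALAXY.yml source-info file stored next to an installed collection
    # (see construct_galaxy_info_path below) and return its contents only if they pass
    # the v1 schema check; any problem is reported as a warning and None is returned.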
source_info_path = to_text(b_source_info_path, errors='surrogate_or_strict')
if not os.path.isfile(b_source_info_path):
return None
try:
with open(b_source_info_path, mode='rb') as fd:
metadata = safe_load(fd)
except OSError as e:
display.warning(
f"Error getting collection source information at '{source_info_path}': {to_text(e, errors='surrogate_or_strict')}"
)
return None
if not isinstance(metadata, MutableMapping):
display.warning(f"Error getting collection source information at '{source_info_path}': expected a YAML dictionary")
return None
schema_errors = _validate_v1_source_info_schema(namespace, name, version, metadata)
if schema_errors:
display.warning(f"Ignoring source metadata file at {source_info_path} due to the following errors:")
display.warning("\n".join(schema_errors))
display.warning("Correct the source metadata file by reinstalling the collection.")
return None
return metadata
def _validate_v1_source_info_schema(namespace, name, version, provided_arguments):
argument_spec_data = dict(
format_version=dict(choices=["1.0.0"]),
download_url=dict(),
version_url=dict(),
server=dict(),
signatures=dict(
type=list,
suboptions=dict(
signature=dict(),
pubkey_fingerprint=dict(),
signing_service=dict(),
pulp_created=dict(),
)
),
name=dict(choices=[name]),
namespace=dict(choices=[namespace]),
version=dict(choices=[version]),
)
if not isinstance(provided_arguments, dict):
raise AnsibleError(
f'Invalid offline source info for {namespace}.{name}:{version}, expected a dict and got {type(provided_arguments)}'
)
validator = ArgumentSpecValidator(argument_spec_data)
validation_result = validator.validate(provided_arguments)
return validation_result.error_messages
def _is_collection_src_dir(dir_path):
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
return os.path.isfile(os.path.join(b_dir_path, _GALAXY_YAML))
def _is_installed_collection_dir(dir_path):
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
return os.path.isfile(os.path.join(b_dir_path, _MANIFEST_JSON))
def _is_collection_dir(dir_path):
return (
_is_installed_collection_dir(dir_path) or
_is_collection_src_dir(dir_path)
)
def _find_collections_in_subdirs(dir_path):
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
subdir_glob_pattern = os.path.join(
b_dir_path,
# b'*', # namespace is supposed to be top-level per spec
b'*', # collection name
)
for subdir in iglob(subdir_glob_pattern):
if os.path.isfile(os.path.join(subdir, _MANIFEST_JSON)):
yield subdir
elif os.path.isfile(os.path.join(subdir, _GALAXY_YAML)):
yield subdir
def _is_collection_namespace_dir(tested_str):
return any(_find_collections_in_subdirs(tested_str))
def _is_file_path(tested_str):
return os.path.isfile(to_bytes(tested_str, errors='surrogate_or_strict'))
def _is_http_url(tested_str):
return urlparse(tested_str).scheme.lower() in {'http', 'https'}
def _is_git_url(tested_str):
return tested_str.startswith(('git+', 'git@'))
def _is_concrete_artifact_pointer(tested_str):
return any(
predicate(tested_str)
for predicate in (
# NOTE: Maintain the checks to be sorted from light to heavy:
_is_git_url,
_is_http_url,
_is_file_path,
_is_collection_dir,
_is_collection_namespace_dir,
)
)
class _ComputedReqKindsMixin:
def __init__(self, *args, **kwargs):
if not self.may_have_offline_galaxy_info:
self._source_info = None
else:
info_path = self.construct_galaxy_info_path(to_bytes(self.src, errors='surrogate_or_strict'))
self._source_info = get_validated_source_info(
info_path,
self.namespace,
self.name,
self.ver
)
@classmethod
def from_dir_path_as_unknown( # type: ignore[misc]
cls, # type: t.Type[Collection]
dir_path, # type: bytes
art_mgr, # type: ConcreteArtifactsManager
): # type: (...) -> Collection
"""Make collection from an unspecified dir type.
This alternative constructor attempts to grab metadata from the
given path if it's a directory. If there's no metadata, it
falls back to guessing the FQCN based on the directory path and
sets the version to "*".
It raises a ValueError immediately if the input is not an
existing directory path.
"""
if not os.path.isdir(dir_path):
raise ValueError(
"The collection directory '{path!s}' doesn't exist".
format(path=to_native(dir_path)),
)
try:
return cls.from_dir_path(dir_path, art_mgr)
except ValueError:
return cls.from_dir_path_implicit(dir_path)
@classmethod
def from_dir_path(cls, dir_path, art_mgr):
"""Make collection from an directory with metadata."""
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
if not _is_collection_dir(b_dir_path):
display.warning(
u"Collection at '{path!s}' does not have a {manifest_json!s} "
u'file, nor has it {galaxy_yml!s}: cannot detect version.'.
format(
galaxy_yml=to_text(_GALAXY_YAML),
manifest_json=to_text(_MANIFEST_JSON),
path=to_text(dir_path, errors='surrogate_or_strict'),
),
)
raise ValueError(
'`dir_path` argument must be an installed or a source'
' collection directory.',
)
tmp_inst_req = cls(None, None, dir_path, 'dir', None)
req_version = art_mgr.get_direct_collection_version(tmp_inst_req)
try:
req_name = art_mgr.get_direct_collection_fqcn(tmp_inst_req)
except TypeError as err:
# Looks like installed/source dir but isn't: doesn't have valid metadata.
display.warning(
u"Collection at '{path!s}' has a {manifest_json!s} "
u"or {galaxy_yml!s} file but it contains invalid metadata.".
format(
galaxy_yml=to_text(_GALAXY_YAML),
manifest_json=to_text(_MANIFEST_JSON),
path=to_text(dir_path, errors='surrogate_or_strict'),
),
)
raise ValueError(
"Collection at '{path!s}' has invalid metadata".
format(path=to_text(dir_path, errors='surrogate_or_strict'))
) from err
return cls(req_name, req_version, dir_path, 'dir', None)
@classmethod
def from_dir_path_implicit( # type: ignore[misc]
cls, # type: t.Type[Collection]
dir_path, # type: bytes
): # type: (...) -> Collection
"""Construct a collection instance based on an arbitrary dir.
This alternative constructor infers the FQCN based on the parent
and current directory names. It also sets the version to "*"
regardless of whether any of known metadata files are present.
"""
# There is no metadata, but it isn't required for a functional collection. Determine the namespace.name from the path.
u_dir_path = to_text(dir_path, errors='surrogate_or_strict')
path_list = u_dir_path.split(os.path.sep)
req_name = '.'.join(path_list[-2:])
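        # e.g. b'.../ansible_collections/widespot/group_yaml_inventory' becomes 'widespot.group_yaml_inventory'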
return cls(req_name, '*', dir_path, 'dir', None) # type: ignore[call-arg]
@classmethod
def from_string(cls, collection_input, artifacts_manager, supplemental_signatures):
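        # Accepted spellings: a bare FQCN ('ns.coll'), an FQCN with a version after a
        # colon ('ns.coll:1.2.3'), a concrete path/URL/SCM pointer, or - when the
        # 'packaging' library is available - a requirement string such as 'ns.coll>=1.0.0'.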
req = {}
if _is_concrete_artifact_pointer(collection_input) or AnsibleCollectionRef.is_valid_collection_name(collection_input):
# Arg is a file path or URL to a collection, or just a collection
req['name'] = collection_input
elif ':' in collection_input:
req['name'], _sep, req['version'] = collection_input.partition(':')
if not req['version']:
del req['version']
else:
if not HAS_PACKAGING:
raise AnsibleError("Failed to import packaging, check that a supported version is installed")
try:
pkg_req = PkgReq(collection_input)
except Exception as e:
# packaging doesn't know what this is, let it fly, better errors happen in from_requirement_dict
req['name'] = collection_input
else:
req['name'] = pkg_req.name
if pkg_req.specifier:
req['version'] = to_text(pkg_req.specifier)
req['signatures'] = supplemental_signatures
return cls.from_requirement_dict(req, artifacts_manager)
@classmethod
def from_requirement_dict(cls, collection_req, art_mgr, validate_signature_options=True):
req_name = collection_req.get('name', None)
req_version = collection_req.get('version', '*')
req_type = collection_req.get('type')
# TODO: decide how to deprecate the old src API behavior
req_source = collection_req.get('source', None)
req_signature_sources = collection_req.get('signatures', None)
if req_signature_sources is not None:
if validate_signature_options and art_mgr.keyring is None:
raise AnsibleError(
f"Signatures were provided to verify {req_name} but no keyring was configured."
)
if not isinstance(req_signature_sources, MutableSequence):
req_signature_sources = [req_signature_sources]
req_signature_sources = frozenset(req_signature_sources)
if req_type is None:
if ( # FIXME: decide on the future behavior:
_ALLOW_CONCRETE_POINTER_IN_SOURCE
and req_source is not None
and _is_concrete_artifact_pointer(req_source)
):
src_path = req_source
elif (
req_name is not None
and AnsibleCollectionRef.is_valid_collection_name(req_name)
):
req_type = 'galaxy'
elif (
req_name is not None
and _is_concrete_artifact_pointer(req_name)
):
src_path, req_name = req_name, None
else:
dir_tip_tmpl = ( # NOTE: leading LFs are for concat
'\n\nTip: Make sure you are pointing to the right '
                'subdirectory - `{src!s}` looks like a directory '
'but it is neither a collection, nor a namespace '
'dir.'
)
if req_source is not None and os.path.isdir(req_source):
tip = dir_tip_tmpl.format(src=req_source)
elif req_name is not None and os.path.isdir(req_name):
tip = dir_tip_tmpl.format(src=req_name)
elif req_name:
tip = '\n\nCould not find {0}.'.format(req_name)
else:
tip = ''
raise AnsibleError( # NOTE: I'd prefer a ValueError instead
'Neither the collection requirement entry key '
"'name', nor 'source' point to a concrete "
"resolvable collection artifact. Also 'name' is "
'not an FQCN. A valid collection name must be in '
'the format <namespace>.<collection>. Please make '
'sure that the namespace and the collection name '
'contain characters from [a-zA-Z0-9_] only.'
'{extra_tip!s}'.format(extra_tip=tip),
)
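        # src_path is a concrete artifact pointer; infer the requirement type from it,
        # trying the cheap string checks (git/http) before touching the filesystem.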
if req_type is None:
if _is_git_url(src_path):
req_type = 'git'
req_source = src_path
elif _is_http_url(src_path):
req_type = 'url'
req_source = src_path
elif _is_file_path(src_path):
req_type = 'file'
req_source = src_path
elif _is_collection_dir(src_path):
if _is_installed_collection_dir(src_path) and _is_collection_src_dir(src_path):
# Note that ``download`` requires a dir with a ``galaxy.yml`` and fails if it
# doesn't exist, but if a ``MANIFEST.json`` also exists, it would be used
# instead of the ``galaxy.yml``.
raise AnsibleError(
u"Collection requirement at '{path!s}' has both a {manifest_json!s} "
u"file and a {galaxy_yml!s}.\nThe requirement must either be an installed "
u"collection directory or a source collection directory, not both.".
format(
path=to_text(src_path, errors='surrogate_or_strict'),
manifest_json=to_text(_MANIFEST_JSON),
galaxy_yml=to_text(_GALAXY_YAML),
)
)
req_type = 'dir'
req_source = src_path
elif _is_collection_namespace_dir(src_path):
req_name = None # No name for a virtual req or "namespace."?
req_type = 'subdirs'
req_source = src_path
else:
raise AnsibleError( # NOTE: this is never supposed to be hit
'Failed to automatically detect the collection '
'requirement type.',
)
if req_type not in {'file', 'galaxy', 'git', 'url', 'dir', 'subdirs'}:
raise AnsibleError(
"The collection requirement entry key 'type' must be "
'one of file, galaxy, git, dir, subdirs, or url.'
)
if req_name is None and req_type == 'galaxy':
raise AnsibleError(
'Collections requirement entry should contain '
"the key 'name' if it's requested from a Galaxy-like "
'index server.',
)
if req_type != 'galaxy' and req_source is None:
req_source, req_name = req_name, None
if (
req_type == 'galaxy' and
isinstance(req_source, GalaxyAPI) and
not _is_http_url(req_source.api_server)
):
raise AnsibleError(
"Collections requirement 'source' entry should contain "
'a valid Galaxy API URL but it does not: {not_url!s} '
'is not an HTTP URL.'.
format(not_url=req_source.api_server),
)
tmp_inst_req = cls(req_name, req_version, req_source, req_type, req_signature_sources)
if req_type not in {'galaxy', 'subdirs'} and req_name is None:
req_name = art_mgr.get_direct_collection_fqcn(tmp_inst_req) # TODO: fix the cache key in artifacts manager?
if req_type not in {'galaxy', 'subdirs'} and req_version == '*':
req_version = art_mgr.get_direct_collection_version(tmp_inst_req)
return cls(
req_name, req_version,
req_source, req_type,
req_signature_sources,
)
def __repr__(self):
return (
'<{self!s} of type {coll_type!r} from {src!s}>'.
format(self=self, coll_type=self.type, src=self.src or 'Galaxy')
)
def __str__(self):
return to_native(self.__unicode__())
def __unicode__(self):
if self.fqcn is None:
return (
u'"virtual collection Git repo"' if self.is_scm
else u'"virtual collection namespace"'
)
return (
u'{fqcn!s}:{ver!s}'.
format(fqcn=to_text(self.fqcn), ver=to_text(self.ver))
)
@property
def may_have_offline_galaxy_info(self):
if self.fqcn is None:
# Virtual collection
return False
elif not self.is_dir or self.src is None or not _is_collection_dir(self.src):
# Not a dir or isn't on-disk
return False
return True
def construct_galaxy_info_path(self, b_collection_path):
if not self.may_have_offline_galaxy_info and not self.type == 'galaxy':
raise TypeError('Only installed collections from a Galaxy server have offline Galaxy info')
# Store Galaxy metadata adjacent to the namespace of the collection
# Chop off the last two parts of the path (/ns/coll) to get the dir containing the ns
b_src = to_bytes(b_collection_path, errors='surrogate_or_strict')
b_path_parts = b_src.split(to_bytes(os.path.sep))[0:-2]
b_metadata_dir = to_bytes(os.path.sep).join(b_path_parts)
# ns.coll-1.0.0.info
b_dir_name = to_bytes(f"{self.namespace}.{self.name}-{self.ver}.info", errors="surrogate_or_strict")
# collections/ansible_collections/ns.coll-1.0.0.info/GALAXY.yml
return os.path.join(b_metadata_dir, b_dir_name, _SOURCE_METADATA_FILE)
def _get_separate_ns_n_name(self): # FIXME: use LRU cache
return self.fqcn.split('.')
@property
def namespace(self):
if self.is_virtual:
raise TypeError('Virtual collections do not have a namespace')
return self._get_separate_ns_n_name()[0]
@property
def name(self):
if self.is_virtual:
raise TypeError('Virtual collections do not have a name')
return self._get_separate_ns_n_name()[-1]
@property
def canonical_package_id(self):
if not self.is_virtual:
return to_native(self.fqcn)
return (
'<virtual namespace from {src!s} of type {src_type!s}>'.
format(src=to_native(self.src), src_type=to_native(self.type))
)
@property
def is_virtual(self):
return self.is_scm or self.is_subdirs
@property
def is_file(self):
return self.type == 'file'
@property
def is_dir(self):
return self.type == 'dir'
@property
def namespace_collection_paths(self):
return [
to_native(path)
for path in _find_collections_in_subdirs(self.src)
]
@property
def is_subdirs(self):
return self.type == 'subdirs'
@property
def is_url(self):
return self.type == 'url'
@property
def is_scm(self):
return self.type == 'git'
@property
def is_concrete_artifact(self):
return self.type in {'git', 'url', 'file', 'dir', 'subdirs'}
@property
def is_online_index_pointer(self):
return not self.is_concrete_artifact
@property
def source_info(self):
return self._source_info
RequirementNamedTuple = namedtuple('Requirement', ('fqcn', 'ver', 'src', 'type', 'signature_sources')) # type: ignore[name-match]
CandidateNamedTuple = namedtuple('Candidate', ('fqcn', 'ver', 'src', 'type', 'signatures')) # type: ignore[name-match]
class Requirement(
_ComputedReqKindsMixin,
RequirementNamedTuple,
):
"""An abstract requirement request."""
def __new__(cls, *args, **kwargs):
self = RequirementNamedTuple.__new__(cls, *args, **kwargs)
return self
def __init__(self, *args, **kwargs):
super(Requirement, self).__init__()
class Candidate(
_ComputedReqKindsMixin,
CandidateNamedTuple,
):
"""A concrete collection candidate with its version resolved."""
def __new__(cls, *args, **kwargs):
self = CandidateNamedTuple.__new__(cls, *args, **kwargs)
return self
def __init__(self, *args, **kwargs):
super(Candidate, self).__init__()
def with_signatures_repopulated(self): # type: (Candidate) -> Candidate
"""Populate a new Candidate instance with Galaxy signatures.
:raises AnsibleAssertionError: If the supplied candidate is not sourced from a Galaxy-like index.
"""
if self.type != 'galaxy':
raise AnsibleAssertionError(f"Invalid collection type for {self!r}: unable to get signatures from a galaxy server.")
signatures = self.src.get_collection_signatures(self.namespace, self.name, self.ver)
return self.__class__(self.fqcn, self.ver, self.src, self.type, frozenset([*self.signatures, *signatures]))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,348 |
ansible-galaxy collection install fails with `dir` type: "No such file or directory"
|
### Summary
```sh
ansible-galaxy install -r requirements.yml
```
doesn't work when pointing at a collection root directory
```yaml
collections:
- name: widespot.group_yaml_inventory
source: ../
type: dir
```
I get an error
> ERROR! Unexpected Exception, this is probably a bug: [Errno 2] No such file or directory: b'../ansible-group-yaml-inventory/est/README.md'
but it works if I run a build phase first and then point at the resulting tarball
```
ansible-galaxy collection build --output-path ../build ../
```
```yaml
collections:
- name: widespot.group_yaml_inventory
source: ../build/widespot-group_yaml_inventory-0.1.1.tar.gz
type: file
```
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.1]
config file = /Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/ansible.cfg
configured module search path = ['/Users/raphaeljoie/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/6.0.0/libexec/lib/python3.10/site-packages/ansible
ansible collection location = /Users/raphaeljoie/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.10.5 (main, Jun 23 2022, 17:14:57) [Clang 13.1.6 (clang-1316.0.21.2.5)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
DEFAULT_HOST_LIST(/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/ansible.cfg) = ['/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/inventory.yml']
INVENTORY_ENABLED(/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/ansible.cfg) = ['widespot.group_yaml_inventory.group_yaml']
```
### OS / Environment
OSX 12.0.1, on Macbook Silicon M1
### Steps to Reproduce
1. clone [this repo](https://github.com/widespot/ansible-group-yaml-inventory/tree/cfbbd80d276a887027b49d4b0807b116e23b4d92) (mind the commit)
2. `cd test`
3. execute the commands listed in `test/README.md`
```sh
mkdir ../build
ansible-galaxy collection build --force --output-path ../build ../
ansible-galaxy install -r --force requirements.yml
ansible-inventory --list
```
=> working
4. change requirements.yml: uncomment the lines related to directory import
```yaml
#source: ../
#type: dir
```
5. re-try force install
```sh
ansible-galaxy install -r --force requirements.yml
```
=> fail
## Important investigation note
`ansible-galaxy` collection build and install both seem to generate a `FILES.json` file.
* When installing via tarball, the paths in that file are OK
* When installing via a relative directory path, all the paths in that file are truncated (the first character is missing)
```
$ cat $HOME/.ansible/collections/ansible_collections/widespot/group_yaml_inventory/FILES.json
...
{
"name": "lugins/README.md",
"ftype": "file",
"chksum_type": "sha256",
"chksum_sha256": "23e9939164cad964c2338b8059e4d3def72eef9523e32594503efd50960fcae4",
"format": 1
},
...
```
### Expected Results
I expect installing from a `dir` source to work the same as installing from a `file` (tarball) source.
### Actual Results
```console
ansible-galaxy [core 2.13.2]
config file = /Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/ansible.cfg
configured module search path = ['/Users/raphaeljoie/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible
ansible collection location = /Users/raphaeljoie/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/bin/ansible-galaxy
python version = 3.9.12 (main, May 8 2022, 17:57:49) [Clang 13.1.6 (clang-1316.0.21.2)]
jinja version = 3.1.2
libyaml = True
Using /Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/ansible.cfg as config file
Reading requirement file at '/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/test/requirements.yml'
Starting galaxy collection install process
Found installed collection widespot.group_yaml_inventory:0.1.1 at '/Users/raphaeljoie/.ansible/collections/ansible_collections/widespot/group_yaml_inventory'
Process install dependency map
Starting collection install process
Installing 'widespot.group_yaml_inventory:0.1.1' to '/Users/raphaeljoie/.ansible/collections/ansible_collections/widespot/group_yaml_inventory'
Skipping '../venv' for collection build
Skipping '../.git' for collection build
Skipping '../galaxy.yml' for collection build
ERROR! Unexpected Exception, this is probably a bug: [Errno 2] No such file or directory: b'../est/README.md'
the full traceback was:
Traceback (most recent call last):
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/cli/__init__.py", line 601, in cli_executor
exit_code = cli.run()
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 647, in run
return context.CLIARGS['func']()
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 102, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 1297, in execute_install
self._execute_install_collection(
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/cli/galaxy.py", line 1325, in _execute_install_collection
install_collections(
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 745, in install_collections
install(concrete_coll_pin, output_path, artifacts_manager)
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1308, in install
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1446, in install_src
collection_output_path = _build_collection_dir(
File "/Users/raphaeljoie/Workspace/github.com/widespot/ansible-group-yaml-inventory/venv/lib/python3.9/site-packages/ansible/galaxy/collection/__init__.py", line 1234, in _build_collection_dir
existing_is_exec = os.stat(src_file).st_mode & stat.S_IXUSR
FileNotFoundError: [Errno 2] No such file or directory: b'../est/README.md'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78348
|
https://github.com/ansible/ansible/pull/79110
|
676b731e6f7d60ce6fd48c0d1c883fc85f5c6537
|
964e678a7fa3b0745f9302e7a3682851089d09d2
| 2022-07-25T22:25:47Z |
python
| 2023-04-17T19:24:55Z |
test/integration/targets/ansible-galaxy-collection/tasks/install.yml
|
---
- name: create test collection install directory - {{ test_id }}
file:
path: '{{ galaxy_dir }}/ansible_collections'
state: directory
- name: install simple collection from first accessible server
command: ansible-galaxy collection install namespace1.name1 -vvvv
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: from_first_good_server
- name: get installed files of install simple collection from first good server
find:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
file_type: file
register: install_normal_files
- name: get the manifest of install simple collection from first good server
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_normal_manifest
- name: assert install simple collection from first good server
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in from_first_good_server.stdout'
- install_normal_files.files | length == 3
- install_normal_files.files[0].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[1].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[2].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- (install_normal_manifest.content | b64decode | from_json).collection_info.version == '1.0.9'
- 'from_first_good_server.stdout|regex_findall("has not signed namespace1\.name1")|length == 1'
- name: Remove the collection
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1'
state: absent
- name: install simple collection with implicit path - {{ test_id }}
command: ansible-galaxy collection install namespace1.name1 -s '{{ test_name }}' {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: install_normal
- name: get installed files of install simple collection with implicit path - {{ test_id }}
find:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
file_type: file
register: install_normal_files
- name: get the manifest of install simple collection with implicit path - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_normal_manifest
- name: assert install simple collection with implicit path - {{ test_id }}
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in install_normal.stdout'
- install_normal_files.files | length == 3
- install_normal_files.files[0].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[1].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[2].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- (install_normal_manifest.content | b64decode | from_json).collection_info.version == '1.0.9'
- name: install existing without --force - {{ test_id }}
command: ansible-galaxy collection install namespace1.name1 -s '{{ test_name }}' {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: install_existing_no_force
- name: assert install existing without --force - {{ test_id }}
assert:
that:
- '"Nothing to do. All requested collections are already installed" in install_existing_no_force.stdout'
- name: install existing with --force - {{ test_id }}
command: ansible-galaxy collection install namespace1.name1 -s '{{ test_name }}' --force {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: install_existing_force
- name: assert install existing with --force - {{ test_id }}
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in install_existing_force.stdout'
- name: remove test installed collection - {{ test_id }}
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1'
state: absent
- name: install pre-release as explicit version to custom dir - {{ test_id }}
command: ansible-galaxy collection install 'namespace1.name1:1.1.0-beta.1' -s '{{ test_name }}' -p '{{ galaxy_dir }}/ansible_collections' {{ galaxy_verbosity }}
register: install_prerelease
- name: get result of install pre-release as explicit version to custom dir - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_prerelease_actual
- name: assert install pre-release as explicit version to custom dir - {{ test_id }}
assert:
that:
- '"Installing ''namespace1.name1:1.1.0-beta.1'' to" in install_prerelease.stdout'
- (install_prerelease_actual.content | b64decode | from_json).collection_info.version == '1.1.0-beta.1'
- name: Remove beta
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
state: absent
- name: install pre-release version with --pre to custom dir - {{ test_id }}
command: ansible-galaxy collection install --pre 'namespace1.name1' -s '{{ test_name }}' -p '{{ galaxy_dir }}/ansible_collections' {{ galaxy_verbosity }}
register: install_prerelease
- name: get result of install pre-release version with --pre to custom dir - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_prerelease_actual
- name: assert install pre-release version with --pre to custom dir - {{ test_id }}
assert:
that:
- '"Installing ''namespace1.name1:1.1.0-beta.1'' to" in install_prerelease.stdout'
- (install_prerelease_actual.content | b64decode | from_json).collection_info.version == '1.1.0-beta.1'
- name: install multiple collections with dependencies - {{ test_id }}
command: ansible-galaxy collection install parent_dep.parent_collection:1.0.0 namespace2.name -s {{ test_name }} {{ galaxy_verbosity }}
args:
chdir: '{{ galaxy_dir }}/ansible_collections'
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_CONFIG: '{{ galaxy_dir }}/ansible.cfg'
register: install_multiple_with_dep
- name: get result of install multiple collections with dependencies - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection.namespace }}/{{ collection.name }}/MANIFEST.json'
register: install_multiple_with_dep_actual
loop_control:
loop_var: collection
loop:
- namespace: namespace2
name: name
- namespace: parent_dep
name: parent_collection
- namespace: child_dep
name: child_collection
- namespace: child_dep
name: child_dep2
- name: assert install multiple collections with dependencies - {{ test_id }}
assert:
that:
- (install_multiple_with_dep_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_multiple_with_dep_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_multiple_with_dep_actual.results[2].content | b64decode | from_json).collection_info.version == '0.9.9'
- (install_multiple_with_dep_actual.results[3].content | b64decode | from_json).collection_info.version == '1.2.2'
- name: expect failure with dep resolution failure - {{ test_id }}
command: ansible-galaxy collection install fail_namespace.fail_collection -s {{ test_name }} {{ galaxy_verbosity }}
register: fail_dep_mismatch
failed_when:
- '"Could not satisfy the following requirements" not in fail_dep_mismatch.stderr'
- '" fail_dep2.name:<0.0.5 (dependency of fail_namespace.fail_collection:2.1.2)" not in fail_dep_mismatch.stderr'
- name: Find artifact url for namespace3.name
uri:
url: '{{ test_server }}{{ vX }}collections/namespace3/name/versions/1.0.0/'
user: '{{ pulp_user }}'
password: '{{ pulp_password }}'
force_basic_auth: true
register: artifact_url_response
- name: download a collection for an offline install - {{ test_id }}
get_url:
url: '{{ artifact_url_response.json.download_url }}'
dest: '{{ galaxy_dir }}/namespace3.tar.gz'
- name: install a collection from a tarball - {{ test_id }}
command: ansible-galaxy collection install '{{ galaxy_dir }}/namespace3.tar.gz' {{ galaxy_verbosity }}
register: install_tarball
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collection from a tarball - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace3/name/MANIFEST.json'
register: install_tarball_actual
- name: assert install a collection from a tarball - {{ test_id }}
assert:
that:
- '"Installing ''namespace3.name:1.0.0'' to" in install_tarball.stdout'
- (install_tarball_actual.content | b64decode | from_json).collection_info.version == '1.0.0'
- name: write a requirements file using the artifact and a conflicting version
copy:
content: |
collections:
- name: {{ galaxy_dir }}/namespace3.tar.gz
version: 1.2.0
dest: '{{ galaxy_dir }}/test_req.yml'
- name: install the requirements file with mismatched versions
command: ansible-galaxy collection install -r '{{ galaxy_dir }}/test_req.yml' {{ galaxy_verbosity }}
ignore_errors: True
register: result
environment:
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
- name: remove the requirements file
file:
path: '{{ galaxy_dir }}/test_req.yml'
state: absent
- assert:
that: error == expected_error
vars:
error: "{{ result.stderr | regex_replace('\\n', ' ') }}"
expected_error: >-
ERROR! Failed to resolve the requested dependencies map.
Got the candidate namespace3.name:1.0.0 (direct request)
which didn't satisfy all of the following requirements:
* namespace3.name:1.2.0
- name: test error for mismatched dependency versions
vars:
error: "{{ result.stderr | regex_replace('\\n', ' ') }}"
expected_error: >-
ERROR! Failed to resolve the requested dependencies map.
Got the candidate namespace3.name:1.0.0 (dependency of tmp_parent.name:1.0.0)
which didn't satisfy all of the following requirements:
* namespace3.name:1.2.0
environment:
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
block:
- name: init a new parent collection
command: ansible-galaxy collection init tmp_parent.name --init-path '{{ galaxy_dir }}/scratch'
- name: replace the dependencies
lineinfile:
path: "{{ galaxy_dir }}/scratch/tmp_parent/name/galaxy.yml"
regexp: "^dependencies:*"
line: "dependencies: { '{{ galaxy_dir }}/namespace3.tar.gz': '1.2.0' }"
- name: build the new artifact
command: ansible-galaxy collection build {{ galaxy_dir }}/scratch/tmp_parent/name
args:
chdir: "{{ galaxy_dir }}"
- name: install the artifact to verify the error is handled
command: ansible-galaxy collection install '{{ galaxy_dir }}/tmp_parent-name-1.0.0.tar.gz'
ignore_errors: yes
register: result
- debug: msg="Actual - {{ error }}"
- debug: msg="Expected - {{ expected_error }}"
- assert:
that: error == expected_error
always:
- name: clean up collection skeleton and artifact
file:
state: absent
path: "{{ item }}"
loop:
- "{{ galaxy_dir }}/scratch/tmp_parent/"
- "{{ galaxy_dir }}/tmp_parent-name-1.0.0.tar.gz"
- name: setup bad tarball - {{ test_id }}
script: build_bad_tar.py {{ galaxy_dir | quote }}
- name: fail to install a collection from a bad tarball - {{ test_id }}
command: ansible-galaxy collection install '{{ galaxy_dir }}/suspicious-test-1.0.0.tar.gz' {{ galaxy_verbosity }}
register: fail_bad_tar
failed_when: fail_bad_tar.rc != 1 and "Cannot extract tar entry '../../outside.sh' as it will be placed outside the collection directory" not in fail_bad_tar.stderr
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of failed collection install - {{ test_id }}
stat:
    path: '{{ galaxy_dir }}/ansible_collections/suspicious'
register: fail_bad_tar_actual
- name: assert result of failed collection install - {{ test_id }}
assert:
that:
- not fail_bad_tar_actual.stat.exists
- name: Find artifact url for namespace4.name
uri:
url: '{{ test_server }}{{ vX }}collections/namespace4/name/versions/1.0.0/'
user: '{{ pulp_user }}'
password: '{{ pulp_password }}'
force_basic_auth: true
register: artifact_url_response
- name: install a collection from a URI - {{ test_id }}
command: ansible-galaxy collection install {{ artifact_url_response.json.download_url}} {{ galaxy_verbosity }}
register: install_uri
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collection from a URI - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace4/name/MANIFEST.json'
register: install_uri_actual
- name: assert install a collection from a URI - {{ test_id }}
assert:
that:
- '"Installing ''namespace4.name:1.0.0'' to" in install_uri.stdout'
- (install_uri_actual.content | b64decode | from_json).collection_info.version == '1.0.0'
- name: fail to install a collection with an undefined URL - {{ test_id }}
command: ansible-galaxy collection install namespace5.name {{ galaxy_verbosity }}
register: fail_undefined_server
failed_when: '"No setting was provided for required configuration plugin_type: galaxy_server plugin: undefined" not in fail_undefined_server.stderr'
environment:
ANSIBLE_GALAXY_SERVER_LIST: undefined
- when: not requires_auth
block:
- name: install a collection with an empty server list - {{ test_id }}
command: ansible-galaxy collection install namespace5.name -s '{{ test_server }}' {{ galaxy_verbosity }}
register: install_empty_server_list
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_GALAXY_SERVER_LIST: ''
- name: get result of a collection with an empty server list - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace5/name/MANIFEST.json'
register: install_empty_server_list_actual
- name: assert install a collection with an empty server list - {{ test_id }}
assert:
that:
- '"Installing ''namespace5.name:1.0.0'' to" in install_empty_server_list.stdout'
- (install_empty_server_list_actual.content | b64decode | from_json).collection_info.version == '1.0.0'
- name: create test requirements file with both roles and collections - {{ test_id }}
copy:
content: |
collections:
- namespace6.name
- name: namespace7.name
roles:
- skip.me
dest: '{{ galaxy_dir }}/ansible_collections/requirements-with-role.yml'
- name: install roles from requirements file with collection-only keyring option
command: ansible-galaxy role install -r {{ req_file }} -s {{ test_name }} --keyring {{ keyring }}
vars:
req_file: '{{ galaxy_dir }}/ansible_collections/requirements-with-role.yml'
keyring: "{{ gpg_homedir }}/pubring.kbx"
ignore_errors: yes
register: invalid_opt
- assert:
that:
- invalid_opt is failed
- "'unrecognized arguments: --keyring' in invalid_opt.stderr"
# Need to run with -vvv to see the message that the roles will be skipped
- name: install collections only with requirements-with-role.yml - {{ test_id }}
command: ansible-galaxy collection install -r '{{ galaxy_dir }}/ansible_collections/requirements-with-role.yml' -s '{{ test_name }}' -vvv
register: install_req_collection
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collections only with requirements-with-roles.yml - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_collection_actual
loop_control:
loop_var: collection
loop:
- namespace6
- namespace7
- name: assert install collections only with requirements-with-role.yml - {{ test_id }}
assert:
that:
- '"contains roles which will be ignored" in install_req_collection.stdout'
- '"Installing ''namespace6.name:1.0.0'' to" in install_req_collection.stdout'
- '"Installing ''namespace7.name:1.0.0'' to" in install_req_collection.stdout'
- (install_req_collection_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_collection_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- name: create test requirements file with just collections - {{ test_id }}
copy:
content: |
collections:
- namespace8.name
- name: namespace9.name
dest: '{{ galaxy_dir }}/ansible_collections/requirements.yaml'
- name: install collections with ansible-galaxy install - {{ test_id }}
command: ansible-galaxy install -r '{{ galaxy_dir }}/ansible_collections/requirements.yaml' -s '{{ test_name }}'
register: install_req
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collections with ansible-galaxy install - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_actual
loop_control:
loop_var: collection
loop:
- namespace8
- namespace9
- name: assert install collections with ansible-galaxy install - {{ test_id }}
assert:
that:
- '"Installing ''namespace8.name:1.0.0'' to" in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" in install_req.stdout'
- (install_req_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- name: Test deviations on -r and --role-file without collection or role sub command
command: '{{ cmd }}'
loop:
- ansible-galaxy install -vr '{{ galaxy_dir }}/ansible_collections/requirements.yaml' -s '{{ test_name }}' -vv
- ansible-galaxy install --role-file '{{ galaxy_dir }}/ansible_collections/requirements.yaml' -s '{{ test_name }}' -vvv
- ansible-galaxy install --role-file='{{ galaxy_dir }}/ansible_collections/requirements.yaml' -s '{{ test_name }}' -vvv
loop_control:
loop_var: cmd
- name: uninstall collections for next requirements file test
file:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name'
state: absent
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
- name: rewrite requirements file with collections and signatures
copy:
content: |
collections:
- name: namespace7.name
version: "1.0.0"
signatures:
- "{{ not_mine }}"
- "{{ also_not_mine }}"
- "file://{{ gpg_homedir }}/namespace7-name-1.0.0-MANIFEST.json.asc"
- namespace8.name
- name: namespace9.name
signatures:
- "file://{{ gpg_homedir }}/namespace9-name-1.0.0-MANIFEST.json.asc"
dest: '{{ galaxy_dir }}/ansible_collections/requirements.yaml'
vars:
not_mine: "file://{{ gpg_homedir }}/namespace1-name1-1.0.0-MANIFEST.json.asc"
also_not_mine: "file://{{ gpg_homedir }}/namespace1-name1-1.0.9-MANIFEST.json.asc"
- name: installing only roles does not fail if keyring for collections is not provided
command: ansible-galaxy role install -r {{ galaxy_dir }}/ansible_collections/requirements.yaml
register: roles_only
- assert:
that:
- roles_only is success
- name: installing only roles implicitly does not fail if keyring for collections is not provided
# if -p/--roles-path are specified, only roles are installed
  command: ansible-galaxy install -r {{ galaxy_dir }}/ansible_collections/requirements.yaml -p {{ galaxy_dir }}
register: roles_only
- assert:
that:
- roles_only is success
- name: installing roles and collections requires keyring if collections have signatures
  command: ansible-galaxy install -r {{ galaxy_dir }}/ansible_collections/requirements.yaml
ignore_errors: yes
register: collections_and_roles
- assert:
that:
- collections_and_roles is failed
- "'no keyring was configured' in collections_and_roles.stderr"
- name: install collection with mutually exclusive options
command: ansible-galaxy collection install -r {{ req_file }} -s {{ test_name }} {{ cli_signature }}
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
    # --signature is an option of the 'ansible-galaxy collection install' subcommand, but mutually exclusive with -r
cli_signature: "--signature file://{{ gpg_homedir }}/namespace7-name-1.0.0-MANIFEST.json.asc"
ignore_errors: yes
register: mutually_exclusive_opts
- assert:
that:
- mutually_exclusive_opts is failed
- expected_error in actual_error
vars:
expected_error: >-
The --signatures option and --requirements-file are mutually exclusive.
Use the --signatures with positional collection_name args or provide a
'signatures' key for requirements in the --requirements-file.
actual_error: "{{ mutually_exclusive_opts.stderr }}"
- name: install a collection with user-supplied signatures for verification but no keyring
command: ansible-galaxy collection install namespace1.name1:1.0.0 {{ cli_signature }}
vars:
cli_signature: "--signature file://{{ gpg_homedir }}/namespace1-name1-1.0.0-MANIFEST.json.asc"
ignore_errors: yes
register: required_together
- assert:
that:
- required_together is failed
- '"ERROR! Signatures were provided to verify namespace1.name1 but no keyring was configured." in required_together.stderr'
- name: install collections with ansible-galaxy install -r with invalid signatures - {{ test_id }}
# Note that --keyring is a valid option for 'ansible-galaxy install -r ...', not just 'ansible-galaxy collection ...'
command: ansible-galaxy install -r {{ req_file }} -s {{ test_name }} --keyring {{ keyring }} {{ galaxy_verbosity }}
register: install_req
ignore_errors: yes
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
keyring: "{{ gpg_homedir }}/pubring.kbx"
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: all
- name: assert invalid signature is fatal with ansible-galaxy install - {{ test_id }}
assert:
that:
- install_req is failed
- '"Installing ''namespace7.name:1.0.0'' to" in install_req.stdout'
- '"Not installing namespace7.name because GnuPG signature verification failed" in install_req.stderr'
# The other collections shouldn't be installed because they're listed
# after the failing collection and --ignore-errors was not provided
- '"Installing ''namespace8.name:1.0.0'' to" not in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" not in install_req.stdout'
# This command is hardcoded with -vvvv purposefully to evaluate extra verbosity messages
- name: install collections with ansible-galaxy install and --ignore-errors - {{ test_id }}
command: ansible-galaxy install -r {{ req_file }} {{ cli_opts }} -vvvv
register: install_req
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
cli_opts: "-s {{ test_name }} --keyring {{ keyring }} --ignore-errors"
keyring: "{{ gpg_homedir }}/pubring.kbx"
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: all
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
- name: get result of install collections with ansible-galaxy install - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_actual
loop_control:
loop_var: collection
loop:
- namespace8
- namespace9
# SIVEL
- name: assert invalid signature is not fatal with ansible-galaxy install --ignore-errors - {{ test_id }}
assert:
that:
- install_req is success
- '"Installing ''namespace7.name:1.0.0'' to" in install_req.stdout'
- '"Signature verification failed for ''namespace7.name'' (return code 1)" in install_req.stdout'
- '"Not installing namespace7.name because GnuPG signature verification failed." in install_stderr'
- '"Failed to install collection namespace7.name:1.0.0 but skipping due to --ignore-errors being set." in install_stderr'
- '"Installing ''namespace8.name:1.0.0'' to" in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" in install_req.stdout'
- (install_req_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
vars:
install_stderr: "{{ install_req.stderr | regex_replace('\\n', ' ') }}"
- name: clean up collections from last test
file:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name'
state: absent
loop_control:
loop_var: collection
loop:
- namespace8
- namespace9
- name: install collections with only one valid signature using ansible-galaxy install - {{ test_id }}
command: ansible-galaxy install -r {{ req_file }} {{ cli_opts }} {{ galaxy_verbosity }}
register: install_req
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
cli_opts: "-s {{ test_name }} --keyring {{ keyring }}"
keyring: "{{ gpg_homedir }}/pubring.kbx"
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
- name: get result of install collections with ansible-galaxy install - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_actual
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
- name: assert just one valid signature is not fatal with ansible-galaxy install - {{ test_id }}
assert:
that:
- install_req is success
- '"Installing ''namespace7.name:1.0.0'' to" in install_req.stdout'
- '"Signature verification failed for ''namespace7.name'' (return code 1)" not in install_req.stdout'
- '"Not installing namespace7.name because GnuPG signature verification failed." not in install_stderr'
- '"Installing ''namespace8.name:1.0.0'' to" in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" in install_req.stdout'
- (install_req_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[2].content | b64decode | from_json).collection_info.version == '1.0.0'
vars:
install_stderr: "{{ install_req.stderr | regex_replace('\\n', ' ') }}"
- name: clean up collections from last test
file:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name'
state: absent
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
- name: install collections with only one valid signature by ignoring the other errors
command: ansible-galaxy install -r {{ req_file }} {{ cli_opts }} {{ galaxy_verbosity }} --ignore-signature-status-code FAILURE
register: install_req
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
cli_opts: "-s {{ test_name }} --keyring {{ keyring }}"
keyring: "{{ gpg_homedir }}/pubring.kbx"
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: all
ANSIBLE_GALAXY_IGNORE_SIGNATURE_STATUS_CODES: BADSIG # cli option is appended and both status codes are ignored
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
- name: get result of install collections with ansible-galaxy install - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_actual
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
- name: assert invalid signature is not fatal with ansible-galaxy install - {{ test_id }}
assert:
that:
- install_req is success
- '"Installing ''namespace7.name:1.0.0'' to" in install_req.stdout'
- '"Signature verification failed for ''namespace7.name'' (return code 1)" not in install_req.stdout'
- '"Not installing namespace7.name because GnuPG signature verification failed." not in install_stderr'
- '"Installing ''namespace8.name:1.0.0'' to" in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" in install_req.stdout'
- (install_req_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[2].content | b64decode | from_json).collection_info.version == '1.0.0'
vars:
install_stderr: "{{ install_req.stderr | regex_replace('\\n', ' ') }}"
- name: clean up collections from last test
file:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name'
state: absent
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
# Uncomment once pulp container is at pulp>=0.5.0
#- name: install cache.cache at the current latest version
# command: ansible-galaxy collection install cache.cache -s '{{ test_name }}' -vvv
# environment:
# ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
#
#- set_fact:
# cache_version_build: '{{ (cache_version_build | int) + 1 }}'
#
#- name: publish update for cache.cache test
# setup_collections:
# server: galaxy_ng
# collections:
# - namespace: cache
# name: cache
# version: 1.0.{{ cache_version_build }}
#
#- name: make sure the cache version list is ignored on a collection version change - {{ test_id }}
# command: ansible-galaxy collection install cache.cache -s '{{ test_name }}' --force -vvv
# register: install_cached_update
# environment:
# ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
#
#- name: get result of cache version list is ignored on a collection version change - {{ test_id }}
# slurp:
# path: '{{ galaxy_dir }}/ansible_collections/cache/cache/MANIFEST.json'
# register: install_cached_update_actual
#
#- name: assert cache version list is ignored on a collection version change - {{ test_id }}
# assert:
# that:
# - '"Installing ''cache.cache:1.0.{{ cache_version_build }}'' to" in install_cached_update.stdout'
# - (install_cached_update_actual.content | b64decode | from_json).collection_info.version == '1.0.' ~ cache_version_build
- name: install collection with symlink - {{ test_id }}
command: ansible-galaxy collection install symlink.symlink -s '{{ test_name }}' {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
register: install_symlink
- find:
paths: '{{ galaxy_dir }}/ansible_collections/symlink/symlink'
recurse: yes
file_type: any
- name: get result of install collection with symlink - {{ test_id }}
stat:
path: '{{ galaxy_dir }}/ansible_collections/symlink/symlink/{{ path }}'
register: install_symlink_actual
loop_control:
loop_var: path
loop:
  - REÅDMÊ.md-link
  - docs/REÅDMÊ.md
  - plugins/REÅDMÊ.md
  - REÅDMÊ.md-outside-link
  - docs-link
  - docs-link/REÅDMÊ.md
- name: assert install collection with symlink - {{ test_id }}
assert:
that:
- '"Installing ''symlink.symlink:1.0.0'' to" in install_symlink.stdout'
- install_symlink_actual.results[0].stat.islnk
    - install_symlink_actual.results[0].stat.lnk_target == 'REÅDMÊ.md'
    - install_symlink_actual.results[1].stat.islnk
    - install_symlink_actual.results[1].stat.lnk_target == '../REÅDMÊ.md'
    - install_symlink_actual.results[2].stat.islnk
    - install_symlink_actual.results[2].stat.lnk_target == '../REÅDMÊ.md'
    - install_symlink_actual.results[3].stat.isreg
    - install_symlink_actual.results[4].stat.islnk
    - install_symlink_actual.results[4].stat.lnk_target == 'docs'
    - install_symlink_actual.results[5].stat.islnk
    - install_symlink_actual.results[5].stat.lnk_target == '../REÅDMÊ.md'
# Testing an install from source to check that symlinks to directories
# are preserved (see issue https://github.com/ansible/ansible/issues/78442)
- name: symlink_dirs collection install from source test
block:
- name: create symlink_dirs collection
command: ansible-galaxy collection init symlink_dirs.symlink_dirs --init-path "{{ galaxy_dir }}/scratch"
- name: create directory in collection
file:
path: "{{ galaxy_dir }}/scratch/symlink_dirs/symlink_dirs/folderA"
state: directory
- name: create symlink to folderA
file:
dest: "{{ galaxy_dir }}/scratch/symlink_dirs/symlink_dirs/folderB"
src: ./folderA
state: link
force: yes
- name: install symlink_dirs collection from source
command: ansible-galaxy collection install {{ galaxy_dir }}/scratch/symlink_dirs/
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
register: install_symlink_dirs
- name: get result of install collection with symlink_dirs - {{ test_id }}
stat:
path: '{{ galaxy_dir }}/ansible_collections/symlink_dirs/symlink_dirs/{{ path }}'
register: install_symlink_dirs_actual
loop_control:
loop_var: path
loop:
- folderA
- folderB
- name: assert install collection with symlink_dirs - {{ test_id }}
assert:
that:
- '"Installing ''symlink_dirs.symlink_dirs:1.0.0'' to" in install_symlink_dirs.stdout'
- install_symlink_dirs_actual.results[0].stat.isdir
- install_symlink_dirs_actual.results[1].stat.islnk
- install_symlink_dirs_actual.results[1].stat.lnk_target == './folderA'
always:
- name: clean up symlink_dirs collection directory
file:
path: "{{ galaxy_dir }}/scratch/symlink_dirs"
state: absent
- name: remove install directory for the next test because parent_dep.parent_collection was installed - {{ test_id }}
file:
path: '{{ galaxy_dir }}/ansible_collections'
state: absent
- name: install collection and dep compatible with multiple requirements - {{ test_id }}
command: ansible-galaxy collection install parent_dep.parent_collection parent_dep2.parent_collection
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
register: install_req
- name: assert install collections with ansible-galaxy install - {{ test_id }}
assert:
that:
- '"Installing ''parent_dep.parent_collection:1.0.0'' to" in install_req.stdout'
- '"Installing ''parent_dep2.parent_collection:1.0.0'' to" in install_req.stdout'
- '"Installing ''child_dep.child_collection:0.5.0'' to" in install_req.stdout'
- name: install a collection to a directory that contains another collection with no metadata
block:
# Collections are usable in ansible without a galaxy.yml or MANIFEST.json
- name: create a collection directory
file:
state: directory
path: '{{ galaxy_dir }}/ansible_collections/unrelated_namespace/collection_without_metadata/plugins'
- name: install a collection to the same installation directory - {{ test_id }}
command: ansible-galaxy collection install namespace1.name1
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
register: install_req
- name: assert installed collections with ansible-galaxy install - {{ test_id }}
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in install_req.stdout'
- name: remove test collection install directory - {{ test_id }}
file:
path: '{{ galaxy_dir }}/ansible_collections'
state: absent
# This command is hardcoded with -vvvv purposefully to evaluate extra verbosity messages
- name: install collection with signature with invalid keyring
command: ansible-galaxy collection install namespace1.name1 -vvvv {{ signature_option }} {{ keyring_option }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
vars:
signature_option: "--signature file://{{ gpg_homedir }}/namespace1-name1-1.0.9-MANIFEST.json.asc"
keyring_option: '--keyring {{ gpg_homedir }}/i_do_not_exist.kbx'
ignore_errors: yes
register: keyring_error
- assert:
that:
- keyring_error is failed
- expected_errors[0] in actual_error
- expected_errors[1] in actual_error
- expected_errors[2] in actual_error
- unexpected_warning not in actual_warning
vars:
keyring: "{{ gpg_homedir }}/i_do_not_exist.kbx"
expected_errors:
- "Signature verification failed for 'namespace1.name1' (return code 2):"
- "* The public key is not available."
- >-
* It was not possible to check the signature. This may be caused
by a missing public key or an unsupported algorithm. A RC of 4
indicates unknown algorithm, a 9 indicates a missing public key.
unexpected_warning: >-
The GnuPG keyring used for collection signature
verification was not configured but signatures were
provided by the Galaxy server to verify authenticity.
Configure a keyring for ansible-galaxy to use
or disable signature verification.
Skipping signature verification.
actual_warning: "{{ keyring_error.stderr | regex_replace('\\n', ' ') }}"
# Remove formatting from the reason so it's one line
actual_error: "{{ keyring_error.stdout | regex_replace('\"') | regex_replace('\\n') | regex_replace(' ', ' ') }}"
# TODO: Uncomment once signatures are provided by pulp-galaxy-ng
#- name: install collection with signature provided by Galaxy server (no keyring)
# command: ansible-galaxy collection install namespace1.name1 {{ galaxy_verbosity }}
# environment:
# ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
# ANSIBLE_NOCOLOR: True
# ANSIBLE_FORCE_COLOR: False
# ignore_errors: yes
# register: keyring_warning
#
#- name: assert a warning was given but signature verification did not occur without configuring the keyring
# assert:
# that:
# - keyring_warning is not failed
#    - '"Installing ''namespace1.name1:1.0.9'' to" in keyring_warning.stdout'
# # TODO: Don't just check the stdout, make sure the collection was installed.
# - expected_warning in actual_warning
# vars:
# expected_warning: >-
# The GnuPG keyring used for collection signature
# verification was not configured but signatures were
# provided by the Galaxy server to verify authenticity.
# Configure a keyring for ansible-galaxy to use
# or disable signature verification.
# Skipping signature verification.
# actual_warning: "{{ keyring_warning.stderr | regex_replace('\\n', ' ') }}"
- name: install simple collection from first accessible server with valid detached signature
command: ansible-galaxy collection install namespace1.name1 {{ galaxy_verbosity }} {{ signature_options }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
vars:
signature_options: "--signature {{ signature }} --keyring {{ keyring }}"
signature: "file://{{ gpg_homedir }}/namespace1-name1-1.0.9-MANIFEST.json.asc"
keyring: "{{ gpg_homedir }}/pubring.kbx"
register: from_first_good_server
- name: get installed files of install simple collection from first good server
find:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
file_type: file
register: install_normal_files
- name: get the manifest of install simple collection from first good server
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_normal_manifest
- name: assert install simple collection from first good server
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in from_first_good_server.stdout'
- install_normal_files.files | length == 3
- install_normal_files.files[0].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[1].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[2].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- (install_normal_manifest.content | b64decode | from_json).collection_info.version == '1.0.9'
- name: Remove the collection
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1'
state: absent
# This command is hardcoded with -vvvv purposefully to evaluate extra verbosity messages
- name: install simple collection with invalid detached signature
command: ansible-galaxy collection install namespace1.name1 -vvvv {{ signature_options }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
vars:
signature_options: "--signature {{ signature }} --keyring {{ keyring }}"
signature: "file://{{ gpg_homedir }}/namespace2-name-1.0.0-MANIFEST.json.asc"
keyring: "{{ gpg_homedir }}/pubring.kbx"
ignore_errors: yes
register: invalid_signature
- assert:
that:
- invalid_signature is failed
- "'Not installing namespace1.name1 because GnuPG signature verification failed.' in invalid_signature.stderr"
- expected_errors[0] in install_stdout
- expected_errors[1] in install_stdout
vars:
expected_errors:
- "* This is the counterpart to SUCCESS and used to indicate a program failure."
- "* The signature with the keyid has not been verified okay."
# Remove formatting from the reason so it's one line
install_stdout: "{{ invalid_signature.stdout | regex_replace('\"') | regex_replace('\\n') | regex_replace(' ', ' ') }}"
- name: validate collection directory was not created
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
state: absent
register: collection_dir
check_mode: yes
failed_when: collection_dir is changed
- name: disable signature verification and install simple collection with invalid detached signature
command: ansible-galaxy collection install namespace1.name1 {{ galaxy_verbosity }} {{ signature_options }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
vars:
signature_options: "--signature {{ signature }} --keyring {{ keyring }} --disable-gpg-verify"
signature: "file://{{ gpg_homedir }}/namespace2-name-1.0.0-MANIFEST.json.asc"
keyring: "{{ gpg_homedir }}/pubring.kbx"
ignore_errors: yes
register: ignore_invalid_signature
- assert:
that:
- ignore_invalid_signature is success
- '"Installing ''namespace1.name1:1.0.9'' to" in ignore_invalid_signature.stdout'
- name: use lenient signature verification (default) without providing signatures
command: ansible-galaxy collection install namespace1.name1:1.0.0 -vvvv --keyring {{ gpg_homedir }}/pubring.kbx --force
environment:
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: "all"
register: missing_signature
- assert:
that:
- missing_signature is success
- missing_signature.rc == 0
- '"namespace1.name1:1.0.0 was installed successfully" in missing_signature.stdout'
- '"Signature verification failed for ''namespace1.name1'': no successful signatures" not in missing_signature.stdout'
- name: use strict signature verification without providing signatures
command: ansible-galaxy collection install namespace1.name1:1.0.0 -vvvv --keyring {{ gpg_homedir }}/pubring.kbx --force
environment:
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: "+1"
ignore_errors: yes
register: missing_signature
- assert:
that:
- missing_signature is failed
- missing_signature.rc == 1
- '"Signature verification failed for ''namespace1.name1'': no successful signatures" in missing_signature.stdout'
- '"Not installing namespace1.name1 because GnuPG signature verification failed" in missing_signature.stderr'
- name: Remove the collection
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1'
state: absent
- name: download collections with pre-release dep - {{ test_id }}
command: ansible-galaxy collection download dep_with_beta.parent namespace1.name1:1.1.0-beta.1 -p '{{ galaxy_dir }}/scratch'
- name: install collection with concrete pre-release dep - {{ test_id }}
command: ansible-galaxy collection install -r '{{ galaxy_dir }}/scratch/requirements.yml'
args:
chdir: '{{ galaxy_dir }}/scratch'
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
register: install_concrete_pre
- name: get result of install collections with concrete pre-release dep - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/MANIFEST.json'
register: install_concrete_pre_actual
loop_control:
loop_var: collection
loop:
- namespace1/name1
- dep_with_beta/parent
- name: assert install collections with ansible-galaxy install - {{ test_id }}
assert:
that:
- '"Installing ''namespace1.name1:1.1.0-beta.1'' to" in install_concrete_pre.stdout'
- '"Installing ''dep_with_beta.parent:1.0.0'' to" in install_concrete_pre.stdout'
- (install_concrete_pre_actual.results[0].content | b64decode | from_json).collection_info.version == '1.1.0-beta.1'
- (install_concrete_pre_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- name: remove collection dir after round of testing - {{ test_id }}
file:
path: '{{ galaxy_dir }}/ansible_collections'
state: absent
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,803 |
ansible-galaxy collection install of local collection fails if collection directory ends with '/'
|
### Summary
When trying to install a collection from a local directory, it fails if the directory has a trailing slash (/). If the trailing slash is removed, the installation succeeds.
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/redacted/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible
ansible collection location = /home/redacted/.ansible/collections:/usr/share/ansible/collections
executable location = /home/redacted/.virtualenvs/dev-ansible/bin/ansible
python version = 3.10.4 (main, Apr 8 2022, 17:35:13) [GCC 9.4.0] (/home/redacted/.virtualenvs/dev-ansible/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Ubuntu 20.04
Python 3.10.4
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible-galaxy collection install -vvvvvv -f -p ~/devcollections ansible-collection-dir/
```
### Expected Results
I expected the collection to be installed.
### Actual Results
The relevant part of the output is "b'ansible-collection-dir/lugins/README.md'". As can be seen, the "plugins" directory name is being truncated by one character.
```console
ansible-galaxy [core 2.14.0.dev0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/redacted/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible
ansible collection location = /home/redacted/.ansible/collections:/usr/share/ansible/collections
executable location = /home/redacted/.virtualenvs/dev-ansible/bin/ansible-galaxy
python version = 3.10.4 (main, Apr 8 2022, 17:35:13) [GCC 9.4.0] (/home/redacted/.virtualenvs/dev-ansible/bin/python3)
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Starting galaxy collection install process
[WARNING]: The specified collections path '/home/redacted/devcollections' is not part of the configured Ansible collections paths '/home/redacted/.ansible/collections:/usr/share/ansible/collections'. The installed collection will not be picked up in an Ansible run, unless
within a playbook-adjacent collections directory.
Process install dependency map
Starting collection install process
Installing 'mynamespace.mycollection:1.0.0' to '/home/redacted/devcollections/ansible_collections/mynamespace.mycollection'
Skipping 'ansible-collection-dir/.git' for collection build
Skipping 'ansible-collection-dir/galaxy.yml' for collection build
ERROR! Unexpected Exception, this is probably a bug: [Errno 2] No such file or directory: b'ansible-collection-dir/lugins/README.md'
the full traceback was:
Traceback (most recent call last):
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/cli/__init__.py", line 623, in cli_executor
exit_code = cli.run()
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/cli/galaxy.py", line 646, in run
return context.CLIARGS['func']()
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/cli/galaxy.py", line 102, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/cli/galaxy.py", line 1300, in execute_install
self._execute_install_collection(
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/cli/galaxy.py", line 1328, in _execute_install_collection
install_collections(
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/galaxy/collection/__init__.py", line 719, in install_collections
install(concrete_coll_pin, output_path, artifacts_manager)
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/galaxy/collection/__init__.py", line 1276, in install
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/galaxy/collection/__init__.py", line 1410, in install_src
collection_output_path = _build_collection_dir(
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/galaxy/collection/__init__.py", line 1205, in _build_collection_dir
existing_is_exec = os.stat(src_file).st_mode & stat.S_IXUSR
FileNotFoundError: [Errno 2] No such file or directory: b'ansible-collection-dir/lugins/README.md'
```
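The one-character truncation ("plugins" becoming "lugins") is what you would expect from prefix stripping that assumes the source directory path has no trailing separator. Below is a minimal, hypothetical sketch of that failure mode on POSIX paths; the function name is illustrative only and this is not the actual ansible-galaxy code:
```python
import os

def rel_inside(src_dir: str, full_path: str) -> str:
    # Assumes src_dir does NOT end with a separator, so it skips "<src_dir>/".
    return full_path[len(src_dir) + 1:]

src = "ansible-collection-dir/plugins/README.md"
print(rel_inside("ansible-collection-dir", src))    # plugins/README.md
print(rel_inside("ansible-collection-dir/", src))   # lugins/README.md (off by one)

# Stripping the trailing separator up front restores the expected result:
print(rel_inside("ansible-collection-dir/".rstrip(os.sep), src))  # plugins/README.md
```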
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77803
|
https://github.com/ansible/ansible/pull/79110
|
676b731e6f7d60ce6fd48c0d1c883fc85f5c6537
|
964e678a7fa3b0745f9302e7a3682851089d09d2
| 2022-05-13T18:29:08Z |
python
| 2023-04-17T19:24:55Z |
changelogs/fragments/a-g-col-install-directory-with-trailing-sep.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,803 |
ansible-galaxy collection install of local collection fails if collection directory ends with '/'
|
### Summary
When trying to install a collection from a local directory, it fails if the directory has a trailing slash (/). If the trailing slash is removed, the installation succeeds.
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/redacted/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible
ansible collection location = /home/redacted/.ansible/collections:/usr/share/ansible/collections
executable location = /home/redacted/.virtualenvs/dev-ansible/bin/ansible
python version = 3.10.4 (main, Apr 8 2022, 17:35:13) [GCC 9.4.0] (/home/redacted/.virtualenvs/dev-ansible/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Ubuntu 20.04
Python 3.10.4
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible-galaxy collection install -vvvvvv -f -p ~/devcollections ansible-collection-dir/
```
### Expected Results
I expected the collection to be installed.
### Actual Results
The relevant part of the output is "b'ansible-collection-dir/lugins/README.md'". As can be seen, the "plugins" directory name is being truncated by one character.
```console
ansible-galaxy [core 2.14.0.dev0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/redacted/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible
ansible collection location = /home/redacted/.ansible/collections:/usr/share/ansible/collections
executable location = /home/redacted/.virtualenvs/dev-ansible/bin/ansible-galaxy
python version = 3.10.4 (main, Apr 8 2022, 17:35:13) [GCC 9.4.0] (/home/redacted/.virtualenvs/dev-ansible/bin/python3)
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Starting galaxy collection install process
[WARNING]: The specified collections path '/home/redacted/devcollections' is not part of the configured Ansible collections paths '/home/redacted/.ansible/collections:/usr/share/ansible/collections'. The installed collection will not be picked up in an Ansible run, unless
within a playbook-adjacent collections directory.
Process install dependency map
Starting collection install process
Installing 'mynamespace.mycollection:1.0.0' to '/home/redacted/devcollections/ansible_collections/mynamespace.mycollection'
Skipping 'ansible-collection-dir/.git' for collection build
Skipping 'ansible-collection-dir/galaxy.yml' for collection build
ERROR! Unexpected Exception, this is probably a bug: [Errno 2] No such file or directory: b'ansible-collection-dir/lugins/README.md'
the full traceback was:
Traceback (most recent call last):
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/cli/__init__.py", line 623, in cli_executor
exit_code = cli.run()
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/cli/galaxy.py", line 646, in run
return context.CLIARGS['func']()
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/cli/galaxy.py", line 102, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/cli/galaxy.py", line 1300, in execute_install
self._execute_install_collection(
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/cli/galaxy.py", line 1328, in _execute_install_collection
install_collections(
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/galaxy/collection/__init__.py", line 719, in install_collections
install(concrete_coll_pin, output_path, artifacts_manager)
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/galaxy/collection/__init__.py", line 1276, in install
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/galaxy/collection/__init__.py", line 1410, in install_src
collection_output_path = _build_collection_dir(
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/galaxy/collection/__init__.py", line 1205, in _build_collection_dir
existing_is_exec = os.stat(src_file).st_mode & stat.S_IXUSR
FileNotFoundError: [Errno 2] No such file or directory: b'ansible-collection-dir/lugins/README.md'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77803
|
https://github.com/ansible/ansible/pull/79110
|
676b731e6f7d60ce6fd48c0d1c883fc85f5c6537
|
964e678a7fa3b0745f9302e7a3682851089d09d2
| 2022-05-13T18:29:08Z |
python
| 2023-04-17T19:24:55Z |
lib/ansible/galaxy/dependency_resolution/dataclasses.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2020-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Dependency structs."""
# FIXME: add caching all over the place
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import typing as t
from collections import namedtuple
from collections.abc import MutableSequence, MutableMapping
from glob import iglob
from urllib.parse import urlparse
from yaml import safe_load
if t.TYPE_CHECKING:
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
Collection = t.TypeVar(
'Collection',
'Candidate', 'Requirement',
'_ComputedReqKindsMixin',
)
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection import HAS_PACKAGING, PkgReq
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.arg_spec import ArgumentSpecValidator
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
_ALLOW_CONCRETE_POINTER_IN_SOURCE = False # NOTE: This is a feature flag
_GALAXY_YAML = b'galaxy.yml'
_MANIFEST_JSON = b'MANIFEST.json'
_SOURCE_METADATA_FILE = b'GALAXY.yml'
display = Display()
def get_validated_source_info(b_source_info_path, namespace, name, version):
source_info_path = to_text(b_source_info_path, errors='surrogate_or_strict')
if not os.path.isfile(b_source_info_path):
return None
try:
with open(b_source_info_path, mode='rb') as fd:
metadata = safe_load(fd)
except OSError as e:
display.warning(
f"Error getting collection source information at '{source_info_path}': {to_text(e, errors='surrogate_or_strict')}"
)
return None
if not isinstance(metadata, MutableMapping):
display.warning(f"Error getting collection source information at '{source_info_path}': expected a YAML dictionary")
return None
schema_errors = _validate_v1_source_info_schema(namespace, name, version, metadata)
if schema_errors:
display.warning(f"Ignoring source metadata file at {source_info_path} due to the following errors:")
display.warning("\n".join(schema_errors))
display.warning("Correct the source metadata file by reinstalling the collection.")
return None
return metadata
def _validate_v1_source_info_schema(namespace, name, version, provided_arguments):
argument_spec_data = dict(
format_version=dict(choices=["1.0.0"]),
download_url=dict(),
version_url=dict(),
server=dict(),
signatures=dict(
type=list,
suboptions=dict(
signature=dict(),
pubkey_fingerprint=dict(),
signing_service=dict(),
pulp_created=dict(),
)
),
name=dict(choices=[name]),
namespace=dict(choices=[namespace]),
version=dict(choices=[version]),
)
if not isinstance(provided_arguments, dict):
raise AnsibleError(
f'Invalid offline source info for {namespace}.{name}:{version}, expected a dict and got {type(provided_arguments)}'
)
validator = ArgumentSpecValidator(argument_spec_data)
validation_result = validator.validate(provided_arguments)
return validation_result.error_messages
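# NOTE: illustrative example only, not part of the upstream module. Assuming the
# collection being checked is ns.coll at version 1.0.0, a GALAXY.yml that passes
# the v1 schema above could look roughly like:
#
#   format_version: "1.0.0"
#   namespace: ns
#   name: coll
#   version: "1.0.0"
#   server: https://galaxy.example.com
#   download_url: https://galaxy.example.com/download/ns-coll-1.0.0.tar.gz
#   signatures: []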
def _is_collection_src_dir(dir_path):
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
return os.path.isfile(os.path.join(b_dir_path, _GALAXY_YAML))
def _is_installed_collection_dir(dir_path):
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
return os.path.isfile(os.path.join(b_dir_path, _MANIFEST_JSON))
def _is_collection_dir(dir_path):
return (
_is_installed_collection_dir(dir_path) or
_is_collection_src_dir(dir_path)
)
def _find_collections_in_subdirs(dir_path):
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
subdir_glob_pattern = os.path.join(
b_dir_path,
# b'*', # namespace is supposed to be top-level per spec
b'*', # collection name
)
for subdir in iglob(subdir_glob_pattern):
if os.path.isfile(os.path.join(subdir, _MANIFEST_JSON)):
yield subdir
elif os.path.isfile(os.path.join(subdir, _GALAXY_YAML)):
yield subdir
def _is_collection_namespace_dir(tested_str):
return any(_find_collections_in_subdirs(tested_str))
def _is_file_path(tested_str):
return os.path.isfile(to_bytes(tested_str, errors='surrogate_or_strict'))
def _is_http_url(tested_str):
return urlparse(tested_str).scheme.lower() in {'http', 'https'}
def _is_git_url(tested_str):
return tested_str.startswith(('git+', 'git@'))
def _is_concrete_artifact_pointer(tested_str):
return any(
predicate(tested_str)
for predicate in (
# NOTE: Maintain the checks to be sorted from light to heavy:
_is_git_url,
_is_http_url,
_is_file_path,
_is_collection_dir,
_is_collection_namespace_dir,
)
)
class _ComputedReqKindsMixin:
def __init__(self, *args, **kwargs):
if not self.may_have_offline_galaxy_info:
self._source_info = None
else:
info_path = self.construct_galaxy_info_path(to_bytes(self.src, errors='surrogate_or_strict'))
self._source_info = get_validated_source_info(
info_path,
self.namespace,
self.name,
self.ver
)
@classmethod
def from_dir_path_as_unknown( # type: ignore[misc]
cls, # type: t.Type[Collection]
dir_path, # type: bytes
art_mgr, # type: ConcreteArtifactsManager
): # type: (...) -> Collection
"""Make collection from an unspecified dir type.
This alternative constructor attempts to grab metadata from the
given path if it's a directory. If there's no metadata, it
falls back to guessing the FQCN based on the directory path and
sets the version to "*".
It raises a ValueError immediately if the input is not an
existing directory path.
"""
if not os.path.isdir(dir_path):
raise ValueError(
"The collection directory '{path!s}' doesn't exist".
format(path=to_native(dir_path)),
)
try:
return cls.from_dir_path(dir_path, art_mgr)
except ValueError:
return cls.from_dir_path_implicit(dir_path)
@classmethod
def from_dir_path(cls, dir_path, art_mgr):
"""Make collection from an directory with metadata."""
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
if not _is_collection_dir(b_dir_path):
display.warning(
u"Collection at '{path!s}' does not have a {manifest_json!s} "
u'file, nor has it {galaxy_yml!s}: cannot detect version.'.
format(
galaxy_yml=to_text(_GALAXY_YAML),
manifest_json=to_text(_MANIFEST_JSON),
path=to_text(dir_path, errors='surrogate_or_strict'),
),
)
raise ValueError(
'`dir_path` argument must be an installed or a source'
' collection directory.',
)
tmp_inst_req = cls(None, None, dir_path, 'dir', None)
req_version = art_mgr.get_direct_collection_version(tmp_inst_req)
try:
req_name = art_mgr.get_direct_collection_fqcn(tmp_inst_req)
except TypeError as err:
# Looks like installed/source dir but isn't: doesn't have valid metadata.
display.warning(
u"Collection at '{path!s}' has a {manifest_json!s} "
u"or {galaxy_yml!s} file but it contains invalid metadata.".
format(
galaxy_yml=to_text(_GALAXY_YAML),
manifest_json=to_text(_MANIFEST_JSON),
path=to_text(dir_path, errors='surrogate_or_strict'),
),
)
raise ValueError(
"Collection at '{path!s}' has invalid metadata".
format(path=to_text(dir_path, errors='surrogate_or_strict'))
) from err
return cls(req_name, req_version, dir_path, 'dir', None)
@classmethod
def from_dir_path_implicit( # type: ignore[misc]
cls, # type: t.Type[Collection]
dir_path, # type: bytes
): # type: (...) -> Collection
"""Construct a collection instance based on an arbitrary dir.
This alternative constructor infers the FQCN based on the parent
and current directory names. It also sets the version to "*"
regardless of whether any of known metadata files are present.
"""
# There is no metadata, but it isn't required for a functional collection. Determine the namespace.name from the path.
u_dir_path = to_text(dir_path, errors='surrogate_or_strict')
path_list = u_dir_path.split(os.path.sep)
req_name = '.'.join(path_list[-2:])
return cls(req_name, '*', dir_path, 'dir', None) # type: ignore[call-arg]
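    # NOTE: illustrative sketch, not part of the upstream module. The implicit
    # FQCN above is taken from the last two path components, so a trailing path
    # separator shifts what those components are:
    #
    #   >>> 'collections/ansible_collections/ns/coll'.split('/')[-2:]
    #   ['ns', 'coll']
    #   >>> 'collections/ansible_collections/ns/coll/'.split('/')[-2:]
    #   ['coll', '']
    #
    # which is why directory inputs generally need any trailing os.path.sep
    # stripped before reaching this constructor.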
@classmethod
def from_string(cls, collection_input, artifacts_manager, supplemental_signatures):
req = {}
if _is_concrete_artifact_pointer(collection_input) or AnsibleCollectionRef.is_valid_collection_name(collection_input):
# Arg is a file path or URL to a collection, or just a collection
req['name'] = collection_input
elif ':' in collection_input:
req['name'], _sep, req['version'] = collection_input.partition(':')
if not req['version']:
del req['version']
else:
if not HAS_PACKAGING:
raise AnsibleError("Failed to import packaging, check that a supported version is installed")
try:
pkg_req = PkgReq(collection_input)
except Exception as e:
# packaging doesn't know what this is, let it fly, better errors happen in from_requirement_dict
req['name'] = collection_input
else:
req['name'] = pkg_req.name
if pkg_req.specifier:
req['version'] = to_text(pkg_req.specifier)
req['signatures'] = supplemental_signatures
return cls.from_requirement_dict(req, artifacts_manager)
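    # NOTE: illustrative summary, not part of the upstream module. Assuming the
    # input is not an existing path/URL and `packaging` is importable, roughly:
    #   "ns.coll"           -> {'name': 'ns.coll'}                       (version left to default)
    #   "ns.coll:1.2.3"     -> {'name': 'ns.coll', 'version': '1.2.3'}
    #   "ns.coll>=1.0,<2.0" -> name and version specifier parsed via packaging
    # Concrete pointers (local dirs/files, URLs, git+ URLs) are passed through
    # unchanged in 'name' and classified later by from_requirement_dict().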
@classmethod
def from_requirement_dict(cls, collection_req, art_mgr, validate_signature_options=True):
req_name = collection_req.get('name', None)
req_version = collection_req.get('version', '*')
req_type = collection_req.get('type')
# TODO: decide how to deprecate the old src API behavior
req_source = collection_req.get('source', None)
req_signature_sources = collection_req.get('signatures', None)
if req_signature_sources is not None:
if validate_signature_options and art_mgr.keyring is None:
raise AnsibleError(
f"Signatures were provided to verify {req_name} but no keyring was configured."
)
if not isinstance(req_signature_sources, MutableSequence):
req_signature_sources = [req_signature_sources]
req_signature_sources = frozenset(req_signature_sources)
if req_type is None:
if ( # FIXME: decide on the future behavior:
_ALLOW_CONCRETE_POINTER_IN_SOURCE
and req_source is not None
and _is_concrete_artifact_pointer(req_source)
):
src_path = req_source
elif (
req_name is not None
and AnsibleCollectionRef.is_valid_collection_name(req_name)
):
req_type = 'galaxy'
elif (
req_name is not None
and _is_concrete_artifact_pointer(req_name)
):
src_path, req_name = req_name, None
else:
dir_tip_tmpl = ( # NOTE: leading LFs are for concat
'\n\nTip: Make sure you are pointing to the right '
                    'subdirectory — `{src!s}` looks like a directory '
'but it is neither a collection, nor a namespace '
'dir.'
)
if req_source is not None and os.path.isdir(req_source):
tip = dir_tip_tmpl.format(src=req_source)
elif req_name is not None and os.path.isdir(req_name):
tip = dir_tip_tmpl.format(src=req_name)
elif req_name:
tip = '\n\nCould not find {0}.'.format(req_name)
else:
tip = ''
raise AnsibleError( # NOTE: I'd prefer a ValueError instead
'Neither the collection requirement entry key '
"'name', nor 'source' point to a concrete "
"resolvable collection artifact. Also 'name' is "
'not an FQCN. A valid collection name must be in '
'the format <namespace>.<collection>. Please make '
'sure that the namespace and the collection name '
'contain characters from [a-zA-Z0-9_] only.'
'{extra_tip!s}'.format(extra_tip=tip),
)
if req_type is None:
if _is_git_url(src_path):
req_type = 'git'
req_source = src_path
elif _is_http_url(src_path):
req_type = 'url'
req_source = src_path
elif _is_file_path(src_path):
req_type = 'file'
req_source = src_path
elif _is_collection_dir(src_path):
if _is_installed_collection_dir(src_path) and _is_collection_src_dir(src_path):
# Note that ``download`` requires a dir with a ``galaxy.yml`` and fails if it
# doesn't exist, but if a ``MANIFEST.json`` also exists, it would be used
# instead of the ``galaxy.yml``.
raise AnsibleError(
u"Collection requirement at '{path!s}' has both a {manifest_json!s} "
u"file and a {galaxy_yml!s}.\nThe requirement must either be an installed "
u"collection directory or a source collection directory, not both.".
format(
path=to_text(src_path, errors='surrogate_or_strict'),
manifest_json=to_text(_MANIFEST_JSON),
galaxy_yml=to_text(_GALAXY_YAML),
)
)
req_type = 'dir'
req_source = src_path
elif _is_collection_namespace_dir(src_path):
req_name = None # No name for a virtual req or "namespace."?
req_type = 'subdirs'
req_source = src_path
else:
raise AnsibleError( # NOTE: this is never supposed to be hit
'Failed to automatically detect the collection '
'requirement type.',
)
if req_type not in {'file', 'galaxy', 'git', 'url', 'dir', 'subdirs'}:
raise AnsibleError(
"The collection requirement entry key 'type' must be "
'one of file, galaxy, git, dir, subdirs, or url.'
)
if req_name is None and req_type == 'galaxy':
raise AnsibleError(
'Collections requirement entry should contain '
"the key 'name' if it's requested from a Galaxy-like "
'index server.',
)
if req_type != 'galaxy' and req_source is None:
req_source, req_name = req_name, None
if (
req_type == 'galaxy' and
isinstance(req_source, GalaxyAPI) and
not _is_http_url(req_source.api_server)
):
raise AnsibleError(
"Collections requirement 'source' entry should contain "
'a valid Galaxy API URL but it does not: {not_url!s} '
'is not an HTTP URL.'.
format(not_url=req_source.api_server),
)
tmp_inst_req = cls(req_name, req_version, req_source, req_type, req_signature_sources)
if req_type not in {'galaxy', 'subdirs'} and req_name is None:
req_name = art_mgr.get_direct_collection_fqcn(tmp_inst_req) # TODO: fix the cache key in artifacts manager?
if req_type not in {'galaxy', 'subdirs'} and req_version == '*':
req_version = art_mgr.get_direct_collection_version(tmp_inst_req)
return cls(
req_name, req_version,
req_source, req_type,
req_signature_sources,
)
def __repr__(self):
return (
'<{self!s} of type {coll_type!r} from {src!s}>'.
format(self=self, coll_type=self.type, src=self.src or 'Galaxy')
)
def __str__(self):
return to_native(self.__unicode__())
def __unicode__(self):
if self.fqcn is None:
return (
u'"virtual collection Git repo"' if self.is_scm
else u'"virtual collection namespace"'
)
return (
u'{fqcn!s}:{ver!s}'.
format(fqcn=to_text(self.fqcn), ver=to_text(self.ver))
)
@property
def may_have_offline_galaxy_info(self):
if self.fqcn is None:
# Virtual collection
return False
elif not self.is_dir or self.src is None or not _is_collection_dir(self.src):
# Not a dir or isn't on-disk
return False
return True
def construct_galaxy_info_path(self, b_collection_path):
if not self.may_have_offline_galaxy_info and not self.type == 'galaxy':
raise TypeError('Only installed collections from a Galaxy server have offline Galaxy info')
# Store Galaxy metadata adjacent to the namespace of the collection
# Chop off the last two parts of the path (/ns/coll) to get the dir containing the ns
b_src = to_bytes(b_collection_path, errors='surrogate_or_strict')
b_path_parts = b_src.split(to_bytes(os.path.sep))[0:-2]
b_metadata_dir = to_bytes(os.path.sep).join(b_path_parts)
# ns.coll-1.0.0.info
b_dir_name = to_bytes(f"{self.namespace}.{self.name}-{self.ver}.info", errors="surrogate_or_strict")
# collections/ansible_collections/ns.coll-1.0.0.info/GALAXY.yml
return os.path.join(b_metadata_dir, b_dir_name, _SOURCE_METADATA_FILE)
def _get_separate_ns_n_name(self): # FIXME: use LRU cache
return self.fqcn.split('.')
@property
def namespace(self):
if self.is_virtual:
raise TypeError('Virtual collections do not have a namespace')
return self._get_separate_ns_n_name()[0]
@property
def name(self):
if self.is_virtual:
raise TypeError('Virtual collections do not have a name')
return self._get_separate_ns_n_name()[-1]
@property
def canonical_package_id(self):
if not self.is_virtual:
return to_native(self.fqcn)
return (
'<virtual namespace from {src!s} of type {src_type!s}>'.
format(src=to_native(self.src), src_type=to_native(self.type))
)
@property
def is_virtual(self):
return self.is_scm or self.is_subdirs
@property
def is_file(self):
return self.type == 'file'
@property
def is_dir(self):
return self.type == 'dir'
@property
def namespace_collection_paths(self):
return [
to_native(path)
for path in _find_collections_in_subdirs(self.src)
]
@property
def is_subdirs(self):
return self.type == 'subdirs'
@property
def is_url(self):
return self.type == 'url'
@property
def is_scm(self):
return self.type == 'git'
@property
def is_concrete_artifact(self):
return self.type in {'git', 'url', 'file', 'dir', 'subdirs'}
@property
def is_online_index_pointer(self):
return not self.is_concrete_artifact
@property
def source_info(self):
return self._source_info
RequirementNamedTuple = namedtuple('Requirement', ('fqcn', 'ver', 'src', 'type', 'signature_sources')) # type: ignore[name-match]
CandidateNamedTuple = namedtuple('Candidate', ('fqcn', 'ver', 'src', 'type', 'signatures')) # type: ignore[name-match]
class Requirement(
_ComputedReqKindsMixin,
RequirementNamedTuple,
):
"""An abstract requirement request."""
def __new__(cls, *args, **kwargs):
self = RequirementNamedTuple.__new__(cls, *args, **kwargs)
return self
def __init__(self, *args, **kwargs):
super(Requirement, self).__init__()
class Candidate(
_ComputedReqKindsMixin,
CandidateNamedTuple,
):
"""A concrete collection candidate with its version resolved."""
def __new__(cls, *args, **kwargs):
self = CandidateNamedTuple.__new__(cls, *args, **kwargs)
return self
def __init__(self, *args, **kwargs):
super(Candidate, self).__init__()
def with_signatures_repopulated(self): # type: (Candidate) -> Candidate
"""Populate a new Candidate instance with Galaxy signatures.
:raises AnsibleAssertionError: If the supplied candidate is not sourced from a Galaxy-like index.
"""
if self.type != 'galaxy':
raise AnsibleAssertionError(f"Invalid collection type for {self!r}: unable to get signatures from a galaxy server.")
signatures = self.src.get_collection_signatures(self.namespace, self.name, self.ver)
return self.__class__(self.fqcn, self.ver, self.src, self.type, frozenset([*self.signatures, *signatures]))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,803 |
ansible-galaxy collection install of local collection fails if collection directory ends with '/'
|
### Summary
When trying to install a collection from a local directory, it fails if the directory has a trailing slash (/). If the trailing slash is removed, the installation succeeds.
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/redacted/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible
ansible collection location = /home/redacted/.ansible/collections:/usr/share/ansible/collections
executable location = /home/redacted/.virtualenvs/dev-ansible/bin/ansible
python version = 3.10.4 (main, Apr 8 2022, 17:35:13) [GCC 9.4.0] (/home/redacted/.virtualenvs/dev-ansible/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Ubuntu 20.04
Python 3.10.4
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible-galaxy collection install -vvvvvv -f -p ~/devcollections ansible-collection-dir/
```
### Expected Results
I expected the collection to be installed.
### Actual Results
The relevant part of the output is "b'ansible-collection-dir/lugins/README.md'". As can be seen, the "plugins" directory name is being truncated by one character.
```console
ansible-galaxy [core 2.14.0.dev0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/redacted/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible
ansible collection location = /home/redacted/.ansible/collections:/usr/share/ansible/collections
executable location = /home/redacted/.virtualenvs/dev-ansible/bin/ansible-galaxy
python version = 3.10.4 (main, Apr 8 2022, 17:35:13) [GCC 9.4.0] (/home/redacted/.virtualenvs/dev-ansible/bin/python3)
jinja version = 3.1.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Starting galaxy collection install process
[WARNING]: The specified collections path '/home/redacted/devcollections' is not part of the configured Ansible collections paths '/home/redacted/.ansible/collections:/usr/share/ansible/collections'. The installed collection will not be picked up in an Ansible run, unless
within a playbook-adjacent collections directory.
Process install dependency map
Starting collection install process
Installing 'mynamespace.mycollection:1.0.0' to '/home/redacted/devcollections/ansible_collections/mynamespace.mycollection'
Skipping 'ansible-collection-dir/.git' for collection build
Skipping 'ansible-collection-dir/galaxy.yml' for collection build
ERROR! Unexpected Exception, this is probably a bug: [Errno 2] No such file or directory: b'ansible-collection-dir/lugins/README.md'
the full traceback was:
Traceback (most recent call last):
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/cli/__init__.py", line 623, in cli_executor
exit_code = cli.run()
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/cli/galaxy.py", line 646, in run
return context.CLIARGS['func']()
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/cli/galaxy.py", line 102, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/cli/galaxy.py", line 1300, in execute_install
self._execute_install_collection(
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/cli/galaxy.py", line 1328, in _execute_install_collection
install_collections(
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/galaxy/collection/__init__.py", line 719, in install_collections
install(concrete_coll_pin, output_path, artifacts_manager)
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/galaxy/collection/__init__.py", line 1276, in install
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/galaxy/collection/__init__.py", line 1410, in install_src
collection_output_path = _build_collection_dir(
File "/home/redacted/.virtualenvs/dev-ansible/lib/python3.10/site-packages/ansible/galaxy/collection/__init__.py", line 1205, in _build_collection_dir
existing_is_exec = os.stat(src_file).st_mode & stat.S_IXUSR
FileNotFoundError: [Errno 2] No such file or directory: b'ansible-collection-dir/lugins/README.md'
```
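The off-by-one truncation suggests the relative path of each file is derived by slicing off `len(source_dir) + 1` characters; when the source directory already ends with `/`, that offset is one byte too large and the first character of every relative path is lost. A minimal sketch of that failure mode (the helper below is illustrative only, not the actual ansible-galaxy code):

```python
def relative_paths(src_dir, files):
    # Assumed slicing scheme: drop the source prefix plus one path separator.
    # With a trailing '/', the offset is one character too large, turning
    # 'plugins/README.md' into 'lugins/README.md'.
    offset = len(src_dir) + 1
    return [f[offset:] for f in files]

files = ['ansible-collection-dir/plugins/README.md']
print(relative_paths('ansible-collection-dir', files))   # ['plugins/README.md']
print(relative_paths('ansible-collection-dir/', files))  # ['lugins/README.md']
```

Normalizing the directory argument (for example with `os.path.normpath()`, or simply stripping the trailing separator) before computing the offset would avoid the truncation.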
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77803
|
https://github.com/ansible/ansible/pull/79110
|
676b731e6f7d60ce6fd48c0d1c883fc85f5c6537
|
964e678a7fa3b0745f9302e7a3682851089d09d2
| 2022-05-13T18:29:08Z |
python
| 2023-04-17T19:24:55Z |
test/integration/targets/ansible-galaxy-collection/tasks/install.yml
|
---
- name: create test collection install directory - {{ test_id }}
file:
path: '{{ galaxy_dir }}/ansible_collections'
state: directory
- name: install simple collection from first accessible server
command: ansible-galaxy collection install namespace1.name1 -vvvv
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: from_first_good_server
- name: get installed files of install simple collection from first good server
find:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
file_type: file
register: install_normal_files
- name: get the manifest of install simple collection from first good server
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_normal_manifest
- name: assert install simple collection from first good server
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in from_first_good_server.stdout'
- install_normal_files.files | length == 3
- install_normal_files.files[0].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[1].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[2].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- (install_normal_manifest.content | b64decode | from_json).collection_info.version == '1.0.9'
- 'from_first_good_server.stdout|regex_findall("has not signed namespace1\.name1")|length == 1'
- name: Remove the collection
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1'
state: absent
- name: install simple collection with implicit path - {{ test_id }}
command: ansible-galaxy collection install namespace1.name1 -s '{{ test_name }}' {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: install_normal
- name: get installed files of install simple collection with implicit path - {{ test_id }}
find:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
file_type: file
register: install_normal_files
- name: get the manifest of install simple collection with implicit path - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_normal_manifest
- name: assert install simple collection with implicit path - {{ test_id }}
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in install_normal.stdout'
- install_normal_files.files | length == 3
- install_normal_files.files[0].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[1].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[2].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- (install_normal_manifest.content | b64decode | from_json).collection_info.version == '1.0.9'
- name: install existing without --force - {{ test_id }}
command: ansible-galaxy collection install namespace1.name1 -s '{{ test_name }}' {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: install_existing_no_force
- name: assert install existing without --force - {{ test_id }}
assert:
that:
- '"Nothing to do. All requested collections are already installed" in install_existing_no_force.stdout'
- name: install existing with --force - {{ test_id }}
command: ansible-galaxy collection install namespace1.name1 -s '{{ test_name }}' --force {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: install_existing_force
- name: assert install existing with --force - {{ test_id }}
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in install_existing_force.stdout'
- name: remove test installed collection - {{ test_id }}
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1'
state: absent
- name: install pre-release as explicit version to custom dir - {{ test_id }}
command: ansible-galaxy collection install 'namespace1.name1:1.1.0-beta.1' -s '{{ test_name }}' -p '{{ galaxy_dir }}/ansible_collections' {{ galaxy_verbosity }}
register: install_prerelease
- name: get result of install pre-release as explicit version to custom dir - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_prerelease_actual
- name: assert install pre-release as explicit version to custom dir - {{ test_id }}
assert:
that:
- '"Installing ''namespace1.name1:1.1.0-beta.1'' to" in install_prerelease.stdout'
- (install_prerelease_actual.content | b64decode | from_json).collection_info.version == '1.1.0-beta.1'
- name: Remove beta
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
state: absent
- name: install pre-release version with --pre to custom dir - {{ test_id }}
command: ansible-galaxy collection install --pre 'namespace1.name1' -s '{{ test_name }}' -p '{{ galaxy_dir }}/ansible_collections' {{ galaxy_verbosity }}
register: install_prerelease
- name: get result of install pre-release version with --pre to custom dir - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_prerelease_actual
- name: assert install pre-release version with --pre to custom dir - {{ test_id }}
assert:
that:
- '"Installing ''namespace1.name1:1.1.0-beta.1'' to" in install_prerelease.stdout'
- (install_prerelease_actual.content | b64decode | from_json).collection_info.version == '1.1.0-beta.1'
- name: install multiple collections with dependencies - {{ test_id }}
command: ansible-galaxy collection install parent_dep.parent_collection:1.0.0 namespace2.name -s {{ test_name }} {{ galaxy_verbosity }}
args:
chdir: '{{ galaxy_dir }}/ansible_collections'
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_CONFIG: '{{ galaxy_dir }}/ansible.cfg'
register: install_multiple_with_dep
- name: get result of install multiple collections with dependencies - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection.namespace }}/{{ collection.name }}/MANIFEST.json'
register: install_multiple_with_dep_actual
loop_control:
loop_var: collection
loop:
- namespace: namespace2
name: name
- namespace: parent_dep
name: parent_collection
- namespace: child_dep
name: child_collection
- namespace: child_dep
name: child_dep2
- name: assert install multiple collections with dependencies - {{ test_id }}
assert:
that:
- (install_multiple_with_dep_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_multiple_with_dep_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_multiple_with_dep_actual.results[2].content | b64decode | from_json).collection_info.version == '0.9.9'
- (install_multiple_with_dep_actual.results[3].content | b64decode | from_json).collection_info.version == '1.2.2'
- name: expect failure with dep resolution failure - {{ test_id }}
command: ansible-galaxy collection install fail_namespace.fail_collection -s {{ test_name }} {{ galaxy_verbosity }}
register: fail_dep_mismatch
failed_when:
- '"Could not satisfy the following requirements" not in fail_dep_mismatch.stderr'
- '" fail_dep2.name:<0.0.5 (dependency of fail_namespace.fail_collection:2.1.2)" not in fail_dep_mismatch.stderr'
- name: Find artifact url for namespace3.name
uri:
url: '{{ test_server }}{{ vX }}collections/namespace3/name/versions/1.0.0/'
user: '{{ pulp_user }}'
password: '{{ pulp_password }}'
force_basic_auth: true
register: artifact_url_response
- name: download a collection for an offline install - {{ test_id }}
get_url:
url: '{{ artifact_url_response.json.download_url }}'
dest: '{{ galaxy_dir }}/namespace3.tar.gz'
- name: install a collection from a tarball - {{ test_id }}
command: ansible-galaxy collection install '{{ galaxy_dir }}/namespace3.tar.gz' {{ galaxy_verbosity }}
register: install_tarball
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collection from a tarball - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace3/name/MANIFEST.json'
register: install_tarball_actual
- name: assert install a collection from a tarball - {{ test_id }}
assert:
that:
- '"Installing ''namespace3.name:1.0.0'' to" in install_tarball.stdout'
- (install_tarball_actual.content | b64decode | from_json).collection_info.version == '1.0.0'
- name: write a requirements file using the artifact and a conflicting version
copy:
content: |
collections:
- name: {{ galaxy_dir }}/namespace3.tar.gz
version: 1.2.0
dest: '{{ galaxy_dir }}/test_req.yml'
- name: install the requirements file with mismatched versions
command: ansible-galaxy collection install -r '{{ galaxy_dir }}/test_req.yml' {{ galaxy_verbosity }}
ignore_errors: True
register: result
environment:
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
- name: remove the requirements file
file:
path: '{{ galaxy_dir }}/test_req.yml'
state: absent
- assert:
that: error == expected_error
vars:
error: "{{ result.stderr | regex_replace('\\n', ' ') }}"
expected_error: >-
ERROR! Failed to resolve the requested dependencies map.
Got the candidate namespace3.name:1.0.0 (direct request)
which didn't satisfy all of the following requirements:
* namespace3.name:1.2.0
- name: test error for mismatched dependency versions
vars:
error: "{{ result.stderr | regex_replace('\\n', ' ') }}"
expected_error: >-
ERROR! Failed to resolve the requested dependencies map.
Got the candidate namespace3.name:1.0.0 (dependency of tmp_parent.name:1.0.0)
which didn't satisfy all of the following requirements:
* namespace3.name:1.2.0
environment:
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
block:
- name: init a new parent collection
command: ansible-galaxy collection init tmp_parent.name --init-path '{{ galaxy_dir }}/scratch'
- name: replace the dependencies
lineinfile:
path: "{{ galaxy_dir }}/scratch/tmp_parent/name/galaxy.yml"
regexp: "^dependencies:*"
line: "dependencies: { '{{ galaxy_dir }}/namespace3.tar.gz': '1.2.0' }"
- name: build the new artifact
command: ansible-galaxy collection build {{ galaxy_dir }}/scratch/tmp_parent/name
args:
chdir: "{{ galaxy_dir }}"
- name: install the artifact to verify the error is handled
command: ansible-galaxy collection install '{{ galaxy_dir }}/tmp_parent-name-1.0.0.tar.gz'
ignore_errors: yes
register: result
- debug: msg="Actual - {{ error }}"
- debug: msg="Expected - {{ expected_error }}"
- assert:
that: error == expected_error
always:
- name: clean up collection skeleton and artifact
file:
state: absent
path: "{{ item }}"
loop:
- "{{ galaxy_dir }}/scratch/tmp_parent/"
- "{{ galaxy_dir }}/tmp_parent-name-1.0.0.tar.gz"
- name: setup bad tarball - {{ test_id }}
script: build_bad_tar.py {{ galaxy_dir | quote }}
- name: fail to install a collection from a bad tarball - {{ test_id }}
command: ansible-galaxy collection install '{{ galaxy_dir }}/suspicious-test-1.0.0.tar.gz' {{ galaxy_verbosity }}
register: fail_bad_tar
failed_when: fail_bad_tar.rc != 1 and "Cannot extract tar entry '../../outside.sh' as it will be placed outside the collection directory" not in fail_bad_tar.stderr
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of failed collection install - {{ test_id }}
stat:
    path: '{{ galaxy_dir }}/ansible_collections/suspicious'
register: fail_bad_tar_actual
- name: assert result of failed collection install - {{ test_id }}
assert:
that:
- not fail_bad_tar_actual.stat.exists
- name: Find artifact url for namespace4.name
uri:
url: '{{ test_server }}{{ vX }}collections/namespace4/name/versions/1.0.0/'
user: '{{ pulp_user }}'
password: '{{ pulp_password }}'
force_basic_auth: true
register: artifact_url_response
- name: install a collection from a URI - {{ test_id }}
command: ansible-galaxy collection install {{ artifact_url_response.json.download_url}} {{ galaxy_verbosity }}
register: install_uri
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collection from a URI - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace4/name/MANIFEST.json'
register: install_uri_actual
- name: assert install a collection from a URI - {{ test_id }}
assert:
that:
- '"Installing ''namespace4.name:1.0.0'' to" in install_uri.stdout'
- (install_uri_actual.content | b64decode | from_json).collection_info.version == '1.0.0'
- name: fail to install a collection with an undefined URL - {{ test_id }}
command: ansible-galaxy collection install namespace5.name {{ galaxy_verbosity }}
register: fail_undefined_server
failed_when: '"No setting was provided for required configuration plugin_type: galaxy_server plugin: undefined" not in fail_undefined_server.stderr'
environment:
ANSIBLE_GALAXY_SERVER_LIST: undefined
- when: not requires_auth
block:
- name: install a collection with an empty server list - {{ test_id }}
command: ansible-galaxy collection install namespace5.name -s '{{ test_server }}' {{ galaxy_verbosity }}
register: install_empty_server_list
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_GALAXY_SERVER_LIST: ''
- name: get result of a collection with an empty server list - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace5/name/MANIFEST.json'
register: install_empty_server_list_actual
- name: assert install a collection with an empty server list - {{ test_id }}
assert:
that:
- '"Installing ''namespace5.name:1.0.0'' to" in install_empty_server_list.stdout'
- (install_empty_server_list_actual.content | b64decode | from_json).collection_info.version == '1.0.0'
- name: create test requirements file with both roles and collections - {{ test_id }}
copy:
content: |
collections:
- namespace6.name
- name: namespace7.name
roles:
- skip.me
dest: '{{ galaxy_dir }}/ansible_collections/requirements-with-role.yml'
- name: install roles from requirements file with collection-only keyring option
command: ansible-galaxy role install -r {{ req_file }} -s {{ test_name }} --keyring {{ keyring }}
vars:
req_file: '{{ galaxy_dir }}/ansible_collections/requirements-with-role.yml'
keyring: "{{ gpg_homedir }}/pubring.kbx"
ignore_errors: yes
register: invalid_opt
- assert:
that:
- invalid_opt is failed
- "'unrecognized arguments: --keyring' in invalid_opt.stderr"
# Need to run with -vvv to validate the roles will be skipped msg
- name: install collections only with requirements-with-role.yml - {{ test_id }}
command: ansible-galaxy collection install -r '{{ galaxy_dir }}/ansible_collections/requirements-with-role.yml' -s '{{ test_name }}' -vvv
register: install_req_collection
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collections only with requirements-with-roles.yml - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_collection_actual
loop_control:
loop_var: collection
loop:
- namespace6
- namespace7
- name: assert install collections only with requirements-with-role.yml - {{ test_id }}
assert:
that:
- '"contains roles which will be ignored" in install_req_collection.stdout'
- '"Installing ''namespace6.name:1.0.0'' to" in install_req_collection.stdout'
- '"Installing ''namespace7.name:1.0.0'' to" in install_req_collection.stdout'
- (install_req_collection_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_collection_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- name: create test requirements file with just collections - {{ test_id }}
copy:
content: |
collections:
- namespace8.name
- name: namespace9.name
dest: '{{ galaxy_dir }}/ansible_collections/requirements.yaml'
- name: install collections with ansible-galaxy install - {{ test_id }}
command: ansible-galaxy install -r '{{ galaxy_dir }}/ansible_collections/requirements.yaml' -s '{{ test_name }}'
register: install_req
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collections with ansible-galaxy install - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_actual
loop_control:
loop_var: collection
loop:
- namespace8
- namespace9
- name: assert install collections with ansible-galaxy install - {{ test_id }}
assert:
that:
- '"Installing ''namespace8.name:1.0.0'' to" in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" in install_req.stdout'
- (install_req_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- name: Test deviations on -r and --role-file without collection or role sub command
command: '{{ cmd }}'
loop:
- ansible-galaxy install -vr '{{ galaxy_dir }}/ansible_collections/requirements.yaml' -s '{{ test_name }}' -vv
- ansible-galaxy install --role-file '{{ galaxy_dir }}/ansible_collections/requirements.yaml' -s '{{ test_name }}' -vvv
- ansible-galaxy install --role-file='{{ galaxy_dir }}/ansible_collections/requirements.yaml' -s '{{ test_name }}' -vvv
loop_control:
loop_var: cmd
- name: uninstall collections for next requirements file test
file:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name'
state: absent
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
- name: rewrite requirements file with collections and signatures
copy:
content: |
collections:
- name: namespace7.name
version: "1.0.0"
signatures:
- "{{ not_mine }}"
- "{{ also_not_mine }}"
- "file://{{ gpg_homedir }}/namespace7-name-1.0.0-MANIFEST.json.asc"
- namespace8.name
- name: namespace9.name
signatures:
- "file://{{ gpg_homedir }}/namespace9-name-1.0.0-MANIFEST.json.asc"
dest: '{{ galaxy_dir }}/ansible_collections/requirements.yaml'
vars:
not_mine: "file://{{ gpg_homedir }}/namespace1-name1-1.0.0-MANIFEST.json.asc"
also_not_mine: "file://{{ gpg_homedir }}/namespace1-name1-1.0.9-MANIFEST.json.asc"
- name: installing only roles does not fail if keyring for collections is not provided
command: ansible-galaxy role install -r {{ galaxy_dir }}/ansible_collections/requirements.yaml
register: roles_only
- assert:
that:
- roles_only is success
- name: installing only roles implicitly does not fail if keyring for collections is not provided
# if -p/--roles-path are specified, only roles are installed
  command: ansible-galaxy install -r {{ galaxy_dir }}/ansible_collections/requirements.yaml -p {{ galaxy_dir }}
register: roles_only
- assert:
that:
- roles_only is success
- name: installing roles and collections requires keyring if collections have signatures
  command: ansible-galaxy install -r {{ galaxy_dir }}/ansible_collections/requirements.yaml
ignore_errors: yes
register: collections_and_roles
- assert:
that:
- collections_and_roles is failed
- "'no keyring was configured' in collections_and_roles.stderr"
- name: install collection with mutually exclusive options
command: ansible-galaxy collection install -r {{ req_file }} -s {{ test_name }} {{ cli_signature }}
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
# --signature is an ansible-galaxy collection install subcommand, but mutually exclusive with -r
cli_signature: "--signature file://{{ gpg_homedir }}/namespace7-name-1.0.0-MANIFEST.json.asc"
ignore_errors: yes
register: mutually_exclusive_opts
- assert:
that:
- mutually_exclusive_opts is failed
- expected_error in actual_error
vars:
expected_error: >-
The --signatures option and --requirements-file are mutually exclusive.
Use the --signatures with positional collection_name args or provide a
'signatures' key for requirements in the --requirements-file.
actual_error: "{{ mutually_exclusive_opts.stderr }}"
- name: install a collection with user-supplied signatures for verification but no keyring
command: ansible-galaxy collection install namespace1.name1:1.0.0 {{ cli_signature }}
vars:
cli_signature: "--signature file://{{ gpg_homedir }}/namespace1-name1-1.0.0-MANIFEST.json.asc"
ignore_errors: yes
register: required_together
- assert:
that:
- required_together is failed
- '"ERROR! Signatures were provided to verify namespace1.name1 but no keyring was configured." in required_together.stderr'
- name: install collections with ansible-galaxy install -r with invalid signatures - {{ test_id }}
# Note that --keyring is a valid option for 'ansible-galaxy install -r ...', not just 'ansible-galaxy collection ...'
command: ansible-galaxy install -r {{ req_file }} -s {{ test_name }} --keyring {{ keyring }} {{ galaxy_verbosity }}
register: install_req
ignore_errors: yes
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
keyring: "{{ gpg_homedir }}/pubring.kbx"
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: all
- name: assert invalid signature is fatal with ansible-galaxy install - {{ test_id }}
assert:
that:
- install_req is failed
- '"Installing ''namespace7.name:1.0.0'' to" in install_req.stdout'
- '"Not installing namespace7.name because GnuPG signature verification failed" in install_req.stderr'
# The other collections shouldn't be installed because they're listed
# after the failing collection and --ignore-errors was not provided
- '"Installing ''namespace8.name:1.0.0'' to" not in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" not in install_req.stdout'
# This command is hardcoded with -vvvv purposefully to evaluate extra verbosity messages
- name: install collections with ansible-galaxy install and --ignore-errors - {{ test_id }}
command: ansible-galaxy install -r {{ req_file }} {{ cli_opts }} -vvvv
register: install_req
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
cli_opts: "-s {{ test_name }} --keyring {{ keyring }} --ignore-errors"
keyring: "{{ gpg_homedir }}/pubring.kbx"
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: all
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
- name: get result of install collections with ansible-galaxy install - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_actual
loop_control:
loop_var: collection
loop:
- namespace8
- namespace9
# SIVEL
- name: assert invalid signature is not fatal with ansible-galaxy install --ignore-errors - {{ test_id }}
assert:
that:
- install_req is success
- '"Installing ''namespace7.name:1.0.0'' to" in install_req.stdout'
- '"Signature verification failed for ''namespace7.name'' (return code 1)" in install_req.stdout'
- '"Not installing namespace7.name because GnuPG signature verification failed." in install_stderr'
- '"Failed to install collection namespace7.name:1.0.0 but skipping due to --ignore-errors being set." in install_stderr'
- '"Installing ''namespace8.name:1.0.0'' to" in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" in install_req.stdout'
- (install_req_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
vars:
install_stderr: "{{ install_req.stderr | regex_replace('\\n', ' ') }}"
- name: clean up collections from last test
file:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name'
state: absent
loop_control:
loop_var: collection
loop:
- namespace8
- namespace9
- name: install collections with only one valid signature using ansible-galaxy install - {{ test_id }}
command: ansible-galaxy install -r {{ req_file }} {{ cli_opts }} {{ galaxy_verbosity }}
register: install_req
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
cli_opts: "-s {{ test_name }} --keyring {{ keyring }}"
keyring: "{{ gpg_homedir }}/pubring.kbx"
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
- name: get result of install collections with ansible-galaxy install - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_actual
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
- name: assert just one valid signature is not fatal with ansible-galaxy install - {{ test_id }}
assert:
that:
- install_req is success
- '"Installing ''namespace7.name:1.0.0'' to" in install_req.stdout'
- '"Signature verification failed for ''namespace7.name'' (return code 1)" not in install_req.stdout'
- '"Not installing namespace7.name because GnuPG signature verification failed." not in install_stderr'
- '"Installing ''namespace8.name:1.0.0'' to" in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" in install_req.stdout'
- (install_req_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[2].content | b64decode | from_json).collection_info.version == '1.0.0'
vars:
install_stderr: "{{ install_req.stderr | regex_replace('\\n', ' ') }}"
- name: clean up collections from last test
file:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name'
state: absent
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
- name: install collections with only one valid signature by ignoring the other errors
command: ansible-galaxy install -r {{ req_file }} {{ cli_opts }} {{ galaxy_verbosity }} --ignore-signature-status-code FAILURE
register: install_req
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
cli_opts: "-s {{ test_name }} --keyring {{ keyring }}"
keyring: "{{ gpg_homedir }}/pubring.kbx"
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: all
ANSIBLE_GALAXY_IGNORE_SIGNATURE_STATUS_CODES: BADSIG # cli option is appended and both status codes are ignored
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
- name: get result of install collections with ansible-galaxy install - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_actual
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
- name: assert invalid signature is not fatal with ansible-galaxy install - {{ test_id }}
assert:
that:
- install_req is success
- '"Installing ''namespace7.name:1.0.0'' to" in install_req.stdout'
- '"Signature verification failed for ''namespace7.name'' (return code 1)" not in install_req.stdout'
- '"Not installing namespace7.name because GnuPG signature verification failed." not in install_stderr'
- '"Installing ''namespace8.name:1.0.0'' to" in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" in install_req.stdout'
- (install_req_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[2].content | b64decode | from_json).collection_info.version == '1.0.0'
vars:
install_stderr: "{{ install_req.stderr | regex_replace('\\n', ' ') }}"
- name: clean up collections from last test
file:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name'
state: absent
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
# Uncomment once pulp container is at pulp>=0.5.0
#- name: install cache.cache at the current latest version
# command: ansible-galaxy collection install cache.cache -s '{{ test_name }}' -vvv
# environment:
# ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
#
#- set_fact:
# cache_version_build: '{{ (cache_version_build | int) + 1 }}'
#
#- name: publish update for cache.cache test
# setup_collections:
# server: galaxy_ng
# collections:
# - namespace: cache
# name: cache
# version: 1.0.{{ cache_version_build }}
#
#- name: make sure the cache version list is ignored on a collection version change - {{ test_id }}
# command: ansible-galaxy collection install cache.cache -s '{{ test_name }}' --force -vvv
# register: install_cached_update
# environment:
# ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
#
#- name: get result of cache version list is ignored on a collection version change - {{ test_id }}
# slurp:
# path: '{{ galaxy_dir }}/ansible_collections/cache/cache/MANIFEST.json'
# register: install_cached_update_actual
#
#- name: assert cache version list is ignored on a collection version change - {{ test_id }}
# assert:
# that:
# - '"Installing ''cache.cache:1.0.{{ cache_version_build }}'' to" in install_cached_update.stdout'
# - (install_cached_update_actual.content | b64decode | from_json).collection_info.version == '1.0.' ~ cache_version_build
- name: install collection with symlink - {{ test_id }}
command: ansible-galaxy collection install symlink.symlink -s '{{ test_name }}' {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
register: install_symlink
- find:
paths: '{{ galaxy_dir }}/ansible_collections/symlink/symlink'
recurse: yes
file_type: any
- name: get result of install collection with symlink - {{ test_id }}
stat:
path: '{{ galaxy_dir }}/ansible_collections/symlink/symlink/{{ path }}'
register: install_symlink_actual
loop_control:
loop_var: path
loop:
    - REÅDMÊ.md-link
    - docs/REÅDMÊ.md
    - plugins/REÅDMÊ.md
    - REÅDMÊ.md-outside-link
    - docs-link
    - docs-link/REÅDMÊ.md
- name: assert install collection with symlink - {{ test_id }}
assert:
that:
- '"Installing ''symlink.symlink:1.0.0'' to" in install_symlink.stdout'
- install_symlink_actual.results[0].stat.islnk
    - install_symlink_actual.results[0].stat.lnk_target == 'REÅDMÊ.md'
    - install_symlink_actual.results[1].stat.islnk
    - install_symlink_actual.results[1].stat.lnk_target == '../REÅDMÊ.md'
    - install_symlink_actual.results[2].stat.islnk
    - install_symlink_actual.results[2].stat.lnk_target == '../REÅDMÊ.md'
    - install_symlink_actual.results[3].stat.isreg
    - install_symlink_actual.results[4].stat.islnk
    - install_symlink_actual.results[4].stat.lnk_target == 'docs'
    - install_symlink_actual.results[5].stat.islnk
    - install_symlink_actual.results[5].stat.lnk_target == '../REÅDMÊ.md'
# Testing an install from source to check that symlinks to directories
# are preserved (see issue https://github.com/ansible/ansible/issues/78442)
- name: symlink_dirs collection install from source test
block:
- name: create symlink_dirs collection
command: ansible-galaxy collection init symlink_dirs.symlink_dirs --init-path "{{ galaxy_dir }}/scratch"
- name: create directory in collection
file:
path: "{{ galaxy_dir }}/scratch/symlink_dirs/symlink_dirs/folderA"
state: directory
- name: create symlink to folderA
file:
dest: "{{ galaxy_dir }}/scratch/symlink_dirs/symlink_dirs/folderB"
src: ./folderA
state: link
force: yes
- name: install symlink_dirs collection from source
command: ansible-galaxy collection install {{ galaxy_dir }}/scratch/symlink_dirs/
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
register: install_symlink_dirs
- name: get result of install collection with symlink_dirs - {{ test_id }}
stat:
path: '{{ galaxy_dir }}/ansible_collections/symlink_dirs/symlink_dirs/{{ path }}'
register: install_symlink_dirs_actual
loop_control:
loop_var: path
loop:
- folderA
- folderB
- name: assert install collection with symlink_dirs - {{ test_id }}
assert:
that:
- '"Installing ''symlink_dirs.symlink_dirs:1.0.0'' to" in install_symlink_dirs.stdout'
- install_symlink_dirs_actual.results[0].stat.isdir
- install_symlink_dirs_actual.results[1].stat.islnk
- install_symlink_dirs_actual.results[1].stat.lnk_target == './folderA'
always:
- name: clean up symlink_dirs collection directory
file:
path: "{{ galaxy_dir }}/scratch/symlink_dirs"
state: absent
- name: remove install directory for the next test because parent_dep.parent_collection was installed - {{ test_id }}
file:
path: '{{ galaxy_dir }}/ansible_collections'
state: absent
- name: install collection and dep compatible with multiple requirements - {{ test_id }}
command: ansible-galaxy collection install parent_dep.parent_collection parent_dep2.parent_collection
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
register: install_req
- name: assert install collections with ansible-galaxy install - {{ test_id }}
assert:
that:
- '"Installing ''parent_dep.parent_collection:1.0.0'' to" in install_req.stdout'
- '"Installing ''parent_dep2.parent_collection:1.0.0'' to" in install_req.stdout'
- '"Installing ''child_dep.child_collection:0.5.0'' to" in install_req.stdout'
- name: install a collection to a directory that contains another collection with no metadata
block:
# Collections are usable in ansible without a galaxy.yml or MANIFEST.json
- name: create a collection directory
file:
state: directory
path: '{{ galaxy_dir }}/ansible_collections/unrelated_namespace/collection_without_metadata/plugins'
- name: install a collection to the same installation directory - {{ test_id }}
command: ansible-galaxy collection install namespace1.name1
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
register: install_req
- name: assert installed collections with ansible-galaxy install - {{ test_id }}
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in install_req.stdout'
- name: remove test collection install directory - {{ test_id }}
file:
path: '{{ galaxy_dir }}/ansible_collections'
state: absent
# This command is hardcoded with -vvvv purposefully to evaluate extra verbosity messages
- name: install collection with signature with invalid keyring
command: ansible-galaxy collection install namespace1.name1 -vvvv {{ signature_option }} {{ keyring_option }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
vars:
signature_option: "--signature file://{{ gpg_homedir }}/namespace1-name1-1.0.9-MANIFEST.json.asc"
keyring_option: '--keyring {{ gpg_homedir }}/i_do_not_exist.kbx'
ignore_errors: yes
register: keyring_error
- assert:
that:
- keyring_error is failed
- expected_errors[0] in actual_error
- expected_errors[1] in actual_error
- expected_errors[2] in actual_error
- unexpected_warning not in actual_warning
vars:
keyring: "{{ gpg_homedir }}/i_do_not_exist.kbx"
expected_errors:
- "Signature verification failed for 'namespace1.name1' (return code 2):"
- "* The public key is not available."
- >-
* It was not possible to check the signature. This may be caused
by a missing public key or an unsupported algorithm. A RC of 4
indicates unknown algorithm, a 9 indicates a missing public key.
unexpected_warning: >-
The GnuPG keyring used for collection signature
verification was not configured but signatures were
provided by the Galaxy server to verify authenticity.
Configure a keyring for ansible-galaxy to use
or disable signature verification.
Skipping signature verification.
actual_warning: "{{ keyring_error.stderr | regex_replace('\\n', ' ') }}"
# Remove formatting from the reason so it's one line
    actual_error: "{{ keyring_error.stdout | regex_replace('\"') | regex_replace('\\n') | regex_replace('    ', ' ') }}"
# TODO: Uncomment once signatures are provided by pulp-galaxy-ng
#- name: install collection with signature provided by Galaxy server (no keyring)
# command: ansible-galaxy collection install namespace1.name1 {{ galaxy_verbosity }}
# environment:
# ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
# ANSIBLE_NOCOLOR: True
# ANSIBLE_FORCE_COLOR: False
# ignore_errors: yes
# register: keyring_warning
#
#- name: assert a warning was given but signature verification did not occur without configuring the keyring
# assert:
# that:
# - keyring_warning is not failed
#    - '"Installing ''namespace1.name1:1.0.9'' to" in keyring_warning.stdout'
# # TODO: Don't just check the stdout, make sure the collection was installed.
# - expected_warning in actual_warning
# vars:
# expected_warning: >-
# The GnuPG keyring used for collection signature
# verification was not configured but signatures were
# provided by the Galaxy server to verify authenticity.
# Configure a keyring for ansible-galaxy to use
# or disable signature verification.
# Skipping signature verification.
# actual_warning: "{{ keyring_warning.stderr | regex_replace('\\n', ' ') }}"
- name: install simple collection from first accessible server with valid detached signature
command: ansible-galaxy collection install namespace1.name1 {{ galaxy_verbosity }} {{ signature_options }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
vars:
signature_options: "--signature {{ signature }} --keyring {{ keyring }}"
signature: "file://{{ gpg_homedir }}/namespace1-name1-1.0.9-MANIFEST.json.asc"
keyring: "{{ gpg_homedir }}/pubring.kbx"
register: from_first_good_server
- name: get installed files of install simple collection from first good server
find:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
file_type: file
register: install_normal_files
- name: get the manifest of install simple collection from first good server
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_normal_manifest
- name: assert install simple collection from first good server
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in from_first_good_server.stdout'
- install_normal_files.files | length == 3
- install_normal_files.files[0].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[1].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[2].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- (install_normal_manifest.content | b64decode | from_json).collection_info.version == '1.0.9'
- name: Remove the collection
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1'
state: absent
# This command is hardcoded with -vvvv purposefully to evaluate extra verbosity messages
- name: install simple collection with invalid detached signature
command: ansible-galaxy collection install namespace1.name1 -vvvv {{ signature_options }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
vars:
signature_options: "--signature {{ signature }} --keyring {{ keyring }}"
signature: "file://{{ gpg_homedir }}/namespace2-name-1.0.0-MANIFEST.json.asc"
keyring: "{{ gpg_homedir }}/pubring.kbx"
ignore_errors: yes
register: invalid_signature
- assert:
that:
- invalid_signature is failed
- "'Not installing namespace1.name1 because GnuPG signature verification failed.' in invalid_signature.stderr"
- expected_errors[0] in install_stdout
- expected_errors[1] in install_stdout
vars:
expected_errors:
- "* This is the counterpart to SUCCESS and used to indicate a program failure."
- "* The signature with the keyid has not been verified okay."
# Remove formatting from the reason so it's one line
    install_stdout: "{{ invalid_signature.stdout | regex_replace('\"') | regex_replace('\\n') | regex_replace('    ', ' ') }}"
- name: validate collection directory was not created
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
state: absent
register: collection_dir
check_mode: yes
failed_when: collection_dir is changed
- name: disable signature verification and install simple collection with invalid detached signature
command: ansible-galaxy collection install namespace1.name1 {{ galaxy_verbosity }} {{ signature_options }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
vars:
signature_options: "--signature {{ signature }} --keyring {{ keyring }} --disable-gpg-verify"
signature: "file://{{ gpg_homedir }}/namespace2-name-1.0.0-MANIFEST.json.asc"
keyring: "{{ gpg_homedir }}/pubring.kbx"
ignore_errors: yes
register: ignore_invalid_signature
- assert:
that:
- ignore_invalid_signature is success
- '"Installing ''namespace1.name1:1.0.9'' to" in ignore_invalid_signature.stdout'
- name: use lenient signature verification (default) without providing signatures
command: ansible-galaxy collection install namespace1.name1:1.0.0 -vvvv --keyring {{ gpg_homedir }}/pubring.kbx --force
environment:
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: "all"
register: missing_signature
- assert:
that:
- missing_signature is success
- missing_signature.rc == 0
- '"namespace1.name1:1.0.0 was installed successfully" in missing_signature.stdout'
- '"Signature verification failed for ''namespace1.name1'': no successful signatures" not in missing_signature.stdout'
- name: use strict signature verification without providing signatures
command: ansible-galaxy collection install namespace1.name1:1.0.0 -vvvv --keyring {{ gpg_homedir }}/pubring.kbx --force
environment:
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: "+1"
ignore_errors: yes
register: missing_signature
- assert:
that:
- missing_signature is failed
- missing_signature.rc == 1
- '"Signature verification failed for ''namespace1.name1'': no successful signatures" in missing_signature.stdout'
- '"Not installing namespace1.name1 because GnuPG signature verification failed" in missing_signature.stderr'
- name: Remove the collection
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1'
state: absent
- name: download collections with pre-release dep - {{ test_id }}
command: ansible-galaxy collection download dep_with_beta.parent namespace1.name1:1.1.0-beta.1 -p '{{ galaxy_dir }}/scratch'
- name: install collection with concrete pre-release dep - {{ test_id }}
command: ansible-galaxy collection install -r '{{ galaxy_dir }}/scratch/requirements.yml'
args:
chdir: '{{ galaxy_dir }}/scratch'
environment:
ANSIBLE_COLLECTIONS_PATHS: '{{ galaxy_dir }}/ansible_collections'
register: install_concrete_pre
- name: get result of install collections with concrete pre-release dep - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/MANIFEST.json'
register: install_concrete_pre_actual
loop_control:
loop_var: collection
loop:
- namespace1/name1
- dep_with_beta/parent
- name: assert install collections with ansible-galaxy install - {{ test_id }}
assert:
that:
- '"Installing ''namespace1.name1:1.1.0-beta.1'' to" in install_concrete_pre.stdout'
- '"Installing ''dep_with_beta.parent:1.0.0'' to" in install_concrete_pre.stdout'
- (install_concrete_pre_actual.results[0].content | b64decode | from_json).collection_info.version == '1.1.0-beta.1'
- (install_concrete_pre_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- name: remove collection dir after round of testing - {{ test_id }}
file:
path: '{{ galaxy_dir }}/ansible_collections'
state: absent
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,656 |
Argument spec validation for required dict var does not fail if dict var is set to None
|
### Summary
If the argument spec defines a dict variable as required:
- if the variable has no default in defaults/main.yml and is not provided in the playbook, argument spec validation will fail. That's OK.
- if the variable is defined as a string e.g. `''` or `'test-string'`, validation will fail. That's OK.
- if the variable is defined as None or `~`, validation will pass. That's NOT OK.
### Issue Type
Bug Report
### Component Name
argument_spec
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0.dev0] (devel 51bddd862b) last updated 2023/01/03 22:32:16 (GMT +000)
config file = /home/nikos/projects/ansible-demo/ansible.cfg
configured module search path = ['/home/nikos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/nikos/projects/ansible/lib/ansible
ansible collection location = /home/nikos/.ansible/collections:/usr/share/ansible/collections
executable location = /home/nikos/projects/ansible/bin/ansible
python version = 3.10.8 (main, Nov 1 2022, 14:18:21) [GCC 12.2.0] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/nikos/projects/ansible-demo/ansible.cfg) = True
CACHE_PLUGIN(/home/nikos/projects/ansible-demo/ansible.cfg) = ansible.builtin.json
CACHE_PLUGIN_CONNECTION(/home/nikos/projects/ansible-demo/ansible.cfg) = facts_cache
CONFIG_FILE() = /home/nikos/projects/ansible-demo/ansible.cfg
DEFAULT_EXECUTABLE(/home/nikos/projects/ansible-demo/ansible.cfg) = /bin/bash
DEFAULT_GATHERING(/home/nikos/projects/ansible-demo/ansible.cfg) = implicit
DEFAULT_HOST_LIST(/home/nikos/projects/ansible-demo/ansible.cfg) = ['/home/nikos/projects/ansible-demo/inventory.ini']
DEFAULT_MANAGED_STR(/home/nikos/projects/ansible-demo/ansible.cfg) = \nAnsible managed (do not edit, changes may be overwritten)
EDITOR(env: EDITOR) = nvim
CACHE:
=====
jsonfile:
________
```
### OS / Environment
Controller and target: Arch Linux
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
# role argument specs
---
argument_specs:
main:
options:
dict_var:
type: dict
required: true
```
```
# playbook
---
- hosts: all
roles:
- myrole
vars:
dict_var: ~
```
### Expected Results
I expect argument validation to fail with a message that the None type is not a dict or could not be converted to a dict.
### Actual Results
```console
Validation passes and role execution succeeds.
```
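The behaviour can be reproduced with a plain sketch of sentinel-style validation: if the validator skips options whose value is `None` (treating an explicit null the same as an unset option), the type check for a required dict never runs. The snippet below is an illustration of the reported gap, not the actual `module_utils` code path:

```python
def validate(spec, params):
    errors = []
    for name, opts in spec.items():
        if opts.get('required') and name not in params:
            errors.append('missing required argument: %s' % name)
        value = params.get(name)
        if value is None:
            # An explicit None is skipped here, so type checking never runs;
            # this mirrors why 'dict_var: ~' passes validation.
            continue
        if opts.get('type') == 'dict' and not isinstance(value, dict):
            errors.append('%s is not a dict' % name)
    return errors

spec = {'dict_var': {'type': 'dict', 'required': True}}
print(validate(spec, {}))                  # flagged: missing required argument
print(validate(spec, {'dict_var': ''}))    # flagged: not a dict
print(validate(spec, {'dict_var': None}))  # [] -- slips through unchecked
```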
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79656
|
https://github.com/ansible/ansible/pull/79677
|
964e678a7fa3b0745f9302e7a3682851089d09d2
|
694c11d5bdc7f5f7779d27315bec939dc9162ec6
| 2023-01-03T22:46:49Z |
python
| 2023-04-17T19:42:58Z |
changelogs/fragments/79677-fix-argspec-type-check.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,656 |
Argument spec validation for required dict var does not fail if dict var is set to None
|
### Summary
If the argument spec defines a dict variable as required:
- if the variable has no default in defaults/main.yml and is not provided in the playbook, argument spec validation will fail. That's OK.
- if the variable is defined as a string e.g. `''` or `'test-string'`, validation will fail. That's OK.
- if the variable is defined as None or `~`, validation will pass. That's NOT OK.
### Issue Type
Bug Report
### Component Name
argument_spec
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0.dev0] (devel 51bddd862b) last updated 2023/01/03 22:32:16 (GMT +000)
config file = /home/nikos/projects/ansible-demo/ansible.cfg
configured module search path = ['/home/nikos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/nikos/projects/ansible/lib/ansible
ansible collection location = /home/nikos/.ansible/collections:/usr/share/ansible/collections
executable location = /home/nikos/projects/ansible/bin/ansible
python version = 3.10.8 (main, Nov 1 2022, 14:18:21) [GCC 12.2.0] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/nikos/projects/ansible-demo/ansible.cfg) = True
CACHE_PLUGIN(/home/nikos/projects/ansible-demo/ansible.cfg) = ansible.builtin.json
CACHE_PLUGIN_CONNECTION(/home/nikos/projects/ansible-demo/ansible.cfg) = facts_cache
CONFIG_FILE() = /home/nikos/projects/ansible-demo/ansible.cfg
DEFAULT_EXECUTABLE(/home/nikos/projects/ansible-demo/ansible.cfg) = /bin/bash
DEFAULT_GATHERING(/home/nikos/projects/ansible-demo/ansible.cfg) = implicit
DEFAULT_HOST_LIST(/home/nikos/projects/ansible-demo/ansible.cfg) = ['/home/nikos/projects/ansible-demo/inventory.ini']
DEFAULT_MANAGED_STR(/home/nikos/projects/ansible-demo/ansible.cfg) = \nAnsible managed (do not edit, changes may be overwritten)
EDITOR(env: EDITOR) = nvim
CACHE:
=====
jsonfile:
________
```
### OS / Environment
Controller and target: Arch Linux
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
# role argument specs
---
argument_specs:
main:
options:
dict_var:
type: dict
required: true
```
```
# playbook
---
- hosts: all
roles:
- myrole
vars:
dict_var: ~
```
### Expected Results
I expect argument validation to fail with a message that the None type is not a dict or could not be converted to a dict.
### Actual Results
```console
Validation passes and role execution succeeds.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79656
|
https://github.com/ansible/ansible/pull/79677
|
964e678a7fa3b0745f9302e7a3682851089d09d2
|
694c11d5bdc7f5f7779d27315bec939dc9162ec6
| 2023-01-03T22:46:49Z |
python
| 2023-04-17T19:42:58Z |
lib/ansible/module_utils/common/parameters.py
|
# -*- coding: utf-8 -*-
# Copyright (c) 2019 Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import datetime
import os
from collections import deque
from itertools import chain
from ansible.module_utils.common.collections import is_iterable
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.module_utils.common.warnings import warn
from ansible.module_utils.errors import (
AliasError,
AnsibleFallbackNotFound,
AnsibleValidationErrorMultiple,
ArgumentTypeError,
ArgumentValueError,
ElementError,
MutuallyExclusiveError,
NoLogError,
RequiredByError,
RequiredError,
RequiredIfError,
RequiredOneOfError,
RequiredTogetherError,
SubParameterTypeError,
)
from ansible.module_utils.parsing.convert_bool import BOOLEANS_FALSE, BOOLEANS_TRUE
from ansible.module_utils.six.moves.collections_abc import (
KeysView,
Set,
Sequence,
Mapping,
MutableMapping,
MutableSet,
MutableSequence,
)
from ansible.module_utils.six import (
binary_type,
integer_types,
string_types,
text_type,
PY2,
PY3,
)
from ansible.module_utils.common.validation import (
check_mutually_exclusive,
check_required_arguments,
check_required_together,
check_required_one_of,
check_required_if,
check_required_by,
check_type_bits,
check_type_bool,
check_type_bytes,
check_type_dict,
check_type_float,
check_type_int,
check_type_jsonarg,
check_type_list,
check_type_path,
check_type_raw,
check_type_str,
)
# Python2 & 3 way to get NoneType
NoneType = type(None)
_ADDITIONAL_CHECKS = (
{'func': check_required_together, 'attr': 'required_together', 'err': RequiredTogetherError},
{'func': check_required_one_of, 'attr': 'required_one_of', 'err': RequiredOneOfError},
{'func': check_required_if, 'attr': 'required_if', 'err': RequiredIfError},
{'func': check_required_by, 'attr': 'required_by', 'err': RequiredByError},
)
# if adding boolean attribute, also add to PASS_BOOL
# some of this dupes defaults from controller config
PASS_VARS = {
'check_mode': ('check_mode', False),
'debug': ('_debug', False),
'diff': ('_diff', False),
'keep_remote_files': ('_keep_remote_files', False),
'module_name': ('_name', None),
'no_log': ('no_log', False),
'remote_tmp': ('_remote_tmp', None),
'selinux_special_fs': ('_selinux_special_fs', ['fuse', 'nfs', 'vboxsf', 'ramfs', '9p', 'vfat']),
'shell_executable': ('_shell', '/bin/sh'),
'socket': ('_socket_path', None),
'string_conversion_action': ('_string_conversion_action', 'warn'),
'syslog_facility': ('_syslog_facility', 'INFO'),
'tmpdir': ('_tmpdir', None),
'verbosity': ('_verbosity', 0),
'version': ('ansible_version', '0.0'),
}
PASS_BOOLS = ('check_mode', 'debug', 'diff', 'keep_remote_files', 'no_log')
DEFAULT_TYPE_VALIDATORS = {
'str': check_type_str,
'list': check_type_list,
'dict': check_type_dict,
'bool': check_type_bool,
'int': check_type_int,
'float': check_type_float,
'path': check_type_path,
'raw': check_type_raw,
'jsonarg': check_type_jsonarg,
'json': check_type_jsonarg,
'bytes': check_type_bytes,
'bits': check_type_bits,
}
def _get_type_validator(wanted):
"""Returns the callable used to validate a wanted type and the type name.
:arg wanted: String or callable. If a string, get the corresponding
validation function from DEFAULT_TYPE_VALIDATORS. If callable,
get the name of the custom callable and return that for the type_checker.
:returns: Tuple of callable function or None, and a string that is the name
of the wanted type.
"""
# Use one of our builtin validators.
if not callable(wanted):
if wanted is None:
# Default type for parameters
wanted = 'str'
type_checker = DEFAULT_TYPE_VALIDATORS.get(wanted)
# Use the custom callable for validation.
else:
type_checker = wanted
wanted = getattr(wanted, '__name__', to_native(type(wanted)))
return type_checker, wanted
def _get_legal_inputs(argument_spec, parameters, aliases=None):
if aliases is None:
aliases = _handle_aliases(argument_spec, parameters)
return list(aliases.keys()) + list(argument_spec.keys())
def _get_unsupported_parameters(argument_spec, parameters, legal_inputs=None, options_context=None, store_supported=None):
"""Check keys in parameters against those provided in legal_inputs
to ensure they contain legal values. If legal_inputs are not supplied,
they will be generated using the argument_spec.
:arg argument_spec: Dictionary of parameters, their type, and valid values.
:arg parameters: Dictionary of parameters.
:arg legal_inputs: List of valid key names property names. Overrides values
in argument_spec.
:arg options_context: List of parent keys for tracking the context of where
a parameter is defined.
:returns: Set of unsupported parameters. Empty set if no unsupported parameters
are found.
"""
if legal_inputs is None:
legal_inputs = _get_legal_inputs(argument_spec, parameters)
unsupported_parameters = set()
for k in parameters.keys():
if k not in legal_inputs:
context = k
if options_context:
context = tuple(options_context + [k])
unsupported_parameters.add(context)
if store_supported is not None:
supported_aliases = _handle_aliases(argument_spec, parameters)
supported_params = []
for option in legal_inputs:
if option in supported_aliases:
continue
supported_params.append(option)
store_supported.update({context: (supported_params, supported_aliases)})
return unsupported_parameters
def _handle_aliases(argument_spec, parameters, alias_warnings=None, alias_deprecations=None):
"""Process aliases from an argument_spec including warnings and deprecations.
Modify ``parameters`` by adding a new key for each alias with the supplied
value from ``parameters``.
If a list is provided to the alias_warnings parameter, it will be filled with tuples
(option, alias) in every case where both an option and its alias are specified.
If a list is provided to alias_deprecations, it will be populated with dictionaries,
each containing deprecation information for each alias found in argument_spec.
:param argument_spec: Dictionary of parameters, their type, and valid values.
:type argument_spec: dict
:param parameters: Dictionary of parameters.
:type parameters: dict
:param alias_warnings:
:type alias_warnings: list
:param alias_deprecations:
:type alias_deprecations: list
"""
aliases_results = {} # alias:canon
for (k, v) in argument_spec.items():
aliases = v.get('aliases', None)
default = v.get('default', None)
required = v.get('required', False)
if alias_deprecations is not None:
for alias in argument_spec[k].get('deprecated_aliases', []):
if alias.get('name') in parameters:
alias_deprecations.append(alias)
if default is not None and required:
# not alias specific but this is a good place to check this
raise ValueError("internal error: required and default are mutually exclusive for %s" % k)
if aliases is None:
continue
if not is_iterable(aliases) or isinstance(aliases, (binary_type, text_type)):
raise TypeError('internal error: aliases must be a list or tuple')
for alias in aliases:
aliases_results[alias] = k
if alias in parameters:
if k in parameters and alias_warnings is not None:
alias_warnings.append((k, alias))
parameters[k] = parameters[alias]
return aliases_results
def _list_deprecations(argument_spec, parameters, prefix=''):
"""Return a list of deprecations
:arg argument_spec: An argument spec dictionary
:arg parameters: Dictionary of parameters
:returns: List of dictionaries containing a message and version in which
the deprecated parameter will be removed, or an empty list.
:Example return:
.. code-block:: python
[
{
'msg': "Param 'deptest' is deprecated. See the module docs for more information",
'version': '2.9'
}
]
"""
deprecations = []
for arg_name, arg_opts in argument_spec.items():
if arg_name in parameters:
if prefix:
sub_prefix = '%s["%s"]' % (prefix, arg_name)
else:
sub_prefix = arg_name
if arg_opts.get('removed_at_date') is not None:
deprecations.append({
'msg': "Param '%s' is deprecated. See the module docs for more information" % sub_prefix,
'date': arg_opts.get('removed_at_date'),
'collection_name': arg_opts.get('removed_from_collection'),
})
elif arg_opts.get('removed_in_version') is not None:
deprecations.append({
'msg': "Param '%s' is deprecated. See the module docs for more information" % sub_prefix,
'version': arg_opts.get('removed_in_version'),
'collection_name': arg_opts.get('removed_from_collection'),
})
# Check sub-argument spec
sub_argument_spec = arg_opts.get('options')
if sub_argument_spec is not None:
sub_arguments = parameters[arg_name]
if isinstance(sub_arguments, Mapping):
sub_arguments = [sub_arguments]
if isinstance(sub_arguments, list):
for sub_params in sub_arguments:
if isinstance(sub_params, Mapping):
deprecations.extend(_list_deprecations(sub_argument_spec, sub_params, prefix=sub_prefix))
return deprecations
def _list_no_log_values(argument_spec, params):
"""Return set of no log values
:arg argument_spec: An argument spec dictionary
:arg params: Dictionary of all parameters
    :returns: :class:`set` of strings that should be hidden from output.
"""
no_log_values = set()
for arg_name, arg_opts in argument_spec.items():
if arg_opts.get('no_log', False):
# Find the value for the no_log'd param
no_log_object = params.get(arg_name, None)
if no_log_object:
try:
no_log_values.update(_return_datastructure_name(no_log_object))
except TypeError as e:
raise TypeError('Failed to convert "%s": %s' % (arg_name, to_native(e)))
# Get no_log values from suboptions
sub_argument_spec = arg_opts.get('options')
if sub_argument_spec is not None:
wanted_type = arg_opts.get('type')
sub_parameters = params.get(arg_name)
if sub_parameters is not None:
if wanted_type == 'dict' or (wanted_type == 'list' and arg_opts.get('elements', '') == 'dict'):
# Sub parameters can be a dict or list of dicts. Ensure parameters are always a list.
if not isinstance(sub_parameters, list):
sub_parameters = [sub_parameters]
for sub_param in sub_parameters:
# Validate dict fields in case they came in as strings
if isinstance(sub_param, string_types):
sub_param = check_type_dict(sub_param)
if not isinstance(sub_param, Mapping):
raise TypeError("Value '{1}' in the sub parameter field '{0}' must by a {2}, "
"not '{1.__class__.__name__}'".format(arg_name, sub_param, wanted_type))
no_log_values.update(_list_no_log_values(sub_argument_spec, sub_param))
return no_log_values
def _return_datastructure_name(obj):
""" Return native stringified values from datastructures.
For use with removing sensitive values pre-jsonification."""
if isinstance(obj, (text_type, binary_type)):
if obj:
yield to_native(obj, errors='surrogate_or_strict')
return
elif isinstance(obj, Mapping):
for element in obj.items():
for subelement in _return_datastructure_name(element[1]):
yield subelement
elif is_iterable(obj):
for element in obj:
for subelement in _return_datastructure_name(element):
yield subelement
elif obj is None or isinstance(obj, bool):
# This must come before int because bools are also ints
return
elif isinstance(obj, tuple(list(integer_types) + [float])):
yield to_native(obj, nonstring='simplerepr')
else:
raise TypeError('Unknown parameter type: %s' % (type(obj)))
def _remove_values_conditions(value, no_log_strings, deferred_removals):
"""
Helper function for :meth:`remove_values`.
:arg value: The value to check for strings that need to be stripped
:arg no_log_strings: set of strings which must be stripped out of any values
:arg deferred_removals: List which holds information about nested
containers that have to be iterated for removals. It is passed into
this function so that more entries can be added to it if value is
a container type. The format of each entry is a 2-tuple where the first
element is the ``value`` parameter and the second value is a new
container to copy the elements of ``value`` into once iterated.
:returns: if ``value`` is a scalar, returns ``value`` with two exceptions:
1. :class:`~datetime.datetime` objects which are changed into a string representation.
2. objects which are in ``no_log_strings`` are replaced with a placeholder
so that no sensitive data is leaked.
If ``value`` is a container type, returns a new empty container.
``deferred_removals`` is added to as a side-effect of this function.
.. warning:: It is up to the caller to make sure the order in which value
is passed in is correct. For instance, higher level containers need
to be passed in before lower level containers. For example, given
        ``{'level1': {'level2': {'level3': [True]}}}`` first pass in the
dictionary for ``level1``, then the dict for ``level2``, and finally
the list for ``level3``.
"""
if isinstance(value, (text_type, binary_type)):
# Need native str type
native_str_value = value
if isinstance(value, text_type):
value_is_text = True
if PY2:
native_str_value = to_bytes(value, errors='surrogate_or_strict')
elif isinstance(value, binary_type):
value_is_text = False
if PY3:
native_str_value = to_text(value, errors='surrogate_or_strict')
if native_str_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
native_str_value = native_str_value.replace(omit_me, '*' * 8)
if value_is_text and isinstance(native_str_value, binary_type):
value = to_text(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
elif not value_is_text and isinstance(native_str_value, text_type):
value = to_bytes(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
else:
value = native_str_value
elif isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
stringy_value = to_native(value, encoding='utf-8', errors='surrogate_or_strict')
if stringy_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
if omit_me in stringy_value:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
elif isinstance(value, (datetime.datetime, datetime.date)):
value = value.isoformat()
else:
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
return value
def _set_defaults(argument_spec, parameters, set_default=True):
"""Set default values for parameters when no value is supplied.
Modifies parameters directly.
:arg argument_spec: Argument spec
:type argument_spec: dict
:arg parameters: Parameters to evaluate
:type parameters: dict
:kwarg set_default: Whether or not to set the default values
:type set_default: bool
:returns: Set of strings that should not be logged.
:rtype: set
"""
no_log_values = set()
for param, value in argument_spec.items():
# TODO: Change the default value from None to Sentinel to differentiate between
# user supplied None and a default value set by this function.
default = value.get('default', None)
# This prevents setting defaults on required items on the 1st run,
# otherwise will set things without a default to None on the 2nd.
if param not in parameters and (default is not None or set_default):
# Make sure any default value for no_log fields are masked.
if value.get('no_log', False) and default:
no_log_values.add(default)
parameters[param] = default
return no_log_values
def _sanitize_keys_conditions(value, no_log_strings, ignore_keys, deferred_removals):
""" Helper method to :func:`sanitize_keys` to build ``deferred_removals`` and avoid deep recursion. """
if isinstance(value, (text_type, binary_type)):
return value
if isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
return value
if isinstance(value, (datetime.datetime, datetime.date)):
return value
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
def _validate_elements(wanted_type, parameter, values, options_context=None, errors=None):
if errors is None:
errors = AnsibleValidationErrorMultiple()
type_checker, wanted_element_type = _get_type_validator(wanted_type)
validated_parameters = []
# Get param name for strings so we can later display this value in a useful error message if needed
# Only pass 'kwargs' to our checkers and ignore custom callable checkers
kwargs = {}
if wanted_element_type == 'str' and isinstance(wanted_type, string_types):
if isinstance(parameter, string_types):
kwargs['param'] = parameter
elif isinstance(parameter, dict):
kwargs['param'] = list(parameter.keys())[0]
for value in values:
try:
validated_parameters.append(type_checker(value, **kwargs))
except (TypeError, ValueError) as e:
msg = "Elements value for option '%s'" % parameter
if options_context:
msg += " found in '%s'" % " -> ".join(options_context)
msg += " is of type %s and we were unable to convert to %s: %s" % (type(value), wanted_element_type, to_native(e))
errors.append(ElementError(msg))
return validated_parameters
def _validate_argument_types(argument_spec, parameters, prefix='', options_context=None, errors=None):
"""Validate that parameter types match the type in the argument spec.
Determine the appropriate type checker function and run each
parameter value through that function. All error messages from type checker
functions are returned. If any parameter fails to validate, it will not
be in the returned parameters.
:arg argument_spec: Argument spec
:type argument_spec: dict
:arg parameters: Parameters
:type parameters: dict
:kwarg prefix: Name of the parent key that contains the spec. Used in the error message
:type prefix: str
    :kwarg options_context: List of parent key names used to give context to nested options in error messages
:type options_context: list
:returns: Two item tuple containing validated and coerced parameters
and a list of any errors that were encountered.
:rtype: tuple
"""
if errors is None:
errors = AnsibleValidationErrorMultiple()
for param, spec in argument_spec.items():
if param not in parameters:
continue
value = parameters[param]
if value is None:
continue
wanted_type = spec.get('type')
type_checker, wanted_name = _get_type_validator(wanted_type)
# Get param name for strings so we can later display this value in a useful error message if needed
# Only pass 'kwargs' to our checkers and ignore custom callable checkers
kwargs = {}
if wanted_name == 'str' and isinstance(wanted_type, string_types):
kwargs['param'] = list(parameters.keys())[0]
# Get the name of the parent key if this is a nested option
if prefix:
kwargs['prefix'] = prefix
try:
parameters[param] = type_checker(value, **kwargs)
elements_wanted_type = spec.get('elements', None)
if elements_wanted_type:
elements = parameters[param]
if wanted_type != 'list' or not isinstance(elements, list):
msg = "Invalid type %s for option '%s'" % (wanted_name, elements)
if options_context:
msg += " found in '%s'." % " -> ".join(options_context)
msg += ", elements value check is supported only with 'list' type"
errors.append(ArgumentTypeError(msg))
parameters[param] = _validate_elements(elements_wanted_type, param, elements, options_context, errors)
except (TypeError, ValueError) as e:
msg = "argument '%s' is of type %s" % (param, type(value))
if options_context:
msg += " found in '%s'." % " -> ".join(options_context)
msg += " and we were unable to convert to %s: %s" % (wanted_name, to_native(e))
errors.append(ArgumentTypeError(msg))
def _validate_argument_values(argument_spec, parameters, options_context=None, errors=None):
"""Ensure all arguments have the requested values, and there are no stray arguments"""
if errors is None:
errors = AnsibleValidationErrorMultiple()
for param, spec in argument_spec.items():
choices = spec.get('choices')
if choices is None:
continue
if isinstance(choices, (frozenset, KeysView, Sequence)) and not isinstance(choices, (binary_type, text_type)):
if param in parameters:
# Allow one or more when type='list' param with choices
if isinstance(parameters[param], list):
diff_list = [item for item in parameters[param] if item not in choices]
if diff_list:
choices_str = ", ".join([to_native(c) for c in choices])
diff_str = ", ".join(diff_list)
msg = "value of %s must be one or more of: %s. Got no match for: %s" % (param, choices_str, diff_str)
if options_context:
msg = "{0} found in {1}".format(msg, " -> ".join(options_context))
errors.append(ArgumentValueError(msg))
elif parameters[param] not in choices:
# PyYaml converts certain strings to bools. If we can unambiguously convert back, do so before checking
# the value. If we can't figure this out, module author is responsible.
if parameters[param] == 'False':
overlap = BOOLEANS_FALSE.intersection(choices)
if len(overlap) == 1:
# Extract from a set
(parameters[param],) = overlap
if parameters[param] == 'True':
overlap = BOOLEANS_TRUE.intersection(choices)
if len(overlap) == 1:
(parameters[param],) = overlap
if parameters[param] not in choices:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one of: %s, got: %s" % (param, choices_str, parameters[param])
if options_context:
msg = "{0} found in {1}".format(msg, " -> ".join(options_context))
errors.append(ArgumentValueError(msg))
else:
msg = "internal error: choices for argument %s are not iterable: %s" % (param, choices)
if options_context:
msg = "{0} found in {1}".format(msg, " -> ".join(options_context))
errors.append(ArgumentTypeError(msg))
def _validate_sub_spec(
argument_spec,
parameters,
prefix="",
options_context=None,
errors=None,
no_log_values=None,
unsupported_parameters=None,
supported_parameters=None,
alias_deprecations=None,
):
"""Validate sub argument spec.
This function is recursive.
"""
if options_context is None:
options_context = []
if errors is None:
errors = AnsibleValidationErrorMultiple()
if no_log_values is None:
no_log_values = set()
if unsupported_parameters is None:
unsupported_parameters = set()
if supported_parameters is None:
supported_parameters = dict()
for param, value in argument_spec.items():
wanted = value.get('type')
if wanted == 'dict' or (wanted == 'list' and value.get('elements', '') == 'dict'):
sub_spec = value.get('options')
if value.get('apply_defaults', False):
if sub_spec is not None:
if parameters.get(param) is None:
parameters[param] = {}
else:
continue
elif sub_spec is None or param not in parameters or parameters[param] is None:
continue
# Keep track of context for warning messages
options_context.append(param)
# Make sure we can iterate over the elements
if not isinstance(parameters[param], Sequence) or isinstance(parameters[param], string_types):
elements = [parameters[param]]
else:
elements = parameters[param]
for idx, sub_parameters in enumerate(elements):
no_log_values.update(set_fallbacks(sub_spec, sub_parameters))
if not isinstance(sub_parameters, dict):
errors.append(SubParameterTypeError("value of '%s' must be of type dict or list of dicts" % param))
continue
# Set prefix for warning messages
new_prefix = prefix + param
if wanted == 'list':
new_prefix += '[%d]' % idx
new_prefix += '.'
alias_warnings = []
alias_deprecations_sub = []
try:
options_aliases = _handle_aliases(sub_spec, sub_parameters, alias_warnings, alias_deprecations_sub)
except (TypeError, ValueError) as e:
options_aliases = {}
errors.append(AliasError(to_native(e)))
for option, alias in alias_warnings:
warn('Both option %s%s and its alias %s%s are set.' % (new_prefix, option, new_prefix, alias))
if alias_deprecations is not None:
for deprecation in alias_deprecations_sub:
alias_deprecations.append({
'name': '%s%s' % (new_prefix, deprecation['name']),
'version': deprecation.get('version'),
'date': deprecation.get('date'),
'collection_name': deprecation.get('collection_name'),
})
try:
no_log_values.update(_list_no_log_values(sub_spec, sub_parameters))
except TypeError as te:
errors.append(NoLogError(to_native(te)))
legal_inputs = _get_legal_inputs(sub_spec, sub_parameters, options_aliases)
unsupported_parameters.update(
_get_unsupported_parameters(
sub_spec,
sub_parameters,
legal_inputs,
options_context,
store_supported=supported_parameters,
)
)
try:
check_mutually_exclusive(value.get('mutually_exclusive'), sub_parameters, options_context)
except TypeError as e:
errors.append(MutuallyExclusiveError(to_native(e)))
no_log_values.update(_set_defaults(sub_spec, sub_parameters, False))
try:
check_required_arguments(sub_spec, sub_parameters, options_context)
except TypeError as e:
errors.append(RequiredError(to_native(e)))
_validate_argument_types(sub_spec, sub_parameters, new_prefix, options_context, errors=errors)
_validate_argument_values(sub_spec, sub_parameters, options_context, errors=errors)
for check in _ADDITIONAL_CHECKS:
try:
check['func'](value.get(check['attr']), sub_parameters, options_context)
except TypeError as e:
errors.append(check['err'](to_native(e)))
no_log_values.update(_set_defaults(sub_spec, sub_parameters))
# Handle nested specs
_validate_sub_spec(
sub_spec, sub_parameters, new_prefix, options_context, errors, no_log_values,
unsupported_parameters, supported_parameters, alias_deprecations)
options_context.pop()
def env_fallback(*args, **kwargs):
"""Load value from environment variable"""
for arg in args:
if arg in os.environ:
return os.environ[arg]
raise AnsibleFallbackNotFound
def set_fallbacks(argument_spec, parameters):
no_log_values = set()
for param, value in argument_spec.items():
fallback = value.get('fallback', (None,))
fallback_strategy = fallback[0]
fallback_args = []
fallback_kwargs = {}
if param not in parameters and fallback_strategy is not None:
for item in fallback[1:]:
if isinstance(item, dict):
fallback_kwargs = item
else:
fallback_args = item
try:
fallback_value = fallback_strategy(*fallback_args, **fallback_kwargs)
except AnsibleFallbackNotFound:
continue
else:
if value.get('no_log', False) and fallback_value:
no_log_values.add(fallback_value)
parameters[param] = fallback_value
return no_log_values
def sanitize_keys(obj, no_log_strings, ignore_keys=frozenset()):
"""Sanitize the keys in a container object by removing ``no_log`` values from key names.
This is a companion function to the :func:`remove_values` function. Similar to that function,
we make use of ``deferred_removals`` to avoid hitting maximum recursion depth in cases of
large data structures.
:arg obj: The container object to sanitize. Non-container objects are returned unmodified.
:arg no_log_strings: A set of string values we do not want logged.
:kwarg ignore_keys: A set of string values of keys to not sanitize.
:returns: An object with sanitized keys.
"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _sanitize_keys_conditions(obj, no_log_strings, ignore_keys, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
if old_key in ignore_keys or old_key.startswith('_ansible'):
new_data[old_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals)
else:
# Sanitize the old key. We take advantage of the sanitizing code in
# _remove_values_conditions() rather than recreating it here.
new_key = _remove_values_conditions(old_key, no_log_strings, None)
new_data[new_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals)
else:
for elem in old_data:
new_elem = _sanitize_keys_conditions(elem, no_log_strings, ignore_keys, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from keys')
return new_value
def remove_values(value, no_log_strings):
"""Remove strings in ``no_log_strings`` from value.
If value is a container type, then remove a lot more.
Use of ``deferred_removals`` exists, rather than a pure recursive solution,
because of the potential to hit the maximum recursion depth when dealing with
large amounts of data (see `issue #24560 <https://github.com/ansible/ansible/issues/24560>`_).
"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _remove_values_conditions(value, no_log_strings, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
new_elem = _remove_values_conditions(old_elem, no_log_strings, deferred_removals)
new_data[old_key] = new_elem
else:
for elem in old_data:
new_elem = _remove_values_conditions(elem, no_log_strings, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from output')
return new_value
|
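The file content above closes with ``remove_values`` and ``sanitize_keys``, the two helpers most commonly called from outside this module. A minimal usage sketch of the difference between them (illustrative only; it assumes ansible-core is installed and that the file shown is ``ansible.module_utils.common.parameters``, as the function names suggest, so the import resolves):

```python
# Illustrative sketch, not part of the file above.
from ansible.module_utils.common.parameters import remove_values, sanitize_keys

data = {'user': 'admin', 'auth': {'token': 's3cret'}, 's3cret-key': True}

# remove_values() masks matching *values*, however deeply nested:
print(remove_values(data, {'s3cret'}))
# {'user': 'admin', 'auth': {'token': 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'}, 's3cret-key': True}

# sanitize_keys() leaves values alone and instead masks the no_log string
# wherever it appears inside *key* names:
print(sanitize_keys(data, {'s3cret'}))
# {'user': 'admin', 'auth': {'token': 's3cret'}, '********-key': True}
```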
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,656 |
Argument spec validation for required dict var does not fail if dict var is set to None
|
### Summary
If the argument spec defines a dict variable as required:
- if the variable has no default in defaults/main.yml and is not provided in the playbook, argument spec validation will fail. That's OK.
- if the variable is defined as a string e.g. `''` or `'test-string'`, validation will fail. That's OK.
- if the variable is defined as None or `~`, validation will pass. That's NOT OK.
### Issue Type
Bug Report
### Component Name
argument_spec
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0.dev0] (devel 51bddd862b) last updated 2023/01/03 22:32:16 (GMT +000)
config file = /home/nikos/projects/ansible-demo/ansible.cfg
configured module search path = ['/home/nikos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/nikos/projects/ansible/lib/ansible
ansible collection location = /home/nikos/.ansible/collections:/usr/share/ansible/collections
executable location = /home/nikos/projects/ansible/bin/ansible
python version = 3.10.8 (main, Nov 1 2022, 14:18:21) [GCC 12.2.0] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/nikos/projects/ansible-demo/ansible.cfg) = True
CACHE_PLUGIN(/home/nikos/projects/ansible-demo/ansible.cfg) = ansible.builtin.json
CACHE_PLUGIN_CONNECTION(/home/nikos/projects/ansible-demo/ansible.cfg) = facts_cache
CONFIG_FILE() = /home/nikos/projects/ansible-demo/ansible.cfg
DEFAULT_EXECUTABLE(/home/nikos/projects/ansible-demo/ansible.cfg) = /bin/bash
DEFAULT_GATHERING(/home/nikos/projects/ansible-demo/ansible.cfg) = implicit
DEFAULT_HOST_LIST(/home/nikos/projects/ansible-demo/ansible.cfg) = ['/home/nikos/projects/ansible-demo/inventory.ini']
DEFAULT_MANAGED_STR(/home/nikos/projects/ansible-demo/ansible.cfg) = \nAnsible managed (do not edit, changes may be overwritten)
EDITOR(env: EDITOR) = nvim
CACHE:
=====
jsonfile:
________
```
### OS / Environment
Controller and target: Arch Linux
### Steps to Reproduce
```yaml
# role argument specs
---
argument_specs:
main:
options:
dict_var:
type: dict
required: true
```
```
# playbook
---
- hosts: all
roles:
- myrole
vars:
dict_var: ~
```
### Expected Results
I expect argument validation to fail with a message that None type is not dict or could not be converted to dict.
### Actual Results
```console
Validation passes and role execution succeeds.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79656
|
https://github.com/ansible/ansible/pull/79677
|
964e678a7fa3b0745f9302e7a3682851089d09d2
|
694c11d5bdc7f5f7779d27315bec939dc9162ec6
| 2023-01-03T22:46:49Z |
python
| 2023-04-17T19:42:58Z |
lib/ansible/module_utils/common/validation.py
|
# -*- coding: utf-8 -*-
# Copyright (c) 2019 Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import os
import re
from ast import literal_eval
from ansible.module_utils._text import to_native
from ansible.module_utils.common._json_compat import json
from ansible.module_utils.common.collections import is_iterable
from ansible.module_utils.common.text.converters import jsonify
from ansible.module_utils.common.text.formatters import human_to_bytes
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils.six import (
binary_type,
integer_types,
string_types,
text_type,
)
def count_terms(terms, parameters):
"""Count the number of occurrences of a key in a given dictionary
:arg terms: String or iterable of values to check
:arg parameters: Dictionary of parameters
:returns: An integer that is the number of occurrences of the terms values
in the provided dictionary.
"""
if not is_iterable(terms):
terms = [terms]
return len(set(terms).intersection(parameters))
def safe_eval(value, locals=None, include_exceptions=False):
# do not allow method calls to modules
if not isinstance(value, string_types):
        # already templated to a data structure, perhaps?
if include_exceptions:
return (value, None)
return value
if re.search(r'\w\.\w+\(', value):
if include_exceptions:
return (value, None)
return value
# do not allow imports
if re.search(r'import \w+', value):
if include_exceptions:
return (value, None)
return value
try:
result = literal_eval(value)
if include_exceptions:
return (result, None)
else:
return result
except Exception as e:
if include_exceptions:
return (value, e)
return value
def check_mutually_exclusive(terms, parameters, options_context=None):
"""Check mutually exclusive terms against argument parameters
Accepts a single list or list of lists that are groups of terms that should be
mutually exclusive with one another
:arg terms: List of mutually exclusive parameters
:arg parameters: Dictionary of parameters
:kwarg options_context: List of strings of parent key names if ``terms`` are
in a sub spec.
:returns: Empty list or raises :class:`TypeError` if the check fails.
"""
results = []
if terms is None:
return results
for check in terms:
count = count_terms(check, parameters)
if count > 1:
results.append(check)
if results:
full_list = ['|'.join(check) for check in results]
msg = "parameters are mutually exclusive: %s" % ', '.join(full_list)
if options_context:
msg = "{0} found in {1}".format(msg, " -> ".join(options_context))
raise TypeError(to_native(msg))
return results
def check_required_one_of(terms, parameters, options_context=None):
"""Check each list of terms to ensure at least one exists in the given module
parameters
Accepts a list of lists or tuples
:arg terms: List of lists of terms to check. For each list of terms, at
least one is required.
:arg parameters: Dictionary of parameters
:kwarg options_context: List of strings of parent key names if ``terms`` are
in a sub spec.
:returns: Empty list or raises :class:`TypeError` if the check fails.
"""
results = []
if terms is None:
return results
for term in terms:
count = count_terms(term, parameters)
if count == 0:
results.append(term)
if results:
for term in results:
msg = "one of the following is required: %s" % ', '.join(term)
if options_context:
msg = "{0} found in {1}".format(msg, " -> ".join(options_context))
raise TypeError(to_native(msg))
return results
def check_required_together(terms, parameters, options_context=None):
"""Check each list of terms to ensure every parameter in each list exists
in the given parameters.
Accepts a list of lists or tuples.
:arg terms: List of lists of terms to check. Each list should include
parameters that are all required when at least one is specified
in the parameters.
:arg parameters: Dictionary of parameters
:kwarg options_context: List of strings of parent key names if ``terms`` are
in a sub spec.
:returns: Empty list or raises :class:`TypeError` if the check fails.
"""
results = []
if terms is None:
return results
for term in terms:
counts = [count_terms(field, parameters) for field in term]
non_zero = [c for c in counts if c > 0]
if len(non_zero) > 0:
if 0 in counts:
results.append(term)
if results:
for term in results:
msg = "parameters are required together: %s" % ', '.join(term)
if options_context:
msg = "{0} found in {1}".format(msg, " -> ".join(options_context))
raise TypeError(to_native(msg))
return results
def check_required_by(requirements, parameters, options_context=None):
"""For each key in requirements, check the corresponding list to see if they
exist in parameters.
Accepts a single string or list of values for each key.
:arg requirements: Dictionary of requirements
:arg parameters: Dictionary of parameters
:kwarg options_context: List of strings of parent key names if ``requirements`` are
in a sub spec.
    :returns: Empty dictionary or raises :class:`TypeError` if the check fails.
"""
result = {}
if requirements is None:
return result
for (key, value) in requirements.items():
if key not in parameters or parameters[key] is None:
continue
result[key] = []
# Support strings (single-item lists)
if isinstance(value, string_types):
value = [value]
for required in value:
if required not in parameters or parameters[required] is None:
result[key].append(required)
if result:
for key, missing in result.items():
if len(missing) > 0:
msg = "missing parameter(s) required by '%s': %s" % (key, ', '.join(missing))
if options_context:
msg = "{0} found in {1}".format(msg, " -> ".join(options_context))
raise TypeError(to_native(msg))
return result
def check_required_arguments(argument_spec, parameters, options_context=None):
"""Check all parameters in argument_spec and return a list of parameters
that are required but not present in parameters.
Raises :class:`TypeError` if the check fails
:arg argument_spec: Argument spec dictionary containing all parameters
and their specification
:arg parameters: Dictionary of parameters
:kwarg options_context: List of strings of parent key names if ``argument_spec`` are
in a sub spec.
:returns: Empty list or raises :class:`TypeError` if the check fails.
"""
missing = []
if argument_spec is None:
return missing
for (k, v) in argument_spec.items():
required = v.get('required', False)
if required and k not in parameters:
missing.append(k)
if missing:
msg = "missing required arguments: %s" % ", ".join(sorted(missing))
if options_context:
msg = "{0} found in {1}".format(msg, " -> ".join(options_context))
raise TypeError(to_native(msg))
return missing
def check_required_if(requirements, parameters, options_context=None):
"""Check parameters that are conditionally required
Raises :class:`TypeError` if the check fails
:arg requirements: List of lists specifying a parameter, value, parameters
required when the given parameter is the specified value, and optionally
a boolean indicating any or all parameters are required.
:Example:
.. code-block:: python
required_if=[
['state', 'present', ('path',), True],
['someint', 99, ('bool_param', 'string_param')],
]
:arg parameters: Dictionary of parameters
:returns: Empty list or raises :class:`TypeError` if the check fails.
The results attribute of the exception contains a list of dictionaries.
Each dictionary is the result of evaluating each item in requirements.
Each return dictionary contains the following keys:
:key missing: List of parameters that are required but missing
:key requires: 'any' or 'all'
:key parameter: Parameter name that has the requirement
:key value: Original value of the parameter
:key requirements: Original required parameters
:Example:
.. code-block:: python
[
{
'parameter': 'someint',
                'value': 99,
'requirements': ('bool_param', 'string_param'),
'missing': ['string_param'],
'requires': 'all',
}
]
:kwarg options_context: List of strings of parent key names if ``requirements`` are
in a sub spec.
"""
results = []
if requirements is None:
return results
for req in requirements:
missing = {}
missing['missing'] = []
max_missing_count = 0
is_one_of = False
if len(req) == 4:
key, val, requirements, is_one_of = req
else:
key, val, requirements = req
        # if is_one_of is True, at least one requirement should be
# present, else all requirements should be present.
if is_one_of:
max_missing_count = len(requirements)
missing['requires'] = 'any'
else:
missing['requires'] = 'all'
if key in parameters and parameters[key] == val:
for check in requirements:
count = count_terms(check, parameters)
if count == 0:
missing['missing'].append(check)
if len(missing['missing']) and len(missing['missing']) >= max_missing_count:
missing['parameter'] = key
missing['value'] = val
missing['requirements'] = requirements
results.append(missing)
if results:
for missing in results:
msg = "%s is %s but %s of the following are missing: %s" % (
missing['parameter'], missing['value'], missing['requires'], ', '.join(missing['missing']))
if options_context:
msg = "{0} found in {1}".format(msg, " -> ".join(options_context))
raise TypeError(to_native(msg))
return results
def check_missing_parameters(parameters, required_parameters=None):
"""This is for checking for required params when we can not check via
argspec because we need more information than is simply given in the argspec.
Raises :class:`TypeError` if any required parameters are missing
:arg parameters: Dictionary of parameters
:arg required_parameters: List of parameters to look for in the given parameters.
:returns: Empty list or raises :class:`TypeError` if the check fails.
"""
missing_params = []
if required_parameters is None:
return missing_params
for param in required_parameters:
if not parameters.get(param):
missing_params.append(param)
if missing_params:
msg = "missing required arguments: %s" % ', '.join(missing_params)
raise TypeError(to_native(msg))
return missing_params
# FIXME: The param and prefix parameters here are coming from AnsibleModule._check_type_string()
# which is using those for the warning messaged based on string conversion warning settings.
# Not sure how to deal with that here since we don't have config state to query.
def check_type_str(value, allow_conversion=True, param=None, prefix=''):
"""Verify that the value is a string or convert to a string.
Since unexpected changes can sometimes happen when converting to a string,
``allow_conversion`` controls whether or not the value will be converted or a
TypeError will be raised if the value is not a string and would be converted
:arg value: Value to validate or convert to a string
:arg allow_conversion: Whether to convert the string and return it or raise
a TypeError
:returns: Original value if it is a string, the value converted to a string
if allow_conversion=True, or raises a TypeError if allow_conversion=False.
"""
if isinstance(value, string_types):
return value
if allow_conversion:
return to_native(value, errors='surrogate_or_strict')
msg = "'{0!r}' is not a string and conversion is not allowed".format(value)
raise TypeError(to_native(msg))
def check_type_list(value):
"""Verify that the value is a list or convert to a list
A comma separated string will be split into a list. Raises a :class:`TypeError`
if unable to convert to a list.
:arg value: Value to validate or convert to a list
:returns: Original value if it is already a list, single item list if a
float, int, or string without commas, or a multi-item list if a
comma-delimited string.
"""
if isinstance(value, list):
return value
if isinstance(value, string_types):
return value.split(",")
elif isinstance(value, int) or isinstance(value, float):
return [str(value)]
raise TypeError('%s cannot be converted to a list' % type(value))
def check_type_dict(value):
"""Verify that value is a dict or convert it to a dict and return it.
Raises :class:`TypeError` if unable to convert to a dict
    :arg value: Dict or string to convert to a dict. Accepts ``k1=v1, k2=v2``.
:returns: value converted to a dictionary
"""
if isinstance(value, dict):
return value
if isinstance(value, string_types):
if value.startswith("{"):
try:
return json.loads(value)
except Exception:
(result, exc) = safe_eval(value, dict(), include_exceptions=True)
if exc is not None:
raise TypeError('unable to evaluate string as dictionary')
return result
elif '=' in value:
fields = []
field_buffer = []
in_quote = False
in_escape = False
for c in value.strip():
if in_escape:
field_buffer.append(c)
in_escape = False
elif c == '\\':
in_escape = True
elif not in_quote and c in ('\'', '"'):
in_quote = c
elif in_quote and in_quote == c:
in_quote = False
elif not in_quote and c in (',', ' '):
field = ''.join(field_buffer)
if field:
fields.append(field)
field_buffer = []
else:
field_buffer.append(c)
field = ''.join(field_buffer)
if field:
fields.append(field)
return dict(x.split("=", 1) for x in fields)
else:
raise TypeError("dictionary requested, could not parse JSON or key=value")
raise TypeError('%s cannot be converted to a dict' % type(value))
def check_type_bool(value):
"""Verify that the value is a bool or convert it to a bool and return it.
Raises :class:`TypeError` if unable to convert to a bool
:arg value: String, int, or float to convert to bool. Valid booleans include:
'1', 'on', 1, '0', 0, 'n', 'f', 'false', 'true', 'y', 't', 'yes', 'no', 'off'
:returns: Boolean True or False
"""
if isinstance(value, bool):
return value
if isinstance(value, string_types) or isinstance(value, (int, float)):
return boolean(value)
raise TypeError('%s cannot be converted to a bool' % type(value))
def check_type_int(value):
"""Verify that the value is an integer and return it or convert the value
to an integer and return it
Raises :class:`TypeError` if unable to convert to an int
    :arg value: String or int to convert or verify
:return: int of given value
"""
if isinstance(value, integer_types):
return value
if isinstance(value, string_types):
try:
return int(value)
except ValueError:
pass
raise TypeError('%s cannot be converted to an int' % type(value))
def check_type_float(value):
"""Verify that value is a float or convert it to a float and return it
Raises :class:`TypeError` if unable to convert to a float
:arg value: float, int, str, or bytes to verify or convert and return.
:returns: float of given value.
"""
if isinstance(value, float):
return value
if isinstance(value, (binary_type, text_type, int)):
try:
return float(value)
except ValueError:
pass
raise TypeError('%s cannot be converted to a float' % type(value))
def check_type_path(value):
"""Verify the provided value is a string or convert it to a string,
then return the expanded path
"""
value = check_type_str(value)
return os.path.expanduser(os.path.expandvars(value))
def check_type_raw(value):
"""Returns the raw value"""
return value
def check_type_bytes(value):
"""Convert a human-readable string value to bytes
    Raises :class:`TypeError` if unable to convert the value
"""
try:
return human_to_bytes(value)
except ValueError:
raise TypeError('%s cannot be converted to a Byte value' % type(value))
def check_type_bits(value):
"""Convert a human-readable string bits value to bits in integer.
Example: ``check_type_bits('1Mb')`` returns integer 1048576.
    Raises :class:`TypeError` if unable to convert the value.
"""
try:
return human_to_bytes(value, isbits=True)
except ValueError:
raise TypeError('%s cannot be converted to a Bit value' % type(value))
def check_type_jsonarg(value):
"""Return a jsonified string. Sometimes the controller turns a json string
into a dict/list so transform it back into json here
    Raises :class:`TypeError` if unable to convert the value
"""
if isinstance(value, (text_type, binary_type)):
return value.strip()
elif isinstance(value, (list, tuple, dict)):
return jsonify(value)
raise TypeError('%s cannot be converted to a json string' % type(value))
|
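A quick interactive sketch ties the converters above back to the issue reported in this row. ``check_type_dict`` itself rejects ``None``, but the type checkers are never reached for a ``None`` value: ``_validate_argument_types`` in the parameters file earlier skips ``None`` before conversion, and ``check_required_arguments`` above only verifies that the key is present. That combination is why a required dict explicitly set to ``~`` passes validation. The calls below are illustrative only:

```python
# Illustrative; exercises only the converter shown above.
from ansible.module_utils.common.validation import check_type_dict

check_type_dict({'a': 1})      # -> {'a': 1} (already a dict)
check_type_dict('{"a": 1}')    # -> {'a': 1} (parsed as JSON)
check_type_dict('a=1, b=two')  # -> {'a': '1', 'b': 'two'} (key=value parsing)
check_type_dict(None)          # raises TypeError: <class 'NoneType'> cannot be converted to a dict
```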
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,656 |
Argument spec validation for required dict var does not fail if dict var is set to None
|
### Summary
If the argument spec defines a dict variable as required:
- if the variable has no default in defaults/main.yml and is not provided in the playbook, argument spec validation will fail. That's OK.
- if the variable is defined as a string e.g. `''` or `'test-string'`, validation will fail. That's OK.
- if the variable is defined as None or `~`, validation will pass. That's NOT OK.
### Issue Type
Bug Report
### Component Name
argument_spec
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0.dev0] (devel 51bddd862b) last updated 2023/01/03 22:32:16 (GMT +000)
config file = /home/nikos/projects/ansible-demo/ansible.cfg
configured module search path = ['/home/nikos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/nikos/projects/ansible/lib/ansible
ansible collection location = /home/nikos/.ansible/collections:/usr/share/ansible/collections
executable location = /home/nikos/projects/ansible/bin/ansible
python version = 3.10.8 (main, Nov 1 2022, 14:18:21) [GCC 12.2.0] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/nikos/projects/ansible-demo/ansible.cfg) = True
CACHE_PLUGIN(/home/nikos/projects/ansible-demo/ansible.cfg) = ansible.builtin.json
CACHE_PLUGIN_CONNECTION(/home/nikos/projects/ansible-demo/ansible.cfg) = facts_cache
CONFIG_FILE() = /home/nikos/projects/ansible-demo/ansible.cfg
DEFAULT_EXECUTABLE(/home/nikos/projects/ansible-demo/ansible.cfg) = /bin/bash
DEFAULT_GATHERING(/home/nikos/projects/ansible-demo/ansible.cfg) = implicit
DEFAULT_HOST_LIST(/home/nikos/projects/ansible-demo/ansible.cfg) = ['/home/nikos/projects/ansible-demo/inventory.ini']
DEFAULT_MANAGED_STR(/home/nikos/projects/ansible-demo/ansible.cfg) = \nAnsible managed (do not edit, changes may be overwritten)
EDITOR(env: EDITOR) = nvim
CACHE:
=====
jsonfile:
________
```
### OS / Environment
Controller and target: Arch Linux
### Steps to Reproduce
```yaml
# role argument specs
---
argument_specs:
main:
options:
dict_var:
type: dict
required: true
```
```
# playbook
---
- hosts: all
roles:
- myrole
vars:
dict_var: ~
```
### Expected Results
I expect argument validation to fail with a message that None type is not dict or could not be converted to dict.
### Actual Results
```console
Validation passes and role execution succeeds.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79656
|
https://github.com/ansible/ansible/pull/79677
|
964e678a7fa3b0745f9302e7a3682851089d09d2
|
694c11d5bdc7f5f7779d27315bec939dc9162ec6
| 2023-01-03T22:46:49Z |
python
| 2023-04-17T19:42:58Z |
test/integration/targets/apt_repository/tasks/apt.yml
|
---
- set_fact:
test_ppa_name: 'ppa:git-core/ppa'
test_ppa_filename: 'git-core'
test_ppa_spec: 'deb http://ppa.launchpad.net/git-core/ppa/ubuntu {{ansible_distribution_release}} main'
test_ppa_key: 'E1DF1F24' # http://keyserver.ubuntu.com:11371/pks/lookup?search=0xD06AAF4C11DAB86DF421421EFE6B20ECA7AD98A1&op=index
- name: show python version
debug: var=ansible_python_version
- name: use python-apt
set_fact:
python_apt: python-apt
when: ansible_python_version is version('3', '<')
- name: use python3-apt
set_fact:
python_apt: python3-apt
when: ansible_python_version is version('3', '>=')
# UNINSTALL 'python-apt'
# The `apt_repository` module has the smarts to auto-install `python-apt`. To
# test, we will first uninstall `python-apt`.
- name: check {{ python_apt }} with dpkg
shell: dpkg -s {{ python_apt }}
register: dpkg_result
ignore_errors: true
- name: uninstall {{ python_apt }} with apt
apt: pkg={{ python_apt }} state=absent purge=yes
register: apt_result
when: dpkg_result is successful
#
# TEST: apt_repository: repo=<name>
#
- import_tasks: 'cleanup.yml'
- name: 'record apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_before
- name: 'name=<name> (expect: pass)'
apt_repository: repo='{{test_ppa_name}}' state=present
register: result
- name: 'assert the repository was added'
assert:
that:
- 'result.changed'
- 'result.state == "present"'
- 'result.repo == "{{test_ppa_name}}"'
- name: 'examine apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_after
- name: 'assert the apt cache did change'
assert:
that:
- 'cache_before.stat.mtime != cache_after.stat.mtime'
- name: 'ensure ppa key is installed (expect: pass)'
apt_key: id='{{test_ppa_key}}' state=present
#
# TEST: apt_repository: repo=<name> update_cache=no
#
- import_tasks: 'cleanup.yml'
- name: 'record apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_before
- name: 'name=<name> update_cache=no (expect: pass)'
apt_repository: repo='{{test_ppa_name}}' state=present update_cache=no
register: result
- assert:
that:
- 'result.changed'
- 'result.state == "present"'
- 'result.repo == "{{test_ppa_name}}"'
- name: 'examine apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_after
- name: 'assert the apt cache did *NOT* change'
assert:
that:
- 'cache_before.stat.mtime == cache_after.stat.mtime'
- name: 'ensure ppa key is installed (expect: pass)'
apt_key: id='{{test_ppa_key}}' state=present
#
# TEST: apt_repository: repo=<name> update_cache=yes
#
- import_tasks: 'cleanup.yml'
- name: 'record apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_before
- name: 'name=<name> update_cache=yes (expect: pass)'
apt_repository: repo='{{test_ppa_name}}' state=present update_cache=yes
register: result
- assert:
that:
- 'result.changed'
- 'result.state == "present"'
- 'result.repo == "{{test_ppa_name}}"'
- name: 'examine apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_after
- name: 'assert the apt cache did change'
assert:
that:
- 'cache_before.stat.mtime != cache_after.stat.mtime'
- name: 'ensure ppa key is installed (expect: pass)'
apt_key: id='{{test_ppa_key}}' state=present
#
# TEST: apt_repository: repo=<spec>
#
- import_tasks: 'cleanup.yml'
- name: 'record apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_before
- name: ensure ppa key is present before adding repo that requires authentication
apt_key: keyserver=keyserver.ubuntu.com id='{{test_ppa_key}}' state=present
- name: 'name=<spec> (expect: pass)'
apt_repository: repo='{{test_ppa_spec}}' state=present
register: result
- name: update the cache
apt:
update_cache: true
register: result_cache
- assert:
that:
- 'result.changed'
- 'result.state == "present"'
- 'result.repo == "{{test_ppa_spec}}"'
- '"sources_added" in result'
- 'result.sources_added | length == 1'
- '"git" in result.sources_added[0]'
- '"sources_removed" in result'
- 'result.sources_removed | length == 0'
- result_cache is not changed
- name: 'examine apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_after
- name: 'assert the apt cache did change'
assert:
that:
- 'cache_before.stat.mtime != cache_after.stat.mtime'
- name: remove repo by spec
apt_repository: repo='{{test_ppa_spec}}' state=absent
register: result
- assert:
that:
- 'result.changed'
- 'result.state == "absent"'
- 'result.repo == "{{test_ppa_spec}}"'
- '"sources_added" in result'
- 'result.sources_added | length == 0'
- '"sources_removed" in result'
- 'result.sources_removed | length == 1'
- '"git" in result.sources_removed[0]'
# When installing a repo with the spec, the key is *NOT* added
- name: 'ensure ppa key is absent (expect: pass)'
apt_key: id='{{test_ppa_key}}' state=absent
#
# TEST: apt_repository: repo=<spec> filename=<filename>
#
- import_tasks: 'cleanup.yml'
- name: 'record apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_before
- name: ensure ppa key is present before adding repo that requires authentication
apt_key: keyserver=keyserver.ubuntu.com id='{{test_ppa_key}}' state=present
- name: 'name=<spec> filename=<filename> (expect: pass)'
apt_repository: repo='{{test_ppa_spec}}' filename='{{test_ppa_filename}}' state=present
register: result
- assert:
that:
- 'result.changed'
- 'result.state == "present"'
- 'result.repo == "{{test_ppa_spec}}"'
- name: 'examine source file'
stat: path='/etc/apt/sources.list.d/{{test_ppa_filename}}.list'
register: source_file
- name: 'assert source file exists'
assert:
that:
- 'source_file.stat.exists == True'
- name: 'examine apt cache mtime'
stat: path='/var/cache/apt/pkgcache.bin'
register: cache_after
- name: 'assert the apt cache did change'
assert:
that:
- 'cache_before.stat.mtime != cache_after.stat.mtime'
# When installing a repo with the spec, the key is *NOT* added
- name: 'ensure ppa key is absent (expect: pass)'
apt_key: id='{{test_ppa_key}}' state=absent
- name: Test apt_repository with a null value for repo
apt_repository:
repo:
register: result
ignore_errors: yes
- assert:
that:
- result is failed
- result.msg == 'Please set argument \'repo\' to a non-empty value'
- name: Test apt_repository with an empty value for repo
apt_repository:
repo: ""
register: result
ignore_errors: yes
- assert:
that:
- result is failed
- result.msg == 'Please set argument \'repo\' to a non-empty value'
#
# TEARDOWN
#
- import_tasks: 'cleanup.yml'
|
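The two tasks near the end of the file above pin down the error text apt_repository returns for a null or empty ``repo``. A module-side guard of roughly that shape is sketched below; it is hypothetical and not the module's actual code — only the asserted message comes from the tests above.

```python
# Hypothetical guard, shown only to illustrate the behaviour the tasks above
# assert; not the real apt_repository implementation.
def validated_repo(repo):
    if repo is None or not repo.strip():
        # The module surfaces this message via fail_json(); the tests above
        # assert on the exact wording.
        raise ValueError("Please set argument 'repo' to a non-empty value")
    return repo.strip()


print(validated_repo('deb http://example.org/ubuntu focal main'))
# validated_repo(None) and validated_repo('') both raise ValueError
```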
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,656 |
Argument spec validation for required dict var does not fail if dict var is set to None
|
### Summary
If the argument spec defines a dict variable as required:
- if the variable has no default in defaults/main.yml and is not provided in the playbook, argument spec validation will fail. That's OK.
- if the variable is defined as a string e.g. `''` or `'test-string'`, validation will fail. That's OK.
- if the variable is defined as None or `~`, validation will pass. That's NOT OK.
### Issue Type
Bug Report
### Component Name
argument_spec
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0.dev0] (devel 51bddd862b) last updated 2023/01/03 22:32:16 (GMT +000)
config file = /home/nikos/projects/ansible-demo/ansible.cfg
configured module search path = ['/home/nikos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/nikos/projects/ansible/lib/ansible
ansible collection location = /home/nikos/.ansible/collections:/usr/share/ansible/collections
executable location = /home/nikos/projects/ansible/bin/ansible
python version = 3.10.8 (main, Nov 1 2022, 14:18:21) [GCC 12.2.0] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/nikos/projects/ansible-demo/ansible.cfg) = True
CACHE_PLUGIN(/home/nikos/projects/ansible-demo/ansible.cfg) = ansible.builtin.json
CACHE_PLUGIN_CONNECTION(/home/nikos/projects/ansible-demo/ansible.cfg) = facts_cache
CONFIG_FILE() = /home/nikos/projects/ansible-demo/ansible.cfg
DEFAULT_EXECUTABLE(/home/nikos/projects/ansible-demo/ansible.cfg) = /bin/bash
DEFAULT_GATHERING(/home/nikos/projects/ansible-demo/ansible.cfg) = implicit
DEFAULT_HOST_LIST(/home/nikos/projects/ansible-demo/ansible.cfg) = ['/home/nikos/projects/ansible-demo/inventory.ini']
DEFAULT_MANAGED_STR(/home/nikos/projects/ansible-demo/ansible.cfg) = \nAnsible managed (do not edit, changes may be overwritten)
EDITOR(env: EDITOR) = nvim
CACHE:
=====
jsonfile:
________
```
### OS / Environment
Controller and target: Arch Linux
### Steps to Reproduce
```yaml
# role argument specs
---
argument_specs:
main:
options:
dict_var:
type: dict
required: true
```
```
# playbook
---
- hosts: all
roles:
- myrole
vars:
dict_var: ~
```
### Expected Results
I expect argument validation to fail with a message that None type is not dict or could not be converted to dict.
### Actual Results
```console
Validation passes and role execution succeeds.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79656
|
https://github.com/ansible/ansible/pull/79677
|
964e678a7fa3b0745f9302e7a3682851089d09d2
|
694c11d5bdc7f5f7779d27315bec939dc9162ec6
| 2023-01-03T22:46:49Z |
python
| 2023-04-17T19:42:58Z |
test/integration/targets/roles_arg_spec/roles/c/meta/main.yml
|
argument_specs:
main:
short_description: Main entry point for role C.
options:
c_int:
type: "int"
required: true
|
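Role C's spec above marks ``c_int`` as a required ``int``. Feeding an equivalent spec to the standalone validator class in ansible-core reproduces the behaviour this issue describes: a key that is present but ``None`` satisfies the required check and skips type conversion. A sketch, assuming ansible-core >= 2.11 so ``ArgumentSpecValidator`` is available:

```python
# Sketch only; mirrors the role spec above with the public validator class.
from ansible.module_utils.common.arg_spec import ArgumentSpecValidator

spec = {'c_int': {'type': 'int', 'required': True}}
validator = ArgumentSpecValidator(spec)

print(validator.validate({}).error_messages)
# ['missing required arguments: c_int']

print(validator.validate({'c_int': None}).error_messages)
# [] -- key present, so the required check passes, and None is skipped
#       before type checking: the behaviour this issue reports.
```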
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,656 |
Argument spec validation for required dict var does not fail if dict var is set to None
|
### Summary
If the argument spec defines a dict variable as required:
- if the variable has no default in defaults/main.yml and is not provided in the playbook, argument spec validation will fail. That's OK.
- if the variable is defined as a string e.g. `''` or `'test-string'`, validation will fail. That's OK.
- if the variable is defined as None or `~`, validation will pass. That's NOT OK.
### Issue Type
Bug Report
### Component Name
argument_spec
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0.dev0] (devel 51bddd862b) last updated 2023/01/03 22:32:16 (GMT +000)
config file = /home/nikos/projects/ansible-demo/ansible.cfg
configured module search path = ['/home/nikos/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/nikos/projects/ansible/lib/ansible
ansible collection location = /home/nikos/.ansible/collections:/usr/share/ansible/collections
executable location = /home/nikos/projects/ansible/bin/ansible
python version = 3.10.8 (main, Nov 1 2022, 14:18:21) [GCC 12.2.0] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/nikos/projects/ansible-demo/ansible.cfg) = True
CACHE_PLUGIN(/home/nikos/projects/ansible-demo/ansible.cfg) = ansible.builtin.json
CACHE_PLUGIN_CONNECTION(/home/nikos/projects/ansible-demo/ansible.cfg) = facts_cache
CONFIG_FILE() = /home/nikos/projects/ansible-demo/ansible.cfg
DEFAULT_EXECUTABLE(/home/nikos/projects/ansible-demo/ansible.cfg) = /bin/bash
DEFAULT_GATHERING(/home/nikos/projects/ansible-demo/ansible.cfg) = implicit
DEFAULT_HOST_LIST(/home/nikos/projects/ansible-demo/ansible.cfg) = ['/home/nikos/projects/ansible-demo/inventory.ini']
DEFAULT_MANAGED_STR(/home/nikos/projects/ansible-demo/ansible.cfg) = \nAnsible managed (do not edit, changes may be overwritten)
EDITOR(env: EDITOR) = nvim
CACHE:
=====
jsonfile:
________
```
### OS / Environment
Controller and target: Arch Linux
### Steps to Reproduce
```yaml
# role argument specs
---
argument_specs:
main:
options:
dict_var:
type: dict
required: true
```
```
# playbook
---
- hosts: all
roles:
- myrole
vars:
dict_var: ~
```
### Expected Results
I expect argument validation to fail with a message that the None type is not a dict or could not be converted to a dict.
### Actual Results
```console
Validation passes and role execution succeeds.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79656
|
https://github.com/ansible/ansible/pull/79677
|
964e678a7fa3b0745f9302e7a3682851089d09d2
|
694c11d5bdc7f5f7779d27315bec939dc9162ec6
| 2023-01-03T22:46:49Z |
python
| 2023-04-17T19:42:58Z |
test/integration/targets/roles_arg_spec/test.yml
|
---
- hosts: localhost
gather_facts: false
roles:
- { role: a, a_str: "roles" }
vars:
INT_VALUE: 42
tasks:
- name: "Valid simple role usage with include_role"
include_role:
name: a
vars:
a_str: "include_role"
- name: "Valid simple role usage with import_role"
import_role:
name: a
vars:
a_str: "import_role"
- name: "Valid role usage (more args)"
include_role:
name: b
vars:
b_str: "xyz"
b_int: 5
b_bool: true
- name: "Valid simple role usage with include_role of different entry point"
include_role:
name: a
tasks_from: "alternate"
vars:
a_int: 256
- name: "Valid simple role usage with import_role of different entry point"
import_role:
name: a
tasks_from: "alternate"
vars:
a_int: 512
- name: "Valid simple role usage with a templated value"
import_role:
name: a
vars:
a_int: "{{ INT_VALUE }}"
a_str: "import_role"
- name: "Call role entry point that is defined, but has no spec data"
import_role:
name: a
tasks_from: "no_spec_entrypoint"
- name: "New play to reset vars: Test include_role fails"
hosts: localhost
gather_facts: false
vars:
expected_returned_spec:
b_bool:
required: true
type: "bool"
b_int:
required: true
type: "int"
b_str:
required: true
type: "str"
tasks:
- block:
- name: "Invalid role usage"
include_role:
name: b
vars:
b_bool: 7
- fail:
msg: "Should not get here"
rescue:
- debug:
var: ansible_failed_result
- name: "Validate failure"
assert:
that:
- ansible_failed_task.name == "Validating arguments against arg spec 'main' - Main entry point for role B."
- ansible_failed_result.argument_errors | length == 2
- "'missing required arguments: b_int, b_str' in ansible_failed_result.argument_errors"
- ansible_failed_result.validate_args_context.argument_spec_name == "main"
- ansible_failed_result.validate_args_context.name == "b"
- ansible_failed_result.validate_args_context.type == "role"
- "ansible_failed_result.validate_args_context.path is search('roles_arg_spec/roles/b')"
- ansible_failed_result.argument_spec_data == expected_returned_spec
- name: "New play to reset vars: Test import_role fails"
hosts: localhost
gather_facts: false
vars:
expected_returned_spec:
b_bool:
required: true
type: "bool"
b_int:
required: true
type: "int"
b_str:
required: true
type: "str"
tasks:
- block:
- name: "Invalid role usage"
import_role:
name: b
vars:
b_bool: 7
- fail:
msg: "Should not get here"
rescue:
- debug:
var: ansible_failed_result
- name: "Validate failure"
assert:
that:
- ansible_failed_task.name == "Validating arguments against arg spec 'main' - Main entry point for role B."
- ansible_failed_result.argument_errors | length == 2
- "'missing required arguments: b_int, b_str' in ansible_failed_result.argument_errors"
- ansible_failed_result.validate_args_context.argument_spec_name == "main"
- ansible_failed_result.validate_args_context.name == "b"
- ansible_failed_result.validate_args_context.type == "role"
- "ansible_failed_result.validate_args_context.path is search('roles_arg_spec/roles/b')"
- ansible_failed_result.argument_spec_data == expected_returned_spec
- name: "New play to reset vars: Test nested role including/importing role succeeds"
hosts: localhost
gather_facts: false
vars:
c_int: 1
a_str: "some string"
a_int: 42
tasks:
- name: "Test import_role of role C"
import_role:
name: c
- name: "Test include_role of role C"
include_role:
name: c
- name: "New play to reset vars: Test nested role including/importing role fails"
hosts: localhost
gather_facts: false
vars:
main_expected_returned_spec:
a_str:
required: true
type: "str"
alternate_expected_returned_spec:
a_int:
required: true
type: "int"
tasks:
- block:
- name: "Test import_role of role C (missing a_str)"
import_role:
name: c
vars:
c_int: 100
- fail:
msg: "Should not get here"
rescue:
- debug:
var: ansible_failed_result
- name: "Validate import_role failure"
assert:
that:
# NOTE: a bug here that prevents us from getting ansible_failed_task
- ansible_failed_result.argument_errors | length == 1
- "'missing required arguments: a_str' in ansible_failed_result.argument_errors"
- ansible_failed_result.validate_args_context.argument_spec_name == "main"
- ansible_failed_result.validate_args_context.name == "a"
- ansible_failed_result.validate_args_context.type == "role"
- "ansible_failed_result.validate_args_context.path is search('roles_arg_spec/roles/a')"
- ansible_failed_result.argument_spec_data == main_expected_returned_spec
- block:
- name: "Test include_role of role C (missing a_int from `alternate` entry point)"
include_role:
name: c
vars:
c_int: 200
a_str: "some string"
- fail:
msg: "Should not get here"
rescue:
- debug:
var: ansible_failed_result
- name: "Validate include_role failure"
assert:
that:
# NOTE: a bug here that prevents us from getting ansible_failed_task
- ansible_failed_result.argument_errors | length == 1
- "'missing required arguments: a_int' in ansible_failed_result.argument_errors"
- ansible_failed_result.validate_args_context.argument_spec_name == "alternate"
- ansible_failed_result.validate_args_context.name == "a"
- ansible_failed_result.validate_args_context.type == "role"
- "ansible_failed_result.validate_args_context.path is search('roles_arg_spec/roles/a')"
- ansible_failed_result.argument_spec_data == alternate_expected_returned_spec
- name: "New play to reset vars: Test role with no tasks can fail"
hosts: localhost
gather_facts: false
tasks:
- block:
- name: "Test import_role of role role_with_no_tasks (missing a_str)"
import_role:
name: role_with_no_tasks
- fail:
msg: "Should not get here"
rescue:
- debug:
var: ansible_failed_result
- name: "Validate import_role failure"
assert:
that:
# NOTE: a bug here that prevents us from getting ansible_failed_task
- ansible_failed_result.argument_errors | length == 1
- "'missing required arguments: a_str' in ansible_failed_result.argument_errors"
- ansible_failed_result.validate_args_context.argument_spec_name == "main"
- ansible_failed_result.validate_args_context.name == "role_with_no_tasks"
- ansible_failed_result.validate_args_context.type == "role"
- "ansible_failed_result.validate_args_context.path is search('roles_arg_spec/roles/role_with_no_tasks')"
- name: "New play to reset vars: Test disabling role validation with rolespec_validate=False"
hosts: localhost
gather_facts: false
tasks:
- block:
- name: "Test import_role of role C (missing a_str), but validation turned off"
import_role:
name: c
rolespec_validate: False
- fail:
msg: "Should not get here"
rescue:
- debug:
var: ansible_failed_result
- name: "Validate import_role failure"
assert:
that:
# We expect the role to actually run, but will fail because an undefined variable was referenced
# and validation wasn't performed up front (thus not returning 'argument_errors').
- "'argument_errors' not in ansible_failed_result"
- "'The task includes an option with an undefined variable.' in ansible_failed_result.msg"
- name: "New play to reset vars: Test collection-based role"
hosts: localhost
gather_facts: false
tasks:
- name: "Valid collection-based role usage"
import_role:
name: "foo.bar.blah"
vars:
blah_str: "some string"
- name: "New play to reset vars: Test collection-based role will fail"
hosts: localhost
gather_facts: false
tasks:
- block:
- name: "Invalid collection-based role usage"
import_role:
name: "foo.bar.blah"
- fail:
msg: "Should not get here"
rescue:
- debug: var=ansible_failed_result
- name: "Validate import_role failure for collection-based role"
assert:
that:
- ansible_failed_result.argument_errors | length == 1
- "'missing required arguments: blah_str' in ansible_failed_result.argument_errors"
- ansible_failed_result.validate_args_context.argument_spec_name == "main"
- ansible_failed_result.validate_args_context.name == "blah"
- ansible_failed_result.validate_args_context.type == "role"
- "ansible_failed_result.validate_args_context.path is search('roles_arg_spec/collections/ansible_collections/foo/bar/roles/blah')"
- name: "New play to reset vars: Test templating succeeds"
hosts: localhost
gather_facts: false
vars:
value_some_choices: "choice2"
value_some_list: [1.5]
value_some_dict: {"some_key": "some_value"}
value_some_int: 1
value_some_float: 1.5
value_some_json: '{[1, 3, 3] 345345|45v<#!}'
value_some_jsonarg: {"foo": [1, 3, 3]}
value_some_second_level: True
value_third_level: 5
tasks:
- block:
- include_role:
name: test1
vars:
some_choices: "{{ value_some_choices }}"
some_list: "{{ value_some_list }}"
some_dict: "{{ value_some_dict }}"
some_int: "{{ value_some_int }}"
some_float: "{{ value_some_float }}"
some_json: "{{ value_some_json }}"
some_jsonarg: "{{ value_some_jsonarg }}"
some_dict_options:
some_second_level: "{{ value_some_second_level }}"
multi_level_option:
second_level:
third_level: "{{ value_third_level }}"
rescue:
- debug: var=ansible_failed_result
- fail:
msg: "Should not get here"
- name: "New play to reset vars: Test empty argument_specs.yml"
hosts: localhost
gather_facts: false
tasks:
- name: Import role with an empty argument_specs.yml
import_role:
name: empty_file
- name: "New play to reset vars: Test empty argument_specs key"
hosts: localhost
gather_facts: false
tasks:
- name: Import role with an empty argument_specs key
import_role:
name: empty_argspec
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,479 |
Update Tools and Programs document
|
### Summary
Change the tools and programs page based on [community feedback](https://github.com/ansible-community/community-topics/issues/220).
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/community/other_tools_and_programs.rst
### Ansible Version
```console
$ ansible --version
2.16
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80479
|
https://github.com/ansible/ansible/pull/80493
|
c1d8130df5c1bcefceb439bbf19cd8c926ce36d5
|
44794e3ebc04d90669d31b0ccde47c40aa48225f
| 2023-04-11T14:30:17Z |
python
| 2023-04-18T18:30:54Z |
docs/docsite/rst/community/other_tools_and_programs.rst
|
.. _other_tools_and_programs:
************************
Other Tools and Programs
************************
.. contents::
:local:
The Ansible community uses a range of tools for working with the Ansible project. This is a list of some of the most popular of these tools.
If you know of any other tools that should be added, this list can be updated by clicking "Edit on GitHub" on the top right of this page.
***************
Popular editors
***************
Atom
====
An open-source, free GUI text editor created and maintained by GitHub. You can keep track of git project
changes, commit from the GUI, and see what branch you are on. You can customize the themes for different colors and install syntax highlighting packages for different languages. You can install Atom on Linux, macOS and Windows. Useful Atom plugins include:
* `language-yaml <https://atom.io/packages/language-yaml>`_ - YAML highlighting for Atom (built-in).
* `linter-js-yaml <https://atom.io/packages/linter-js-yaml>`_ - parses your YAML files in Atom through js-yaml.
Emacs
=====
A free, open-source text editor and IDE that supports auto-indentation, syntax highlighting and a built-in terminal shell (among other things).
* `yaml-mode <https://github.com/yoshiki/yaml-mode>`_ - YAML highlighting and syntax checking.
* `jinja2-mode <https://github.com/paradoxxxzero/jinja2-mode>`_ - Jinja2 highlighting and syntax checking.
* `magit-mode <https://github.com/magit/magit>`_ - Git porcelain within Emacs.
* `lsp-mode <https://emacs-lsp.github.io/lsp-mode/page/lsp-ansible/>`_ - Ansible syntax highlighting, auto-completion and diagnostics.
PyCharm
=======
A full IDE (integrated development environment) for Python software development. It ships with everything you need to write Python scripts and complete software, including support for YAML syntax highlighting. It's a little overkill for writing roles/playbooks, but it can be a very useful tool if you write modules and submit code for Ansible. It can also be used to debug the Ansible engine.
Sublime
=======
A closed-source, subscription GUI text editor. You can customize the GUI with themes and install packages for language highlighting and other refinements. You can install Sublime on Linux, macOS and Windows. Useful Sublime plugins include:
* `GitGutter <https://packagecontrol.io/packages/GitGutter>`_ - shows information about files in a git repository.
* `SideBarEnhancements <https://packagecontrol.io/packages/SideBarEnhancements>`_ - provides enhancements to the operations on Sidebar of Files and Folders.
* `Sublime Linter <https://packagecontrol.io/packages/SublimeLinter>`_ - a code-linting framework for Sublime Text 3.
* `Pretty YAML <https://packagecontrol.io/packages/Pretty%20YAML>`_ - prettifies YAML for Sublime Text 2 and 3.
* `Yamllint <https://packagecontrol.io/packages/SublimeLinter-contrib-yamllint>`_ - a Sublime wrapper around yamllint.
Visual studio code
==================
An open-source, free GUI text editor created and maintained by Microsoft. Useful Visual Studio Code plugins include:
* `Ansible extension by Red Hat <https://marketplace.visualstudio.com/items?itemName=redhat.ansible>`_ - provides autocompletion, syntax highlighting, hover, diagnostics, goto support, and commands to run the ansible-playbook and ansible-navigator tools for both local and execution-environment setups.
* `YAML Support by Red Hat <https://marketplace.visualstudio.com/items?itemName=redhat.vscode-yaml>`_ - provides YAML support through yaml-language-server with built-in Kubernetes and Kedge syntax support.
vim
===
An open-source, free command-line text editor. Useful vim plugins include:
* `Ansible vim <https://github.com/pearofducks/ansible-vim>`_ - vim syntax plugin for Ansible 2.x, it supports YAML playbooks, Jinja2 templates, and Ansible's hosts files.
* `Ansible vim and neovim plugin <https://www.npmjs.com/package/@yaegassy/coc-ansible>`_ - vim plugin (lsp client) for Ansible, it supports autocompletion, syntax highlighting, hover, diagnostics, and goto support.
JetBrains
=========
Integrated development environments based on IntelliJ's framework, available as an open-source Community edition and a closed-source Enterprise edition, including IDEA, AppCode, CLion, GoLand, PhpStorm, PyCharm and others. Useful JetBrains platform plugins include:
* `Ansible <https://plugins.jetbrains.com/plugin/14893-ansible>`_ - general Ansible plugin provides auto-completion, role name suggestion and other handy features for working with playbooks and roles.
* `Ansible Vault Editor <https://plugins.jetbrains.com/plugin/14278-ansible-vault-editor>`_ - Ansible Vault Editor with auto encryption/decryption.
* `Ansible Lint <https://plugins.jetbrains.com/plugin/20905-ansible-lint>`__ - Ansible Lint integration with automatic/continuous annotation of errors, warnings, and info while editing.
*****************
Development tools
*****************
Finding related issues and PRs
==============================
There are various ways to find existing issues and pull requests (PRs):
- `PR by File <https://ansible.sivel.net/pr/byfile.html>`_ - shows a current list of all open pull requests by individual file. An essential tool for Ansible module maintainers.
- `jctanner's Ansible Tools <https://github.com/jctanner/ansible-tools>`_ - miscellaneous collection of useful helper scripts for Ansible development.
.. _validate-playbook-tools:
******************************
Tools for validating playbooks
******************************
- `Ansible Lint <https://docs.ansible.com/ansible-lint/index.html>`_ - a highly configurable linter for Ansible playbooks.
- `Ansible Review <https://github.com/willthames/ansible-review>`_ - an extension of Ansible Lint designed for code review.
- `Molecule <https://molecule.readthedocs.io/en/latest/>`_ - a testing framework for Ansible plays and roles.
- `yamllint <https://yamllint.readthedocs.io/en/stable/>`__ - a command-line utility to check syntax validity including key repetition and indentation issues.
***********
Other tools
***********
- `Ansible cmdb <https://github.com/fboender/ansible-cmdb>`_ - takes the output of Ansible's fact gathering and converts it into a static HTML overview page containing system configuration information.
- `Ansible Inventory Grapher <https://github.com/willthames/ansible-inventory-grapher>`_ - visually displays inventory inheritance hierarchies and at what level a variable is defined in inventory.
- `Ansible Language Server <https://www.npmjs.com/package/@ansible/ansible-language-server>`_ - a server that implements `language server protocol <https://microsoft.github.io/language-server-protocol/>`_ for Ansible.
- `Ansible Playbook Grapher <https://github.com/haidaraM/ansible-playbook-grapher>`_ - a command line tool to create a graph representing your Ansible playbook tasks and roles.
- `Ansible Shell <https://github.com/dominis/ansible-shell>`_ - an interactive shell for Ansible with built-in tab completion for all the modules.
- `Ansible Silo <https://github.com/groupon/ansible-silo>`_ - a self-contained Ansible environment by Docker.
- `Ansigenome <https://github.com/nickjj/ansigenome>`_ - a command line tool designed to help you manage your Ansible roles.
- `antsibull-changelog <https://github.com/ansible-community/antsibull-changelog>`_ - a changelog generator for Ansible collections.
- `antsibull-docs <https://github.com/ansible-community/antsibull-docs>`_ - generates docsites for collections and can validate collection documentation.
- `ARA <https://github.com/ansible-community/ara>`_ - ARA Records Ansible playbooks and makes them easier to understand and troubleshoot with a reporting API, UI and CLI.
- `Awesome Ansible <https://github.com/jdauphant/awesome-ansible>`_ - a collaboratively curated list of awesome Ansible resources.
- `AWX <https://github.com/ansible/awx>`_ - provides a web-based user interface, REST API, and task engine built on top of Ansible. Red Hat Ansible Automation Platform includes code from AWX.
- `Mitogen for Ansible <https://mitogen.networkgenomics.com/ansible_detailed.html>`_ - uses the `Mitogen <https://github.com/dw/mitogen/>`_ library to execute Ansible playbooks in a more efficient way (decreases the execution time).
- `nanvault <https://github.com/marcobellaccini/nanvault>`_ - a standalone tool to encrypt and decrypt files in the Ansible Vault format, featuring UNIX-style composability.
- `OpsTools-ansible <https://github.com/centos-opstools/opstools-ansible>`_ - uses Ansible to configure an environment that provides the support of `OpsTools <https://wiki.centos.org/SpecialInterestGroup/OpsTools>`_, namely centralized logging and analysis, availability monitoring, and performance monitoring.
- `Steampunk Spotter <https://pypi.org/project/steampunk-spotter/>`_ - provides an Assisted Automation Writing tool that analyzes and offers recommendations for your Ansible Playbooks.
- `TD4A <https://github.com/cidrblock/td4a>`_ - a template designer for automation. TD4A is a visual design aid for building and testing jinja2 templates. It will combine data in yaml format with a jinja2 template and render the output.
- `PHP-Ansible <https://github.com/maschmann/php-ansible>`_ - an object oriented Ansible wrapper for PHP.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,386 |
uri failed_when example is misleading
|
### Summary
One of the examples on https://docs.ansible.com/ansible/latest/collections/ansible/builtin/uri_module.html is this:
```
- name: Check that a page returns a status 200 and fail if the word AWESOME is not in the page contents
ansible.builtin.uri:
url: http://www.example.com
return_content: true
register: this
failed_when: "'AWESOME' not in this.content"
```
Despite the name claiming it does, this doesn't actually check that the page returns status 200. For example, the following will not fail despite the 404 status code (https://example.com/doesnotexist returns 404 but contains the word 'Example'):
```
- name: Check that a page returns a status 200 and fail if the word AWESOME is not in the page contents
uri:
url: https://example.com/doesnotexist
return_content: true
register: this
failed_when: "'Example' not in this.content"
```
### Issue Type
Documentation Report
### Component Name
uri
### Ansible Version
```console
$ ansible --version
ansible 2.9.16
config file = /root/ansible/ansible.cfg
configured module search path = ['/root/ansible/library']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Aug 18 2020, 08:33:21) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_PIPELINING(/root/ansible/ansible.cfg) = True
DEFAULT_HASH_BEHAVIOUR(/root/ansible/ansible.cfg) = merge
DEFAULT_HOST_LIST(/root/ansible/ansible.cfg) = ['/root/ansible/inventories/staging/staging2']
DEFAULT_MODULE_PATH(/root/ansible/ansible.cfg) = ['/root/ansible/library']
DEFAULT_REMOTE_USER(/root/ansible/ansible.cfg) = root
DEFAULT_ROLES_PATH(/root/ansible/ansible.cfg) = ['/root/ansible/roles']
INVENTORY_IGNORE_EXTS(/root/ansible/ansible.cfg) = ['certs']
```
### OS / Environment
Red Hat Enterprise Linux release 8.3 (Ootpa)
### Additional Information
The status should also be checked in failed_when. This makes it clear that you have to check it manually when you override failed_when. E.g.:
```
failed_when: "this.status not in this.invocation.module_args.status_code or 'Example' not in this.content"
```
(I'm not sure if this is the right way to get the `status_code` argument, but it seems to work)
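Putting it together, a minimal sketch of the suggested pattern (the URL, status value, and search word are illustrative only):
```
- name: Check that a page returns status 200 and contains the word AWESOME
  ansible.builtin.uri:
    url: http://www.example.com
    return_content: true
  register: this
  # failed_when replaces the module's own failure logic, so the status has to
  # be re-checked here alongside the content check.
  failed_when: this.status != 200 or 'AWESOME' not in this.content
```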
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80386
|
https://github.com/ansible/ansible/pull/80554
|
560d5b00d05f3180fdaf3d86c55702be8f88f9a0
|
449c628f3d8dee4b93e0d3e6880b146ebb5486f0
| 2023-04-03T10:06:21Z |
python
| 2023-04-20T18:39:17Z |
lib/ansible/modules/uri.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2013, Romeo Theriault <romeot () hawaii.edu>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: uri
short_description: Interacts with webservices
description:
- Interacts with HTTP and HTTPS web services and supports Digest, Basic and WSSE
HTTP authentication mechanisms.
- For Windows targets, use the M(ansible.windows.win_uri) module instead.
version_added: "1.1"
options:
ciphers:
description:
- SSL/TLS Ciphers to use for the request.
- 'When a list is provided, all ciphers are joined in order with C(:)'
- See the L(OpenSSL Cipher List Format,https://www.openssl.org/docs/manmaster/man1/openssl-ciphers.html#CIPHER-LIST-FORMAT)
for more details.
- The available ciphers is dependent on the Python and OpenSSL/LibreSSL versions
type: list
elements: str
version_added: '2.14'
decompress:
description:
- Whether to attempt to decompress gzip content-encoded responses
type: bool
default: true
version_added: '2.14'
url:
description:
- HTTP or HTTPS URL in the form (http|https)://host.domain[:port]/path
type: str
required: true
dest:
description:
- A path of where to download the file to (if desired). If I(dest) is a
directory, the basename of the file on the remote server will be used.
type: path
url_username:
description:
- A username for the module to use for Digest, Basic or WSSE authentication.
type: str
aliases: [ user ]
url_password:
description:
- A password for the module to use for Digest, Basic or WSSE authentication.
type: str
aliases: [ password ]
body:
description:
- The body of the http request/response to the web service. If C(body_format) is set
to 'json' it will take an already formatted JSON string or convert a data structure
into JSON.
- If C(body_format) is set to 'form-urlencoded' it will convert a dictionary
or list of tuples into an 'application/x-www-form-urlencoded' string. (Added in v2.7)
- If C(body_format) is set to 'form-multipart' it will convert a dictionary
into 'multipart/form-multipart' body. (Added in v2.10)
type: raw
body_format:
description:
- The serialization format of the body. When set to C(json), C(form-multipart), or C(form-urlencoded), encodes
the body argument, if needed, and automatically sets the Content-Type header accordingly.
- As of v2.3 it is possible to override the C(Content-Type) header, when
set to C(json) or C(form-urlencoded) via the I(headers) option.
- The 'Content-Type' header cannot be overridden when using C(form-multipart)
- C(form-urlencoded) was added in v2.7.
- C(form-multipart) was added in v2.10.
type: str
choices: [ form-urlencoded, json, raw, form-multipart ]
default: raw
version_added: "2.0"
method:
description:
- The HTTP method of the request or response.
- In more recent versions we do not restrict the method at the module level anymore
but it still must be a valid method accepted by the service handling the request.
type: str
default: GET
return_content:
description:
- Whether or not to return the body of the response as a "content" key in
      the dictionary result no matter whether it succeeded or failed.
- Independently of this option, if the reported Content-type is "application/json", then the JSON is
always loaded into a key called C(json) in the dictionary results.
type: bool
default: no
force_basic_auth:
description:
- Force the sending of the Basic authentication header upon initial request.
- When this setting is C(false), this module will first try an unauthenticated request, and when the server replies
with an C(HTTP 401) error, it will submit the Basic authentication header.
- When this setting is C(true), this module will immediately send a Basic authentication header on the first
request.
- "Use this setting in any of the following scenarios:"
- You know the webservice endpoint always requires HTTP Basic authentication, and you want to speed up your
requests by eliminating the first roundtrip.
- The web service does not properly send an HTTP 401 error to your client, so Ansible's HTTP library will not
properly respond with HTTP credentials, and logins will fail.
- The webservice bans or rate-limits clients that cause any HTTP 401 errors.
type: bool
default: no
follow_redirects:
description:
- Whether or not the URI module should follow redirects. C(all) will follow all redirects.
C(safe) will follow only "safe" redirects, where "safe" means that the client is only
doing a GET or HEAD on the URI to which it is being redirected. C(none) will not follow
any redirects. Note that C(true) and C(false) choices are accepted for backwards compatibility,
where C(true) is the equivalent of C(all) and C(false) is the equivalent of C(safe). C(true) and C(false)
are deprecated and will be removed in some future version of Ansible.
type: str
choices: ['all', 'no', 'none', 'safe', 'urllib2', 'yes']
default: safe
creates:
description:
- A filename, when it already exists, this step will not be run.
type: path
removes:
description:
- A filename, when it does not exist, this step will not be run.
type: path
status_code:
description:
- A list of valid, numeric, HTTP status codes that signifies success of the request.
type: list
elements: int
default: [ 200 ]
timeout:
description:
- The socket level timeout in seconds
type: int
default: 30
headers:
description:
- Add custom HTTP headers to a request in the format of a YAML hash. As
of C(2.3) supplying C(Content-Type) here will override the header
generated by supplying C(json) or C(form-urlencoded) for I(body_format).
type: dict
default: {}
version_added: '2.1'
validate_certs:
description:
- If C(false), SSL certificates will not be validated.
    - This should only be set to C(false) on personally controlled sites using self-signed certificates.
- Prior to 1.9.2 the code defaulted to C(false).
type: bool
default: true
version_added: '1.9.2'
client_cert:
description:
- PEM formatted certificate chain file to be used for SSL client authentication.
- This file can also include the key as well, and if the key is included, I(client_key) is not required
type: path
version_added: '2.4'
client_key:
description:
- PEM formatted file that contains your private key to be used for SSL client authentication.
- If I(client_cert) contains both the certificate and key, this option is not required.
type: path
version_added: '2.4'
ca_path:
description:
- PEM formatted file that contains a CA certificate to be used for validation
type: path
version_added: '2.11'
src:
description:
- Path to file to be submitted to the remote server.
- Cannot be used with I(body).
- Should be used with I(force_basic_auth) to ensure success when the remote end sends a 401.
type: path
version_added: '2.7'
remote_src:
description:
- If C(false), the module will search for the C(src) on the controller node.
- If C(true), the module will search for the C(src) on the managed (remote) node.
type: bool
default: no
version_added: '2.7'
force:
description:
- If C(true) do not get a cached copy.
type: bool
default: no
use_proxy:
description:
- If C(false), it will not use a proxy, even if one is defined in an environment variable on the target hosts.
type: bool
default: true
unix_socket:
description:
- Path to Unix domain socket to use for connection
type: path
version_added: '2.8'
http_agent:
description:
- Header to identify as, generally appears in web server logs.
type: str
default: ansible-httpget
unredirected_headers:
description:
- A list of header names that will not be sent on subsequent redirected requests. This list is case
insensitive. By default all headers will be redirected. In some cases it may be beneficial to list
headers such as C(Authorization) here to avoid potential credential exposure.
default: []
type: list
elements: str
version_added: '2.12'
use_gssapi:
description:
- Use GSSAPI to perform the authentication, typically this is for Kerberos or Kerberos through Negotiate
authentication.
- Requires the Python library L(gssapi,https://github.com/pythongssapi/python-gssapi) to be installed.
- Credentials for GSSAPI can be specified with I(url_username)/I(url_password) or with the GSSAPI env var
C(KRB5CCNAME) that specified a custom Kerberos credential cache.
- NTLM authentication is C(not) supported even if the GSSAPI mech for NTLM has been installed.
type: bool
default: no
version_added: '2.11'
use_netrc:
description:
    - Determines whether to use credentials from the ``~/.netrc`` file.
- By default .netrc is used with Basic authentication headers
- When set to False, .netrc credentials are ignored
type: bool
default: true
version_added: '2.14'
extends_documentation_fragment:
- action_common_attributes
- files
attributes:
check_mode:
support: none
diff_mode:
support: none
platform:
platforms: posix
notes:
- The dependency on httplib2 was removed in Ansible 2.1.
- The module returns all the HTTP headers in lower-case.
- For Windows targets, use the M(ansible.windows.win_uri) module instead.
seealso:
- module: ansible.builtin.get_url
- module: ansible.windows.win_uri
author:
- Romeo Theriault (@romeotheriault)
'''
EXAMPLES = r'''
- name: Check that you can connect (GET) to a page and it returns a status 200
ansible.builtin.uri:
url: http://www.example.com
- name: Check that a page returns a status 200 and fail if the word AWESOME is not in the page contents
ansible.builtin.uri:
url: http://www.example.com
return_content: true
register: this
failed_when: "'AWESOME' not in this.content"
- name: Create a JIRA issue
ansible.builtin.uri:
url: https://your.jira.example.com/rest/api/2/issue/
user: your_username
password: your_pass
method: POST
body: "{{ lookup('ansible.builtin.file','issue.json') }}"
force_basic_auth: true
status_code: 201
body_format: json
- name: Login to a form based webpage, then use the returned cookie to access the app in later tasks
ansible.builtin.uri:
url: https://your.form.based.auth.example.com/index.php
method: POST
body_format: form-urlencoded
body:
name: your_username
password: your_password
enter: Sign in
status_code: 302
register: login
- name: Login to a form based webpage using a list of tuples
ansible.builtin.uri:
url: https://your.form.based.auth.example.com/index.php
method: POST
body_format: form-urlencoded
body:
- [ name, your_username ]
- [ password, your_password ]
- [ enter, Sign in ]
status_code: 302
register: login
- name: Upload a file via multipart/form-multipart
ansible.builtin.uri:
url: https://httpbin.org/post
method: POST
body_format: form-multipart
body:
file1:
filename: /bin/true
mime_type: application/octet-stream
file2:
content: text based file content
filename: fake.txt
mime_type: text/plain
text_form_field: value
- name: Connect to website using a previously stored cookie
ansible.builtin.uri:
url: https://your.form.based.auth.example.com/dashboard.php
method: GET
return_content: true
headers:
Cookie: "{{ login.cookies_string }}"
- name: Queue build of a project in Jenkins
ansible.builtin.uri:
url: http://{{ jenkins.host }}/job/{{ jenkins.job }}/build?token={{ jenkins.token }}
user: "{{ jenkins.user }}"
password: "{{ jenkins.password }}"
method: GET
force_basic_auth: true
status_code: 201
- name: POST from contents of local file
ansible.builtin.uri:
url: https://httpbin.org/post
method: POST
src: file.json
- name: POST from contents of remote file
ansible.builtin.uri:
url: https://httpbin.org/post
method: POST
src: /path/to/my/file.json
remote_src: true
- name: Create workspaces in Log analytics Azure
ansible.builtin.uri:
url: https://www.mms.microsoft.com/Embedded/Api/ConfigDataSources/LogManagementData/Save
method: POST
body_format: json
status_code: [200, 202]
return_content: true
headers:
Content-Type: application/json
x-ms-client-workspace-path: /subscriptions/{{ sub_id }}/resourcegroups/{{ res_group }}/providers/microsoft.operationalinsights/workspaces/{{ w_spaces }}
x-ms-client-platform: ibiza
x-ms-client-auth-token: "{{ token_az }}"
body:
- name: Pause play until a URL is reachable from this host
ansible.builtin.uri:
url: "http://192.0.2.1/some/test"
follow_redirects: none
method: GET
register: _result
until: _result.status == 200
retries: 720 # 720 * 5 seconds = 1hour (60*60/5)
delay: 5 # Every 5 seconds
- name: Provide SSL/TLS ciphers as a list
uri:
url: https://example.org
ciphers:
- '@SECLEVEL=2'
- ECDH+AESGCM
- ECDH+CHACHA20
- ECDH+AES
- DHE+AES
- '!aNULL'
- '!eNULL'
- '!aDSS'
- '!SHA1'
- '!AESCCM'
- name: Provide SSL/TLS ciphers as an OpenSSL formatted cipher list
uri:
url: https://example.org
ciphers: '@SECLEVEL=2:ECDH+AESGCM:ECDH+CHACHA20:ECDH+AES:DHE+AES:!aNULL:!eNULL:!aDSS:!SHA1:!AESCCM'
'''
RETURN = r'''
# The return information includes all the HTTP headers in lower-case.
content:
description: The response body content.
returned: status not in status_code or return_content is true
type: str
sample: "{}"
cookies:
description: The cookie values placed in cookie jar.
returned: on success
type: dict
sample: {"SESSIONID": "[SESSIONID]"}
version_added: "2.4"
cookies_string:
description: The value for future request Cookie headers.
returned: on success
type: str
sample: "SESSIONID=[SESSIONID]"
version_added: "2.6"
elapsed:
description: The number of seconds that elapsed while performing the download.
returned: on success
type: int
sample: 23
msg:
description: The HTTP message from the request.
returned: always
type: str
sample: OK (unknown bytes)
path:
description: destination file/path
returned: dest is defined
type: str
sample: /path/to/file.txt
redirected:
description: Whether the request was redirected.
returned: on success
type: bool
sample: false
status:
description: The HTTP status code from the request.
returned: always
type: int
sample: 200
url:
description: The actual URL used for the request.
returned: always
type: str
sample: https://www.ansible.com/
'''
import datetime
import json
import os
import re
import shutil
import sys
import tempfile
from ansible.module_utils.basic import AnsibleModule, sanitize_keys
from ansible.module_utils.six import PY2, PY3, binary_type, iteritems, string_types
from ansible.module_utils.six.moves.urllib.parse import urlencode, urlsplit
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.six.moves.collections_abc import Mapping, Sequence
from ansible.module_utils.urls import fetch_url, get_response_filename, parse_content_type, prepare_multipart, url_argument_spec
JSON_CANDIDATES = {'json', 'javascript'}
# List of response key names we do not want sanitize_keys() to change.
NO_MODIFY_KEYS = frozenset(
('msg', 'exception', 'warnings', 'deprecations', 'failed', 'skipped',
'changed', 'rc', 'stdout', 'stderr', 'elapsed', 'path', 'location',
'content_type')
)
def format_message(err, resp):
msg = resp.pop('msg')
return err + (' %s' % msg if msg else '')
def write_file(module, dest, content, resp):
"""
Create temp file and write content to dest file only if content changed
"""
tmpsrc = None
try:
fd, tmpsrc = tempfile.mkstemp(dir=module.tmpdir)
with os.fdopen(fd, 'wb') as f:
if isinstance(content, binary_type):
f.write(content)
else:
shutil.copyfileobj(content, f)
except Exception as e:
if tmpsrc and os.path.exists(tmpsrc):
os.remove(tmpsrc)
msg = format_message("Failed to create temporary content file: %s" % to_native(e), resp)
module.fail_json(msg=msg, **resp)
checksum_src = module.sha1(tmpsrc)
checksum_dest = module.sha1(dest)
if checksum_src != checksum_dest:
try:
module.atomic_move(tmpsrc, dest)
except Exception as e:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
msg = format_message("failed to copy %s to %s: %s" % (tmpsrc, dest, to_native(e)), resp)
module.fail_json(msg=msg, **resp)
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
def absolute_location(url, location):
"""Attempts to create an absolute URL based on initial URL, and
next URL, specifically in the case of a ``Location`` header.
"""
if '://' in location:
return location
elif location.startswith('/'):
parts = urlsplit(url)
base = url.replace(parts[2], '')
return '%s%s' % (base, location)
elif not location.startswith('/'):
base = os.path.dirname(url)
return '%s/%s' % (base, location)
else:
return location
def kv_list(data):
''' Convert data into a list of key-value tuples '''
if data is None:
return None
if isinstance(data, Sequence):
return list(data)
if isinstance(data, Mapping):
return list(data.items())
raise TypeError('cannot form-urlencode body, expect list or dict')
def form_urlencoded(body):
''' Convert data into a form-urlencoded string '''
if isinstance(body, string_types):
return body
if isinstance(body, (Mapping, Sequence)):
result = []
# Turn a list of lists into a list of tuples that urlencode accepts
for key, values in kv_list(body):
if isinstance(values, string_types) or not isinstance(values, (Mapping, Sequence)):
values = [values]
for value in values:
if value is not None:
result.append((to_text(key), to_text(value)))
return urlencode(result, doseq=True)
return body
def uri(module, url, dest, body, body_format, method, headers, socket_timeout, ca_path, unredirected_headers, decompress,
ciphers, use_netrc):
# is dest is set and is a directory, let's check if we get redirected and
# set the filename from that url
src = module.params['src']
if src:
try:
headers.update({
'Content-Length': os.stat(src).st_size
})
data = open(src, 'rb')
except OSError:
module.fail_json(msg='Unable to open source file %s' % src, elapsed=0)
else:
data = body
kwargs = {}
if dest is not None and os.path.isfile(dest):
# if destination file already exist, only download if file newer
kwargs['last_mod_time'] = datetime.datetime.utcfromtimestamp(os.path.getmtime(dest))
resp, info = fetch_url(module, url, data=data, headers=headers,
method=method, timeout=socket_timeout, unix_socket=module.params['unix_socket'],
ca_path=ca_path, unredirected_headers=unredirected_headers,
use_proxy=module.params['use_proxy'], decompress=decompress,
ciphers=ciphers, use_netrc=use_netrc, **kwargs)
if src:
# Try to close the open file handle
try:
data.close()
except Exception:
pass
return resp, info
def main():
argument_spec = url_argument_spec()
argument_spec.update(
dest=dict(type='path'),
url_username=dict(type='str', aliases=['user']),
url_password=dict(type='str', aliases=['password'], no_log=True),
body=dict(type='raw'),
body_format=dict(type='str', default='raw', choices=['form-urlencoded', 'json', 'raw', 'form-multipart']),
src=dict(type='path'),
method=dict(type='str', default='GET'),
return_content=dict(type='bool', default=False),
follow_redirects=dict(type='str', default='safe', choices=['all', 'no', 'none', 'safe', 'urllib2', 'yes']),
creates=dict(type='path'),
removes=dict(type='path'),
status_code=dict(type='list', elements='int', default=[200]),
timeout=dict(type='int', default=30),
headers=dict(type='dict', default={}),
unix_socket=dict(type='path'),
remote_src=dict(type='bool', default=False),
ca_path=dict(type='path', default=None),
unredirected_headers=dict(type='list', elements='str', default=[]),
decompress=dict(type='bool', default=True),
ciphers=dict(type='list', elements='str'),
use_netrc=dict(type='bool', default=True),
)
module = AnsibleModule(
argument_spec=argument_spec,
add_file_common_args=True,
mutually_exclusive=[['body', 'src']],
)
url = module.params['url']
body = module.params['body']
body_format = module.params['body_format'].lower()
method = module.params['method'].upper()
dest = module.params['dest']
return_content = module.params['return_content']
creates = module.params['creates']
removes = module.params['removes']
status_code = [int(x) for x in list(module.params['status_code'])]
socket_timeout = module.params['timeout']
ca_path = module.params['ca_path']
dict_headers = module.params['headers']
unredirected_headers = module.params['unredirected_headers']
decompress = module.params['decompress']
ciphers = module.params['ciphers']
use_netrc = module.params['use_netrc']
if not re.match('^[A-Z]+$', method):
module.fail_json(msg="Parameter 'method' needs to be a single word in uppercase, like GET or POST.")
if body_format == 'json':
# Encode the body unless its a string, then assume it is pre-formatted JSON
if not isinstance(body, string_types):
body = json.dumps(body)
if 'content-type' not in [header.lower() for header in dict_headers]:
dict_headers['Content-Type'] = 'application/json'
elif body_format == 'form-urlencoded':
if not isinstance(body, string_types):
try:
body = form_urlencoded(body)
except ValueError as e:
module.fail_json(msg='failed to parse body as form_urlencoded: %s' % to_native(e), elapsed=0)
if 'content-type' not in [header.lower() for header in dict_headers]:
dict_headers['Content-Type'] = 'application/x-www-form-urlencoded'
elif body_format == 'form-multipart':
try:
content_type, body = prepare_multipart(body)
except (TypeError, ValueError) as e:
module.fail_json(msg='failed to parse body as form-multipart: %s' % to_native(e))
dict_headers['Content-Type'] = content_type
if creates is not None:
# do not run the command if the line contains creates=filename
# and the filename already exists. This allows idempotence
# of uri executions.
if os.path.exists(creates):
module.exit_json(stdout="skipped, since '%s' exists" % creates, changed=False)
if removes is not None:
# do not run the command if the line contains removes=filename
# and the filename does not exist. This allows idempotence
# of uri executions.
if not os.path.exists(removes):
module.exit_json(stdout="skipped, since '%s' does not exist" % removes, changed=False)
# Make the request
start = datetime.datetime.utcnow()
r, info = uri(module, url, dest, body, body_format, method,
dict_headers, socket_timeout, ca_path, unredirected_headers,
decompress, ciphers, use_netrc)
elapsed = (datetime.datetime.utcnow() - start).seconds
if r and dest is not None and os.path.isdir(dest):
filename = get_response_filename(r) or 'index.html'
dest = os.path.join(dest, filename)
if r and r.fp is not None:
# r may be None for some errors
# r.fp may be None depending on the error, which means there are no headers either
content_type, main_type, sub_type, content_encoding = parse_content_type(r)
else:
content_type = 'application/octet-stream'
main_type = 'application'
sub_type = 'octet-stream'
content_encoding = 'utf-8'
maybe_json = content_type and sub_type.lower() in JSON_CANDIDATES
maybe_output = maybe_json or return_content or info['status'] not in status_code
if maybe_output:
try:
if PY3 and (r.fp is None or r.closed):
raise TypeError
content = r.read()
except (AttributeError, TypeError):
# there was no content, but the error read()
# may have been stored in the info as 'body'
content = info.pop('body', b'')
elif r:
content = r
else:
content = None
resp = {}
resp['redirected'] = info['url'] != url
resp.update(info)
resp['elapsed'] = elapsed
resp['status'] = int(resp['status'])
resp['changed'] = False
# Write the file out if requested
if r and dest is not None:
if resp['status'] in status_code and resp['status'] != 304:
write_file(module, dest, content, resp)
# allow file attribute changes
resp['changed'] = True
module.params['path'] = dest
file_args = module.load_file_common_arguments(module.params, path=dest)
resp['changed'] = module.set_fs_attributes_if_different(file_args, resp['changed'])
resp['path'] = dest
# Transmogrify the headers, replacing '-' with '_', since variables don't
# work with dashes.
# In python3, the headers are title cased. Lowercase them to be
# compatible with the python2 behaviour.
uresp = {}
for key, value in iteritems(resp):
ukey = key.replace("-", "_").lower()
uresp[ukey] = value
if 'location' in uresp:
uresp['location'] = absolute_location(url, uresp['location'])
# Default content_encoding to try
if isinstance(content, binary_type):
u_content = to_text(content, encoding=content_encoding)
if maybe_json:
try:
js = json.loads(u_content)
uresp['json'] = js
except Exception:
if PY2:
sys.exc_clear() # Avoid false positive traceback in fail_json() on Python 2
else:
u_content = None
if module.no_log_values:
uresp = sanitize_keys(uresp, module.no_log_values, NO_MODIFY_KEYS)
if resp['status'] not in status_code:
uresp['msg'] = 'Status code was %s and not %s: %s' % (resp['status'], status_code, uresp.get('msg', ''))
if return_content:
module.fail_json(content=u_content, **uresp)
else:
module.fail_json(**uresp)
elif return_content:
module.exit_json(content=u_content, **uresp)
else:
module.exit_json(**uresp)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,533 |
systemd module result is different than calling systemctl to stop and disable
|
### Summary
When I use the Ansible systemd module, the result is different from calling systemctl directly.
```
- name: using systemd module
systemd:
name: my.service
state: stopped
enabled: no
```
```
- name: using just command stop
command: systemctl stop my.service
- name: using just command disable
command: systemctl disable my.service
```
In detail, my.service uses `KillMode=process`, so if I stop my.service with `systemctl stop`, other processes in the my.service cgroup are not killed. But when I use the systemd module to stop and disable the service, all processes in the my.service cgroup are killed.
### Issue Type
Bug Report
### Component Name
systemd
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.10]
config file = /Users/kakao_ent/cloudms/git_ms/msg-deploy/ansible/ansible.cfg
configured module search path = ['/Users/kakao_ent/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/kakao_ent/cloudms/git_ms/msg-deploy/ansible/env/lib/python3.7/site-packages/ansible
ansible collection location = /Users/kakao_ent/.ansible/collections:/usr/share/ansible/collections
executable location = env/bin/ansible
python version = 3.7.8 (v3.7.8:4b47a5b6ba, Jun 27 2020, 04:47:50) [Clang 6.0 (clang-600.0.57)]
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Ubuntu 18.04
### Steps to Reproduce
1. Task that does not work as expected:
```
- name: using systemd module
systemd:
name: my.service
state: stopped
enabled: no
```
2. Tasks that work as expected:
```
- name: using just command stop
command: systemctl stop my.service
- name: using just command disable
command: systemctl disable my.service
```
The service file also has to include the following attribute:
`KillMode=process`
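For reference, a minimal, hypothetical unit file showing where the setting goes (the description and ExecStart path are placeholders):
```
[Unit]
Description=Example service that spawns additional processes

[Service]
# KillMode=process: on stop, systemd only kills the main process and leaves
# other processes in the unit's cgroup running.
ExecStart=/usr/local/bin/my-daemon
KillMode=process

[Install]
WantedBy=multi-user.target
```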
### Expected Results
same as summary
### Actual Results
```console
same as summary
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80533
|
https://github.com/ansible/ansible/pull/80570
|
f05abd4540f7c26ae7296c59a3fdd579c4bf3070
|
9ca863501c6f3cf679b1b7c773747766e35ae907
| 2023-04-17T07:35:29Z |
python
| 2023-04-20T18:44:08Z |
lib/ansible/modules/systemd_service.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2016, Brian Coca <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
module: systemd_service
author:
- Ansible Core Team
version_added: "2.2"
short_description: Manage systemd units
description:
- Controls systemd units (services, timers, and so on) on remote hosts.
options:
name:
description:
- Name of the unit. This parameter takes the name of exactly one unit to work with.
      - When no extension is given, the C(.service) extension is implied, as systemd does.
- When using in a chroot environment you always need to specify the name of the unit with the extension. For example, C(crond.service).
type: str
aliases: [ service, unit ]
state:
description:
- C(started)/C(stopped) are idempotent actions that will not run commands unless necessary.
C(restarted) will always bounce the unit. C(reloaded) will always reload.
type: str
choices: [ reloaded, restarted, started, stopped ]
enabled:
description:
- Whether the unit should start on boot. B(At least one of state and enabled are required.)
type: bool
force:
description:
- Whether to override existing symlinks.
type: bool
version_added: 2.6
masked:
description:
- Whether the unit should be masked or not, a masked unit is impossible to start.
type: bool
daemon_reload:
description:
- Run daemon-reload before doing any other operations, to make sure systemd has read any changes.
- When set to C(true), runs daemon-reload even if the module does not start or stop anything.
type: bool
default: no
aliases: [ daemon-reload ]
daemon_reexec:
description:
- Run daemon_reexec command before doing any other operations, the systemd manager will serialize the manager state.
type: bool
default: no
aliases: [ daemon-reexec ]
version_added: "2.8"
scope:
description:
- Run systemctl within a given service manager scope, either as the default system scope C(system),
the current user's scope C(user), or the scope of all users C(global).
- "For systemd to work with 'user', the executing user must have its own instance of dbus started and accessible (systemd requirement)."
- "The user dbus process is normally started during normal login, but not during the run of Ansible tasks.
Otherwise you will probably get a 'Failed to connect to bus: no such file or directory' error."
- The user must have access, normally given via setting the C(XDG_RUNTIME_DIR) variable, see example below.
type: str
choices: [ system, user, global ]
default: system
version_added: "2.7"
no_block:
description:
- Do not synchronously wait for the requested operation to finish.
Enqueued job will continue without Ansible blocking on its completion.
type: bool
default: no
version_added: "2.3"
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: posix
notes:
- Since 2.4, one of the following options is required C(state), C(enabled), C(masked), C(daemon_reload), (C(daemon_reexec) since 2.8),
and all except C(daemon_reload) and (C(daemon_reexec) since 2.8) also require C(name).
- Before 2.4 you always required C(name).
- Globs are not supported in name, i.e C(postgres*.service).
- The service names might vary by specific OS/distribution
requirements:
- A system managed by systemd.
'''
EXAMPLES = '''
- name: Make sure a service unit is running
ansible.builtin.systemd:
state: started
name: httpd
- name: Stop service cron on debian, if running
ansible.builtin.systemd:
name: cron
state: stopped
- name: Restart service cron on centos, in all cases, also issue daemon-reload to pick up config changes
ansible.builtin.systemd:
state: restarted
daemon_reload: true
name: crond
- name: Reload service httpd, in all cases
ansible.builtin.systemd:
name: httpd.service
state: reloaded
- name: Enable service httpd and ensure it is not masked
ansible.builtin.systemd:
name: httpd
enabled: true
masked: no
- name: Enable a timer unit for dnf-automatic
ansible.builtin.systemd:
name: dnf-automatic.timer
state: started
enabled: true
- name: Just force systemd to reread configs (2.4 and above)
ansible.builtin.systemd:
daemon_reload: true
- name: Just force systemd to re-execute itself (2.8 and above)
ansible.builtin.systemd:
daemon_reexec: true
- name: Run a user service when XDG_RUNTIME_DIR is not set on remote login
ansible.builtin.systemd:
name: myservice
state: started
scope: user
environment:
XDG_RUNTIME_DIR: "/run/user/{{ myuid }}"
'''
RETURN = '''
status:
description: A dictionary with the key=value pairs returned from C(systemctl show).
returned: success
type: complex
sample: {
"ActiveEnterTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"ActiveEnterTimestampMonotonic": "8135942",
"ActiveExitTimestampMonotonic": "0",
"ActiveState": "active",
"After": "auditd.service systemd-user-sessions.service time-sync.target systemd-journald.socket basic.target system.slice",
"AllowIsolate": "no",
"Before": "shutdown.target multi-user.target",
"BlockIOAccounting": "no",
"BlockIOWeight": "1000",
"CPUAccounting": "no",
"CPUSchedulingPolicy": "0",
"CPUSchedulingPriority": "0",
"CPUSchedulingResetOnFork": "no",
"CPUShares": "1024",
"CanIsolate": "no",
"CanReload": "yes",
"CanStart": "yes",
"CanStop": "yes",
"CapabilityBoundingSet": "18446744073709551615",
"ConditionResult": "yes",
"ConditionTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"ConditionTimestampMonotonic": "7902742",
"Conflicts": "shutdown.target",
"ControlGroup": "/system.slice/crond.service",
"ControlPID": "0",
"DefaultDependencies": "yes",
"Delegate": "no",
"Description": "Command Scheduler",
"DevicePolicy": "auto",
"EnvironmentFile": "/etc/sysconfig/crond (ignore_errors=no)",
"ExecMainCode": "0",
"ExecMainExitTimestampMonotonic": "0",
"ExecMainPID": "595",
"ExecMainStartTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"ExecMainStartTimestampMonotonic": "8134990",
"ExecMainStatus": "0",
"ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
"ExecStart": "{ path=/usr/sbin/crond ; argv[]=/usr/sbin/crond -n $CRONDARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }",
"FragmentPath": "/usr/lib/systemd/system/crond.service",
"GuessMainPID": "yes",
"IOScheduling": "0",
"Id": "crond.service",
"IgnoreOnIsolate": "no",
"IgnoreOnSnapshot": "no",
"IgnoreSIGPIPE": "yes",
"InactiveEnterTimestampMonotonic": "0",
"InactiveExitTimestamp": "Sun 2016-05-15 18:28:49 EDT",
"InactiveExitTimestampMonotonic": "8135942",
"JobTimeoutUSec": "0",
"KillMode": "process",
"KillSignal": "15",
"LimitAS": "18446744073709551615",
"LimitCORE": "18446744073709551615",
"LimitCPU": "18446744073709551615",
"LimitDATA": "18446744073709551615",
"LimitFSIZE": "18446744073709551615",
"LimitLOCKS": "18446744073709551615",
"LimitMEMLOCK": "65536",
"LimitMSGQUEUE": "819200",
"LimitNICE": "0",
"LimitNOFILE": "4096",
"LimitNPROC": "3902",
"LimitRSS": "18446744073709551615",
"LimitRTPRIO": "0",
"LimitRTTIME": "18446744073709551615",
"LimitSIGPENDING": "3902",
"LimitSTACK": "18446744073709551615",
"LoadState": "loaded",
"MainPID": "595",
"MemoryAccounting": "no",
"MemoryLimit": "18446744073709551615",
"MountFlags": "0",
"Names": "crond.service",
"NeedDaemonReload": "no",
"Nice": "0",
"NoNewPrivileges": "no",
"NonBlocking": "no",
"NotifyAccess": "none",
"OOMScoreAdjust": "0",
"OnFailureIsolate": "no",
"PermissionsStartOnly": "no",
"PrivateNetwork": "no",
"PrivateTmp": "no",
"RefuseManualStart": "no",
"RefuseManualStop": "no",
"RemainAfterExit": "no",
"Requires": "basic.target",
"Restart": "no",
"RestartUSec": "100ms",
"Result": "success",
"RootDirectoryStartOnly": "no",
"SameProcessGroup": "no",
"SecureBits": "0",
"SendSIGHUP": "no",
"SendSIGKILL": "yes",
"Slice": "system.slice",
"StandardError": "inherit",
"StandardInput": "null",
"StandardOutput": "journal",
"StartLimitAction": "none",
"StartLimitBurst": "5",
"StartLimitInterval": "10000000",
"StatusErrno": "0",
"StopWhenUnneeded": "no",
"SubState": "running",
"SyslogLevelPrefix": "yes",
"SyslogPriority": "30",
"TTYReset": "no",
"TTYVHangup": "no",
"TTYVTDisallocate": "no",
"TimeoutStartUSec": "1min 30s",
"TimeoutStopUSec": "1min 30s",
"TimerSlackNSec": "50000",
"Transient": "no",
"Type": "simple",
"UMask": "0022",
"UnitFileState": "enabled",
"WantedBy": "multi-user.target",
"Wants": "system.slice",
"WatchdogTimestampMonotonic": "0",
"WatchdogUSec": "0",
}
''' # NOQA
import os
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.facts.system.chroot import is_chroot
from ansible.module_utils.service import sysv_exists, sysv_is_enabled, fail_if_missing
from ansible.module_utils._text import to_native
def is_running_service(service_status):
return service_status['ActiveState'] in set(['active', 'activating'])
def is_deactivating_service(service_status):
return service_status['ActiveState'] in set(['deactivating'])
def request_was_ignored(out):
return '=' not in out and ('ignoring request' in out or 'ignoring command' in out)
def parse_systemctl_show(lines):
# The output of 'systemctl show' can contain values that span multiple lines. At first glance it
# appears that such values are always surrounded by {}, so the previous version of this code
# assumed that any value starting with { was a multi-line value; it would then consume lines
# until it saw a line that ended with }. However, it is possible to have a single-line value
# that starts with { but does not end with } (this could happen in the value for Description=,
# for example), and the previous version of this code would then consume all remaining lines as
# part of that value. Cryptically, this would lead to Ansible reporting that the service file
# couldn't be found.
#
# To avoid this issue, the following code only accepts multi-line values for keys whose names
# start with Exec (e.g., ExecStart=), since these are the only keys whose values are known to
# span multiple lines.
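#
# Illustrative sketch of the behaviour (hypothetical sample values): given the lines
#   Id=crond.service
#   ExecStart={ path=/usr/sbin/crond ; argv[]=/usr/sbin/crond -n $CRONDARGS ;
#   ... ; status=0/0 }
# the parser records parsed['Id'] == 'crond.service' as a plain value, while the
# wrapped ExecStart continuation lines are accumulated and joined back together
# (with newlines) into a single parsed['ExecStart'] value.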
parsed = {}
multival = []
k = None
for line in lines:
if k is None:
if '=' in line:
k, v = line.split('=', 1)
if k.startswith('Exec') and v.lstrip().startswith('{'):
if not v.rstrip().endswith('}'):
multival.append(v)
continue
parsed[k] = v.strip()
k = None
else:
multival.append(line)
if line.rstrip().endswith('}'):
parsed[k] = '\n'.join(multival).strip()
multival = []
k = None
return parsed
# ===========================================
# Main control flow
def main():
# initialize
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str', aliases=['service', 'unit']),
state=dict(type='str', choices=['reloaded', 'restarted', 'started', 'stopped']),
enabled=dict(type='bool'),
force=dict(type='bool'),
masked=dict(type='bool'),
daemon_reload=dict(type='bool', default=False, aliases=['daemon-reload']),
daemon_reexec=dict(type='bool', default=False, aliases=['daemon-reexec']),
scope=dict(type='str', default='system', choices=['system', 'user', 'global']),
no_block=dict(type='bool', default=False),
),
supports_check_mode=True,
required_one_of=[['state', 'enabled', 'masked', 'daemon_reload', 'daemon_reexec']],
required_by=dict(
state=('name', ),
enabled=('name', ),
masked=('name', ),
),
)
unit = module.params['name']
if unit is not None:
for globpattern in (r"*", r"?", r"["):
if globpattern in unit:
module.fail_json(msg="This module does not currently support using glob patterns, found '%s' in service name: %s" % (globpattern, unit))
systemctl = module.get_bin_path('systemctl', True)
if os.getenv('XDG_RUNTIME_DIR') is None:
os.environ['XDG_RUNTIME_DIR'] = '/run/user/%s' % os.geteuid()
# Set CLI options depending on params
# if scope is 'system' or None, we can ignore as there is no extra switch.
# The other choices match the corresponding switch
if module.params['scope'] != 'system':
systemctl += " --%s" % module.params['scope']
if module.params['no_block']:
systemctl += " --no-block"
if module.params['force']:
systemctl += " --force"
rc = 0
out = err = ''
result = dict(
name=unit,
changed=False,
status=dict(),
)
# Run daemon-reload first, if requested
if module.params['daemon_reload'] and not module.check_mode:
(rc, out, err) = module.run_command("%s daemon-reload" % (systemctl))
if rc != 0:
if is_chroot(module) or os.environ.get('SYSTEMD_OFFLINE') == '1':
module.warn('daemon-reload failed, but target is a chroot or systemd is offline. Continuing. Error was: %d / %s' % (rc, err))
else:
module.fail_json(msg='failure %d during daemon-reload: %s' % (rc, err))
# Run daemon-reexec
if module.params['daemon_reexec'] and not module.check_mode:
(rc, out, err) = module.run_command("%s daemon-reexec" % (systemctl))
if rc != 0:
if is_chroot(module) or os.environ.get('SYSTEMD_OFFLINE') == '1':
module.warn('daemon-reexec failed, but target is a chroot or systemd is offline. Continuing. Error was: %d / %s' % (rc, err))
else:
module.fail_json(msg='failure %d during daemon-reexec: %s' % (rc, err))
if unit:
found = False
is_initd = sysv_exists(unit)
is_systemd = False
# check service data, cannot error out on rc as it changes across versions, assume not found
(rc, out, err) = module.run_command("%s show '%s'" % (systemctl, unit))
if rc == 0 and not (request_was_ignored(out) or request_was_ignored(err)):
# load return of systemctl show into dictionary for easy access and return
if out:
result['status'] = parse_systemctl_show(to_native(out).split('\n'))
is_systemd = 'LoadState' in result['status'] and result['status']['LoadState'] != 'not-found'
is_masked = 'LoadState' in result['status'] and result['status']['LoadState'] == 'masked'
# Check for loading error
if is_systemd and not is_masked and 'LoadError' in result['status']:
module.fail_json(msg="Error loading unit file '%s': %s" % (unit, result['status']['LoadError']))
# Workaround for https://github.com/ansible/ansible/issues/71528
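# On affected systemd versions, 'show' on a templated unit fails with
# 'Failed to parse bus message'. In that case, fall back to matching the unit's
# template prefix against 'list-unit-files' output and query 'is-active' directly
# so that result['status']['ActiveState'] is still populated.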
elif err and rc == 1 and 'Failed to parse bus message' in err:
result['status'] = parse_systemctl_show(to_native(out).split('\n'))
unit_base, sep, suffix = unit.partition('@')
unit_search = '{unit_base}{sep}'.format(unit_base=unit_base, sep=sep)
(rc, out, err) = module.run_command("{systemctl} list-unit-files '{unit_search}*'".format(systemctl=systemctl, unit_search=unit_search))
is_systemd = unit_search in out
(rc, out, err) = module.run_command("{systemctl} is-active '{unit}'".format(systemctl=systemctl, unit=unit))
result['status']['ActiveState'] = out.rstrip('\n')
else:
# list taken from man systemctl(1) for systemd 244
valid_enabled_states = [
"enabled",
"enabled-runtime",
"linked",
"linked-runtime",
"masked",
"masked-runtime",
"static",
"indirect",
"disabled",
"generated",
"transient"]
(rc, out, err) = module.run_command("%s is-enabled '%s'" % (systemctl, unit))
if out.strip() in valid_enabled_states:
is_systemd = True
else:
# fallback list-unit-files as show does not work on some systems (chroot)
# not used as primary as it skips some services (like those using init.d) and requires .service/etc notation
(rc, out, err) = module.run_command("%s list-unit-files '%s'" % (systemctl, unit))
if rc == 0:
is_systemd = True
else:
# Check for systemctl command
module.run_command(systemctl, check_rc=True)
# Does service exist?
found = is_systemd or is_initd
if is_initd and not is_systemd:
module.warn('The service (%s) is actually an init script but the system is managed by systemd' % unit)
# mask/unmask the service, if requested, can operate on services before they are installed
if module.params['masked'] is not None:
# state is not masked unless systemd affirms otherwise
(rc, out, err) = module.run_command("%s is-enabled '%s'" % (systemctl, unit))
masked = out.strip() == "masked"
if masked != module.params['masked']:
result['changed'] = True
if module.params['masked']:
action = 'mask'
else:
action = 'unmask'
if not module.check_mode:
(rc, out, err) = module.run_command("%s %s '%s'" % (systemctl, action, unit))
if rc != 0:
# some versions of systemd CAN mask/unmask non-existing services, we only fail on missing if they don't
fail_if_missing(module, found, unit, msg='host')
# Enable/disable service startup at boot if requested
if module.params['enabled'] is not None:
if module.params['enabled']:
action = 'enable'
else:
action = 'disable'
fail_if_missing(module, found, unit, msg='host')
# do we need to enable the service?
enabled = False
(rc, out, err) = module.run_command("%s is-enabled '%s' -l" % (systemctl, unit))
# check systemctl result or whether it is an init script
if rc == 0:
enabled = True
# If the output is exactly one line reading 'indirect' or 'alias', the unit is effectively disabled
if out.splitlines() == ["indirect"] or out.splitlines() == ["alias"]:
enabled = False
elif rc == 1:
# if this is not a user or global scope service and both an init script and a unit file exist, stdout should contain enabled/disabled; otherwise fall back to the rc entries
if module.params['scope'] == 'system' and \
is_initd and \
not out.strip().endswith('disabled') and \
sysv_is_enabled(unit):
enabled = True
# default to current state
result['enabled'] = enabled
# Change enable/disable if needed
if enabled != module.params['enabled']:
result['changed'] = True
if not module.check_mode:
(rc, out, err) = module.run_command("%s %s '%s'" % (systemctl, action, unit))
if rc != 0:
module.fail_json(msg="Unable to %s service %s: %s" % (action, unit, out + err))
result['enabled'] = not enabled
# set service state if requested
if module.params['state'] is not None:
fail_if_missing(module, found, unit, msg="host")
# default to desired state
result['state'] = module.params['state']
# What is current service state?
if 'ActiveState' in result['status']:
action = None
if module.params['state'] == 'started':
if not is_running_service(result['status']):
action = 'start'
elif module.params['state'] == 'stopped':
if is_running_service(result['status']) or is_deactivating_service(result['status']):
action = 'stop'
else:
if not is_running_service(result['status']):
action = 'start'
else:
action = module.params['state'][:-2] # remove 'ed' from restarted/reloaded
result['state'] = 'started'
if action:
result['changed'] = True
if not module.check_mode:
(rc, out, err) = module.run_command("%s %s '%s'" % (systemctl, action, unit))
if rc != 0:
module.fail_json(msg="Unable to %s service %s: %s" % (action, unit, err))
# check for chroot
elif is_chroot(module) or os.environ.get('SYSTEMD_OFFLINE') == '1':
module.warn("Target is a chroot or systemd is offline. This can lead to false positives or prevent the init system tools from working.")
else:
# this should not happen?
module.fail_json(msg="Service is in unknown state", status=result['status'])
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,575 |
Update example documentation block
|
### Summary
The existing example documentation block is pointed to from the docs but hasn't been updated for a few years.
Update the example to match the current requirements at https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_documenting.html
### Issue Type
Documentation Report
### Component Name
examples/DOCUMENTATION.yml
### Ansible Version
```console
$ ansible --version
2.16
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80575
|
https://github.com/ansible/ansible/pull/80579
|
791510ccba5f3a9af3d22f442e9d4d10b1129a00
|
a4fb670e9c43d9bcc9c1ed0b235514f7bcf32af2
| 2023-04-19T16:50:00Z |
python
| 2023-04-20T19:03:09Z |
examples/DOCUMENTATION.yml
|
---
# If a key doesn't apply to your module (ex: choices, default, or
# aliases) you can use the word 'null', or an empty list, [], where
# appropriate.
# See https://docs.ansible.com/ansible/devel/dev_guide/developing_modules_documenting.html for more information
#
module: modulename
short_description: This is a sentence describing the module
description:
- Longer description of the module.
- You might include instructions.
version_added: "X.Y"
author: "Your AWESOME name (@awesome-github-id)"
options:
# One or more of the following
option_name:
description:
- Description of the options goes here.
- Must be written in sentences.
required: true or false
default: a string or the word null
choices:
- enable
- disable
aliases:
- repo_name
version_added: "1.X"
notes:
- Other things consumers of your module should know.
requirements:
- list of required things.
- like the factor package
- zypper >= 1.0
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,376 |
Package manager discovery makes incorrect assumptions about dnf availability
|
### Summary
fedora-minimal:38 minimal containers only contain microdnf which now points to dnf5. See https://fedoraproject.org/wiki/Changes/MajorUpgradeOfMicrodnf. The package manager discovery code added https://github.com/ansible/ansible/pull/80272/ only selects dnf5 on Fedora >= 39 and doesn't consider it otherwise. In the fedora-minimal:38 container, `ansible_pkg_manager` is set to `unknown` when it should be set to `dnf5`.
Package manager discovery should **prefer** dnf5 on Fedora 39, but it shouldn't ignore dnf4 on Fedora >= 39 if dnf5 is missing (users can exclude dnf5 from their systems if they'd like), and it shouldn't ignore dnf5 on Fedora < 39 when dnf4 is missing (the fedora-minimal:38 container is one example of where this assumption breaks).
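A minimal sketch of the fallback order this implies (hypothetical helper, not the actual ansible patch):

```python
import os

# Assumed binary locations: dnf5 ships /usr/bin/dnf5, classic dnf ships /usr/bin/dnf.
_DNF_PATHS = {'dnf5': '/usr/bin/dnf5', 'dnf': '/usr/bin/dnf'}


def pick_dnf(fedora_major, path_exists=os.path.exists):
    """Prefer dnf5 on Fedora >= 39 and dnf otherwise, but fall back to whichever
    binary is actually installed instead of assuming one of them is present."""
    preferred = ('dnf5', 'dnf') if fedora_major >= 39 else ('dnf', 'dnf5')
    for name in preferred:
        if path_exists(_DNF_PATHS[name]):
            return name
    return 'unknown'
```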
### Issue Type
Bug Report
### Component Name
dnf5
lib/ansible/module_utils/facts/system/pkg_mgr.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0.dev0]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /root/venv/lib64/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = ./venv/bin/ansible
python version = 3.11.2 (main, Feb 8 2023, 00:00:00) [GCC 13.0.1 20230208 (Red Hat 13.0.1-0)] (/root/venv/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Fedora 38 instance with dnf5 but not dnf
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```Dockerfile
# fedora-minimal only has microdnf, which has been replaced by dnf5
FROM registry.fedoraproject.org/fedora-minimal:38
WORKDIR /root
RUN dnf5 install -qy python3 python3-libdnf5
RUN python3 -m venv venv && \
./venv/bin/pip install 'https://github.com/ansible/ansible/archive/devel.tar.gz'
RUN ./venv/bin/ansible --version
RUN ./venv/bin/ansible-config dump --only-changed -t all
RUN ./venv/bin/ansible -i localhost -c local localhost -m setup | grep pkg
RUN ./venv/bin/ansible -i localhost -c local localhost -m package -a name=zsh
```
### Expected Results
I expect ansible to determine that `ansible_pkg_manager` is dnf5 and use it as the backend for the `package` module.
### Actual Results
<details>
<summary>Logs</summary>
```console
$ buildah bud --squash
STEP 1/8: FROM registry.fedoraproject.org/fedora-minimal:38
STEP 2/8: WORKDIR /root
STEP 3/8: RUN dnf5 install -qy python3 python3-libdnf5
Package Arch Version Repository Size
Installing:
python3 x86_64 3.11.2-1.fc38 fedora 33.0 KiB
python3-libdnf5 x86_64 5.0.6-2.fc38 fedora 5.4 MiB
Installing dependencies:
expat x86_64 2.5.0-2.fc38 fedora 276.0 KiB
libb2 x86_64 0.98.1-8.fc38 fedora 42.6 KiB
libgomp x86_64 13.0.1-0.8.fc38 fedora 481.7 KiB
libnsl2 x86_64 2.0.0-5.fc38 fedora 58.3 KiB
libtirpc x86_64 1.3.3-1.fc38 fedora 203.6 KiB
mpdecimal x86_64 2.5.1-6.fc38 fedora 202.2 KiB
python-pip-wheel noarch 22.3.1-2.fc38 fedora 1.5 MiB
python-setuptools-wheel noarch 65.5.1-2.fc38 fedora 860.3 KiB
python3-libs x86_64 3.11.2-1.fc38 fedora 44.2 MiB
Installing weak dependencies:
libxcrypt-compat x86_64 4.4.33-7.fc38 fedora 198.3 KiB
python-unversioned-command noarch 3.11.2-1.fc38 fedora 23.0 B
Transaction Summary:
Installing: 13 packages
Downloading Packages:
[ 1/13] expat-0:2.5.0-2.fc38.x86_64 100% | 307.8 KiB/s | 110.2 KiB | 00m00s
[ 2/13] libb2-0:0.98.1-8.fc38.x86_64 100% | 240.5 KiB/s | 25.5 KiB | 00m00s
[ 3/13] libnsl2-0:2.0.0-5.fc38.x86_64 100% | 763.9 KiB/s | 29.8 KiB | 00m00s
[ 4/13] python3-libdnf5-0:5.0.6-2.fc38. 100% | 2.0 MiB/s | 1.0 MiB | 00m01s
[ 5/13] mpdecimal-0:2.5.1-6.fc38.x86_64 100% | 1.7 MiB/s | 88.8 KiB | 00m00s
[ 6/13] libtirpc-0:1.3.3-1.fc38.x86_64 100% | 1.4 MiB/s | 93.8 KiB | 00m00s
[ 7/13] python-pip-wheel-0:22.3.1-2.fc3 100% | 6.6 MiB/s | 1.4 MiB | 00m00s
[ 8/13] python3-0:3.11.2-1.fc38.x86_64 100% | 441.9 KiB/s | 27.8 KiB | 00m00s
[ 9/13] python-setuptools-wheel-0:65.5. 100% | 2.1 MiB/s | 715.0 KiB | 00m00s
[10/13] libgomp-0:13.0.1-0.8.fc38.x86_6 100% | 4.0 MiB/s | 309.9 KiB | 00m00s
[11/13] libxcrypt-compat-0:4.4.33-7.fc3 100% | 1.0 MiB/s | 91.4 KiB | 00m00s
[12/13] python-unversioned-command-0:3. 100% | 85.2 KiB/s | 10.8 KiB | 00m00s
[13/13] python3-libs-0:3.11.2-1.fc38.x8 100% | 4.4 MiB/s | 9.6 MiB | 00m02s
--------------------------------------------------------------------------------
[13/13] Total 100% | 5.4 MiB/s | 13.5 MiB | 00m03s
Verifying PGP signatures
Running transaction
[1/2] Verify package files 100% | 111.0 B/s | 13.0 B | 00m00s
[2/3] Prepare transaction 100% | 232.0 B/s | 13.0 B | 00m00s
[3/4] Installing libtirpc-0:1.3.3-1.fc3 100% | 50.2 MiB/s | 205.4 KiB | 00m00s
[4/5] Installing libnsl2-0:2.0.0-5.fc38 100% | 29.0 MiB/s | 59.4 KiB | 00m00s
[5/6] Installing libxcrypt-compat-0:4.4 100% | 65.0 MiB/s | 199.7 KiB | 00m00s
[6/7] Installing python-pip-wheel-0:22. 100% | 209.7 MiB/s | 1.5 MiB | 00m00s
[7/8] Installing libgomp-0:13.0.1-0.8.f 100% | 117.9 MiB/s | 483.1 KiB | 00m00s
[8/9] Installing libb2-0:0.98.1-8.fc38. 100% | 21.3 MiB/s | 43.7 KiB | 00m00s
[ 9/10] Installing python-setuptools-wh 100% | 210.2 MiB/s | 861.0 KiB | 00m00s
[10/11] Installing mpdecimal-0:2.5.1-6. 100% | 66.2 MiB/s | 203.3 KiB | 00m00s
[11/12] Installing expat-0:2.5.0-2.fc38 100% | 67.9 MiB/s | 278.1 KiB | 00m00s
[12/13] Installing python-unversioned-c 100% | 414.1 KiB/s | 424.0 B | 00m00s
[13/14] Installing python3-0:3.11.2-1.f 100% | 3.8 MiB/s | 34.8 KiB | 00m00s
[14/15] Installing python3-libs-0:3.11. 100% | 104.1 MiB/s | 44.6 MiB | 00m00s
[15/15] Installing python3-libdnf5-0:5. 100% | 100.6 MiB/s | 5.4 MiB | 00m00s
>>> Running trigger-install scriptlet: glibc-common-0:2.37-1.fc38.x86_64
>>> Stop trigger-install scriptlet: glibc-common-0:2.37-1.fc38.x86_64
--------------------------------------------------------------------------------
[15/15] Total 100% | 57.7 MiB/s | 53.9 MiB | 00m01s
STEP 4/8: RUN python3 -m venv venv && ./venv/bin/pip install 'https://github.com/ansible/ansible/archive/devel.tar.gz'
Collecting https://github.com/ansible/ansible/archive/devel.tar.gz
Downloading https://github.com/ansible/ansible/archive/devel.tar.gz (10.7 MB)
ββββββββββββββββββββββββββββββββββββββββ 10.7/10.7 MB 9.3 MB/s eta 0:00:00
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting jinja2>=3.0.0
Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB)
βββββββββββββββββββββββββββββββββββββββ 133.1/133.1 kB 3.6 MB/s eta 0:00:00
Collecting PyYAML>=5.1
Downloading PyYAML-6.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (757 kB)
ββββββββββββββββββββββββββββββββββββββ 757.9/757.9 kB 14.9 MB/s eta 0:00:00
Collecting cryptography
Downloading cryptography-40.0.1-cp36-abi3-manylinux_2_28_x86_64.whl (3.7 MB)
ββββββββββββββββββββββββββββββββββββββββ 3.7/3.7 MB 30.5 MB/s eta 0:00:00
Collecting packaging
Downloading packaging-23.0-py3-none-any.whl (42 kB)
ββββββββββββββββββββββββββββββββββββββββ 42.7/42.7 kB 12.8 MB/s eta 0:00:00
Collecting resolvelib<1.1.0,>=0.5.3
Downloading resolvelib-1.0.1-py2.py3-none-any.whl (17 kB)
Collecting MarkupSafe>=2.0
Downloading MarkupSafe-2.1.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (27 kB)
Collecting cffi>=1.12
Downloading cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (462 kB)
ββββββββββββββββββββββββββββββββββββββ 462.6/462.6 kB 33.3 MB/s eta 0:00:00
Collecting pycparser
Downloading pycparser-2.21-py2.py3-none-any.whl (118 kB)
ββββββββββββββββββββββββββββββββββββββ 118.7/118.7 kB 22.7 MB/s eta 0:00:00
Building wheels for collected packages: ansible-core
Building wheel for ansible-core (pyproject.toml): started
Building wheel for ansible-core (pyproject.toml): finished with status 'done'
Created wheel for ansible-core: filename=ansible_core-2.15.0.dev0-py3-none-any.whl size=2237665 sha256=b0bee73f1c388cb6bb4531b68269b96dc7b4664df6ee477dcb16c81124861c80
Stored in directory: /tmp/pip-ephem-wheel-cache-y_v7soeg/wheels/07/5c/8f/7df4e25e678a191c66d6e678537306fd465ebfbea902e9d6f1
Successfully built ansible-core
Installing collected packages: resolvelib, PyYAML, pycparser, packaging, MarkupSafe, jinja2, cffi, cryptography, ansible-core
Successfully installed MarkupSafe-2.1.2 PyYAML-6.0 ansible-core-2.15.0.dev0 cffi-1.15.1 cryptography-40.0.1 jinja2-3.1.2 packaging-23.0 pycparser-2.21 resolvelib-1.0.1
[notice] A new release of pip available: 22.3.1 -> 23.0.1
[notice] To update, run: python3 -m pip install --upgrade pip
STEP 5/8: RUN ./venv/bin/ansible --version
[WARNING]: You are running the development version of Ansible. You should only
run Ansible from "devel" if you are modifying the Ansible engine, or trying out
features under development. This is a rapidly changing source of code and can
become unstable at any point.
ansible [core 2.15.0.dev0]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /root/venv/lib64/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = ./venv/bin/ansible
python version = 3.11.2 (main, Feb 8 2023, 00:00:00) [GCC 13.0.1 20230208 (Red Hat 13.0.1-0)] (/root/venv/bin/python3)
jinja version = 3.1.2
libyaml = True
STEP 6/8: RUN ./venv/bin/ansible-config dump --only-changed -t all
[WARNING]: You are running the development version of Ansible. You should only
run Ansible from "devel" if you are modifying the Ansible engine, or trying out
features under development. This is a rapidly changing source of code and can
become unstable at any point.
CONFIG_FILE() = None
STEP 7/8: RUN ./venv/bin/ansible -i localhost -c local localhost -m setup | grep pkg
[WARNING]: You are running the development version of Ansible. You should only
run Ansible from "devel" if you are modifying the Ansible engine, or trying out
features under development. This is a rapidly changing source of code and can
become unstable at any point.
[WARNING]: Unable to parse /root/localhost as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
"ansible_pkg_mgr": "unknown",
STEP 8/8: RUN ./venv/bin/ansible -i localhost -c local localhost -m package -a name=zsh
[WARNING]: You are running the development version of Ansible. You should only
run Ansible from "devel" if you are modifying the Ansible engine, or trying out
features under development. This is a rapidly changing source of code and can
become unstable at any point.
[WARNING]: Unable to parse /root/localhost as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NoneType: None
localhost | FAILED! => {
"changed": false,
"msg": "Could not find a module for unknown."
}
Error: building at STEP "RUN ./venv/bin/ansible -i localhost -c local localhost -m package -a name=zsh": while running runtime: exit status 2
```
</details>
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80376
|
https://github.com/ansible/ansible/pull/80550
|
68e270d4cc2579e4659ed53aecbc5a3358b85985
|
748f534312f2073a25a87871f5bd05882891b8c4
| 2023-03-31T15:48:20Z |
python
| 2023-04-21T06:53:07Z |
changelogs/fragments/pkg_mgr-default-dnf.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,376 |
Package manager discovery makes incorrect assumptions about dnf availability
|
### Summary
fedora-minimal:38 minimal containers only contain microdnf which now points to dnf5. See https://fedoraproject.org/wiki/Changes/MajorUpgradeOfMicrodnf. The package manager discovery code added https://github.com/ansible/ansible/pull/80272/ only selects dnf5 on Fedora >= 39 and doesn't consider it otherwise. In the fedora-minimal:38 container, `ansible_pkg_manager` is set to `unknown` when it should be set to `dnf5`.
Package manager discovery should **prefer** dnf5 on Fedora 39, but it shouldn't ignore dnf4 on Fedora >= 39 if dnf5 is missing (users can exclude dnf5 from their systems if they'd like), and it shouldn't ignore dnf5 on Fedora < 39 when dnf4 is missing (the fedora-minimal:38 container is one example of where this assumption breaks).
### Issue Type
Bug Report
### Component Name
dnf5
lib/ansible/module_utils/facts/system/pkg_mgr.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0.dev0]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /root/venv/lib64/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = ./venv/bin/ansible
python version = 3.11.2 (main, Feb 8 2023, 00:00:00) [GCC 13.0.1 20230208 (Red Hat 13.0.1-0)] (/root/venv/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Fedora 38 instance with dnf5 but not dnf
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```Dockerfile
# fedora-minimal only has microdnf, which has been replaced by dnf5
FROM registry.fedoraproject.org/fedora-minimal:38
WORKDIR /root
RUN dnf5 install -qy python3 python3-libdnf5
RUN python3 -m venv venv && \
./venv/bin/pip install 'https://github.com/ansible/ansible/archive/devel.tar.gz'
RUN ./venv/bin/ansible --version
RUN ./venv/bin/ansible-config dump --only-changed -t all
RUN ./venv/bin/ansible -i localhost -c local localhost -m setup | grep pkg
RUN ./venv/bin/ansible -i localhost -c local localhost -m package -a name=zsh
```
### Expected Results
I expect ansible to determine that `ansible_pkg_manager` is dnf5 and use it as the backend for the `package` module.
### Actual Results
<details>
<summary>Logs</summary>
```console
$ buildah bud --squash
STEP 1/8: FROM registry.fedoraproject.org/fedora-minimal:38
STEP 2/8: WORKDIR /root
STEP 3/8: RUN dnf5 install -qy python3 python3-libdnf5
Package Arch Version Repository Size
Installing:
python3 x86_64 3.11.2-1.fc38 fedora 33.0 KiB
python3-libdnf5 x86_64 5.0.6-2.fc38 fedora 5.4 MiB
Installing dependencies:
expat x86_64 2.5.0-2.fc38 fedora 276.0 KiB
libb2 x86_64 0.98.1-8.fc38 fedora 42.6 KiB
libgomp x86_64 13.0.1-0.8.fc38 fedora 481.7 KiB
libnsl2 x86_64 2.0.0-5.fc38 fedora 58.3 KiB
libtirpc x86_64 1.3.3-1.fc38 fedora 203.6 KiB
mpdecimal x86_64 2.5.1-6.fc38 fedora 202.2 KiB
python-pip-wheel noarch 22.3.1-2.fc38 fedora 1.5 MiB
python-setuptools-wheel noarch 65.5.1-2.fc38 fedora 860.3 KiB
python3-libs x86_64 3.11.2-1.fc38 fedora 44.2 MiB
Installing weak dependencies:
libxcrypt-compat x86_64 4.4.33-7.fc38 fedora 198.3 KiB
python-unversioned-command noarch 3.11.2-1.fc38 fedora 23.0 B
Transaction Summary:
Installing: 13 packages
Downloading Packages:
[ 1/13] expat-0:2.5.0-2.fc38.x86_64 100% | 307.8 KiB/s | 110.2 KiB | 00m00s
[ 2/13] libb2-0:0.98.1-8.fc38.x86_64 100% | 240.5 KiB/s | 25.5 KiB | 00m00s
[ 3/13] libnsl2-0:2.0.0-5.fc38.x86_64 100% | 763.9 KiB/s | 29.8 KiB | 00m00s
[ 4/13] python3-libdnf5-0:5.0.6-2.fc38. 100% | 2.0 MiB/s | 1.0 MiB | 00m01s
[ 5/13] mpdecimal-0:2.5.1-6.fc38.x86_64 100% | 1.7 MiB/s | 88.8 KiB | 00m00s
[ 6/13] libtirpc-0:1.3.3-1.fc38.x86_64 100% | 1.4 MiB/s | 93.8 KiB | 00m00s
[ 7/13] python-pip-wheel-0:22.3.1-2.fc3 100% | 6.6 MiB/s | 1.4 MiB | 00m00s
[ 8/13] python3-0:3.11.2-1.fc38.x86_64 100% | 441.9 KiB/s | 27.8 KiB | 00m00s
[ 9/13] python-setuptools-wheel-0:65.5. 100% | 2.1 MiB/s | 715.0 KiB | 00m00s
[10/13] libgomp-0:13.0.1-0.8.fc38.x86_6 100% | 4.0 MiB/s | 309.9 KiB | 00m00s
[11/13] libxcrypt-compat-0:4.4.33-7.fc3 100% | 1.0 MiB/s | 91.4 KiB | 00m00s
[12/13] python-unversioned-command-0:3. 100% | 85.2 KiB/s | 10.8 KiB | 00m00s
[13/13] python3-libs-0:3.11.2-1.fc38.x8 100% | 4.4 MiB/s | 9.6 MiB | 00m02s
--------------------------------------------------------------------------------
[13/13] Total 100% | 5.4 MiB/s | 13.5 MiB | 00m03s
Verifying PGP signatures
Running transaction
[1/2] Verify package files 100% | 111.0 B/s | 13.0 B | 00m00s
[2/3] Prepare transaction 100% | 232.0 B/s | 13.0 B | 00m00s
[3/4] Installing libtirpc-0:1.3.3-1.fc3 100% | 50.2 MiB/s | 205.4 KiB | 00m00s
[4/5] Installing libnsl2-0:2.0.0-5.fc38 100% | 29.0 MiB/s | 59.4 KiB | 00m00s
[5/6] Installing libxcrypt-compat-0:4.4 100% | 65.0 MiB/s | 199.7 KiB | 00m00s
[6/7] Installing python-pip-wheel-0:22. 100% | 209.7 MiB/s | 1.5 MiB | 00m00s
[7/8] Installing libgomp-0:13.0.1-0.8.f 100% | 117.9 MiB/s | 483.1 KiB | 00m00s
[8/9] Installing libb2-0:0.98.1-8.fc38. 100% | 21.3 MiB/s | 43.7 KiB | 00m00s
[ 9/10] Installing python-setuptools-wh 100% | 210.2 MiB/s | 861.0 KiB | 00m00s
[10/11] Installing mpdecimal-0:2.5.1-6. 100% | 66.2 MiB/s | 203.3 KiB | 00m00s
[11/12] Installing expat-0:2.5.0-2.fc38 100% | 67.9 MiB/s | 278.1 KiB | 00m00s
[12/13] Installing python-unversioned-c 100% | 414.1 KiB/s | 424.0 B | 00m00s
[13/14] Installing python3-0:3.11.2-1.f 100% | 3.8 MiB/s | 34.8 KiB | 00m00s
[14/15] Installing python3-libs-0:3.11. 100% | 104.1 MiB/s | 44.6 MiB | 00m00s
[15/15] Installing python3-libdnf5-0:5. 100% | 100.6 MiB/s | 5.4 MiB | 00m00s
>>> Running trigger-install scriptlet: glibc-common-0:2.37-1.fc38.x86_64
>>> Stop trigger-install scriptlet: glibc-common-0:2.37-1.fc38.x86_64
--------------------------------------------------------------------------------
[15/15] Total 100% | 57.7 MiB/s | 53.9 MiB | 00m01s
STEP 4/8: RUN python3 -m venv venv && ./venv/bin/pip install 'https://github.com/ansible/ansible/archive/devel.tar.gz'
Collecting https://github.com/ansible/ansible/archive/devel.tar.gz
Downloading https://github.com/ansible/ansible/archive/devel.tar.gz (10.7 MB)
ββββββββββββββββββββββββββββββββββββββββ 10.7/10.7 MB 9.3 MB/s eta 0:00:00
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting jinja2>=3.0.0
Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB)
βββββββββββββββββββββββββββββββββββββββ 133.1/133.1 kB 3.6 MB/s eta 0:00:00
Collecting PyYAML>=5.1
Downloading PyYAML-6.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (757 kB)
ββββββββββββββββββββββββββββββββββββββ 757.9/757.9 kB 14.9 MB/s eta 0:00:00
Collecting cryptography
Downloading cryptography-40.0.1-cp36-abi3-manylinux_2_28_x86_64.whl (3.7 MB)
ββββββββββββββββββββββββββββββββββββββββ 3.7/3.7 MB 30.5 MB/s eta 0:00:00
Collecting packaging
Downloading packaging-23.0-py3-none-any.whl (42 kB)
ββββββββββββββββββββββββββββββββββββββββ 42.7/42.7 kB 12.8 MB/s eta 0:00:00
Collecting resolvelib<1.1.0,>=0.5.3
Downloading resolvelib-1.0.1-py2.py3-none-any.whl (17 kB)
Collecting MarkupSafe>=2.0
Downloading MarkupSafe-2.1.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (27 kB)
Collecting cffi>=1.12
Downloading cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (462 kB)
ββββββββββββββββββββββββββββββββββββββ 462.6/462.6 kB 33.3 MB/s eta 0:00:00
Collecting pycparser
Downloading pycparser-2.21-py2.py3-none-any.whl (118 kB)
ββββββββββββββββββββββββββββββββββββββ 118.7/118.7 kB 22.7 MB/s eta 0:00:00
Building wheels for collected packages: ansible-core
Building wheel for ansible-core (pyproject.toml): started
Building wheel for ansible-core (pyproject.toml): finished with status 'done'
Created wheel for ansible-core: filename=ansible_core-2.15.0.dev0-py3-none-any.whl size=2237665 sha256=b0bee73f1c388cb6bb4531b68269b96dc7b4664df6ee477dcb16c81124861c80
Stored in directory: /tmp/pip-ephem-wheel-cache-y_v7soeg/wheels/07/5c/8f/7df4e25e678a191c66d6e678537306fd465ebfbea902e9d6f1
Successfully built ansible-core
Installing collected packages: resolvelib, PyYAML, pycparser, packaging, MarkupSafe, jinja2, cffi, cryptography, ansible-core
Successfully installed MarkupSafe-2.1.2 PyYAML-6.0 ansible-core-2.15.0.dev0 cffi-1.15.1 cryptography-40.0.1 jinja2-3.1.2 packaging-23.0 pycparser-2.21 resolvelib-1.0.1
[notice] A new release of pip available: 22.3.1 -> 23.0.1
[notice] To update, run: python3 -m pip install --upgrade pip
STEP 5/8: RUN ./venv/bin/ansible --version
[WARNING]: You are running the development version of Ansible. You should only
run Ansible from "devel" if you are modifying the Ansible engine, or trying out
features under development. This is a rapidly changing source of code and can
become unstable at any point.
ansible [core 2.15.0.dev0]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /root/venv/lib64/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = ./venv/bin/ansible
python version = 3.11.2 (main, Feb 8 2023, 00:00:00) [GCC 13.0.1 20230208 (Red Hat 13.0.1-0)] (/root/venv/bin/python3)
jinja version = 3.1.2
libyaml = True
STEP 6/8: RUN ./venv/bin/ansible-config dump --only-changed -t all
[WARNING]: You are running the development version of Ansible. You should only
run Ansible from "devel" if you are modifying the Ansible engine, or trying out
features under development. This is a rapidly changing source of code and can
become unstable at any point.
CONFIG_FILE() = None
STEP 7/8: RUN ./venv/bin/ansible -i localhost -c local localhost -m setup | grep pkg
[WARNING]: You are running the development version of Ansible. You should only
run Ansible from "devel" if you are modifying the Ansible engine, or trying out
features under development. This is a rapidly changing source of code and can
become unstable at any point.
[WARNING]: Unable to parse /root/localhost as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
"ansible_pkg_mgr": "unknown",
STEP 8/8: RUN ./venv/bin/ansible -i localhost -c local localhost -m package -a name=zsh
[WARNING]: You are running the development version of Ansible. You should only
run Ansible from "devel" if you are modifying the Ansible engine, or trying out
features under development. This is a rapidly changing source of code and can
become unstable at any point.
[WARNING]: Unable to parse /root/localhost as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NoneType: None
localhost | FAILED! => {
"changed": false,
"msg": "Could not find a module for unknown."
}
Error: building at STEP "RUN ./venv/bin/ansible -i localhost -c local localhost -m package -a name=zsh": while running runtime: exit status 2
```
</details>
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80376
|
https://github.com/ansible/ansible/pull/80550
|
68e270d4cc2579e4659ed53aecbc5a3358b85985
|
748f534312f2073a25a87871f5bd05882891b8c4
| 2023-03-31T15:48:20Z |
python
| 2023-04-21T06:53:07Z |
lib/ansible/module_utils/facts/system/pkg_mgr.py
|
# Collect facts related to the system package manager
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import subprocess
import ansible.module_utils.compat.typing as t
from ansible.module_utils.facts.collector import BaseFactCollector
# A list of dicts. If there is a platform with more than one
# package manager, put the preferred one last. If there is an
# ansible module, use that as the value for the 'name' key.
PKG_MGRS = [{'path': '/usr/bin/rpm-ostree', 'name': 'atomic_container'},
{'path': '/usr/bin/yum', 'name': 'yum'},
{'path': '/usr/bin/dnf', 'name': 'dnf'},
{'path': '/usr/bin/apt-get', 'name': 'apt'},
{'path': '/usr/bin/zypper', 'name': 'zypper'},
{'path': '/usr/sbin/urpmi', 'name': 'urpmi'},
{'path': '/usr/bin/pacman', 'name': 'pacman'},
{'path': '/bin/opkg', 'name': 'opkg'},
{'path': '/usr/pkg/bin/pkgin', 'name': 'pkgin'},
{'path': '/opt/local/bin/pkgin', 'name': 'pkgin'},
{'path': '/opt/tools/bin/pkgin', 'name': 'pkgin'},
{'path': '/opt/local/bin/port', 'name': 'macports'},
{'path': '/usr/local/bin/brew', 'name': 'homebrew'},
{'path': '/opt/homebrew/bin/brew', 'name': 'homebrew'},
{'path': '/sbin/apk', 'name': 'apk'},
{'path': '/usr/sbin/pkg', 'name': 'pkgng'},
{'path': '/usr/sbin/swlist', 'name': 'swdepot'},
{'path': '/usr/bin/emerge', 'name': 'portage'},
{'path': '/usr/sbin/pkgadd', 'name': 'svr4pkg'},
{'path': '/usr/bin/pkg', 'name': 'pkg5'},
{'path': '/usr/bin/xbps-install', 'name': 'xbps'},
{'path': '/usr/local/sbin/pkg', 'name': 'pkgng'},
{'path': '/usr/bin/swupd', 'name': 'swupd'},
{'path': '/usr/sbin/sorcery', 'name': 'sorcery'},
{'path': '/usr/bin/installp', 'name': 'installp'},
{'path': '/QOpenSys/pkgs/bin/yum', 'name': 'yum'},
]
class OpenBSDPkgMgrFactCollector(BaseFactCollector):
name = 'pkg_mgr'
_fact_ids = set() # type: t.Set[str]
_platform = 'OpenBSD'
def collect(self, module=None, collected_facts=None):
facts_dict = {}
facts_dict['pkg_mgr'] = 'openbsd_pkg'
return facts_dict
# the fact ends up being 'pkg_mgr' so stick with that naming/spelling
class PkgMgrFactCollector(BaseFactCollector):
name = 'pkg_mgr'
_fact_ids = set() # type: t.Set[str]
_platform = 'Generic'
required_facts = set(['distribution'])
def _pkg_mgr_exists(self, pkg_mgr_name):
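# Return pkg_mgr_name if any of its known binary paths from PKG_MGRS exists on
# this system; implicitly returns None when none of them are present.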
for cur_pkg_mgr in [pkg_mgr for pkg_mgr in PKG_MGRS if pkg_mgr['name'] == pkg_mgr_name]:
if os.path.exists(cur_pkg_mgr['path']):
return pkg_mgr_name
def _check_rh_versions(self, pkg_mgr_name, collected_facts):
if os.path.exists('/run/ostree-booted'):
return "atomic_container"
if collected_facts['ansible_distribution'] == 'Fedora':
try:
if int(collected_facts['ansible_distribution_major_version']) < 23:
if self._pkg_mgr_exists('yum'):
pkg_mgr_name = 'yum'
elif int(collected_facts['ansible_distribution_major_version']) >= 39:
# /usr/bin/dnf is planned to be a symlink to /usr/bin/dnf5
if self._pkg_mgr_exists('dnf'):
pkg_mgr_name = 'dnf5'
else:
if self._pkg_mgr_exists('dnf'):
pkg_mgr_name = 'dnf'
except ValueError:
# If there's some new magical Fedora version in the future,
# just default to dnf
pkg_mgr_name = 'dnf'
elif collected_facts['ansible_distribution'] == 'Amazon':
try:
if int(collected_facts['ansible_distribution_major_version']) < 2022:
if self._pkg_mgr_exists('yum'):
pkg_mgr_name = 'yum'
else:
if self._pkg_mgr_exists('dnf'):
pkg_mgr_name = 'dnf'
except ValueError:
pkg_mgr_name = 'dnf'
else:
# If it's not one of the above and it's Red Hat family of distros, assume
# RHEL or a clone. For versions of RHEL < 8 that Ansible supports, the
# vendor supported official package manager is 'yum' and in RHEL 8+
# (as far as we know at the time of this writing) it is 'dnf'.
# If anyone wants to force a non-official package manager then they
# can define a provider to either the package or yum action plugins.
if int(collected_facts['ansible_distribution_major_version']) < 8:
pkg_mgr_name = 'yum'
else:
pkg_mgr_name = 'dnf'
return pkg_mgr_name
def _check_apt_flavor(self, pkg_mgr_name):
# Check if '/usr/bin/apt' is APT-RPM or an ordinary (dpkg-based) APT.
# There's rpm package on Debian, so checking if /usr/bin/rpm exists
# is not enough. Instead ask RPM if /usr/bin/apt-get belongs to some
# RPM package.
rpm_query = '/usr/bin/rpm -q --whatprovides /usr/bin/apt-get'.split()
if os.path.exists('/usr/bin/rpm'):
with open(os.devnull, 'w') as null:
try:
subprocess.check_call(rpm_query, stdout=null, stderr=null)
pkg_mgr_name = 'apt_rpm'
except subprocess.CalledProcessError:
# No apt-get in RPM database. Looks like Debian/Ubuntu
# with rpm package installed
pkg_mgr_name = 'apt'
return pkg_mgr_name
def pkg_mgrs(self, collected_facts):
# Filter out the /usr/bin/pkg because on Altlinux it is actually the
# perl-Package (not Solaris package manager).
# Since the pkg5 takes precedence over apt, this workaround
# is required to select the suitable package manager on Altlinux.
if collected_facts['ansible_os_family'] == 'Altlinux':
return filter(lambda pkg: pkg['path'] != '/usr/bin/pkg', PKG_MGRS)
else:
return PKG_MGRS
def collect(self, module=None, collected_facts=None):
facts_dict = {}
collected_facts = collected_facts or {}
pkg_mgr_name = 'unknown'
for pkg in self.pkg_mgrs(collected_facts):
if os.path.exists(pkg['path']):
pkg_mgr_name = pkg['name']
# Handle distro family defaults when more than one package manager is
# installed or available to the distro, the ansible_fact entry should be
# the default package manager officially supported by the distro.
if collected_facts['ansible_os_family'] == "RedHat":
pkg_mgr_name = self._check_rh_versions(pkg_mgr_name, collected_facts)
elif collected_facts['ansible_os_family'] == 'Debian' and pkg_mgr_name != 'apt':
# It's possible to install yum, dnf, zypper, rpm, etc inside of
# Debian. Doing so does not mean the system wants to use them.
pkg_mgr_name = 'apt'
elif collected_facts['ansible_os_family'] == 'Altlinux':
if pkg_mgr_name == 'apt':
pkg_mgr_name = 'apt_rpm'
# Check if /usr/bin/apt-get is ordinary (dpkg-based) APT or APT-RPM
if pkg_mgr_name == 'apt':
pkg_mgr_name = self._check_apt_flavor(pkg_mgr_name)
facts_dict['pkg_mgr'] = pkg_mgr_name
return facts_dict
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,376 |
Package manager discovery makes incorrect assumptions about dnf availability
|
### Summary
fedora-minimal:38 minimal containers only contain microdnf which now points to dnf5. See https://fedoraproject.org/wiki/Changes/MajorUpgradeOfMicrodnf. The package manager discovery code added https://github.com/ansible/ansible/pull/80272/ only selects dnf5 on Fedora >= 39 and doesn't consider it otherwise. In the fedora-minimal:38 container, `ansible_pkg_manager` is set to `unknown` when it should be set to `dnf5`.
Package manager discovery should **prefer** dnf5 on Fedora 39, but it shouldn't ignore dnf4 on Fedora >= 39 if dnf5 is missing (users can exclude dnf5 from their systems if they'd like), and it shouldn't ignore dnf5 on Fedora < 39 when dnf4 is missing (the fedora-minimal:38 container is one example of where this assumption breaks).
### Issue Type
Bug Report
### Component Name
dnf5
lib/ansible/module_utils/facts/system/pkg_mgr.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0.dev0]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /root/venv/lib64/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = ./venv/bin/ansible
python version = 3.11.2 (main, Feb 8 2023, 00:00:00) [GCC 13.0.1 20230208 (Red Hat 13.0.1-0)] (/root/venv/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Fedora 38 instance with dnf5 but not dnf
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```Dockerfile
# fedora-minimal only has microdnf, which has been replaced by dnf5
FROM registry.fedoraproject.org/fedora-minimal:38
WORKDIR /root
RUN dnf5 install -qy python3 python3-libdnf5
RUN python3 -m venv venv && \
./venv/bin/pip install 'https://github.com/ansible/ansible/archive/devel.tar.gz'
RUN ./venv/bin/ansible --version
RUN ./venv/bin/ansible-config dump --only-changed -t all
RUN ./venv/bin/ansible -i localhost -c local localhost -m setup | grep pkg
RUN ./venv/bin/ansible -i localhost -c local localhost -m package -a name=zsh
```
### Expected Results
I expect ansible to determine that `ansible_pkg_manager` is dnf5 and use it as the backend for the `package` module.
### Actual Results
<details>
<summary>Logs</summary>
```console
$ buildah bud --squash
STEP 1/8: FROM registry.fedoraproject.org/fedora-minimal:38
STEP 2/8: WORKDIR /root
STEP 3/8: RUN dnf5 install -qy python3 python3-libdnf5
Package Arch Version Repository Size
Installing:
python3 x86_64 3.11.2-1.fc38 fedora 33.0 KiB
python3-libdnf5 x86_64 5.0.6-2.fc38 fedora 5.4 MiB
Installing dependencies:
expat x86_64 2.5.0-2.fc38 fedora 276.0 KiB
libb2 x86_64 0.98.1-8.fc38 fedora 42.6 KiB
libgomp x86_64 13.0.1-0.8.fc38 fedora 481.7 KiB
libnsl2 x86_64 2.0.0-5.fc38 fedora 58.3 KiB
libtirpc x86_64 1.3.3-1.fc38 fedora 203.6 KiB
mpdecimal x86_64 2.5.1-6.fc38 fedora 202.2 KiB
python-pip-wheel noarch 22.3.1-2.fc38 fedora 1.5 MiB
python-setuptools-wheel noarch 65.5.1-2.fc38 fedora 860.3 KiB
python3-libs x86_64 3.11.2-1.fc38 fedora 44.2 MiB
Installing weak dependencies:
libxcrypt-compat x86_64 4.4.33-7.fc38 fedora 198.3 KiB
python-unversioned-command noarch 3.11.2-1.fc38 fedora 23.0 B
Transaction Summary:
Installing: 13 packages
Downloading Packages:
[ 1/13] expat-0:2.5.0-2.fc38.x86_64 100% | 307.8 KiB/s | 110.2 KiB | 00m00s
[ 2/13] libb2-0:0.98.1-8.fc38.x86_64 100% | 240.5 KiB/s | 25.5 KiB | 00m00s
[ 3/13] libnsl2-0:2.0.0-5.fc38.x86_64 100% | 763.9 KiB/s | 29.8 KiB | 00m00s
[ 4/13] python3-libdnf5-0:5.0.6-2.fc38. 100% | 2.0 MiB/s | 1.0 MiB | 00m01s
[ 5/13] mpdecimal-0:2.5.1-6.fc38.x86_64 100% | 1.7 MiB/s | 88.8 KiB | 00m00s
[ 6/13] libtirpc-0:1.3.3-1.fc38.x86_64 100% | 1.4 MiB/s | 93.8 KiB | 00m00s
[ 7/13] python-pip-wheel-0:22.3.1-2.fc3 100% | 6.6 MiB/s | 1.4 MiB | 00m00s
[ 8/13] python3-0:3.11.2-1.fc38.x86_64 100% | 441.9 KiB/s | 27.8 KiB | 00m00s
[ 9/13] python-setuptools-wheel-0:65.5. 100% | 2.1 MiB/s | 715.0 KiB | 00m00s
[10/13] libgomp-0:13.0.1-0.8.fc38.x86_6 100% | 4.0 MiB/s | 309.9 KiB | 00m00s
[11/13] libxcrypt-compat-0:4.4.33-7.fc3 100% | 1.0 MiB/s | 91.4 KiB | 00m00s
[12/13] python-unversioned-command-0:3. 100% | 85.2 KiB/s | 10.8 KiB | 00m00s
[13/13] python3-libs-0:3.11.2-1.fc38.x8 100% | 4.4 MiB/s | 9.6 MiB | 00m02s
--------------------------------------------------------------------------------
[13/13] Total 100% | 5.4 MiB/s | 13.5 MiB | 00m03s
Verifying PGP signatures
Running transaction
[1/2] Verify package files 100% | 111.0 B/s | 13.0 B | 00m00s
[2/3] Prepare transaction 100% | 232.0 B/s | 13.0 B | 00m00s
[3/4] Installing libtirpc-0:1.3.3-1.fc3 100% | 50.2 MiB/s | 205.4 KiB | 00m00s
[4/5] Installing libnsl2-0:2.0.0-5.fc38 100% | 29.0 MiB/s | 59.4 KiB | 00m00s
[5/6] Installing libxcrypt-compat-0:4.4 100% | 65.0 MiB/s | 199.7 KiB | 00m00s
[6/7] Installing python-pip-wheel-0:22. 100% | 209.7 MiB/s | 1.5 MiB | 00m00s
[7/8] Installing libgomp-0:13.0.1-0.8.f 100% | 117.9 MiB/s | 483.1 KiB | 00m00s
[8/9] Installing libb2-0:0.98.1-8.fc38. 100% | 21.3 MiB/s | 43.7 KiB | 00m00s
[ 9/10] Installing python-setuptools-wh 100% | 210.2 MiB/s | 861.0 KiB | 00m00s
[10/11] Installing mpdecimal-0:2.5.1-6. 100% | 66.2 MiB/s | 203.3 KiB | 00m00s
[11/12] Installing expat-0:2.5.0-2.fc38 100% | 67.9 MiB/s | 278.1 KiB | 00m00s
[12/13] Installing python-unversioned-c 100% | 414.1 KiB/s | 424.0 B | 00m00s
[13/14] Installing python3-0:3.11.2-1.f 100% | 3.8 MiB/s | 34.8 KiB | 00m00s
[14/15] Installing python3-libs-0:3.11. 100% | 104.1 MiB/s | 44.6 MiB | 00m00s
[15/15] Installing python3-libdnf5-0:5. 100% | 100.6 MiB/s | 5.4 MiB | 00m00s
>>> Running trigger-install scriptlet: glibc-common-0:2.37-1.fc38.x86_64
>>> Stop trigger-install scriptlet: glibc-common-0:2.37-1.fc38.x86_64
--------------------------------------------------------------------------------
[15/15] Total 100% | 57.7 MiB/s | 53.9 MiB | 00m01s
STEP 4/8: RUN python3 -m venv venv && ./venv/bin/pip install 'https://github.com/ansible/ansible/archive/devel.tar.gz'
Collecting https://github.com/ansible/ansible/archive/devel.tar.gz
Downloading https://github.com/ansible/ansible/archive/devel.tar.gz (10.7 MB)
ββββββββββββββββββββββββββββββββββββββββ 10.7/10.7 MB 9.3 MB/s eta 0:00:00
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting jinja2>=3.0.0
Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB)
βββββββββββββββββββββββββββββββββββββββ 133.1/133.1 kB 3.6 MB/s eta 0:00:00
Collecting PyYAML>=5.1
Downloading PyYAML-6.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (757 kB)
ββββββββββββββββββββββββββββββββββββββ 757.9/757.9 kB 14.9 MB/s eta 0:00:00
Collecting cryptography
Downloading cryptography-40.0.1-cp36-abi3-manylinux_2_28_x86_64.whl (3.7 MB)
ββββββββββββββββββββββββββββββββββββββββ 3.7/3.7 MB 30.5 MB/s eta 0:00:00
Collecting packaging
Downloading packaging-23.0-py3-none-any.whl (42 kB)
ββββββββββββββββββββββββββββββββββββββββ 42.7/42.7 kB 12.8 MB/s eta 0:00:00
Collecting resolvelib<1.1.0,>=0.5.3
Downloading resolvelib-1.0.1-py2.py3-none-any.whl (17 kB)
Collecting MarkupSafe>=2.0
Downloading MarkupSafe-2.1.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (27 kB)
Collecting cffi>=1.12
Downloading cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (462 kB)
ββββββββββββββββββββββββββββββββββββββ 462.6/462.6 kB 33.3 MB/s eta 0:00:00
Collecting pycparser
Downloading pycparser-2.21-py2.py3-none-any.whl (118 kB)
ββββββββββββββββββββββββββββββββββββββ 118.7/118.7 kB 22.7 MB/s eta 0:00:00
Building wheels for collected packages: ansible-core
Building wheel for ansible-core (pyproject.toml): started
Building wheel for ansible-core (pyproject.toml): finished with status 'done'
Created wheel for ansible-core: filename=ansible_core-2.15.0.dev0-py3-none-any.whl size=2237665 sha256=b0bee73f1c388cb6bb4531b68269b96dc7b4664df6ee477dcb16c81124861c80
Stored in directory: /tmp/pip-ephem-wheel-cache-y_v7soeg/wheels/07/5c/8f/7df4e25e678a191c66d6e678537306fd465ebfbea902e9d6f1
Successfully built ansible-core
Installing collected packages: resolvelib, PyYAML, pycparser, packaging, MarkupSafe, jinja2, cffi, cryptography, ansible-core
Successfully installed MarkupSafe-2.1.2 PyYAML-6.0 ansible-core-2.15.0.dev0 cffi-1.15.1 cryptography-40.0.1 jinja2-3.1.2 packaging-23.0 pycparser-2.21 resolvelib-1.0.1
[notice] A new release of pip available: 22.3.1 -> 23.0.1
[notice] To update, run: python3 -m pip install --upgrade pip
STEP 5/8: RUN ./venv/bin/ansible --version
[WARNING]: You are running the development version of Ansible. You should only
run Ansible from "devel" if you are modifying the Ansible engine, or trying out
features under development. This is a rapidly changing source of code and can
become unstable at any point.
ansible [core 2.15.0.dev0]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /root/venv/lib64/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = ./venv/bin/ansible
python version = 3.11.2 (main, Feb 8 2023, 00:00:00) [GCC 13.0.1 20230208 (Red Hat 13.0.1-0)] (/root/venv/bin/python3)
jinja version = 3.1.2
libyaml = True
STEP 6/8: RUN ./venv/bin/ansible-config dump --only-changed -t all
[WARNING]: You are running the development version of Ansible. You should only
run Ansible from "devel" if you are modifying the Ansible engine, or trying out
features under development. This is a rapidly changing source of code and can
become unstable at any point.
CONFIG_FILE() = None
STEP 7/8: RUN ./venv/bin/ansible -i localhost -c local localhost -m setup | grep pkg
[WARNING]: You are running the development version of Ansible. You should only
run Ansible from "devel" if you are modifying the Ansible engine, or trying out
features under development. This is a rapidly changing source of code and can
become unstable at any point.
[WARNING]: Unable to parse /root/localhost as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
"ansible_pkg_mgr": "unknown",
STEP 8/8: RUN ./venv/bin/ansible -i localhost -c local localhost -m package -a name=zsh
[WARNING]: You are running the development version of Ansible. You should only
run Ansible from "devel" if you are modifying the Ansible engine, or trying out
features under development. This is a rapidly changing source of code and can
become unstable at any point.
[WARNING]: Unable to parse /root/localhost as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NoneType: None
localhost | FAILED! => {
"changed": false,
"msg": "Could not find a module for unknown."
}
Error: building at STEP "RUN ./venv/bin/ansible -i localhost -c local localhost -m package -a name=zsh": while running runtime: exit status 2
```
</details>
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80376
|
https://github.com/ansible/ansible/pull/80550
|
68e270d4cc2579e4659ed53aecbc5a3358b85985
|
748f534312f2073a25a87871f5bd05882891b8c4
| 2023-03-31T15:48:20Z |
python
| 2023-04-21T06:53:07Z |
test/units/module_utils/facts/system/test_pkg_mgr.py
|