Dataset schema (one record per updated file; fields are pipe-separated in the records below):
- status — string (1 distinct value)
- repo_name — string (31 distinct values)
- repo_url — string (31 distinct values)
- issue_id — int64 (1 to 104k)
- title — string (length 4 to 369)
- body — string (length 0 to 254k, nullable)
- issue_url — string (length 37 to 56)
- pull_url — string (length 37 to 54)
- before_fix_sha — string (length 40)
- after_fix_sha — string (length 40)
- report_datetime — timestamp[us, tz=UTC]
- language — string (5 distinct values)
- commit_datetime — timestamp[us, tz=UTC]
- updated_file — string (length 4 to 188)
- file_content — string (length 0 to 5.12M)
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,121 |
Bump setuptools constraint to 45.2.0
|
### Summary
Our setuptools constraint in `pyproject.toml` (also listed in `setup.cfg`) is set to `>= 39.2.0` due to the minimum version available on Ubuntu 18.04 at the time.
Now that we require Python 3.9 as a minimum, that constraint no longer makes sense: Python 3.9 is not available on 18.04, and the `dist-packages` `setuptools==39.2.0` is non-functional with the deadsnakes python3.9 package.
This would make Ubuntu 20.04 effectively the minimum supported controller. The `dist-packages` `setuptools==45.2.0` is functional with the `python3.9` package and works fine with our packaging.
This doesn't really grant us anything, since we're waiting for the minimum to eventually be `51.0.0` so we can list the `entry_points` in `pyproject.toml`, but there is no need to keep an effectively non-functional lower bound.
### Issue Type
Feature Idea
### Component Name
pyproject.toml
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80121
|
https://github.com/ansible/ansible/pull/80649
|
ece8da71ea48d37eb2ffaf4e1574b9641862344a
|
4d25e3d54f7de316c4f1d1575d2cf1ffa46b632c
| 2023-03-01T18:12:26Z |
python
| 2023-04-26T21:01:56Z |
changelogs/fragments/ansible-test-minimum-setuptools.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,121 |
Bump setuptools constraint to 45.2.0
|
### Summary
Our setuptools constraint in `pyproject.toml` (also listed in `setup.cfg`) is set to `>= 39.2.0` due to the minimum version available on Ubuntu 18.04 at the time.
Now that we require Python 3.9 as a minimum, that constraint no longer makes sense: Python 3.9 is not available on 18.04, and the `dist-packages` `setuptools==39.2.0` is non-functional with the deadsnakes python3.9 package.
This would make Ubuntu 20.04 effectively the minimum supported controller. The `dist-packages` `setuptools==45.2.0` is functional with the `python3.9` package and works fine with our packaging.
This doesn't really grant us anything, since we're waiting for the minimum to eventually be `51.0.0` so we can list the `entry_points` in `pyproject.toml`, but there is no need to keep an effectively non-functional lower bound.
### Issue Type
Feature Idea
### Component Name
pyproject.toml
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80121
|
https://github.com/ansible/ansible/pull/80649
|
ece8da71ea48d37eb2ffaf4e1574b9641862344a
|
4d25e3d54f7de316c4f1d1575d2cf1ffa46b632c
| 2023-03-01T18:12:26Z |
python
| 2023-04-26T21:01:56Z |
hacking/update-sanity-requirements.py
|
#!/usr/bin/env python
# PYTHON_ARGCOMPLETE_OK
"""Generate frozen sanity test requirements from source requirements files."""
from __future__ import annotations
import argparse
import dataclasses
import pathlib
import subprocess
import tempfile
import typing as t
import venv
try:
import argcomplete
except ImportError:
argcomplete = None
FILE = pathlib.Path(__file__).resolve()
ROOT = FILE.parent.parent
SELF = FILE.relative_to(ROOT)
@dataclasses.dataclass(frozen=True)
class SanityTest:
name: str
requirements_path: pathlib.Path
source_path: pathlib.Path
def freeze_requirements(self) -> None:
with tempfile.TemporaryDirectory() as venv_dir:
venv.create(venv_dir, with_pip=True)
python = pathlib.Path(venv_dir, 'bin', 'python')
pip = [python, '-m', 'pip', '--disable-pip-version-check']
env = dict()
pip_freeze = subprocess.run(pip + ['freeze'], env=env, check=True, capture_output=True, text=True)
if pip_freeze.stdout:
raise Exception(f'Initial virtual environment is not empty:\n{pip_freeze.stdout}')
subprocess.run(pip + ['install', 'wheel'], env=env, check=True) # make bdist_wheel available during pip install
subprocess.run(pip + ['install', '-r', self.source_path], env=env, check=True)
pip_freeze = subprocess.run(pip + ['freeze'], env=env, check=True, capture_output=True, text=True)
requirements = f'# edit "{self.source_path.name}" and generate with: {SELF} --test {self.name}\n{pip_freeze.stdout}'
with open(self.requirements_path, 'w') as requirement_file:
requirement_file.write(requirements)
@staticmethod
def create(path: pathlib.Path) -> SanityTest:
return SanityTest(
name=path.stem.replace('sanity.', '').replace('.requirements', ''),
requirements_path=path,
source_path=path.with_suffix('.in'),
)
def main() -> None:
tests = find_tests()
parser = argparse.ArgumentParser()
parser.add_argument(
'--test',
metavar='TEST',
dest='test_names',
action='append',
choices=[test.name for test in tests],
help='test requirements to update'
)
if argcomplete:
argcomplete.autocomplete(parser)
args = parser.parse_args()
test_names: set[str] = set(args.test_names or [])
tests = [test for test in tests if test.name in test_names] if test_names else tests
for test in tests:
print(f'===[ {test.name} ]===', flush=True)
test.freeze_requirements()
def find_tests() -> t.List[SanityTest]:
globs = (
'test/lib/ansible_test/_data/requirements/sanity.*.txt',
'test/sanity/code-smell/*.requirements.txt',
)
tests: t.List[SanityTest] = []
for glob in globs:
tests.extend(get_tests(pathlib.Path(glob)))
return sorted(tests, key=lambda test: test.name)
def get_tests(glob: pathlib.Path) -> t.List[SanityTest]:
path = pathlib.Path(ROOT, glob.parent)
pattern = glob.name
return [SanityTest.create(item) for item in path.glob(pattern)]
if __name__ == '__main__':
main()
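As a quick illustration of how `SanityTest.create` above derives test names from requirement file paths, here is a minimal sketch mirroring its string handling; the paths below are hypothetical examples following the globs used in `find_tests`:
```python
import pathlib

# Mirrors SanityTest.create: strip the 'sanity.' prefix and the
# '.requirements' suffix from the file stem to get the test name,
# and swap the '.txt' suffix for '.in' to find the source file.
for p in (
    pathlib.Path('test/lib/ansible_test/_data/requirements/sanity.pylint.txt'),
    pathlib.Path('test/sanity/code-smell/package-data.requirements.txt'),
):
    name = p.stem.replace('sanity.', '').replace('.requirements', '')
    source = p.with_suffix('.in')  # the editable source requirements file
    print(name, source.name)
# -> pylint sanity.pylint.in
# -> package-data package-data.requirements.in
```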
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,121 |
Bump setuptools constraint to 45.2.0
|
### Summary
Our setuptools constraint in `pyproject.toml` (also listed in `setup.cfg`) is set to `>= 39.2.0` due to the minimum version available on Ubuntu 18.04 at the time.
Now that we require Python 3.9 as a minimum, that constraint no longer makes sense: Python 3.9 is not available on 18.04, and the `dist-packages` `setuptools==39.2.0` is non-functional with the deadsnakes python3.9 package.
This would make Ubuntu 20.04 effectively the minimum supported controller. The `dist-packages` `setuptools==45.2.0` is functional with the `python3.9` package and works fine with our packaging.
This doesn't really grant us anything, since we're waiting for the minimum to eventually be `51.0.0` so we can list the `entry_points` in `pyproject.toml`, but there is no need to keep an effectively non-functional lower bound.
### Issue Type
Feature Idea
### Component Name
pyproject.toml
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80121
|
https://github.com/ansible/ansible/pull/80649
|
ece8da71ea48d37eb2ffaf4e1574b9641862344a
|
4d25e3d54f7de316c4f1d1575d2cf1ffa46b632c
| 2023-03-01T18:12:26Z |
python
| 2023-04-26T21:01:56Z |
pyproject.toml
|
[build-system]
requires = ["setuptools >= 39.2.0"]
backend-path = ["packaging"] # requires 'Pip>=20' or 'pep517>=0.6.0'
build-backend = "pep517_backend.hooks" # wraps `setuptools.build_meta`
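A minimal sketch of what the proposed bump means in practice, using the `packaging` library (already a dependency of the sanity tests in this dataset) to evaluate the old and new lower bounds; the version strings are taken from the issue text:
```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

old = SpecifierSet('>= 39.2.0')  # current constraint in pyproject.toml
new = SpecifierSet('>= 45.2.0')  # proposed constraint from the issue

for ver in ('39.2.0', '45.2.0', '51.0.0'):
    print(ver, Version(ver) in old, Version(ver) in new)
# 39.2.0 True False   <- the effectively non-functional lower bound is dropped
# 45.2.0 True True    <- Ubuntu 20.04 dist-packages setuptools
# 51.0.0 True True    <- eventual target for entry_points in pyproject.toml
```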
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,121 |
Bump setuptools constraint to 45.2.0
|
### Summary
Our setuptools constraint in `pyproject.toml` (also listed in `setup.cfg`) is set to `>= 39.2.0` due to the minimum version available on Ubuntu 18.04 at the time.
Now that we require Python 3.9 as a minimum, that constraint no longer makes sense: Python 3.9 is not available on 18.04, and the `dist-packages` `setuptools==39.2.0` is non-functional with the deadsnakes python3.9 package.
This would make Ubuntu 20.04 effectively the minimum supported controller. The `dist-packages` `setuptools==45.2.0` is functional with the `python3.9` package and works fine with our packaging.
This doesn't really grant us anything, since we're waiting for the minimum to eventually be `51.0.0` so we can list the `entry_points` in `pyproject.toml`, but there is no need to keep an effectively non-functional lower bound.
### Issue Type
Feature Idea
### Component Name
pyproject.toml
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80121
|
https://github.com/ansible/ansible/pull/80649
|
ece8da71ea48d37eb2ffaf4e1574b9641862344a
|
4d25e3d54f7de316c4f1d1575d2cf1ffa46b632c
| 2023-03-01T18:12:26Z |
python
| 2023-04-26T21:01:56Z |
setup.cfg
|
# Minimum target setuptools 39.2.0
[metadata]
name = ansible-core
version = attr: ansible.release.__version__
description = Radically simple IT automation
long_description = file: README.rst
long_description_content_type = text/x-rst
author = Ansible, Inc.
author_email = [email protected]
url = https://ansible.com/
project_urls =
Bug Tracker=https://github.com/ansible/ansible/issues
CI: Azure Pipelines=https://dev.azure.com/ansible/ansible/
Code of Conduct=https://docs.ansible.com/ansible/latest/community/code_of_conduct.html
Documentation=https://docs.ansible.com/ansible-core/
Mailing lists=https://docs.ansible.com/ansible/latest/community/communication.html#mailing-list-information
Source Code=https://github.com/ansible/ansible
license = GPLv3+
classifiers =
Development Status :: 5 - Production/Stable
Environment :: Console
Intended Audience :: Developers
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)
Natural Language :: English
Operating System :: POSIX
Programming Language :: Python :: 3
Programming Language :: Python :: 3.9
Programming Language :: Python :: 3.10
Programming Language :: Python :: 3.11
Programming Language :: Python :: 3 :: Only
Topic :: System :: Installation/Setup
Topic :: System :: Systems Administration
Topic :: Utilities
[options]
zip_safe = False
python_requires = >=3.9
include_package_data = True
# keep ansible-test as a verbatim script to work with editable installs, since it needs to do its
# own package redirection magic that's beyond the scope of the normal `ansible` path redirection
# done by setuptools `develop`
scripts =
bin/ansible-test
# setuptools 51.0.0
# [options.entry_points]
# console_scripts =
# ansible = ansible.cli.adhoc:main
# ansible-config = ansible.cli.config:main
# ansible-console = ansible.cli.console:main
# ansible-doc = ansible.cli.doc:main
# ansible-galaxy = ansible.cli.galaxy:main
# ansible-inventory = ansible.cli.inventory:main
# ansible-playbook = ansible.cli.playbook:main
# ansible-pull = ansible.cli.pull:main
# ansible-vault = ansible.cli.vault:main
# ansible-connection = ansible.cli.scripts.ansible_connection_cli_stub:main
# ansible-test = ansible_test._util.target.cli.ansible_test_cli_stub:main
[flake8]
max-line-length = 160
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,121 |
Bump setuptools constraint to 45.2.0
|
### Summary
Our setuptools constraint in `pyproject.toml` (also listed in `setup.cfg`) is set to `>= 39.2.0` due to the minimum version available on Ubuntu 18.04 at the time.
Now that we require Python 3.9 as a minimum, that constraint no longer makes sense: Python 3.9 is not available on 18.04, and the `dist-packages` `setuptools==39.2.0` is non-functional with the deadsnakes python3.9 package.
This would make Ubuntu 20.04 effectively the minimum supported controller. The `dist-packages` `setuptools==45.2.0` is functional with the `python3.9` package and works fine with our packaging.
This doesn't really grant us anything, since we're waiting for the minimum to eventually be `51.0.0` so we can list the `entry_points` in `pyproject.toml`, but there is no need to keep an effectively non-functional lower bound.
### Issue Type
Feature Idea
### Component Name
pyproject.toml
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80121
|
https://github.com/ansible/ansible/pull/80649
|
ece8da71ea48d37eb2ffaf4e1574b9641862344a
|
4d25e3d54f7de316c4f1d1575d2cf1ffa46b632c
| 2023-03-01T18:12:26Z |
python
| 2023-04-26T21:01:56Z |
test/sanity/code-smell/package-data.py
|
from __future__ import annotations
import contextlib
import fnmatch
import glob
import os
import pathlib
import re
import shutil
import subprocess
import sys
import tarfile
import tempfile
import packaging.version
from ansible.release import __version__
def assemble_files_to_ship(complete_file_list):
"""
This looks for all files which should be shipped in the sdist
"""
# All files which are in the repository except these:
ignore_patterns = (
# Developer-only tools
'.azure-pipelines/*',
'.github/*',
'.github/*/*',
'changelogs/fragments/*',
'hacking/backport/*',
'hacking/azp/*',
'hacking/tests/*',
'hacking/ticket_stubs/*',
'test/sanity/code-smell/botmeta.*',
'test/sanity/code-smell/release-names.*',
'test/results/.tmp/*',
'test/results/.tmp/*/*',
'test/results/.tmp/*/*/*',
'test/results/.tmp/*/*/*/*',
'test/results/.tmp/*/*/*/*/*',
'.git*',
)
ignore_files = frozenset((
# Developer-only tools
'changelogs/config.yaml',
'hacking/README.md',
'hacking/ansible-profile',
'hacking/cgroup_perf_recap_graph.py',
'hacking/create_deprecated_issues.py',
'hacking/deprecated_issue_template.md',
'hacking/create-bulk-issues.py',
'hacking/fix_test_syntax.py',
'hacking/get_library.py',
'hacking/metadata-tool.py',
'hacking/report.py',
'hacking/return_skeleton_generator.py',
'hacking/test-module',
'test/lib/ansible_test/_internal/commands/sanity/bin_symlinks.py',
'test/lib/ansible_test/_internal/commands/sanity/integration_aliases.py',
'.cherry_picker.toml',
'.mailmap',
# Generated as part of a build step
'docs/docsite/rst/conf.py',
'docs/docsite/rst/index.rst',
'docs/docsite/rst/dev_guide/index.rst',
# Possibly should be included
'examples/scripts/uptime.py',
'examples/scripts/my_test.py',
'examples/scripts/my_test_info.py',
'examples/scripts/my_test_facts.py',
'examples/DOCUMENTATION.yml',
'examples/play.yml',
'examples/hosts.yaml',
'examples/hosts.yml',
'examples/inventory_script_schema.json',
'examples/plugin_filters.yml',
'hacking/env-setup',
'hacking/env-setup.fish',
'MANIFEST',
'setup.cfg',
# docs for test files not included in sdist
'docs/docsite/rst/dev_guide/testing/sanity/bin-symlinks.rst',
'docs/docsite/rst/dev_guide/testing/sanity/botmeta.rst',
'docs/docsite/rst/dev_guide/testing/sanity/integration-aliases.rst',
'docs/docsite/rst/dev_guide/testing/sanity/release-names.rst',
))
# These files are generated and then intentionally added to the sdist
# Manpages
ignore_script = ('ansible-connection', 'ansible-test')
manpages = ['docs/man/man1/ansible.1']
for dirname, dummy, files in os.walk('bin'):
for filename in files:
if filename in ignore_script:
continue
manpages.append('docs/man/man1/%s.1' % filename)
# Misc
misc_generated_files = [
'PKG-INFO',
]
shipped_files = manpages + misc_generated_files
for path in complete_file_list:
if path not in ignore_files:
for ignore in ignore_patterns:
if fnmatch.fnmatch(path, ignore):
break
else:
shipped_files.append(path)
return shipped_files
def assemble_files_to_install(complete_file_list):
"""
This looks for all of the files which should show up in an installation of ansible
"""
ignore_patterns = (
# Tests excluded from sdist
'test/lib/ansible_test/_internal/commands/sanity/bin_symlinks.py',
'test/lib/ansible_test/_internal/commands/sanity/integration_aliases.py',
)
pkg_data_files = []
for path in complete_file_list:
if path.startswith("lib/ansible"):
prefix = 'lib'
elif path.startswith("test/lib/ansible_test"):
prefix = 'test/lib'
else:
continue
for ignore in ignore_patterns:
if fnmatch.fnmatch(path, ignore):
break
else:
pkg_data_files.append(os.path.relpath(path, prefix))
return pkg_data_files
@contextlib.contextmanager
def clean_repository(file_list):
"""Copy the repository to clean it of artifacts"""
# Create a tempdir that will be the clean repo
with tempfile.TemporaryDirectory() as repo_root:
directories = set((repo_root + os.path.sep,))
for filename in file_list:
# Determine if we need to create the directory
directory = os.path.dirname(filename)
dest_dir = os.path.join(repo_root, directory)
if dest_dir not in directories:
os.makedirs(dest_dir)
# Keep track of all the directories that now exist
path_components = directory.split(os.path.sep)
path = repo_root
for component in path_components:
path = os.path.join(path, component)
if path not in directories:
directories.add(path)
# Copy the file
shutil.copy2(filename, dest_dir, follow_symlinks=False)
yield repo_root
def create_sdist(tmp_dir):
"""Create an sdist in the repository"""
# Make sure a changelog exists for this version when testing from devel.
# When testing from a stable branch the changelog will already exist.
version = packaging.version.Version(__version__)
pathlib.Path(f'changelogs/CHANGELOG-v{version.major}.{version.minor}.rst').touch()
create = subprocess.run(
[sys.executable, '-m', 'build', '--sdist', '--no-isolation', '--config-setting=--build-manpages', '--outdir', tmp_dir],
stdin=subprocess.DEVNULL,
capture_output=True,
text=True,
check=False,
)
stderr = create.stderr
if create.returncode != 0:
raise Exception('make snapshot failed:\n%s' % stderr)
# Determine path to sdist
tmp_dir_files = os.listdir(tmp_dir)
if not tmp_dir_files:
raise Exception('sdist was not created in the temp dir')
elif len(tmp_dir_files) > 1:
raise Exception('Unexpected extra files in the temp dir')
return os.path.join(tmp_dir, tmp_dir_files[0])
def extract_sdist(sdist_path, tmp_dir):
"""Untar the sdist"""
# Untar the sdist from the tmp_dir
with tarfile.open(os.path.join(tmp_dir, sdist_path), 'r|*') as sdist:
sdist.extractall(path=tmp_dir)
# Determine the sdist directory name
sdist_filename = os.path.basename(sdist_path)
tmp_dir_files = os.listdir(tmp_dir)
try:
tmp_dir_files.remove(sdist_filename)
except ValueError:
# Unexpected: could not find the original sdist in the temp dir
raise
if len(tmp_dir_files) > 1:
raise Exception('Unexpected extra files in the temp dir')
elif len(tmp_dir_files) < 1:
raise Exception('sdist extraction did not occur in the temp dir')
return os.path.join(tmp_dir, tmp_dir_files[0])
def install_sdist(tmp_dir, sdist_dir):
"""Install the extracted sdist into the temporary directory"""
install = subprocess.run(
['python', 'setup.py', 'install', '--root=%s' % tmp_dir],
stdin=subprocess.DEVNULL,
capture_output=True,
text=True,
cwd=os.path.join(tmp_dir, sdist_dir),
check=False,
)
stdout, stderr = install.stdout, install.stderr
if install.returncode != 0:
raise Exception('sdist install failed:\n%s' % stderr)
# Determine the prefix for the installed files
match = re.search('^copying .* -> (%s/.*?/(?:site|dist)-packages)/ansible$' %
tmp_dir, stdout, flags=re.M)
return match.group(1)
def check_sdist_contains_expected(sdist_dir, to_ship_files):
"""Check that the files we expect to ship are present in the sdist"""
results = []
for filename in to_ship_files:
path = os.path.join(sdist_dir, filename)
if not os.path.exists(path):
results.append('%s: File was not added to sdist' % filename)
# Also changelog
changelog_files = glob.glob(os.path.join(sdist_dir, 'changelogs/CHANGELOG-v2.[0-9]*.rst'))
if not changelog_files:
results.append('changelogs/CHANGELOG-v2.*.rst: Changelog file was not added to the sdist')
elif len(changelog_files) > 1:
results.append('changelogs/CHANGELOG-v2.*.rst: Too many changelog files: %s'
% changelog_files)
return results
def check_sdist_files_are_wanted(sdist_dir, to_ship_files):
"""Check that all files in the sdist are desired"""
results = []
for dirname, dummy, files in os.walk(sdist_dir):
dirname = os.path.relpath(dirname, start=sdist_dir)
if dirname == '.':
dirname = ''
for filename in files:
if filename == 'setup.cfg':
continue
path = os.path.join(dirname, filename)
if path not in to_ship_files:
if fnmatch.fnmatch(path, 'changelogs/CHANGELOG-v2.[0-9]*.rst'):
# changelog files are expected
continue
if fnmatch.fnmatch(path, 'lib/ansible_core.egg-info/*'):
continue
# FIXME: ansible-test doesn't pass the paths of symlinks to us so we aren't
# checking those
if not os.path.islink(os.path.join(sdist_dir, path)):
results.append('%s: File in sdist was not in the repository' % path)
return results
def check_installed_contains_expected(install_dir, to_install_files):
"""Check that all the files we expect to be installed are"""
results = []
for filename in to_install_files:
path = os.path.join(install_dir, filename)
if not os.path.exists(path):
results.append('%s: File not installed' % os.path.join('lib', filename))
return results
EGG_RE = re.compile('ansible[^/]+\\.egg-info/(PKG-INFO|SOURCES.txt|'
'dependency_links.txt|not-zip-safe|requires.txt|top_level.txt|entry_points.txt)$')
def check_installed_files_are_wanted(install_dir, to_install_files):
"""Check that all installed files were desired"""
results = []
for dirname, dummy, files in os.walk(install_dir):
dirname = os.path.relpath(dirname, start=install_dir)
if dirname == '.':
dirname = ''
for filename in files:
# If this is a byte code cache, look for the python file's name
directory = dirname
if filename.endswith('.pyc') or filename.endswith('.pyo'):
# Remove the trailing "o" or "c"
filename = filename[:-1]
if directory.endswith('%s__pycache__' % os.path.sep):
# Python3 byte code cache, look for the basename of
# __pycache__/__init__.cpython-36.py
segments = filename.rsplit('.', 2)
if len(segments) >= 3:
filename = '.'.join((segments[0], segments[2]))
directory = os.path.dirname(directory)
path = os.path.join(directory, filename)
# Test that the file was listed for installation
if path not in to_install_files:
# FIXME: ansible-test doesn't pass the paths of symlinks to us so we
# aren't checking those
if not os.path.islink(os.path.join(install_dir, path)):
if not EGG_RE.match(path):
results.append('%s: File was installed but was not supposed to be' % path)
return results
def _find_symlinks():
symlink_list = []
for dirname, directories, filenames in os.walk('.'):
for filename in filenames:
path = os.path.join(dirname, filename)
# Strip off "./" from the front
path = path[2:]
if os.path.islink(path):
symlink_list.append(path)
return symlink_list
def main():
"""All of the files in the repository"""
complete_file_list = []
for path in sys.argv[1:] or sys.stdin.read().splitlines():
complete_file_list.append(path)
# ansible-test isn't currently passing symlinks to us so construct those ourselves for now
for filename in _find_symlinks():
if filename not in complete_file_list:
# For some reason ansible-test is passing us lib/ansible/module_utils/ansible_release.py
# which is a symlink even though it doesn't pass any others
complete_file_list.append(filename)
# We may run this after docs sanity tests so get a clean repository to run in
with clean_repository(complete_file_list) as clean_repo_dir:
os.chdir(clean_repo_dir)
to_ship_files = assemble_files_to_ship(complete_file_list)
to_install_files = assemble_files_to_install(complete_file_list)
results = []
with tempfile.TemporaryDirectory() as tmp_dir:
sdist_path = create_sdist(tmp_dir)
sdist_dir = extract_sdist(sdist_path, tmp_dir)
# Check that the files that are supposed to be in the sdist are there
results.extend(check_sdist_contains_expected(sdist_dir, to_ship_files))
# Check that the files that are in the sdist are in the repository
results.extend(check_sdist_files_are_wanted(sdist_dir, to_ship_files))
# install the sdist
install_dir = install_sdist(tmp_dir, sdist_dir)
# Check that the files that are supposed to be installed are there
results.extend(check_installed_contains_expected(install_dir, to_install_files))
# Check that the files that are installed are supposed to be installed
results.extend(check_installed_files_are_wanted(install_dir, to_install_files))
for message in results:
print(message)
if __name__ == '__main__':
main()
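The filtering in `assemble_files_to_ship` and `assemble_files_to_install` above relies on Python's `for ... else` construct, which is easy to misread; a self-contained sketch of the pattern, with hypothetical patterns and paths:
```python
import fnmatch

ignore_patterns = ('.github/*', 'changelogs/fragments/*')  # hypothetical
paths = ['lib/ansible/cli/adhoc.py', '.github/BOTMETA.yml']

kept = []
for path in paths:
    for ignore in ignore_patterns:
        if fnmatch.fnmatch(path, ignore):
            break           # matched an ignore pattern: skip this path
    else:
        kept.append(path)   # the 'else' runs only when no 'break' occurred

print(kept)  # ['lib/ansible/cli/adhoc.py']
```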
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,121 |
Bump setuptools constraint to 45.2.0
|
### Summary
Our setuptools constraint in `pyproject.toml` (also listed in `setup.cfg`) is set to `>= 39.2.0` due to the minimum version available on Ubuntu 18.04 at the time.
Now that we require Python 3.9 as a minimum, that constraint no longer makes sense: Python 3.9 is not available on 18.04, and the `dist-packages` `setuptools==39.2.0` is non-functional with the deadsnakes python3.9 package.
This would make Ubuntu 20.04 effectively the minimum supported controller. The `dist-packages` `setuptools==45.2.0` is functional with the `python3.9` package and works fine with our packaging.
This doesn't really grant us anything, since we're waiting for the minimum to eventually be `51.0.0` so we can list the `entry_points` in `pyproject.toml`, but there is no need to keep an effectively non-functional lower bound.
### Issue Type
Feature Idea
### Component Name
pyproject.toml
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80121
|
https://github.com/ansible/ansible/pull/80649
|
ece8da71ea48d37eb2ffaf4e1574b9641862344a
|
4d25e3d54f7de316c4f1d1575d2cf1ffa46b632c
| 2023-03-01T18:12:26Z |
python
| 2023-04-26T21:01:56Z |
test/sanity/code-smell/package-data.requirements.in
|
build
docutils < 0.18 # match version required by sphinx in the docs-build sanity test
jinja2
pyyaml # ansible-core requirement
resolvelib < 1.1.0
rstcheck < 6 # match version used in other sanity tests
antsibull-changelog
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,121 |
Bump setuptools constraint to 45.2.0
|
### Summary
Our setuptools constraint in `pyproject.toml` (also listed in `setup.cfg`) is set to `>= 39.2.0` due to the minimum version available on Ubuntu 18.04 at the time.
Now that we require Python 3.9 as a minimum, that constraint no longer makes sense: Python 3.9 is not available on 18.04, and the `dist-packages` `setuptools==39.2.0` is non-functional with the deadsnakes python3.9 package.
This would make Ubuntu 20.04 effectively the minimum supported controller. The `dist-packages` `setuptools==45.2.0` is functional with the `python3.9` package and works fine with our packaging.
This doesn't really grant us anything, since we're waiting for the minimum to eventually be `51.0.0` so we can list the `entry_points` in `pyproject.toml`, but there is no need to keep an effectively non-functional lower bound.
### Issue Type
Feature Idea
### Component Name
pyproject.toml
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80121
|
https://github.com/ansible/ansible/pull/80649
|
ece8da71ea48d37eb2ffaf4e1574b9641862344a
|
4d25e3d54f7de316c4f1d1575d2cf1ffa46b632c
| 2023-03-01T18:12:26Z |
python
| 2023-04-26T21:01:56Z |
test/sanity/code-smell/package-data.requirements.txt
|
# edit "package-data.requirements.in" and generate with: hacking/update-sanity-requirements.py --test package-data
antsibull-changelog==0.19.0
build==0.10.0
docutils==0.17.1
Jinja2==3.1.2
MarkupSafe==2.1.2
packaging==23.0
pyproject_hooks==1.0.0
PyYAML==6.0
resolvelib==1.0.1
rstcheck==5.0.0
semantic-version==2.10.0
tomli==2.0.1
types-docutils==0.18.3
typing_extensions==4.5.0
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,140 |
Terminal plugin documented command doesn't work
|
### Summary
The terminal plugin documentation [here](https://docs.ansible.com/ansible/latest/plugins/terminal.html#viewing-terminal-plugins) suggests running the command:
`ansible-doc -t terminal -l`
and it returns the following error:
`ansible-doc: error: argument -t/--type: invalid choice: 'terminal' (choose from 'become', 'cache', 'callback', 'cliconf', 'connection', 'httpapi', 'inventory', 'lookup', 'netconf', 'shell', 'vars', 'module', 'strategy', 'test', 'filter', 'role', 'keyword')`
instead of listing the terminal plugins.
I am running Ansible 2.14.1
### Issue Type
Documentation Report
### Component Name
ansible/docs/docsite/rst/plugins/terminal.rst
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.1]
config file = /home/ubuntu/git/aws-efs/ansible/ansible.cfg
configured module search path = ['/home/ubuntu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/ubuntu/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/ubuntu/.ansible/collections:/usr/share/ansible/collections
executable location = /home/ubuntu/.local/bin/ansible
python version = 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE SSH:~/ansible$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /home/ubuntu/git/aws-efs/ansible/ansible.cfg
DEFAULT_HOST_LIST(/home/ubuntu/git/aws-efs/ansible/ansible.cfg) = ['/home/ubuntu/git/aws-efs/ansible/hosts.yml']
DEFAULT_LOG_PATH(/home/ubuntu/git/aws-efs/ansible/ansible.cfg) = /home/ubuntu/git/aws-efs/ansible/logs/ansible.log
DEFAULT_ROLES_PATH(/home/ubuntu/git/aws-efs/ansible/ansible.cfg) = ['/home/ubuntu/git/aws-efs/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/home/ubuntu/git/aws-efs/ansible/ansible.cfg) = debug
DISPLAY_ARGS_TO_STDOUT(/home/ubuntu/git/aws-efs/ansible/ansible.cfg) = True
VARS:
====
host_group_vars:
_______________
stage(/home/ubuntu/git/aws-efs/ansible/ansible.cfg) = all
```
### OS / Environment
Ubuntu 22.04
### Additional Information
Looking to see which terminal plugins are available
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80140
|
https://github.com/ansible/ansible/pull/80655
|
d18d4f84ecb28547220642b39bde08fd47615f0d
|
058b722a54ed89e613287e1ac07d616863d7a14b
| 2023-03-05T16:32:05Z |
python
| 2023-04-27T19:14:23Z |
docs/docsite/rst/plugins/terminal.rst
|
.. _terminal_plugins:
Terminal plugins
================
.. contents::
:local:
:depth: 2
Terminal plugins contain information on how to ensure that a particular network device's SSH shell is properly initialized for use with Ansible. This typically includes disabling automatic paging, detecting errors in output, and enabling privileged mode if supported and required on the device.
These plugins correspond one-to-one to network device platforms. Ansible loads the appropriate terminal plugin automatically based on the ``ansible_network_os`` variable.
.. _enabling_terminal:
Adding terminal plugins
-------------------------
You can extend Ansible to support other network devices by dropping a custom plugin into the ``terminal_plugins`` directory.
.. _using_terminal:
Using terminal plugins
------------------------
Ansible determines which terminal plugin to use automatically from the ``ansible_network_os`` variable. There should be no reason to override this functionality.
Terminal plugins operate without configuration. All options to control the terminal are exposed in the ``network_cli`` connection plugin.
Plugins are self-documenting. Each plugin should document its configuration options.
.. _terminal_plugin_list:
Viewing terminal plugins
------------------------
These plugins have migrated to collections on `Ansible Galaxy <https://galaxy.ansible.com>`_. If you installed Ansible version 2.10 or later using ``pip``, you have access to several terminal plugins. To list all available terminal plugins on your control node, type ``ansible-doc -t terminal -l``. To view plugin-specific documentation and examples, use ``ansible-doc -t terminal``.
.. seealso::
:ref:`Ansible for Network Automation<network_guide>`
An overview of using Ansible to automate networking devices.
:ref:`connection_plugins`
Connection plugins
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
`irc.libera.chat <https://libera.chat/>`_
#ansible-network IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,648 |
galaxy collection caching mechanism fails to find available signed collection
|
### Summary
The galaxy CLI is able to find a collection on a remote api/v3 endpoint, but when it goes to install the collection version, it fails to find it in the cache ...
```
(Epdb) print(pid.stdout.decode("utf-8"))
ansible-galaxy [core 2.13.9]
config file = /tmp/pytest-of-pulp/pytest-20/test_install_signed_collection0/ansible.cfg
configured module search path = ['/var/lib/pulp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/pulp/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-galaxy
python version = 3.8.13 (default, Jun 24 2022, 15:27:57) [GCC 8.5.0 20210514 (Red Hat 8.5.0-13)]
jinja version = 3.1.2
libyaml = True
Using /tmp/pytest-of-pulp/pytest-20/test_install_signed_collection0/ansible.cfg as config file
Starting galaxy collection install process
[WARNING]: The specified collections path '/tmp/pytest-of-
pulp/pytest-20/test_install_signed_collection0' is not part of the configured
Ansible collections paths
'/var/lib/pulp/.ansible/collections:/usr/share/ansible/collections'. The
installed collection won't be picked up in an Ansible run.
Process install dependency map
Initial connection to galaxy_server: http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api
Found API version 'v1, v2, v3' with Galaxy server pulp_ansible (http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api)
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api/v3/collections/testing/k8s_demo_collection/
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api/v3/collections/testing/k8s_demo_collection/versions/?limit=100
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api/v3/collections/testing/k8s_demo_collection/versions/0.0.3/
Starting collection install process
ERROR! Unexpected Exception, this is probably a bug: The is no known source for testing.k8s_demo_collection:0.0.3
----------------DEBUG-------------------
collection: testing.k8s_demo_collection:0.0.3 <class 'ansible.galaxy.dependency_resolution.dataclasses.Candidate'>
cache: {<testing.k8s_demo_collection:0.0.3 of type 'galaxy' from pulp_ansible>: ('http://localhost:5001/pulp_ansible/galaxy/default/api/v3/plugin/ansible/content/4fae352f-2a64-4c8e-8b30-5fd6e1996408/collections/artifacts/testing-k8s_demo_collection-0.0.3.tar.gz', '360548ba80e3dce478b7915ca89a5613dc80c650a55b96f5491012a8297e12ac', <ansible.galaxy.token.BasicAuthToken object at 0x7fdf1408cac0>)}
testing.k8s_demo_collection:0.0.3 <class 'ansible.galaxy.dependency_resolution.dataclasses.Candidate'>
testing.k8s_demo_collection:0.0.3 == testing.k8s_demo_collection:0.0.3? False
----------------DEBUG-------------------
the full traceback was:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 141, in get_galaxy_artifact_path
url, sha256_hash, token = self._galaxy_collection_cache[collection]
KeyError: <testing.k8s_demo_collection:0.0.3 of type 'galaxy' from pulp_ansible>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/ansible/cli/__init__.py", line 601, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 682, in run
return context.CLIARGS['func']()
File "/usr/local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 104, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1327, in execute_install
self._execute_install_collection(
File "/usr/local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1364, in _execute_install_collection
install_collections(
File "/usr/local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 748, in install_collections
install(concrete_coll_pin, output_path, artifacts_manager)
File "/usr/local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 1295, in install
b_artifact_path = (
File "/usr/local/lib/python3.8/site-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 149, in get_galaxy_artifact_path
raise_from(
File "<string>", line 3, in raise_from
RuntimeError: The is no known source for testing.k8s_demo_collection:0.0.3
```
The error comes from this block of code: **https://github.com/ansible/ansible/blob/devel/lib/ansible/galaxy/collection/concrete_artifact_manager.py#L139-L145**
I inspected the key in the dict and the collection variable and they are very similar but one has signing keys and the other does not.
```
1 {
2 'fqcn': 'testing.k8s_demo_collection',
3 'ver': '0.0.3',
4 'src': <pulp_ansible "pulp_ansible" @ http://localhost:5001/pulp_ansible/galaxy/6ec55cca-6342-4cdb-82bf-aa6c2f800843/api with priority 1>,
5 'type': 'galaxy',
6 'signatures': frozenset()
7 }
8
9 {
10 'fqcn': 'testing.k8s_demo_collection',
11 'ver': '0.0.3',
12 'src': <pulp_ansible "pulp_ansible" @ http://localhost:5001/pulp_ansible/galaxy/6ec55cca-6342-4cdb-82bf-aa6c2f800843/api with priority 1>,
13 'type': 'galaxy',
14 'signatures': frozenset({'-----BEGIN PGP SIGNATURE-----\n\niQEzBAABCAAdFiEEbt8wElZIC5uAHro9BaXm2iadnZgFAmRJd/AACgkQBaXm2iad\nnZhSdAf/QIm5AuYbgZ8Jxa/TcavRxoetQtgsspBBiDqvBP67BExN7xoBe/DUtjIA\nn2xbJgxzcwUI+WOYWE+iNjzjYpOBfN8jFlGMdAc21dfN+5NUvH+R0+YmwNf7Ihob\nd0qU3JozJZo+GCd2rMwprnzMp+3LvU9HD+r+hO9ELlMLQeYWVVn/ GBNrjZJ6yGlj\nBCGxvagEMhkp4Gso/ft5Q6VqFSWUrIERb9QZWKTnM7iryNO3ojcjBEvFdk+RuOho\nNN0rjN4Xu+DkbI3nUt49l+XC7yBubu9BBx30KcL1srrVI0nY6Px6LbLMnezg6C5+\nC7qsYP0E+41TQTbb7nxIELXvr/mP5g==\n=3jS8\n-----END PGP SIGNATURE-----\n'})
15 }
```
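The symptom above (`==? False` for two candidates that print identically) is consistent with the `signatures` field participating in the candidate's equality and hash. Below is a minimal sketch of that failure mode and one possible remedy (`compare=False`); this is illustrative only and not necessarily the actual fix from PR 80661:
```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Candidate:
    fqcn: str
    ver: str
    # Including signatures in __eq__/__hash__ makes two otherwise
    # identical candidates unequal, so dict cache lookups miss.
    signatures: frozenset = field(default_factory=frozenset)

a = Candidate('testing.k8s_demo_collection', '0.0.3')
b = Candidate('testing.k8s_demo_collection', '0.0.3', frozenset({'PGP...'}))
cache = {b: ('url', 'sha256', 'token')}
print(a == b, a in cache)  # False False -> KeyError in the real code

@dataclass(frozen=True)
class FixedCandidate:
    fqcn: str
    ver: str
    # Excluding signatures from comparison restores cache hits.
    signatures: frozenset = field(default_factory=frozenset, compare=False)

a2 = FixedCandidate('testing.k8s_demo_collection', '0.0.3')
b2 = FixedCandidate('testing.k8s_demo_collection', '0.0.3', frozenset({'PGP...'}))
print(a2 == b2, a2 in {b2: 'hit'})  # True True
```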
### Issue Type
Bug Report
### Component Name
galaxy
### Ansible Version
```console
ansible-galaxy [core 2.13.9]
config file = /tmp/pytest-of-pulp/pytest-20/test_install_signed_collection0/ansible.cfg
configured module search path = ['/var/lib/pulp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/pulp/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-galaxy
python version = 3.8.13 (default, Jun 24 2022, 15:27:57) [GCC 8.5.0 20210514 (Red Hat 8.5.0-13)]
jinja version = 3.1.2
libyaml = True
Using /tmp/pytest-of-pulp/pytest-20/test_install_signed_collection0/ansible.cfg as config file
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Centos 8 stream docker image from pulp ci builds
### Steps to Reproduce
This is a bit nebulous because it's a failing job in the pulp_ansible CI. We do know that rolling back to 2.13.8 fixes it.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80648
|
https://github.com/ansible/ansible/pull/80661
|
71f6e10dae7c862f1e7f02063d4def18f0d44e44
|
d5e2e7a0a8ca9017a091922648430374539f878b
| 2023-04-26T20:29:54Z |
python
| 2023-04-27T20:11:17Z |
changelogs/fragments/80648-fix-ansible-galaxy-cache-signatures-bug.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,648 |
galaxy collection caching mechanism fails to find available signed collection
|
### Summary
The galaxy CLI is able to find a collection on a remote api/v3 endpoint, but when it goes to install the collection version, it fails to find it in the cache ...
```
(Epdb) print(pid.stdout.decode("utf-8"))
ansible-galaxy [core 2.13.9]
config file = /tmp/pytest-of-pulp/pytest-20/test_install_signed_collection0/ansible.cfg
configured module search path = ['/var/lib/pulp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/pulp/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-galaxy
python version = 3.8.13 (default, Jun 24 2022, 15:27:57) [GCC 8.5.0 20210514 (Red Hat 8.5.0-13)]
jinja version = 3.1.2
libyaml = True
Using /tmp/pytest-of-pulp/pytest-20/test_install_signed_collection0/ansible.cfg as config file
Starting galaxy collection install process
[WARNING]: The specified collections path '/tmp/pytest-of-
pulp/pytest-20/test_install_signed_collection0' is not part of the configured
Ansible collections paths
'/var/lib/pulp/.ansible/collections:/usr/share/ansible/collections'. The
installed collection won't be picked up in an Ansible run.
Process install dependency map
Initial connection to galaxy_server: http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api
Found API version 'v1, v2, v3' with Galaxy server pulp_ansible (http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api)
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api/v3/collections/testing/k8s_demo_collection/
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api/v3/collections/testing/k8s_demo_collection/versions/?limit=100
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api/v3/collections/testing/k8s_demo_collection/versions/0.0.3/
Starting collection install process
ERROR! Unexpected Exception, this is probably a bug: The is no known source for testing.k8s_demo_collection:0.0.3
----------------DEBUG-------------------
collection: testing.k8s_demo_collection:0.0.3 <class 'ansible.galaxy.dependency_resolution.dataclasses.Candidate'>
cache: {<testing.k8s_demo_collection:0.0.3 of type 'galaxy' from pulp_ansible>: ('http://localhost:5001/pulp_ansible/galaxy/default/api/v3/plugin/ansible/content/4fae352f-2a64-4c8e-8b30-5fd6e1996408/collections/artifacts/testing-k8s_demo_collection-0.0.3.tar.gz', '360548ba80e3dce478b7915ca89a5613dc80c650a55b96f5491012a8297e12ac', <ansible.galaxy.token.BasicAuthToken object at 0x7fdf1408cac0>)}
testing.k8s_demo_collection:0.0.3 <class 'ansible.galaxy.dependency_resolution.dataclasses.Candidate'>
testing.k8s_demo_collection:0.0.3 == testing.k8s_demo_collection:0.0.3? False
----------------DEBUG-------------------
the full traceback was:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 141, in get_galaxy_artifact_path
url, sha256_hash, token = self._galaxy_collection_cache[collection]
KeyError: <testing.k8s_demo_collection:0.0.3 of type 'galaxy' from pulp_ansible>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/ansible/cli/__init__.py", line 601, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 682, in run
return context.CLIARGS['func']()
File "/usr/local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 104, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1327, in execute_install
self._execute_install_collection(
File "/usr/local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1364, in _execute_install_collection
install_collections(
File "/usr/local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 748, in install_collections
install(concrete_coll_pin, output_path, artifacts_manager)
File "/usr/local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 1295, in install
b_artifact_path = (
File "/usr/local/lib/python3.8/site-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 149, in get_galaxy_artifact_path
raise_from(
File "<string>", line 3, in raise_from
RuntimeError: The is no known source for testing.k8s_demo_collection:0.0.3
```
The error comes from this block of code: **https://github.com/ansible/ansible/blob/devel/lib/ansible/galaxy/collection/concrete_artifact_manager.py#L139-L145**
I inspected the key in the dict and the collection variable and they are very similar but one has signing keys and the other does not.
```
1 {
2 'fqcn': 'testing.k8s_demo_collection',
3 'ver': '0.0.3',
4 'src': <pulp_ansible "pulp_ansible" @ http://localhost:5001/pulp_ansible/galaxy/6ec55cca-6342-4cdb-82bf-aa6c2f800843/api with priority 1>,
5 'type': 'galaxy',
6 'signatures': frozenset()
7 }
8
9 {
10 'fqcn': 'testing.k8s_demo_collection',
11 'ver': '0.0.3',
12 'src': <pulp_ansible "pulp_ansible" @ http://localhost:5001/pulp_ansible/galaxy/6ec55cca-6342-4cdb-82bf-aa6c2f800843/api with priority 1>,
13 'type': 'galaxy',
14 'signatures': frozenset({'-----BEGIN PGP SIGNATURE-----\n\niQEzBAABCAAdFiEEbt8wElZIC5uAHro9BaXm2iadnZgFAmRJd/AACgkQBaXm2iad\nnZhSdAf/QIm5AuYbgZ8Jxa/TcavRxoetQtgsspBBiDqvBP67BExN7xoBe/DUtjIA\nn2xbJgxzcwUI+WOYWE+iNjzjYpOBfN8jFlGMdAc21dfN+5NUvH+R0+YmwNf7Ihob\nd0qU3JozJZo+GCd2rMwprnzMp+3LvU9HD+r+hO9ELlMLQeYWVVn/ GBNrjZJ6yGlj\nBCGxvagEMhkp4Gso/ft5Q6VqFSWUrIERb9QZWKTnM7iryNO3ojcjBEvFdk+RuOho\nNN0rjN4Xu+DkbI3nUt49l+XC7yBubu9BBx30KcL1srrVI0nY6Px6LbLMnezg6C5+\nC7qsYP0E+41TQTbb7nxIELXvr/mP5g==\n=3jS8\n-----END PGP SIGNATURE-----\n'})
15 }
```
### Issue Type
Bug Report
### Component Name
galaxy
### Ansible Version
```console
ansible-galaxy [core 2.13.9]
config file = /tmp/pytest-of-pulp/pytest-20/test_install_signed_collection0/ansible.cfg
configured module search path = ['/var/lib/pulp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/pulp/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-galaxy
python version = 3.8.13 (default, Jun 24 2022, 15:27:57) [GCC 8.5.0 20210514 (Red Hat 8.5.0-13)]
jinja version = 3.1.2
libyaml = True
Using /tmp/pytest-of-pulp/pytest-20/test_install_signed_collection0/ansible.cfg as config file
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Centos 8 stream docker image from pulp ci builds
### Steps to Reproduce
This is a bit nebulous because it's a failing job in the pulp_ansible CI. We do know that rolling back to 2.13.8 fixes it.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80648
|
https://github.com/ansible/ansible/pull/80661
|
71f6e10dae7c862f1e7f02063d4def18f0d44e44
|
d5e2e7a0a8ca9017a091922648430374539f878b
| 2023-04-26T20:29:54Z |
python
| 2023-04-27T20:11:17Z |
lib/ansible/galaxy/collection/__init__.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Installed collections management package."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import errno
import fnmatch
import functools
import json
import os
import pathlib
import queue
import re
import shutil
import stat
import sys
import tarfile
import tempfile
import textwrap
import threading
import time
import typing as t
from collections import namedtuple
from contextlib import contextmanager
from dataclasses import dataclass, fields as dc_fields
from hashlib import sha256
from io import BytesIO
from importlib.metadata import distribution
from itertools import chain
try:
from packaging.requirements import Requirement as PkgReq
except ImportError:
class PkgReq: # type: ignore[no-redef]
pass
HAS_PACKAGING = False
else:
HAS_PACKAGING = True
try:
from distlib.manifest import Manifest # type: ignore[import]
from distlib import DistlibException # type: ignore[import]
except ImportError:
HAS_DISTLIB = False
else:
HAS_DISTLIB = True
if t.TYPE_CHECKING:
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
ManifestKeysType = t.Literal[
'collection_info', 'file_manifest_file', 'format',
]
FileMetaKeysType = t.Literal[
'name',
'ftype',
'chksum_type',
'chksum_sha256',
'format',
]
CollectionInfoKeysType = t.Literal[
# collection meta:
'namespace', 'name', 'version',
'authors', 'readme',
'tags', 'description',
'license', 'license_file',
'dependencies',
'repository', 'documentation',
'homepage', 'issues',
# files meta:
FileMetaKeysType,
]
ManifestValueType = t.Dict[CollectionInfoKeysType, t.Union[int, str, t.List[str], t.Dict[str, str], None]]
CollectionManifestType = t.Dict[ManifestKeysType, ManifestValueType]
FileManifestEntryType = t.Dict[FileMetaKeysType, t.Union[str, int, None]]
FilesManifestType = t.Dict[t.Literal['files', 'format'], t.Union[t.List[FileManifestEntryType], int]]
import ansible.constants as C
from ansible.compat.importlib_resources import files
from ansible.errors import AnsibleError
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection.concrete_artifact_manager import (
_consume_file,
_download_file,
_get_json_from_installed_dir,
_get_meta_from_src_dir,
_tarfile_extract,
)
from ansible.galaxy.collection.galaxy_api_proxy import MultiGalaxyAPIProxy
from ansible.galaxy.collection.gpg import (
run_gpg_verify,
parse_gpg_errors,
get_signature_from_source,
GPG_ERROR_MAP,
)
try:
from ansible.galaxy.dependency_resolution import (
build_collection_dependency_resolver,
)
from ansible.galaxy.dependency_resolution.errors import (
CollectionDependencyResolutionImpossible,
CollectionDependencyInconsistentCandidate,
)
from ansible.galaxy.dependency_resolution.providers import (
RESOLVELIB_VERSION,
RESOLVELIB_LOWERBOUND,
RESOLVELIB_UPPERBOUND,
)
except ImportError:
HAS_RESOLVELIB = False
else:
HAS_RESOLVELIB = True
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate, Requirement, _is_installed_collection_dir,
)
from ansible.galaxy.dependency_resolution.versioning import meets_requirements
from ansible.plugins.loader import get_all_plugin_loaders
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.common.yaml import yaml_dump
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash, secure_hash_s
from ansible.utils.sentinel import Sentinel
display = Display()
MANIFEST_FORMAT = 1
MANIFEST_FILENAME = 'MANIFEST.json'
ModifiedContent = namedtuple('ModifiedContent', ['filename', 'expected', 'installed'])
SIGNATURE_COUNT_RE = r"^(?P<strict>\+)?(?:(?P<count>\d+)|(?P<all>all))$"
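# Illustrative examples (not part of the original file) of inputs that
# SIGNATURE_COUNT_RE accepts for the required signature count option:
#   re.match(SIGNATURE_COUNT_RE, '2')    -> groups: strict=None, count='2'
#   re.match(SIGNATURE_COUNT_RE, '+2')   -> groups: strict='+',  count='2'
#   re.match(SIGNATURE_COUNT_RE, 'all')  -> groups: all='all'
#   re.match(SIGNATURE_COUNT_RE, '+all') -> the strict form of 'all'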
@dataclass
class ManifestControl:
directives: list[str] = None
omit_default_directives: bool = False
def __post_init__(self):
# Allow a dict representing this dataclass to be splatted directly.
# Requires attrs to have a default value, so anything with a default
# of None is swapped for its, potentially mutable, default
for field in dc_fields(self):
if getattr(self, field.name) is None:
super().__setattr__(field.name, field.type())
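# Example usage (illustrative, not part of the original file): a dict
# representing this dataclass can be splatted directly, and a None
# default is replaced by the field type's empty value:
#   ManifestControl(**{'directives': ['include meta/*.yml']})
#   ManifestControl().directives == []
#   ManifestControl().omit_default_directives is False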
class CollectionSignatureError(Exception):
def __init__(self, reasons=None, stdout=None, rc=None, ignore=False):
self.reasons = reasons
self.stdout = stdout
self.rc = rc
self.ignore = ignore
self._reason_wrapper = None
def _report_unexpected(self, collection_name):
return (
f"Unexpected error for '{collection_name}': "
f"GnuPG signature verification failed with the return code {self.rc} and output {self.stdout}"
)
def _report_expected(self, collection_name):
header = f"Signature verification failed for '{collection_name}' (return code {self.rc}):"
return header + self._format_reasons()
def _format_reasons(self):
if self._reason_wrapper is None:
self._reason_wrapper = textwrap.TextWrapper(
initial_indent=" * ", # 6 chars
subsequent_indent=" ", # 6 chars
)
wrapped_reasons = [
'\n'.join(self._reason_wrapper.wrap(reason))
for reason in self.reasons
]
return '\n' + '\n'.join(wrapped_reasons)
def report(self, collection_name):
if self.reasons:
return self._report_expected(collection_name)
return self._report_unexpected(collection_name)
# FUTURE: expose actual verify result details for a collection on this object, maybe reimplement as dataclass on py3.8+
class CollectionVerifyResult:
def __init__(self, collection_name): # type: (str) -> None
self.collection_name = collection_name # type: str
self.success = True # type: bool
def verify_local_collection(local_collection, remote_collection, artifacts_manager):
# type: (Candidate, t.Optional[Candidate], ConcreteArtifactsManager) -> CollectionVerifyResult
"""Verify integrity of the locally installed collection.
:param local_collection: Collection being checked.
:param remote_collection: Upstream collection (optional, if None, only verify local artifact)
:param artifacts_manager: Artifacts manager.
:return: a collection verify result object.
"""
result = CollectionVerifyResult(local_collection.fqcn)
b_collection_path = to_bytes(local_collection.src, errors='surrogate_or_strict')
display.display("Verifying '{coll!s}'.".format(coll=local_collection))
display.display(
u"Installed collection found at '{path!s}'".
format(path=to_text(local_collection.src)),
)
modified_content = [] # type: list[ModifiedContent]
verify_local_only = remote_collection is None
# partial away the local FS detail so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_installed_dir, b_collection_path)
get_hash_from_validation_source = functools.partial(_get_file_hash, b_collection_path)
if not verify_local_only:
# Compare installed version versus requirement version
if local_collection.ver != remote_collection.ver:
err = (
"{local_fqcn!s} has the version '{local_ver!s}' but "
"is being compared to '{remote_ver!s}'".format(
local_fqcn=local_collection.fqcn,
local_ver=local_collection.ver,
remote_ver=remote_collection.ver,
)
)
display.display(err)
result.success = False
return result
manifest_file = os.path.join(to_text(b_collection_path, errors='surrogate_or_strict'), MANIFEST_FILENAME)
signatures = list(local_collection.signatures)
if verify_local_only and local_collection.source_info is not None:
signatures = [info["signature"] for info in local_collection.source_info["signatures"]] + signatures
elif not verify_local_only and remote_collection.signatures:
signatures = list(remote_collection.signatures) + signatures
keyring_configured = artifacts_manager.keyring is not None
if not keyring_configured and signatures:
display.warning(
"The GnuPG keyring used for collection signature "
"verification was not configured but signatures were "
"provided by the Galaxy server. "
"Configure a keyring for ansible-galaxy to verify "
"the origin of the collection. "
"Skipping signature verification."
)
elif keyring_configured:
if not verify_file_signatures(
local_collection.fqcn,
manifest_file,
signatures,
artifacts_manager.keyring,
artifacts_manager.required_successful_signature_count,
artifacts_manager.ignore_signature_errors,
):
result.success = False
return result
display.vvvv(f"GnuPG signature verification succeeded, verifying contents of {local_collection}")
if verify_local_only:
# since we're not downloading this, just seed it with the value from disk
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
elif keyring_configured and remote_collection.signatures:
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
else:
# fetch remote
b_temp_tar_path = ( # NOTE: AnsibleError is raised on URLError
artifacts_manager.get_artifact_path
if remote_collection.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(remote_collection)
display.vvv(
u"Remote collection cached as '{path!s}'".format(path=to_text(b_temp_tar_path))
)
# partial away the tarball details so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_tar_file, b_temp_tar_path)
get_hash_from_validation_source = functools.partial(_get_tar_file_hash, b_temp_tar_path)
# Verify the downloaded manifest hash matches the installed copy before verifying the file manifest
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
_verify_file_hash(b_collection_path, MANIFEST_FILENAME, manifest_hash, modified_content)
display.display('MANIFEST.json hash: {manifest_hash}'.format(manifest_hash=manifest_hash))
manifest = get_json_from_validation_source(MANIFEST_FILENAME)
# Use the manifest to verify the file manifest checksum
file_manifest_data = manifest['file_manifest_file']
file_manifest_filename = file_manifest_data['name']
expected_hash = file_manifest_data['chksum_%s' % file_manifest_data['chksum_type']]
# Verify the file manifest before using it to verify individual files
_verify_file_hash(b_collection_path, file_manifest_filename, expected_hash, modified_content)
file_manifest = get_json_from_validation_source(file_manifest_filename)
collection_dirs = set()
collection_files = {
os.path.join(b_collection_path, b'MANIFEST.json'),
os.path.join(b_collection_path, b'FILES.json'),
}
# Use the file manifest to verify individual file checksums
for manifest_data in file_manifest['files']:
name = manifest_data['name']
if manifest_data['ftype'] == 'file':
collection_files.add(
os.path.join(b_collection_path, to_bytes(name, errors='surrogate_or_strict'))
)
expected_hash = manifest_data['chksum_%s' % manifest_data['chksum_type']]
_verify_file_hash(b_collection_path, name, expected_hash, modified_content)
if manifest_data['ftype'] == 'dir':
collection_dirs.add(
os.path.join(b_collection_path, to_bytes(name, errors='surrogate_or_strict'))
)
# Find any paths not in the FILES.json
for root, dirs, files in os.walk(b_collection_path):
for name in files:
full_path = os.path.join(root, name)
path = to_text(full_path[len(b_collection_path) + 1::], errors='surrogate_or_strict')
if full_path not in collection_files:
modified_content.append(
ModifiedContent(filename=path, expected='the file does not exist', installed='the file exists')
)
for name in dirs:
full_path = os.path.join(root, name)
path = to_text(full_path[len(b_collection_path) + 1::], errors='surrogate_or_strict')
if full_path not in collection_dirs:
modified_content.append(
ModifiedContent(filename=path, expected='the directory does not exist', installed='the directory exists')
)
if modified_content:
result.success = False
display.display(
'Collection {fqcn!s} contains modified content '
'in the following files:'.
format(fqcn=to_text(local_collection.fqcn)),
)
for content_change in modified_content:
display.display(' %s' % content_change.filename)
display.v(" Expected: %s\n Found: %s" % (content_change.expected, content_change.installed))
else:
what = "are internally consistent with its manifest" if verify_local_only else "match the remote collection"
display.display(
"Successfully verified that checksums for '{coll!s}' {what!s}.".
format(coll=local_collection, what=what),
)
return result
def verify_file_signatures(fqcn, manifest_file, detached_signatures, keyring, required_successful_count, ignore_signature_errors):
# type: (str, str, list[str], str, str, list[str]) -> bool
successful = 0
error_messages = []
signature_count_requirements = re.match(SIGNATURE_COUNT_RE, required_successful_count).groupdict()
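    # required_successful_count accepts "all" or a positive integer, optionally
    # prefixed with "+" (the "strict" group below); the "+" additionally fails
    # verification when no valid signatures are found at all.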
strict = signature_count_requirements['strict'] or False
require_all = signature_count_requirements['all']
require_count = signature_count_requirements['count']
if require_count is not None:
require_count = int(require_count)
for signature in detached_signatures:
signature = to_text(signature, errors='surrogate_or_strict')
try:
verify_file_signature(manifest_file, signature, keyring, ignore_signature_errors)
except CollectionSignatureError as error:
if error.ignore:
# Do not include ignored errors in either the failed or successful count
continue
error_messages.append(error.report(fqcn))
else:
successful += 1
if require_all:
continue
if successful == require_count:
break
if strict and not successful:
verified = False
display.display(f"Signature verification failed for '{fqcn}': no successful signatures")
elif require_all:
verified = not error_messages
if not verified:
display.display(f"Signature verification failed for '{fqcn}': some signatures failed")
else:
verified = not detached_signatures or require_count == successful
if not verified:
display.display(f"Signature verification failed for '{fqcn}': fewer successful signatures than required")
if not verified:
for msg in error_messages:
display.vvvv(msg)
return verified
def verify_file_signature(manifest_file, detached_signature, keyring, ignore_signature_errors):
# type: (str, str, str, list[str]) -> None
"""Run the gpg command and parse any errors. Raises CollectionSignatureError on failure."""
gpg_result, gpg_verification_rc = run_gpg_verify(manifest_file, detached_signature, keyring, display)
if gpg_result:
errors = parse_gpg_errors(gpg_result)
try:
error = next(errors)
except StopIteration:
pass
else:
reasons = []
ignored_reasons = 0
for error in chain([error], errors):
# Get error status (dict key) from the class (dict value)
status_code = list(GPG_ERROR_MAP.keys())[list(GPG_ERROR_MAP.values()).index(error.__class__)]
if status_code in ignore_signature_errors:
ignored_reasons += 1
reasons.append(error.get_gpg_error_description())
ignore = len(reasons) == ignored_reasons
raise CollectionSignatureError(reasons=set(reasons), stdout=gpg_result, rc=gpg_verification_rc, ignore=ignore)
if gpg_verification_rc:
raise CollectionSignatureError(stdout=gpg_result, rc=gpg_verification_rc)
# No errors and rc is 0, verify was successful
return None
def build_collection(u_collection_path, u_output_path, force):
# type: (str, str, bool) -> str
"""Creates the Ansible collection artifact in a .tar.gz file.
:param u_collection_path: The path to the collection to build. This should be the directory that contains the
galaxy.yml file.
:param u_output_path: The path to create the collection build artifact. This should be a directory.
:param force: Whether to overwrite an existing collection build artifact or fail.
:return: The path to the collection build artifact.
"""
b_collection_path = to_bytes(u_collection_path, errors='surrogate_or_strict')
try:
collection_meta = _get_meta_from_src_dir(b_collection_path)
except LookupError as lookup_err:
raise AnsibleError(to_native(lookup_err)) from lookup_err
collection_manifest = _build_manifest(**collection_meta)
file_manifest = _build_files_manifest(
b_collection_path,
collection_meta['namespace'], # type: ignore[arg-type]
collection_meta['name'], # type: ignore[arg-type]
collection_meta['build_ignore'], # type: ignore[arg-type]
collection_meta['manifest'], # type: ignore[arg-type]
collection_meta['license_file'], # type: ignore[arg-type]
)
artifact_tarball_file_name = '{ns!s}-{name!s}-{ver!s}.tar.gz'.format(
name=collection_meta['name'],
ns=collection_meta['namespace'],
ver=collection_meta['version'],
)
b_collection_output = os.path.join(
to_bytes(u_output_path),
to_bytes(artifact_tarball_file_name, errors='surrogate_or_strict'),
)
if os.path.exists(b_collection_output):
if os.path.isdir(b_collection_output):
raise AnsibleError("The output collection artifact '%s' already exists, "
"but is a directory - aborting" % to_native(b_collection_output))
elif not force:
raise AnsibleError("The file '%s' already exists. You can use --force to re-create "
"the collection artifact." % to_native(b_collection_output))
collection_output = _build_collection_tar(b_collection_path, b_collection_output, collection_manifest, file_manifest)
return collection_output
def download_collections(
collections, # type: t.Iterable[Requirement]
output_path, # type: str
apis, # type: t.Iterable[GalaxyAPI]
no_deps, # type: bool
allow_pre_release, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> None
"""Download Ansible collections as their tarball from a Galaxy server to the path specified and creates a requirements
file of the downloaded requirements to be used for an install.
:param collections: The collections to download, should be a list of tuples with (name, requirement, Galaxy Server).
:param output_path: The path to download the collections to.
:param apis: A list of GalaxyAPIs to query when search for a collection.
:param validate_certs: Whether to validate the certificate if downloading a tarball from a non-Galaxy host.
:param no_deps: Ignore any collection dependencies and only download the base requirements.
:param allow_pre_release: Do not ignore pre-release versions when selecting the latest.
"""
with _display_progress("Process download dependency map"):
dep_map = _resolve_depenency_map(
set(collections),
galaxy_apis=apis,
preferred_candidates=None,
concrete_artifacts_manager=artifacts_manager,
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=False,
# Avoid overhead getting signatures since they are not currently applicable to downloaded collections
include_signatures=False,
offline=False,
)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
requirements = []
with _display_progress(
"Starting collection download process to '{path!s}'".
format(path=output_path),
):
for fqcn, concrete_coll_pin in dep_map.copy().items(): # FIXME: move into the provider
if concrete_coll_pin.is_virtual:
display.display(
'Virtual collection {coll!s} is not downloadable'.
format(coll=to_text(concrete_coll_pin)),
)
continue
display.display(
u"Downloading collection '{coll!s}' to '{path!s}'".
format(coll=to_text(concrete_coll_pin), path=to_text(b_output_path)),
)
b_src_path = (
artifacts_manager.get_artifact_path
if concrete_coll_pin.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(concrete_coll_pin)
b_dest_path = os.path.join(
b_output_path,
os.path.basename(b_src_path),
)
if concrete_coll_pin.is_dir:
b_dest_path = to_bytes(
build_collection(
to_text(b_src_path, errors='surrogate_or_strict'),
to_text(output_path, errors='surrogate_or_strict'),
force=True,
),
errors='surrogate_or_strict',
)
else:
shutil.copy(to_native(b_src_path), to_native(b_dest_path))
display.display(
"Collection '{coll!s}' was downloaded successfully".
format(coll=concrete_coll_pin),
)
requirements.append({
# FIXME: Consider using a more specific upgraded format
# FIXME: having FQCN in the name field, with src field
# FIXME: pointing to the file path, and explicitly set
# FIXME: type. If version and name are set, it'd
# FIXME: perform validation against the actual metadata
# FIXME: in the artifact src points at.
'name': to_native(os.path.basename(b_dest_path)),
'version': concrete_coll_pin.ver,
})
requirements_path = os.path.join(output_path, 'requirements.yml')
b_requirements_path = to_bytes(
requirements_path, errors='surrogate_or_strict',
)
display.display(
u'Writing requirements.yml file of downloaded collections '
"to '{path!s}'".format(path=to_text(requirements_path)),
)
yaml_bytes = to_bytes(
yaml_dump({'collections': requirements}),
errors='surrogate_or_strict',
)
with open(b_requirements_path, mode='wb') as req_fd:
req_fd.write(yaml_bytes)
def publish_collection(collection_path, api, wait, timeout):
"""Publish an Ansible collection tarball into an Ansible Galaxy server.
:param collection_path: The path to the collection tarball to publish.
:param api: A GalaxyAPI to publish the collection to.
:param wait: Whether to wait until the import process is complete.
:param timeout: The time in seconds to wait for the import process to finish, 0 is indefinite.
"""
import_uri = api.publish_collection(collection_path)
if wait:
# Galaxy returns a url fragment which differs between v2 and v3. The second to last entry is
# always the task_id, though.
# v2: {"task": "https://galaxy-dev.ansible.com/api/v2/collection-imports/35573/"}
# v3: {"task": "/api/automation-hub/v3/imports/collections/838d1308-a8f4-402c-95cb-7823f3806cd8/"}
task_id = None
for path_segment in reversed(import_uri.split('/')):
if path_segment:
task_id = path_segment
break
if not task_id:
raise AnsibleError("Publishing the collection did not return valid task info. Cannot wait for task status. Returned task info: '%s'" % import_uri)
with _display_progress(
"Collection has been published to the Galaxy server "
"{api.name!s} {api.api_server!s}".format(api=api),
):
api.wait_import_task(task_id, timeout)
display.display("Collection has been successfully published and imported to the Galaxy server %s %s"
% (api.name, api.api_server))
else:
display.display("Collection has been pushed to the Galaxy server %s %s, not waiting until import has "
"completed due to --no-wait being set. Import task results can be found at %s"
% (api.name, api.api_server, import_uri))
def install_collections(
collections, # type: t.Iterable[Requirement]
output_path, # type: str
apis, # type: t.Iterable[GalaxyAPI]
ignore_errors, # type: bool
no_deps, # type: bool
force, # type: bool
force_deps, # type: bool
upgrade, # type: bool
allow_pre_release, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
disable_gpg_verify, # type: bool
offline, # type: bool
): # type: (...) -> None
"""Install Ansible collections to the path specified.
:param collections: The collections to install.
:param output_path: The path to install the collections to.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
    :param ignore_errors: Whether to ignore any errors when installing the collection.
    :param no_deps: Ignore any collection dependencies and only install the base requirements.
    :param force: Re-install a collection if it has already been installed.
    :param force_deps: Re-install a collection as well as its dependencies if they have already been installed.
    :param upgrade: Upgrade already installed collections to the latest matching version.
    :param allow_pre_release: Do not ignore pre-release versions when selecting the latest.
    :param artifacts_manager: Artifacts manager.
    :param disable_gpg_verify: Skip GnuPG signature verification during install.
    :param offline: Do not query Galaxy servers when resolving requirements.
    """
existing_collections = {
Requirement(coll.fqcn, coll.ver, coll.src, coll.type, None)
for coll in find_existing_collections(output_path, artifacts_manager)
}
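    # Expand any 'subdirs'-type requirements (a source tree containing multiple
    # collections) into the individual collection requirements they provide.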
unsatisfied_requirements = set(
chain.from_iterable(
(
Requirement.from_dir_path(sub_coll, artifacts_manager)
for sub_coll in (
artifacts_manager.
get_direct_collection_dependencies(install_req).
keys()
)
)
if install_req.is_subdirs else (install_req, )
for install_req in collections
),
)
requested_requirements_names = {req.fqcn for req in unsatisfied_requirements}
# NOTE: Don't attempt to reevaluate already installed deps
# NOTE: unless `--force` or `--force-with-deps` is passed
unsatisfied_requirements -= set() if force or force_deps else {
req
for req in unsatisfied_requirements
for exs in existing_collections
if req.fqcn == exs.fqcn and meets_requirements(exs.ver, req.ver)
}
if not unsatisfied_requirements and not upgrade:
display.display(
'Nothing to do. All requested collections are already '
'installed. If you want to reinstall them, '
'consider using `--force`.'
)
return
# FIXME: This probably needs to be improved to
# FIXME: properly match differing src/type.
existing_non_requested_collections = {
coll for coll in existing_collections
if coll.fqcn not in requested_requirements_names
}
preferred_requirements = (
[] if force_deps
else existing_non_requested_collections if force
else existing_collections
)
preferred_collections = {
# NOTE: No need to include signatures if the collection is already installed
Candidate(coll.fqcn, coll.ver, coll.src, coll.type, None)
for coll in preferred_requirements
}
with _display_progress("Process install dependency map"):
dependency_map = _resolve_depenency_map(
collections,
galaxy_apis=apis,
preferred_candidates=preferred_collections,
concrete_artifacts_manager=artifacts_manager,
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=upgrade,
include_signatures=not disable_gpg_verify,
offline=offline,
)
keyring_exists = artifacts_manager.keyring is not None
with _display_progress("Starting collection install process"):
for fqcn, concrete_coll_pin in dependency_map.items():
if concrete_coll_pin.is_virtual:
display.vvvv(
"'{coll!s}' is virtual, skipping.".
format(coll=to_text(concrete_coll_pin)),
)
continue
if concrete_coll_pin in preferred_collections:
display.display(
"'{coll!s}' is already installed, skipping.".
format(coll=to_text(concrete_coll_pin)),
)
continue
if not disable_gpg_verify and concrete_coll_pin.signatures and not keyring_exists:
# Duplicate warning msgs are not displayed
display.warning(
"The GnuPG keyring used for collection signature "
"verification was not configured but signatures were "
"provided by the Galaxy server to verify authenticity. "
"Configure a keyring for ansible-galaxy to use "
"or disable signature verification. "
"Skipping signature verification."
)
if concrete_coll_pin.type == 'galaxy':
concrete_coll_pin = concrete_coll_pin.with_signatures_repopulated()
try:
install(concrete_coll_pin, output_path, artifacts_manager)
except AnsibleError as err:
if ignore_errors:
display.warning(
'Failed to install collection {coll!s} but skipping '
'due to --ignore-errors being set. Error: {error!s}'.
format(
coll=to_text(concrete_coll_pin),
error=to_text(err),
)
)
else:
raise
# NOTE: imported in ansible.cli.galaxy
def validate_collection_name(name): # type: (str) -> str
"""Validates the collection name as an input from the user or a requirements file fit the requirements.
:param name: The input name with optional range specifier split by ':'.
:return: The input value, required for argparse validation.
"""
collection, dummy, dummy = name.partition(':')
if AnsibleCollectionRef.is_valid_collection_name(collection):
return name
raise AnsibleError("Invalid collection name '%s', "
"name must be in the format <namespace>.<collection>. \n"
"Please make sure namespace and collection name contains "
"characters from [a-zA-Z0-9_] only." % name)
# NOTE: imported in ansible.cli.galaxy
def validate_collection_path(collection_path): # type: (str) -> str
"""Ensure a given path ends with 'ansible_collections'
:param collection_path: The path that should end in 'ansible_collections'
:return: collection_path ending in 'ansible_collections' if it does not already.
"""
if os.path.split(collection_path)[1] != 'ansible_collections':
return os.path.join(collection_path, 'ansible_collections')
return collection_path
def verify_collections(
collections, # type: t.Iterable[Requirement]
search_paths, # type: t.Iterable[str]
apis, # type: t.Iterable[GalaxyAPI]
ignore_errors, # type: bool
local_verify_only, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> list[CollectionVerifyResult]
r"""Verify the integrity of locally installed collections.
:param collections: The collections to check.
:param search_paths: Locations for the local collection lookup.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param ignore_errors: Whether to ignore any errors when verifying the collection.
:param local_verify_only: When True, skip downloads and only verify local manifests.
:param artifacts_manager: Artifacts manager.
:return: list of CollectionVerifyResult objects describing the results of each collection verification
"""
results = [] # type: list[CollectionVerifyResult]
api_proxy = MultiGalaxyAPIProxy(apis, artifacts_manager)
with _display_progress():
for collection in collections:
try:
if collection.is_concrete_artifact:
raise AnsibleError(
message="'{coll_type!s}' type is not supported. "
'The format namespace.name is expected.'.
format(coll_type=collection.type)
)
# NOTE: Verify local collection exists before
# NOTE: downloading its source artifact from
# NOTE: a galaxy server.
default_err = 'Collection %s is not installed in any of the collection paths.' % collection.fqcn
for search_path in search_paths:
b_search_path = to_bytes(
os.path.join(
search_path,
collection.namespace, collection.name,
),
errors='surrogate_or_strict',
)
if not os.path.isdir(b_search_path):
continue
if not _is_installed_collection_dir(b_search_path):
default_err = (
"Collection %s does not have a MANIFEST.json. "
"A MANIFEST.json is expected if the collection has been built "
"and installed via ansible-galaxy" % collection.fqcn
)
continue
local_collection = Candidate.from_dir_path(
b_search_path, artifacts_manager,
)
supplemental_signatures = [
get_signature_from_source(source, display)
for source in collection.signature_sources or []
]
local_collection = Candidate(
local_collection.fqcn,
local_collection.ver,
local_collection.src,
local_collection.type,
signatures=frozenset(supplemental_signatures),
)
break
else:
raise AnsibleError(message=default_err)
if local_verify_only:
remote_collection = None
else:
signatures = api_proxy.get_signatures(local_collection)
signatures.extend([
get_signature_from_source(source, display)
for source in collection.signature_sources or []
])
remote_collection = Candidate(
collection.fqcn,
collection.ver if collection.ver != '*'
else local_collection.ver,
None, 'galaxy',
frozenset(signatures),
)
# Download collection on a galaxy server for comparison
try:
# NOTE: If there are no signatures, trigger the lookup. If found,
# NOTE: it'll cache download URL and token in artifact manager.
# NOTE: If there are no Galaxy server signatures, only user-provided signature URLs,
# NOTE: those alone validate the MANIFEST.json and the remote collection is not downloaded.
# NOTE: The remote MANIFEST.json is only used in verification if there are no signatures.
if not signatures and not collection.signature_sources:
api_proxy.get_collection_version_metadata(
remote_collection,
)
except AnsibleError as e: # FIXME: does this actually emit any errors?
# FIXME: extract the actual message and adjust this:
expected_error_msg = (
'Failed to find collection {coll.fqcn!s}:{coll.ver!s}'.
format(coll=collection)
)
if e.message == expected_error_msg:
raise AnsibleError(
'Failed to find remote collection '
"'{coll!s}' on any of the galaxy servers".
format(coll=collection)
)
raise
result = verify_local_collection(local_collection, remote_collection, artifacts_manager)
results.append(result)
except AnsibleError as err:
if ignore_errors:
display.warning(
"Failed to verify collection '{coll!s}' but skipping "
'due to --ignore-errors being set. '
'Error: {err!s}'.
format(coll=collection, err=to_text(err)),
)
else:
raise
return results
@contextmanager
def _tempdir():
b_temp_path = tempfile.mkdtemp(dir=to_bytes(C.DEFAULT_LOCAL_TMP, errors='surrogate_or_strict'))
try:
yield b_temp_path
finally:
shutil.rmtree(b_temp_path)
@contextmanager
def _display_progress(msg=None):
config_display = C.GALAXY_DISPLAY_PROGRESS
display_wheel = sys.stdout.isatty() if config_display is None else config_display
global display
if msg is not None:
display.display(msg)
if not display_wheel:
yield
return
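    # The spinner runs in a background thread; while it is active, all display
    # calls are routed through a queue (via DisplayThread below) and replayed by
    # that thread so spinner characters and log output do not interleave.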
def progress(display_queue, actual_display):
actual_display.debug("Starting display_progress display thread")
t = threading.current_thread()
while True:
for c in "|/-\\":
actual_display.display(c + "\b", newline=False)
time.sleep(0.1)
# Display a message from the main thread
while True:
try:
method, args, kwargs = display_queue.get(block=False, timeout=0.1)
except queue.Empty:
break
else:
func = getattr(actual_display, method)
func(*args, **kwargs)
if getattr(t, "finish", False):
actual_display.debug("Received end signal for display_progress display thread")
return
class DisplayThread(object):
def __init__(self, display_queue):
self.display_queue = display_queue
def __getattr__(self, attr):
def call_display(*args, **kwargs):
self.display_queue.put((attr, args, kwargs))
return call_display
    # Temporarily override the global display class with our own which adds the calls to a queue for the thread to call.
old_display = display
try:
display_queue = queue.Queue()
display = DisplayThread(display_queue)
t = threading.Thread(target=progress, args=(display_queue, old_display))
t.daemon = True
t.start()
try:
yield
finally:
t.finish = True
t.join()
except Exception:
        # The exception is re-raised so we can be sure the thread is finished and no longer using the display
raise
finally:
display = old_display
def _verify_file_hash(b_path, filename, expected_hash, error_queue):
b_file_path = to_bytes(os.path.join(to_text(b_path), filename), errors='surrogate_or_strict')
if not os.path.isfile(b_file_path):
actual_hash = None
else:
with open(b_file_path, mode='rb') as file_object:
actual_hash = _consume_file(file_object)
if expected_hash != actual_hash:
error_queue.append(ModifiedContent(filename=filename, expected=expected_hash, installed=actual_hash))
def _make_manifest():
return {
'files': [
{
'name': '.',
'ftype': 'dir',
'chksum_type': None,
'chksum_sha256': None,
'format': MANIFEST_FORMAT,
},
],
'format': MANIFEST_FORMAT,
}
def _make_entry(name, ftype, chksum_type='sha256', chksum=None):
return {
'name': name,
'ftype': ftype,
'chksum_type': chksum_type if chksum else None,
f'chksum_{chksum_type}': chksum,
'format': MANIFEST_FORMAT
}
def _build_files_manifest(b_collection_path, namespace, name, ignore_patterns,
manifest_control, license_file):
# type: (bytes, str, str, list[str], dict[str, t.Any], t.Optional[str]) -> FilesManifestType
if ignore_patterns and manifest_control is not Sentinel:
raise AnsibleError('"build_ignore" and "manifest" are mutually exclusive')
if manifest_control is not Sentinel:
return _build_files_manifest_distlib(
b_collection_path,
namespace,
name,
manifest_control,
license_file,
)
return _build_files_manifest_walk(b_collection_path, namespace, name, ignore_patterns)
def _build_files_manifest_distlib(b_collection_path, namespace, name, manifest_control,
license_file):
# type: (bytes, str, str, dict[str, t.Any], t.Optional[str]) -> FilesManifestType
if not HAS_DISTLIB:
raise AnsibleError('Use of "manifest" requires the python "distlib" library')
if manifest_control is None:
manifest_control = {}
try:
control = ManifestControl(**manifest_control)
except TypeError as ex:
raise AnsibleError(f'Invalid "manifest" provided: {ex}')
if not is_sequence(control.directives):
raise AnsibleError(f'"manifest.directives" must be a list, got: {control.directives.__class__.__name__}')
if not isinstance(control.omit_default_directives, bool):
raise AnsibleError(
'"manifest.omit_default_directives" is expected to be a boolean, got: '
f'{control.omit_default_directives.__class__.__name__}'
)
if control.omit_default_directives and not control.directives:
raise AnsibleError(
'"manifest.omit_default_directives" was set to True, but no directives were defined '
'in "manifest.directives". This would produce an empty collection artifact.'
)
directives = []
if control.omit_default_directives:
directives.extend(control.directives)
else:
directives.extend([
'include meta/*.yml',
'include *.txt *.md *.rst *.license COPYING LICENSE',
'recursive-include .reuse **',
'recursive-include LICENSES **',
'recursive-include tests **',
'recursive-include docs **.rst **.yml **.yaml **.json **.j2 **.txt **.license',
'recursive-include roles **.yml **.yaml **.json **.j2 **.license',
'recursive-include playbooks **.yml **.yaml **.json **.license',
'recursive-include changelogs **.yml **.yaml **.license',
'recursive-include plugins */**.py */**.license',
])
if license_file:
directives.append(f'include {license_file}')
plugins = set(l.package.split('.')[-1] for d, l in get_all_plugin_loaders())
for plugin in sorted(plugins):
if plugin in ('modules', 'module_utils'):
continue
elif plugin in C.DOCUMENTABLE_PLUGINS:
directives.append(
f'recursive-include plugins/{plugin} **.yml **.yaml'
)
directives.extend([
'recursive-include plugins/modules **.ps1 **.yml **.yaml **.license',
'recursive-include plugins/module_utils **.ps1 **.psm1 **.cs **.license',
])
directives.extend(control.directives)
directives.extend([
f'exclude galaxy.yml galaxy.yaml MANIFEST.json FILES.json {namespace}-{name}-*.tar.gz',
'recursive-exclude tests/output **',
'global-exclude /.* /__pycache__ *.pyc *.pyo *.bak *~ *.swp',
])
display.vvv('Manifest Directives:')
display.vvv(textwrap.indent('\n'.join(directives), ' '))
u_collection_path = to_text(b_collection_path, errors='surrogate_or_strict')
m = Manifest(u_collection_path)
for directive in directives:
try:
m.process_directive(directive)
except DistlibException as e:
raise AnsibleError(f'Invalid manifest directive: {e}')
except Exception as e:
raise AnsibleError(f'Unknown error processing manifest directive: {e}')
manifest = _make_manifest()
for abs_path in m.sorted(wantdirs=True):
rel_path = os.path.relpath(abs_path, u_collection_path)
if os.path.isdir(abs_path):
manifest_entry = _make_entry(rel_path, 'dir')
else:
manifest_entry = _make_entry(
rel_path,
'file',
chksum_type='sha256',
chksum=secure_hash(abs_path, hash_func=sha256)
)
manifest['files'].append(manifest_entry)
return manifest
def _build_files_manifest_walk(b_collection_path, namespace, name, ignore_patterns):
# type: (bytes, str, str, list[str]) -> FilesManifestType
# We always ignore .pyc and .retry files as well as some well known version control directories. The ignore
# patterns can be extended by the build_ignore key in galaxy.yml
b_ignore_patterns = [
b'MANIFEST.json',
b'FILES.json',
b'galaxy.yml',
b'galaxy.yaml',
b'.git',
b'*.pyc',
b'*.retry',
b'tests/output', # Ignore ansible-test result output directory.
to_bytes('{0}-{1}-*.tar.gz'.format(namespace, name)), # Ignores previously built artifacts in the root dir.
]
b_ignore_patterns += [to_bytes(p) for p in ignore_patterns]
b_ignore_dirs = frozenset([b'CVS', b'.bzr', b'.hg', b'.git', b'.svn', b'__pycache__', b'.tox'])
manifest = _make_manifest()
def _walk(b_path, b_top_level_dir):
for b_item in os.listdir(b_path):
b_abs_path = os.path.join(b_path, b_item)
b_rel_base_dir = b'' if b_path == b_top_level_dir else b_path[len(b_top_level_dir) + 1:]
b_rel_path = os.path.join(b_rel_base_dir, b_item)
rel_path = to_text(b_rel_path, errors='surrogate_or_strict')
if os.path.isdir(b_abs_path):
if any(b_item == b_path for b_path in b_ignore_dirs) or \
any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
if os.path.islink(b_abs_path):
b_link_target = os.path.realpath(b_abs_path)
if not _is_child_path(b_link_target, b_top_level_dir):
display.warning("Skipping '%s' as it is a symbolic link to a directory outside the collection"
% to_text(b_abs_path))
continue
manifest['files'].append(_make_entry(rel_path, 'dir'))
if not os.path.islink(b_abs_path):
_walk(b_abs_path, b_top_level_dir)
else:
if any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
                # Handling of file symlinks occurs in _build_collection_tar; the manifest entry for a symlink is the
                # same as for a normal file.
manifest['files'].append(
_make_entry(
rel_path,
'file',
chksum_type='sha256',
chksum=secure_hash(b_abs_path, hash_func=sha256)
)
)
_walk(b_collection_path, b_collection_path)
return manifest
# FIXME: accept a dict produced from `galaxy.yml` instead of separate args
def _build_manifest(namespace, name, version, authors, readme, tags, description, license_file,
dependencies, repository, documentation, homepage, issues, **kwargs):
manifest = {
'collection_info': {
'namespace': namespace,
'name': name,
'version': version,
'authors': authors,
'readme': readme,
'tags': tags,
'description': description,
'license': kwargs['license'],
'license_file': license_file or None, # Handle galaxy.yml having an empty string (None)
'dependencies': dependencies,
'repository': repository,
'documentation': documentation,
'homepage': homepage,
'issues': issues,
},
'file_manifest_file': {
'name': 'FILES.json',
'ftype': 'file',
'chksum_type': 'sha256',
'chksum_sha256': None, # Filled out in _build_collection_tar
'format': MANIFEST_FORMAT
},
'format': MANIFEST_FORMAT,
}
return manifest
def _build_collection_tar(
b_collection_path, # type: bytes
b_tar_path, # type: bytes
collection_manifest, # type: CollectionManifestType
file_manifest, # type: FilesManifestType
): # type: (...) -> str
"""Build a tar.gz collection artifact from the manifest data."""
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
with _tempdir() as b_temp_path:
b_tar_filepath = os.path.join(b_temp_path, os.path.basename(b_tar_path))
with tarfile.open(b_tar_filepath, mode='w:gz') as tar_file:
# Add the MANIFEST.json and FILES.json file to the archive
for name, b in [(MANIFEST_FILENAME, collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_io = BytesIO(b)
tar_info = tarfile.TarInfo(name)
tar_info.size = len(b)
tar_info.mtime = int(time.time())
tar_info.mode = 0o0644
tar_file.addfile(tarinfo=tar_info, fileobj=b_io)
for file_info in file_manifest['files']: # type: ignore[union-attr]
if file_info['name'] == '.':
continue
# arcname expects a native string, cannot be bytes
filename = to_native(file_info['name'], errors='surrogate_or_strict')
b_src_path = os.path.join(b_collection_path, to_bytes(filename, errors='surrogate_or_strict'))
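                # Normalize ownership and permissions on every archive member so the
                # resulting artifact does not leak build-host uids/gids or umasks.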
def reset_stat(tarinfo):
if tarinfo.type != tarfile.SYMTYPE:
existing_is_exec = tarinfo.mode & stat.S_IXUSR
tarinfo.mode = 0o0755 if existing_is_exec or tarinfo.isdir() else 0o0644
tarinfo.uid = tarinfo.gid = 0
tarinfo.uname = tarinfo.gname = ''
return tarinfo
if os.path.islink(b_src_path):
b_link_target = os.path.realpath(b_src_path)
if _is_child_path(b_link_target, b_collection_path):
b_rel_path = os.path.relpath(b_link_target, start=os.path.dirname(b_src_path))
tar_info = tarfile.TarInfo(filename)
tar_info.type = tarfile.SYMTYPE
tar_info.linkname = to_native(b_rel_path, errors='surrogate_or_strict')
tar_info = reset_stat(tar_info)
tar_file.addfile(tarinfo=tar_info)
continue
# Dealing with a normal file, just add it by name.
tar_file.add(
to_native(os.path.realpath(b_src_path)),
arcname=filename,
recursive=False,
filter=reset_stat,
)
shutil.copy(to_native(b_tar_filepath), to_native(b_tar_path))
collection_name = "%s.%s" % (collection_manifest['collection_info']['namespace'],
collection_manifest['collection_info']['name'])
tar_path = to_text(b_tar_path)
display.display(u'Created collection for %s at %s' % (collection_name, tar_path))
return tar_path
def _build_collection_dir(b_collection_path, b_collection_output, collection_manifest, file_manifest):
"""Build a collection directory from the manifest data.
This should follow the same pattern as _build_collection_tar.
"""
os.makedirs(b_collection_output, mode=0o0755)
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
# Write contents to the files
for name, b in [(MANIFEST_FILENAME, collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_path = os.path.join(b_collection_output, to_bytes(name, errors='surrogate_or_strict'))
with open(b_path, 'wb') as file_obj, BytesIO(b) as b_io:
shutil.copyfileobj(b_io, file_obj)
os.chmod(b_path, 0o0644)
base_directories = []
for file_info in sorted(file_manifest['files'], key=lambda x: x['name']):
if file_info['name'] == '.':
continue
src_file = os.path.join(b_collection_path, to_bytes(file_info['name'], errors='surrogate_or_strict'))
dest_file = os.path.join(b_collection_output, to_bytes(file_info['name'], errors='surrogate_or_strict'))
existing_is_exec = os.stat(src_file, follow_symlinks=False).st_mode & stat.S_IXUSR
mode = 0o0755 if existing_is_exec else 0o0644
# ensure symlinks to dirs are not translated to empty dirs
if os.path.isdir(src_file) and not os.path.islink(src_file):
mode = 0o0755
base_directories.append(src_file)
os.mkdir(dest_file, mode)
else:
# do not follow symlinks to ensure the original link is used
shutil.copyfile(src_file, dest_file, follow_symlinks=False)
            # Avoid setting specific permissions on symlinks: chmod does not support
            # skipping symlink resolution here and would throw an exception if the
            # symlink target does not exist
if not os.path.islink(dest_file):
os.chmod(dest_file, mode)
collection_output = to_text(b_collection_output)
return collection_output
def _normalize_collection_path(path):
str_path = path.as_posix() if isinstance(path, pathlib.Path) else path
return pathlib.Path(
# This is annoying, but GalaxyCLI._resolve_path did it
os.path.expandvars(str_path)
).expanduser().absolute()
def find_existing_collections(path_filter, artifacts_manager, namespace_filter=None, collection_filter=None, dedupe=True):
"""Locate all collections under a given path.
:param path: Collection dirs layout search path.
:param artifacts_manager: Artifacts manager.
"""
if files is None:
raise AnsibleError('importlib_resources is not installed and is required')
if path_filter and not is_sequence(path_filter):
path_filter = [path_filter]
paths = set()
for path in files('ansible_collections').glob('*/*/'):
path = _normalize_collection_path(path)
if not path.is_dir():
continue
if path_filter:
for pf in path_filter:
try:
path.relative_to(_normalize_collection_path(pf))
except ValueError:
continue
break
else:
continue
paths.add(path)
seen = set()
for path in paths:
namespace = path.parent.name
name = path.name
if namespace_filter and namespace != namespace_filter:
continue
if collection_filter and name != collection_filter:
continue
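        # With dedupe enabled, resolve the collection through the Python import
        # system so the same collection visible via multiple configured paths is
        # only yielded once.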
if dedupe:
try:
collection_path = files(f'ansible_collections.{namespace}.{name}')
except ImportError:
continue
if collection_path in seen:
continue
seen.add(collection_path)
else:
collection_path = path
b_collection_path = to_bytes(collection_path.as_posix())
try:
req = Candidate.from_dir_path_as_unknown(b_collection_path, artifacts_manager)
except ValueError as val_err:
display.warning(f'{val_err}')
continue
display.vvv(
u"Found installed collection {coll!s} at '{path!s}'".
format(coll=to_text(req), path=to_text(req.src))
)
yield req
def install(collection, path, artifacts_manager): # FIXME: mv to dataclasses?
# type: (Candidate, str, ConcreteArtifactsManager) -> None
"""Install a collection under a given path.
:param collection: Collection to be installed.
:param path: Collection dirs layout path.
:param artifacts_manager: Artifacts manager.
"""
b_artifact_path = (
artifacts_manager.get_artifact_path if collection.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(collection)
collection_path = os.path.join(path, collection.namespace, collection.name)
b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
display.display(
u"Installing '{coll!s}' to '{path!s}'".
format(coll=to_text(collection), path=collection_path),
)
if os.path.exists(b_collection_path):
shutil.rmtree(b_collection_path)
if collection.is_dir:
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
else:
install_artifact(
b_artifact_path,
b_collection_path,
artifacts_manager._b_working_directory,
collection.signatures,
artifacts_manager.keyring,
artifacts_manager.required_successful_signature_count,
artifacts_manager.ignore_signature_errors,
)
if (collection.is_online_index_pointer and isinstance(collection.src, GalaxyAPI)):
write_source_metadata(
collection,
b_collection_path,
artifacts_manager
)
display.display(
'{coll!s} was installed successfully'.
format(coll=to_text(collection)),
)
def write_source_metadata(collection, b_collection_path, artifacts_manager):
# type: (Candidate, bytes, ConcreteArtifactsManager) -> None
source_data = artifacts_manager.get_galaxy_artifact_source_info(collection)
b_yaml_source_data = to_bytes(yaml_dump(source_data), errors='surrogate_or_strict')
b_info_dest = collection.construct_galaxy_info_path(b_collection_path)
b_info_dir = os.path.split(b_info_dest)[0]
if os.path.exists(b_info_dir):
shutil.rmtree(b_info_dir)
try:
os.mkdir(b_info_dir, mode=0o0755)
with open(b_info_dest, mode='w+b') as fd:
fd.write(b_yaml_source_data)
os.chmod(b_info_dest, 0o0644)
except Exception:
# Ensure we don't leave the dir behind in case of a failure.
if os.path.isdir(b_info_dir):
shutil.rmtree(b_info_dir)
raise
def verify_artifact_manifest(manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors):
# type: (str, list[str], str, str, list[str]) -> None
failed_verify = False
coll_path_parts = to_text(manifest_file, errors='surrogate_or_strict').split(os.path.sep)
collection_name = '%s.%s' % (coll_path_parts[-3], coll_path_parts[-2]) # get 'ns' and 'coll' from /path/to/ns/coll/MANIFEST.json
if not verify_file_signatures(collection_name, manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors):
raise AnsibleError(f"Not installing {collection_name} because GnuPG signature verification failed.")
display.vvvv(f"GnuPG signature verification succeeded for {collection_name}")
def install_artifact(b_coll_targz_path, b_collection_path, b_temp_path, signatures, keyring, required_signature_count, ignore_signature_errors):
"""Install a collection from tarball under a given path.
:param b_coll_targz_path: Collection tarball to be installed.
:param b_collection_path: Collection dirs layout path.
:param b_temp_path: Temporary dir path.
:param signatures: frozenset of signatures to verify the MANIFEST.json
:param keyring: The keyring used during GPG verification
:param required_signature_count: The number of signatures that must successfully verify the collection
:param ignore_signature_errors: GPG errors to ignore during signature verification
"""
try:
with tarfile.open(b_coll_targz_path, mode='r') as collection_tar:
# Verify the signature on the MANIFEST.json before extracting anything else
_extract_tar_file(collection_tar, MANIFEST_FILENAME, b_collection_path, b_temp_path)
if keyring is not None:
manifest_file = os.path.join(to_text(b_collection_path, errors='surrogate_or_strict'), MANIFEST_FILENAME)
verify_artifact_manifest(manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors)
files_member_obj = collection_tar.getmember('FILES.json')
with _tarfile_extract(collection_tar, files_member_obj) as (dummy, files_obj):
files = json.loads(to_text(files_obj.read(), errors='surrogate_or_strict'))
_extract_tar_file(collection_tar, 'FILES.json', b_collection_path, b_temp_path)
for file_info in files['files']:
file_name = file_info['name']
if file_name == '.':
continue
if file_info['ftype'] == 'file':
_extract_tar_file(collection_tar, file_name, b_collection_path, b_temp_path,
expected_hash=file_info['chksum_sha256'])
else:
_extract_tar_dir(collection_tar, file_name, b_collection_path)
except Exception:
# Ensure we don't leave the dir behind in case of a failure.
shutil.rmtree(b_collection_path)
b_namespace_path = os.path.dirname(b_collection_path)
if not os.listdir(b_namespace_path):
os.rmdir(b_namespace_path)
raise
def install_src(collection, b_collection_path, b_collection_output_path, artifacts_manager):
r"""Install the collection from source control into given dir.
Generates the Ansible collection artifact data from a galaxy.yml and
installs the artifact to a directory.
This should follow the same pattern as build_collection, but instead
of creating an artifact, install it.
:param collection: Collection to be installed.
:param b_collection_path: Collection dirs layout path.
:param b_collection_output_path: The installation directory for the \
collection artifact.
:param artifacts_manager: Artifacts manager.
:raises AnsibleError: If no collection metadata found.
"""
collection_meta = artifacts_manager.get_direct_collection_meta(collection)
if 'build_ignore' not in collection_meta: # installed collection, not src
# FIXME: optimize this? use a different process? copy instead of build?
collection_meta['build_ignore'] = []
collection_meta['manifest'] = Sentinel
collection_manifest = _build_manifest(**collection_meta)
file_manifest = _build_files_manifest(
b_collection_path,
collection_meta['namespace'], collection_meta['name'],
collection_meta['build_ignore'],
collection_meta['manifest'],
collection_meta['license_file'],
)
collection_output_path = _build_collection_dir(
b_collection_path, b_collection_output_path,
collection_manifest, file_manifest,
)
display.display(
'Created collection for {coll!s} at {path!s}'.
format(coll=collection, path=collection_output_path)
)
def _extract_tar_dir(tar, dirname, b_dest):
""" Extracts a directory from a collection tar. """
member_names = [to_native(dirname, errors='surrogate_or_strict')]
# Create list of members with and without trailing separator
if not member_names[-1].endswith(os.path.sep):
member_names.append(member_names[-1] + os.path.sep)
    # Try each of the member names and stop on the first one we are able to successfully get
for member in member_names:
try:
tar_member = tar.getmember(member)
except KeyError:
continue
break
else:
# If we still can't find the member, raise a nice error.
raise AnsibleError("Unable to extract '%s' from collection" % to_native(member, errors='surrogate_or_strict'))
b_dir_path = os.path.join(b_dest, to_bytes(dirname, errors='surrogate_or_strict'))
b_parent_path = os.path.dirname(b_dir_path)
try:
os.makedirs(b_parent_path, mode=0o0755)
except OSError as e:
if e.errno != errno.EEXIST:
raise
if tar_member.type == tarfile.SYMTYPE:
b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict')
if not _is_child_path(b_link_path, b_dest, link_name=b_dir_path):
raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of "
"collection '%s'" % (to_native(dirname), b_link_path))
os.symlink(b_link_path, b_dir_path)
else:
if not os.path.isdir(b_dir_path):
os.mkdir(b_dir_path, 0o0755)
def _extract_tar_file(tar, filename, b_dest, b_temp_path, expected_hash=None):
""" Extracts a file from a collection tar. """
with _get_tar_file_member(tar, filename) as (tar_member, tar_obj):
if tar_member.type == tarfile.SYMTYPE:
actual_hash = _consume_file(tar_obj)
else:
with tempfile.NamedTemporaryFile(dir=b_temp_path, delete=False) as tmpfile_obj:
actual_hash = _consume_file(tar_obj, tmpfile_obj)
if expected_hash and actual_hash != expected_hash:
raise AnsibleError("Checksum mismatch for '%s' inside collection at '%s'"
% (to_native(filename, errors='surrogate_or_strict'), to_native(tar.name)))
b_dest_filepath = os.path.abspath(os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict')))
b_parent_dir = os.path.dirname(b_dest_filepath)
if not _is_child_path(b_parent_dir, b_dest):
raise AnsibleError("Cannot extract tar entry '%s' as it will be placed outside the collection directory"
% to_native(filename, errors='surrogate_or_strict'))
if not os.path.exists(b_parent_dir):
            # Galaxy does not appear to validate that all file entries have a corresponding dir ftype entry. This check
            # makes sure we create the parent directory even if it wasn't set in the metadata.
os.makedirs(b_parent_dir, mode=0o0755)
if tar_member.type == tarfile.SYMTYPE:
b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict')
if not _is_child_path(b_link_path, b_dest, link_name=b_dest_filepath):
raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of "
"collection '%s'" % (to_native(filename), b_link_path))
os.symlink(b_link_path, b_dest_filepath)
else:
shutil.move(to_bytes(tmpfile_obj.name, errors='surrogate_or_strict'), b_dest_filepath)
# Default to rw-r--r-- and only add execute if the tar file has execute.
tar_member = tar.getmember(to_native(filename, errors='surrogate_or_strict'))
new_mode = 0o644
if stat.S_IMODE(tar_member.mode) & stat.S_IXUSR:
new_mode |= 0o0111
os.chmod(b_dest_filepath, new_mode)
def _get_tar_file_member(tar, filename):
n_filename = to_native(filename, errors='surrogate_or_strict')
try:
member = tar.getmember(n_filename)
except KeyError:
raise AnsibleError("Collection tar at '%s' does not contain the expected file '%s'." % (
to_native(tar.name),
n_filename))
return _tarfile_extract(tar, member)
def _get_json_from_tar_file(b_path, filename):
file_contents = ''
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj):
bufsize = 65536
data = tar_obj.read(bufsize)
while data:
file_contents += to_text(data)
data = tar_obj.read(bufsize)
return json.loads(file_contents)
def _get_tar_file_hash(b_path, filename):
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj):
return _consume_file(tar_obj)
def _get_file_hash(b_path, filename): # type: (bytes, str) -> str
filepath = os.path.join(b_path, to_bytes(filename, errors='surrogate_or_strict'))
with open(filepath, 'rb') as fp:
return _consume_file(fp)
def _is_child_path(path, parent_path, link_name=None):
""" Checks that path is a path within the parent_path specified. """
b_path = to_bytes(path, errors='surrogate_or_strict')
if link_name and not os.path.isabs(b_path):
# If link_name is specified, path is the source of the link and we need to resolve the absolute path.
b_link_dir = os.path.dirname(to_bytes(link_name, errors='surrogate_or_strict'))
b_path = os.path.abspath(os.path.join(b_link_dir, b_path))
b_parent_path = to_bytes(parent_path, errors='surrogate_or_strict')
return b_path == b_parent_path or b_path.startswith(b_parent_path + to_bytes(os.path.sep))
def _resolve_depenency_map(
requested_requirements, # type: t.Iterable[Requirement]
galaxy_apis, # type: t.Iterable[GalaxyAPI]
concrete_artifacts_manager, # type: ConcreteArtifactsManager
preferred_candidates, # type: t.Iterable[Candidate] | None
no_deps, # type: bool
allow_pre_release, # type: bool
upgrade, # type: bool
include_signatures, # type: bool
offline, # type: bool
): # type: (...) -> dict[str, Candidate]
"""Return the resolved dependency map."""
if not HAS_RESOLVELIB:
raise AnsibleError("Failed to import resolvelib, check that a supported version is installed")
if not HAS_PACKAGING:
raise AnsibleError("Failed to import packaging, check that a supported version is installed")
req = None
try:
dist = distribution('ansible-core')
except Exception:
pass
else:
req = next((rr for r in (dist.requires or []) if (rr := PkgReq(r)).name == 'resolvelib'), None)
finally:
if req is None:
# TODO: replace the hardcoded versions with a warning if the dist info is missing
# display.warning("Unable to find 'ansible-core' distribution requirements to verify the resolvelib version is supported.")
if not RESOLVELIB_LOWERBOUND <= RESOLVELIB_VERSION < RESOLVELIB_UPPERBOUND:
raise AnsibleError(
f"ansible-galaxy requires resolvelib<{RESOLVELIB_UPPERBOUND.vstring},>={RESOLVELIB_LOWERBOUND.vstring}"
)
elif not req.specifier.contains(RESOLVELIB_VERSION.vstring):
raise AnsibleError(f"ansible-galaxy requires {req.name}{req.specifier}")
collection_dep_resolver = build_collection_dependency_resolver(
galaxy_apis=galaxy_apis,
concrete_artifacts_manager=concrete_artifacts_manager,
user_requirements=requested_requirements,
preferred_candidates=preferred_candidates,
with_deps=not no_deps,
with_pre_releases=allow_pre_release,
upgrade=upgrade,
include_signatures=include_signatures,
offline=offline,
)
try:
return collection_dep_resolver.resolve(
requested_requirements,
max_rounds=2000000, # NOTE: same constant pip uses
).mapping
except CollectionDependencyResolutionImpossible as dep_exc:
conflict_causes = (
'* {req.fqcn!s}:{req.ver!s} ({dep_origin!s})'.format(
req=req_inf.requirement,
dep_origin='direct request'
if req_inf.parent is None
else 'dependency of {parent!s}'.
format(parent=req_inf.parent),
)
for req_inf in dep_exc.causes
)
error_msg_lines = list(chain(
(
'Failed to resolve the requested '
'dependencies map. Could not satisfy the following '
'requirements:',
),
conflict_causes,
))
raise AnsibleError('\n'.join(error_msg_lines)) from dep_exc
except CollectionDependencyInconsistentCandidate as dep_exc:
parents = [
"%s.%s:%s" % (p.namespace, p.name, p.ver)
for p in dep_exc.criterion.iter_parent()
if p is not None
]
error_msg_lines = [
(
'Failed to resolve the requested dependencies map. '
'Got the candidate {req.fqcn!s}:{req.ver!s} ({dep_origin!s}) '
'which didn\'t satisfy all of the following requirements:'.
format(
req=dep_exc.candidate,
dep_origin='direct request'
if not parents else 'dependency of {parent!s}'.
format(parent=', '.join(parents))
)
)
]
for req in dep_exc.criterion.iter_requirement():
error_msg_lines.append(
'* {req.fqcn!s}:{req.ver!s}'.format(req=req)
)
raise AnsibleError('\n'.join(error_msg_lines)) from dep_exc
except ValueError as exc:
raise AnsibleError(to_native(exc)) from exc
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,648 |
galaxy collection caching mechanism fails to find available signed collection
|
### Summary
The galaxy CLI is able to find a collection on a remote api/v3 endpoint, but when it goes to install that collection version it fails to find it in the cache:
```
(Epdb) print(pid.stdout.decode("utf-8"))
ansible-galaxy [core 2.13.9]
config file = /tmp/pytest-of-pulp/pytest-20/test_install_signed_collection0/ansible.cfg
configured module search path = ['/var/lib/pulp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/pulp/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-galaxy
python version = 3.8.13 (default, Jun 24 2022, 15:27:57) [GCC 8.5.0 20210514 (Red Hat 8.5.0-13)]
jinja version = 3.1.2
libyaml = True
Using /tmp/pytest-of-pulp/pytest-20/test_install_signed_collection0/ansible.cfg as config file
Starting galaxy collection install process
[WARNING]: The specified collections path '/tmp/pytest-of-
pulp/pytest-20/test_install_signed_collection0' is not part of the configured
Ansible collections paths
'/var/lib/pulp/.ansible/collections:/usr/share/ansible/collections'. The
installed collection won't be picked up in an Ansible run.
Process install dependency map
Initial connection to galaxy_server: http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api
Found API version 'v1, v2, v3' with Galaxy server pulp_ansible (http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api)
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api/v3/collections/testing/k8s_demo_collection/
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api/v3/collections/testing/k8s_demo_collection/versions/?limit=100
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api/v3/collections/testing/k8s_demo_collection/versions/0.0.3/
Starting collection install process
ERROR! Unexpected Exception, this is probably a bug: The is no known source for testing.k8s_demo_collection:0.0.3
----------------DEBUG-------------------
collection: testing.k8s_demo_collection:0.0.3 <class 'ansible.galaxy.dependency_resolution.dataclasses.Candidate'>
cache: {<testing.k8s_demo_collection:0.0.3 of type 'galaxy' from pulp_ansible>: ('http://localhost:5001/pulp_ansible/galaxy/default/api/v3/plugin/ansible/content/4fae352f-2a64-4c8e-8b30-5fd6e1996408/collections/artifacts/testing-k8s_demo_collection-0.0.3.tar.gz', '360548ba80e3dce478b7915ca89a5613dc80c650a55b96f5491012a8297e12ac', <ansible.galaxy.token.BasicAuthToken object at 0x7fdf1408cac0>)}
testing.k8s_demo_collection:0.0.3 <class 'ansible.galaxy.dependency_resolution.dataclasses.Candidate'>
testing.k8s_demo_collection:0.0.3 == testing.k8s_demo_collection:0.0.3? False
----------------DEBUG-------------------
the full traceback was:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 141, in get_galaxy_artifact_path
url, sha256_hash, token = self._galaxy_collection_cache[collection]
KeyError: <testing.k8s_demo_collection:0.0.3 of type 'galaxy' from pulp_ansible>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/ansible/cli/__init__.py", line 601, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 682, in run
return context.CLIARGS['func']()
File "/usr/local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 104, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1327, in execute_install
self._execute_install_collection(
File "/usr/local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1364, in _execute_install_collection
install_collections(
File "/usr/local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 748, in install_collections
install(concrete_coll_pin, output_path, artifacts_manager)
File "/usr/local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 1295, in install
b_artifact_path = (
File "/usr/local/lib/python3.8/site-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 149, in get_galaxy_artifact_path
raise_from(
File "<string>", line 3, in raise_from
RuntimeError: The is no known source for testing.k8s_demo_collection:0.0.3
```
The error comes from this block of code: **https://github.com/ansible/ansible/blob/devel/lib/ansible/galaxy/collection/concrete_artifact_manager.py#L139-L145**
I inspected the key in the dict and the collection variable; they are nearly identical, but one carries signatures and the other does not.
```
{
    'fqcn': 'testing.k8s_demo_collection',
    'ver': '0.0.3',
    'src': <pulp_ansible "pulp_ansible" @ http://localhost:5001/pulp_ansible/galaxy/6ec55cca-6342-4cdb-82bf-aa6c2f800843/api with priority 1>,
    'type': 'galaxy',
    'signatures': frozenset()
}

{
    'fqcn': 'testing.k8s_demo_collection',
    'ver': '0.0.3',
    'src': <pulp_ansible "pulp_ansible" @ http://localhost:5001/pulp_ansible/galaxy/6ec55cca-6342-4cdb-82bf-aa6c2f800843/api with priority 1>,
    'type': 'galaxy',
    'signatures': frozenset({'-----BEGIN PGP SIGNATURE-----\n\niQEzBAABCAAdFiEEbt8wElZIC5uAHro9BaXm2iadnZgFAmRJd/AACgkQBaXm2iad\nnZhSdAf/QIm5AuYbgZ8Jxa/TcavRxoetQtgsspBBiDqvBP67BExN7xoBe/DUtjIA\nn2xbJgxzcwUI+WOYWE+iNjzjYpOBfN8jFlGMdAc21dfN+5NUvH+R0+YmwNf7Ihob\nd0qU3JozJZo+GCd2rMwprnzMp+3LvU9HD+r+hO9ELlMLQeYWVVn/GBNrjZJ6yGlj\nBCGxvagEMhkp4Gso/ft5Q6VqFSWUrIERb9QZWKTnM7iryNO3ojcjBEvFdk+RuOho\nNN0rjN4Xu+DkbI3nUt49l+XC7yBubu9BBx30KcL1srrVI0nY6Px6LbLMnezg6C5+\nC7qsYP0E+41TQTbb7nxIELXvr/mP5g==\n=3jS8\n-----END PGP SIGNATURE-----\n'})
}
```
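The mismatch can be reproduced outside of ansible-galaxy with a minimal, self-contained sketch (the `Candidate` below is a stand-in namedtuple mirroring the real one, not the actual class): because the tuple's fields include `signatures`, two candidates that differ only in their signature set compare and hash unequal, so a dict keyed on the unsigned candidate never matches the signed one.

```python
from collections import namedtuple

# Stand-in for ansible.galaxy.dependency_resolution.dataclasses.Candidate
Candidate = namedtuple('Candidate', ('fqcn', 'ver', 'src', 'type', 'signatures'))

unsigned = Candidate('testing.k8s_demo_collection', '0.0.3', None, 'galaxy', frozenset())
signed = Candidate('testing.k8s_demo_collection', '0.0.3', None, 'galaxy',
                   frozenset({'-----BEGIN PGP SIGNATURE-----...'}))

# The download cache is keyed on the unsigned candidate (URL/hash/token values
# here are placeholders).
cache = {unsigned: ('https://example.invalid/artifact.tar.gz', 'sha256...', None)}

print(signed == unsigned)  # False: namedtuple equality includes 'signatures'
print(signed in cache)     # False: the lookup misses, i.e. the KeyError above
```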
### Issue Type
Bug Report
### Component Name
galaxy
### Ansible Version
```console
ansible-galaxy [core 2.13.9]
config file = /tmp/pytest-of-pulp/pytest-20/test_install_signed_collection0/ansible.cfg
configured module search path = ['/var/lib/pulp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/pulp/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-galaxy
python version = 3.8.13 (default, Jun 24 2022, 15:27:57) [GCC 8.5.0 20210514 (Red Hat 8.5.0-13)]
jinja version = 3.1.2
libyaml = True
Using /tmp/pytest-of-pulp/pytest-20/test_install_signed_collection0/ansible.cfg as config file
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Centos 8 stream docker image from pulp ci builds
### Steps to Reproduce
This is a bit nebulous because it's a failing job in the pulp_ansible CI. We do know that rolling back to 2.13.8 fixes it.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80648
|
https://github.com/ansible/ansible/pull/80661
|
71f6e10dae7c862f1e7f02063d4def18f0d44e44
|
d5e2e7a0a8ca9017a091922648430374539f878b
| 2023-04-26T20:29:54Z |
python
| 2023-04-27T20:11:17Z |
lib/ansible/galaxy/dependency_resolution/dataclasses.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2020-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Dependency structs."""
# FIXME: add caching all over the place
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import typing as t
from collections import namedtuple
from collections.abc import MutableSequence, MutableMapping
from glob import iglob
from urllib.parse import urlparse
from yaml import safe_load
if t.TYPE_CHECKING:
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
Collection = t.TypeVar(
'Collection',
'Candidate', 'Requirement',
'_ComputedReqKindsMixin',
)
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection import HAS_PACKAGING, PkgReq
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.arg_spec import ArgumentSpecValidator
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
_ALLOW_CONCRETE_POINTER_IN_SOURCE = False # NOTE: This is a feature flag
_GALAXY_YAML = b'galaxy.yml'
_MANIFEST_JSON = b'MANIFEST.json'
_SOURCE_METADATA_FILE = b'GALAXY.yml'
display = Display()
def get_validated_source_info(b_source_info_path, namespace, name, version):
source_info_path = to_text(b_source_info_path, errors='surrogate_or_strict')
if not os.path.isfile(b_source_info_path):
return None
try:
with open(b_source_info_path, mode='rb') as fd:
metadata = safe_load(fd)
except OSError as e:
display.warning(
f"Error getting collection source information at '{source_info_path}': {to_text(e, errors='surrogate_or_strict')}"
)
return None
if not isinstance(metadata, MutableMapping):
display.warning(f"Error getting collection source information at '{source_info_path}': expected a YAML dictionary")
return None
schema_errors = _validate_v1_source_info_schema(namespace, name, version, metadata)
if schema_errors:
display.warning(f"Ignoring source metadata file at {source_info_path} due to the following errors:")
display.warning("\n".join(schema_errors))
display.warning("Correct the source metadata file by reinstalling the collection.")
return None
return metadata
def _validate_v1_source_info_schema(namespace, name, version, provided_arguments):
argument_spec_data = dict(
format_version=dict(choices=["1.0.0"]),
download_url=dict(),
version_url=dict(),
server=dict(),
signatures=dict(
type=list,
suboptions=dict(
signature=dict(),
pubkey_fingerprint=dict(),
signing_service=dict(),
pulp_created=dict(),
)
),
name=dict(choices=[name]),
namespace=dict(choices=[namespace]),
version=dict(choices=[version]),
)
if not isinstance(provided_arguments, dict):
raise AnsibleError(
f'Invalid offline source info for {namespace}.{name}:{version}, expected a dict and got {type(provided_arguments)}'
)
validator = ArgumentSpecValidator(argument_spec_data)
validation_result = validator.validate(provided_arguments)
return validation_result.error_messages
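# Illustrative note (not part of the original module): a metadata mapping that
# passes the v1 schema above must echo the installed collection's identity in
# its choice-constrained fields; roughly (values are made up):
#
#   {'format_version': '1.0.0',
#    'namespace': 'ns', 'name': 'coll', 'version': '1.0.0',
#    'server': 'https://galaxy.example.invalid/api/',
#    'signatures': [{'signature': '...', 'pubkey_fingerprint': '...',
#                    'signing_service': '...', 'pulp_created': '...'}]}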
def _is_collection_src_dir(dir_path):
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
return os.path.isfile(os.path.join(b_dir_path, _GALAXY_YAML))
def _is_installed_collection_dir(dir_path):
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
return os.path.isfile(os.path.join(b_dir_path, _MANIFEST_JSON))
def _is_collection_dir(dir_path):
return (
_is_installed_collection_dir(dir_path) or
_is_collection_src_dir(dir_path)
)
def _find_collections_in_subdirs(dir_path):
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
subdir_glob_pattern = os.path.join(
b_dir_path,
# b'*', # namespace is supposed to be top-level per spec
b'*', # collection name
)
for subdir in iglob(subdir_glob_pattern):
if os.path.isfile(os.path.join(subdir, _MANIFEST_JSON)):
yield subdir
elif os.path.isfile(os.path.join(subdir, _GALAXY_YAML)):
yield subdir
def _is_collection_namespace_dir(tested_str):
return any(_find_collections_in_subdirs(tested_str))
def _is_file_path(tested_str):
return os.path.isfile(to_bytes(tested_str, errors='surrogate_or_strict'))
def _is_http_url(tested_str):
return urlparse(tested_str).scheme.lower() in {'http', 'https'}
def _is_git_url(tested_str):
return tested_str.startswith(('git+', 'git@'))
def _is_concrete_artifact_pointer(tested_str):
return any(
predicate(tested_str)
for predicate in (
# NOTE: Maintain the checks to be sorted from light to heavy:
_is_git_url,
_is_http_url,
_is_file_path,
_is_collection_dir,
_is_collection_namespace_dir,
)
)
class _ComputedReqKindsMixin:
def __init__(self, *args, **kwargs):
if not self.may_have_offline_galaxy_info:
self._source_info = None
else:
info_path = self.construct_galaxy_info_path(to_bytes(self.src, errors='surrogate_or_strict'))
self._source_info = get_validated_source_info(
info_path,
self.namespace,
self.name,
self.ver
)
@classmethod
def from_dir_path_as_unknown( # type: ignore[misc]
cls, # type: t.Type[Collection]
dir_path, # type: bytes
art_mgr, # type: ConcreteArtifactsManager
): # type: (...) -> Collection
"""Make collection from an unspecified dir type.
This alternative constructor attempts to grab metadata from the
given path if it's a directory. If there's no metadata, it
falls back to guessing the FQCN based on the directory path and
sets the version to "*".
It raises a ValueError immediately if the input is not an
existing directory path.
"""
if not os.path.isdir(dir_path):
raise ValueError(
"The collection directory '{path!s}' doesn't exist".
format(path=to_native(dir_path)),
)
try:
return cls.from_dir_path(dir_path, art_mgr)
except ValueError:
return cls.from_dir_path_implicit(dir_path)
@classmethod
def from_dir_path(cls, dir_path, art_mgr):
"""Make collection from an directory with metadata."""
if dir_path.endswith(to_bytes(os.path.sep)):
dir_path = dir_path.rstrip(to_bytes(os.path.sep))
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
if not _is_collection_dir(b_dir_path):
display.warning(
u"Collection at '{path!s}' does not have a {manifest_json!s} "
u'file, nor has it {galaxy_yml!s}: cannot detect version.'.
format(
galaxy_yml=to_text(_GALAXY_YAML),
manifest_json=to_text(_MANIFEST_JSON),
path=to_text(dir_path, errors='surrogate_or_strict'),
),
)
raise ValueError(
'`dir_path` argument must be an installed or a source'
' collection directory.',
)
tmp_inst_req = cls(None, None, dir_path, 'dir', None)
req_version = art_mgr.get_direct_collection_version(tmp_inst_req)
try:
req_name = art_mgr.get_direct_collection_fqcn(tmp_inst_req)
except TypeError as err:
# Looks like installed/source dir but isn't: doesn't have valid metadata.
display.warning(
u"Collection at '{path!s}' has a {manifest_json!s} "
u"or {galaxy_yml!s} file but it contains invalid metadata.".
format(
galaxy_yml=to_text(_GALAXY_YAML),
manifest_json=to_text(_MANIFEST_JSON),
path=to_text(dir_path, errors='surrogate_or_strict'),
),
)
raise ValueError(
"Collection at '{path!s}' has invalid metadata".
format(path=to_text(dir_path, errors='surrogate_or_strict'))
) from err
return cls(req_name, req_version, dir_path, 'dir', None)
@classmethod
def from_dir_path_implicit( # type: ignore[misc]
cls, # type: t.Type[Collection]
dir_path, # type: bytes
): # type: (...) -> Collection
"""Construct a collection instance based on an arbitrary dir.
This alternative constructor infers the FQCN based on the parent
and current directory names. It also sets the version to "*"
regardless of whether any of known metadata files are present.
"""
# There is no metadata, but it isn't required for a functional collection. Determine the namespace.name from the path.
if dir_path.endswith(to_bytes(os.path.sep)):
dir_path = dir_path.rstrip(to_bytes(os.path.sep))
u_dir_path = to_text(dir_path, errors='surrogate_or_strict')
path_list = u_dir_path.split(os.path.sep)
req_name = '.'.join(path_list[-2:])
return cls(req_name, '*', dir_path, 'dir', None) # type: ignore[call-arg]
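# Illustrative note (not part of the original module): given a layout such as
# .../ansible_collections/ns/coll, the implicit constructor derives the FQCN
# from the last two path components and leaves the version unpinned:
#
#   Requirement.from_dir_path_implicit(b'/colls/ansible_collections/ns/coll')
#   # -> fqcn == 'ns.coll', ver == '*', type == 'dir'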
@classmethod
def from_string(cls, collection_input, artifacts_manager, supplemental_signatures):
req = {}
if _is_concrete_artifact_pointer(collection_input) or AnsibleCollectionRef.is_valid_collection_name(collection_input):
# Arg is a file path or URL to a collection, or just a collection
req['name'] = collection_input
elif ':' in collection_input:
req['name'], _sep, req['version'] = collection_input.partition(':')
if not req['version']:
del req['version']
else:
if not HAS_PACKAGING:
raise AnsibleError("Failed to import packaging, check that a supported version is installed")
try:
pkg_req = PkgReq(collection_input)
except Exception as e:
# packaging doesn't know what this is, let it fly, better errors happen in from_requirement_dict
req['name'] = collection_input
else:
req['name'] = pkg_req.name
if pkg_req.specifier:
req['version'] = to_text(pkg_req.specifier)
req['signatures'] = supplemental_signatures
return cls.from_requirement_dict(req, artifacts_manager)
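# Illustrative note (not part of the original module): from_string() accepts
# several spellings for the same requirement (examples, not exhaustive):
#
#   'ns.coll'              -> name only, version defaults to '*'
#   'ns.coll:>=1.0.0'      -> name plus a ':'-separated version specifier
#   'ns.coll>=1.0.0'       -> PEP 440-style string parsed with packaging
#   '/path/to/c.tar.gz'    -> treated as a concrete artifact pointer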
@classmethod
def from_requirement_dict(cls, collection_req, art_mgr, validate_signature_options=True):
req_name = collection_req.get('name', None)
req_version = collection_req.get('version', '*')
req_type = collection_req.get('type')
# TODO: decide how to deprecate the old src API behavior
req_source = collection_req.get('source', None)
req_signature_sources = collection_req.get('signatures', None)
if req_signature_sources is not None:
if validate_signature_options and art_mgr.keyring is None:
raise AnsibleError(
f"Signatures were provided to verify {req_name} but no keyring was configured."
)
if not isinstance(req_signature_sources, MutableSequence):
req_signature_sources = [req_signature_sources]
req_signature_sources = frozenset(req_signature_sources)
if req_type is None:
if ( # FIXME: decide on the future behavior:
_ALLOW_CONCRETE_POINTER_IN_SOURCE
and req_source is not None
and _is_concrete_artifact_pointer(req_source)
):
src_path = req_source
elif (
req_name is not None
and AnsibleCollectionRef.is_valid_collection_name(req_name)
):
req_type = 'galaxy'
elif (
req_name is not None
and _is_concrete_artifact_pointer(req_name)
):
src_path, req_name = req_name, None
else:
dir_tip_tmpl = ( # NOTE: leading LFs are for concat
'\n\nTip: Make sure you are pointing to the right '
'subdirectory — `{src!s}` looks like a directory '
'but it is neither a collection, nor a namespace '
'dir.'
)
if req_source is not None and os.path.isdir(req_source):
tip = dir_tip_tmpl.format(src=req_source)
elif req_name is not None and os.path.isdir(req_name):
tip = dir_tip_tmpl.format(src=req_name)
elif req_name:
tip = '\n\nCould not find {0}.'.format(req_name)
else:
tip = ''
raise AnsibleError( # NOTE: I'd prefer a ValueError instead
'Neither the collection requirement entry key '
"'name', nor 'source' point to a concrete "
"resolvable collection artifact. Also 'name' is "
'not an FQCN. A valid collection name must be in '
'the format <namespace>.<collection>. Please make '
'sure that the namespace and the collection name '
'contain characters from [a-zA-Z0-9_] only.'
'{extra_tip!s}'.format(extra_tip=tip),
)
if req_type is None:
if _is_git_url(src_path):
req_type = 'git'
req_source = src_path
elif _is_http_url(src_path):
req_type = 'url'
req_source = src_path
elif _is_file_path(src_path):
req_type = 'file'
req_source = src_path
elif _is_collection_dir(src_path):
if _is_installed_collection_dir(src_path) and _is_collection_src_dir(src_path):
# Note that ``download`` requires a dir with a ``galaxy.yml`` and fails if it
# doesn't exist, but if a ``MANIFEST.json`` also exists, it would be used
# instead of the ``galaxy.yml``.
raise AnsibleError(
u"Collection requirement at '{path!s}' has both a {manifest_json!s} "
u"file and a {galaxy_yml!s}.\nThe requirement must either be an installed "
u"collection directory or a source collection directory, not both.".
format(
path=to_text(src_path, errors='surrogate_or_strict'),
manifest_json=to_text(_MANIFEST_JSON),
galaxy_yml=to_text(_GALAXY_YAML),
)
)
req_type = 'dir'
req_source = src_path
elif _is_collection_namespace_dir(src_path):
req_name = None # No name for a virtual req or "namespace."?
req_type = 'subdirs'
req_source = src_path
else:
raise AnsibleError( # NOTE: this is never supposed to be hit
'Failed to automatically detect the collection '
'requirement type.',
)
if req_type not in {'file', 'galaxy', 'git', 'url', 'dir', 'subdirs'}:
raise AnsibleError(
"The collection requirement entry key 'type' must be "
'one of file, galaxy, git, dir, subdirs, or url.'
)
if req_name is None and req_type == 'galaxy':
raise AnsibleError(
'Collections requirement entry should contain '
"the key 'name' if it's requested from a Galaxy-like "
'index server.',
)
if req_type != 'galaxy' and req_source is None:
req_source, req_name = req_name, None
if (
req_type == 'galaxy' and
isinstance(req_source, GalaxyAPI) and
not _is_http_url(req_source.api_server)
):
raise AnsibleError(
"Collections requirement 'source' entry should contain "
'a valid Galaxy API URL but it does not: {not_url!s} '
'is not an HTTP URL.'.
format(not_url=req_source.api_server),
)
if req_type == 'dir' and req_source.endswith(os.path.sep):
req_source = req_source.rstrip(os.path.sep)
tmp_inst_req = cls(req_name, req_version, req_source, req_type, req_signature_sources)
if req_type not in {'galaxy', 'subdirs'} and req_name is None:
req_name = art_mgr.get_direct_collection_fqcn(tmp_inst_req) # TODO: fix the cache key in artifacts manager?
if req_type not in {'galaxy', 'subdirs'} and req_version == '*':
req_version = art_mgr.get_direct_collection_version(tmp_inst_req)
return cls(
req_name, req_version,
req_source, req_type,
req_signature_sources,
)
def __repr__(self):
return (
'<{self!s} of type {coll_type!r} from {src!s}>'.
format(self=self, coll_type=self.type, src=self.src or 'Galaxy')
)
def __str__(self):
return to_native(self.__unicode__())
def __unicode__(self):
if self.fqcn is None:
return (
u'"virtual collection Git repo"' if self.is_scm
else u'"virtual collection namespace"'
)
return (
u'{fqcn!s}:{ver!s}'.
format(fqcn=to_text(self.fqcn), ver=to_text(self.ver))
)
@property
def may_have_offline_galaxy_info(self):
if self.fqcn is None:
# Virtual collection
return False
elif not self.is_dir or self.src is None or not _is_collection_dir(self.src):
# Not a dir or isn't on-disk
return False
return True
def construct_galaxy_info_path(self, b_collection_path):
if not self.may_have_offline_galaxy_info and not self.type == 'galaxy':
raise TypeError('Only installed collections from a Galaxy server have offline Galaxy info')
# Store Galaxy metadata adjacent to the namespace of the collection
# Chop off the last two parts of the path (/ns/coll) to get the dir containing the ns
b_src = to_bytes(b_collection_path, errors='surrogate_or_strict')
b_path_parts = b_src.split(to_bytes(os.path.sep))[0:-2]
b_metadata_dir = to_bytes(os.path.sep).join(b_path_parts)
# ns.coll-1.0.0.info
b_dir_name = to_bytes(f"{self.namespace}.{self.name}-{self.ver}.info", errors="surrogate_or_strict")
# collections/ansible_collections/ns.coll-1.0.0.info/GALAXY.yml
return os.path.join(b_metadata_dir, b_dir_name, _SOURCE_METADATA_FILE)
def _get_separate_ns_n_name(self): # FIXME: use LRU cache
return self.fqcn.split('.')
@property
def namespace(self):
if self.is_virtual:
raise TypeError('Virtual collections do not have a namespace')
return self._get_separate_ns_n_name()[0]
@property
def name(self):
if self.is_virtual:
raise TypeError('Virtual collections do not have a name')
return self._get_separate_ns_n_name()[-1]
@property
def canonical_package_id(self):
if not self.is_virtual:
return to_native(self.fqcn)
return (
'<virtual namespace from {src!s} of type {src_type!s}>'.
format(src=to_native(self.src), src_type=to_native(self.type))
)
@property
def is_virtual(self):
return self.is_scm or self.is_subdirs
@property
def is_file(self):
return self.type == 'file'
@property
def is_dir(self):
return self.type == 'dir'
@property
def namespace_collection_paths(self):
return [
to_native(path)
for path in _find_collections_in_subdirs(self.src)
]
@property
def is_subdirs(self):
return self.type == 'subdirs'
@property
def is_url(self):
return self.type == 'url'
@property
def is_scm(self):
return self.type == 'git'
@property
def is_concrete_artifact(self):
return self.type in {'git', 'url', 'file', 'dir', 'subdirs'}
@property
def is_online_index_pointer(self):
return not self.is_concrete_artifact
@property
def source_info(self):
return self._source_info
RequirementNamedTuple = namedtuple('Requirement', ('fqcn', 'ver', 'src', 'type', 'signature_sources')) # type: ignore[name-match]
CandidateNamedTuple = namedtuple('Candidate', ('fqcn', 'ver', 'src', 'type', 'signatures')) # type: ignore[name-match]
class Requirement(
_ComputedReqKindsMixin,
RequirementNamedTuple,
):
"""An abstract requirement request."""
def __new__(cls, *args, **kwargs):
self = RequirementNamedTuple.__new__(cls, *args, **kwargs)
return self
def __init__(self, *args, **kwargs):
super(Requirement, self).__init__()
class Candidate(
_ComputedReqKindsMixin,
CandidateNamedTuple,
):
"""A concrete collection candidate with its version resolved."""
def __new__(cls, *args, **kwargs):
self = CandidateNamedTuple.__new__(cls, *args, **kwargs)
return self
def __init__(self, *args, **kwargs):
super(Candidate, self).__init__()
def with_signatures_repopulated(self): # type: (Candidate) -> Candidate
"""Populate a new Candidate instance with Galaxy signatures.
:raises AnsibleAssertionError: If the supplied candidate is not sourced from a Galaxy-like index.
"""
if self.type != 'galaxy':
raise AnsibleAssertionError(f"Invalid collection type for {self!r}: unable to get signatures from a galaxy server.")
signatures = self.src.get_collection_signatures(self.namespace, self.name, self.ver)
return self.__class__(self.fqcn, self.ver, self.src, self.type, frozenset([*self.signatures, *signatures]))
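# Illustrative note (not part of the original module): because Candidate is a
# hashable namedtuple, with_signatures_repopulated() returns a *new* Candidate
# whose 'signatures' field is the union of the existing set and whatever the
# Galaxy server reports for this namespace/name/version:
#
#   signed = candidate.with_signatures_repopulated()
#   assert signed is not candidate  # the original instance is left untouched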
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,648 |
galaxy collection caching mechanism fails to find available signed collection
|
### Summary
The galaxy CLI is able to find a collection on a remote api/v3 endpoint, but when it goes to install that collection version, it fails to find it in the cache ...
```
(Epdb) print(pid.stdout.decode("utf-8"))
ansible-galaxy [core 2.13.9]
config file = /tmp/pytest-of-pulp/pytest-20/test_install_signed_collection0/ansible.cfg
configured module search path = ['/var/lib/pulp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/pulp/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-galaxy
python version = 3.8.13 (default, Jun 24 2022, 15:27:57) [GCC 8.5.0 20210514 (Red Hat 8.5.0-13)]
jinja version = 3.1.2
libyaml = True
Using /tmp/pytest-of-pulp/pytest-20/test_install_signed_collection0/ansible.cfg as config file
Starting galaxy collection install process
[WARNING]: The specified collections path '/tmp/pytest-of-
pulp/pytest-20/test_install_signed_collection0' is not part of the configured
Ansible collections paths
'/var/lib/pulp/.ansible/collections:/usr/share/ansible/collections'. The
installed collection won't be picked up in an Ansible run.
Process install dependency map
Initial connection to galaxy_server: http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api
Found API version 'v1, v2, v3' with Galaxy server pulp_ansible (http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api)
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api/v3/collections/testing/k8s_demo_collection/
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api/v3/collections/testing/k8s_demo_collection/versions/?limit=100
Calling Galaxy at http://localhost:5001/pulp_ansible/galaxy/4fae352f-2a64-4c8e-8b30-5fd6e1996408/api/v3/collections/testing/k8s_demo_collection/versions/0.0.3/
Starting collection install process
ERROR! Unexpected Exception, this is probably a bug: The is no known source for testing.k8s_demo_collection:0.0.3
----------------DEBUG-------------------
collection: testing.k8s_demo_collection:0.0.3 <class 'ansible.galaxy.dependency_resolution.dataclasses.Candidate'>
cache: {<testing.k8s_demo_collection:0.0.3 of type 'galaxy' from pulp_ansible>: ('http://localhost:5001/pulp_ansible/galaxy/default/api/v3/plugin/ansible/content/4fae352f-2a64-4c8e-8b30-5fd6e1996408/collections/artifacts/testing-k8s_demo_collection-0.0.3.tar.gz', '360548ba80e3dce478b7915ca89a5613dc80c650a55b96f5491012a8297e12ac', <ansible.galaxy.token.BasicAuthToken object at 0x7fdf1408cac0>)}
testing.k8s_demo_collection:0.0.3 <class 'ansible.galaxy.dependency_resolution.dataclasses.Candidate'>
testing.k8s_demo_collection:0.0.3 == testing.k8s_demo_collection:0.0.3? False
----------------DEBUG-------------------
the full traceback was:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 141, in get_galaxy_artifact_path
url, sha256_hash, token = self._galaxy_collection_cache[collection]
KeyError: <testing.k8s_demo_collection:0.0.3 of type 'galaxy' from pulp_ansible>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/ansible/cli/__init__.py", line 601, in cli_executor
exit_code = cli.run()
File "/usr/local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 682, in run
return context.CLIARGS['func']()
File "/usr/local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 104, in method_wrapper
return wrapped_method(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1327, in execute_install
self._execute_install_collection(
File "/usr/local/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 1364, in _execute_install_collection
install_collections(
File "/usr/local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 748, in install_collections
install(concrete_coll_pin, output_path, artifacts_manager)
File "/usr/local/lib/python3.8/site-packages/ansible/galaxy/collection/__init__.py", line 1295, in install
b_artifact_path = (
File "/usr/local/lib/python3.8/site-packages/ansible/galaxy/collection/concrete_artifact_manager.py", line 149, in get_galaxy_artifact_path
raise_from(
File "<string>", line 3, in raise_from
RuntimeError: The is no known source for testing.k8s_demo_collection:0.0.3
```
The error comes from this block of code: **https://github.com/ansible/ansible/blob/devel/lib/ansible/galaxy/collection/concrete_artifact_manager.py#L139-L145**
I inspected the key in the dict and the collection variable; they are nearly identical, but one carries signatures and the other does not.
```
{
    'fqcn': 'testing.k8s_demo_collection',
    'ver': '0.0.3',
    'src': <pulp_ansible "pulp_ansible" @ http://localhost:5001/pulp_ansible/galaxy/6ec55cca-6342-4cdb-82bf-aa6c2f800843/api with priority 1>,
    'type': 'galaxy',
    'signatures': frozenset()
}

{
    'fqcn': 'testing.k8s_demo_collection',
    'ver': '0.0.3',
    'src': <pulp_ansible "pulp_ansible" @ http://localhost:5001/pulp_ansible/galaxy/6ec55cca-6342-4cdb-82bf-aa6c2f800843/api with priority 1>,
    'type': 'galaxy',
    'signatures': frozenset({'-----BEGIN PGP SIGNATURE-----\n\niQEzBAABCAAdFiEEbt8wElZIC5uAHro9BaXm2iadnZgFAmRJd/AACgkQBaXm2iad\nnZhSdAf/QIm5AuYbgZ8Jxa/TcavRxoetQtgsspBBiDqvBP67BExN7xoBe/DUtjIA\nn2xbJgxzcwUI+WOYWE+iNjzjYpOBfN8jFlGMdAc21dfN+5NUvH+R0+YmwNf7Ihob\nd0qU3JozJZo+GCd2rMwprnzMp+3LvU9HD+r+hO9ELlMLQeYWVVn/GBNrjZJ6yGlj\nBCGxvagEMhkp4Gso/ft5Q6VqFSWUrIERb9QZWKTnM7iryNO3ojcjBEvFdk+RuOho\nNN0rjN4Xu+DkbI3nUt49l+XC7yBubu9BBx30KcL1srrVI0nY6Px6LbLMnezg6C5+\nC7qsYP0E+41TQTbb7nxIELXvr/mP5g==\n=3jS8\n-----END PGP SIGNATURE-----\n'})
}
```
### Issue Type
Bug Report
### Component Name
galaxy
### Ansible Version
```console
ansible-galaxy [core 2.13.9]
config file = /tmp/pytest-of-pulp/pytest-20/test_install_signed_collection0/ansible.cfg
configured module search path = ['/var/lib/pulp/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
ansible collection location = /var/lib/pulp/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible-galaxy
python version = 3.8.13 (default, Jun 24 2022, 15:27:57) [GCC 8.5.0 20210514 (Red Hat 8.5.0-13)]
jinja version = 3.1.2
libyaml = True
Using /tmp/pytest-of-pulp/pytest-20/test_install_signed_collection0/ansible.cfg as config file
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Centos 8 stream docker image from pulp ci builds
### Steps to Reproduce
This is a bit nebulous because it's a failing job in the pulp_ansible CI. We do know that rolling back to 2.13.8 fixes it.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80648
|
https://github.com/ansible/ansible/pull/80661
|
71f6e10dae7c862f1e7f02063d4def18f0d44e44
|
d5e2e7a0a8ca9017a091922648430374539f878b
| 2023-04-26T20:29:54Z |
python
| 2023-04-27T20:11:17Z |
lib/ansible/galaxy/dependency_resolution/providers.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2020-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Requirement provider interfaces."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import functools
import typing as t
if t.TYPE_CHECKING:
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
from ansible.galaxy.collection.galaxy_api_proxy import MultiGalaxyAPIProxy
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection.gpg import get_signature_from_source
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate,
Requirement,
)
from ansible.galaxy.dependency_resolution.versioning import (
is_pre_release,
meets_requirements,
)
from ansible.module_utils.six import string_types
from ansible.utils.version import SemanticVersion, LooseVersion
from collections.abc import Set
try:
from resolvelib import AbstractProvider
from resolvelib import __version__ as resolvelib_version
except ImportError:
class AbstractProvider: # type: ignore[no-redef]
pass
resolvelib_version = '0.0.0'
# TODO: add python requirements to ansible-test's ansible-core distribution info and remove the hardcoded lowerbound/upperbound fallback
RESOLVELIB_LOWERBOUND = SemanticVersion("0.5.3")
RESOLVELIB_UPPERBOUND = SemanticVersion("1.1.0")
RESOLVELIB_VERSION = SemanticVersion.from_loose_version(LooseVersion(resolvelib_version))
class PinnedCandidateRequests(Set):
"""Custom set class to store Candidate objects. Excludes the 'signatures' attribute when determining if a Candidate instance is in the set."""
CANDIDATE_ATTRS = ('fqcn', 'ver', 'src', 'type')
def __init__(self, candidates):
self._candidates = set(candidates)
def __iter__(self):
return iter(self._candidates)
def __contains__(self, value):
if not isinstance(value, Candidate):
raise ValueError(f"Expected a Candidate object but got {value!r}")
for candidate in self._candidates:
# Compare Candidate attributes excluding "signatures" since it is
# unrelated to whether or not a matching Candidate is user-requested.
# Candidate objects in the set are not expected to have signatures.
for attr in PinnedCandidateRequests.CANDIDATE_ATTRS:
if getattr(value, attr) != getattr(candidate, attr):
break
else:
return True
return False
def __len__(self):
return len(self._candidates)
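# Illustrative note (not part of the original module): membership deliberately
# ignores 'signatures', so a signed candidate still matches the unsigned pinned
# request recorded for it (hypothetical values):
#
#   pinned = PinnedCandidateRequests(
#       [Candidate('ns.coll', '1.0.0', None, 'galaxy', None)])
#   signed = Candidate('ns.coll', '1.0.0', None, 'galaxy', frozenset({'SIG'}))
#   assert signed in pinned  # fqcn/ver/src/type are compared, not signatures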
class CollectionDependencyProviderBase(AbstractProvider):
"""Delegate providing a requirement interface for the resolver."""
def __init__(
self, # type: CollectionDependencyProviderBase
apis, # type: MultiGalaxyAPIProxy
concrete_artifacts_manager=None, # type: ConcreteArtifactsManager
user_requirements=None, # type: t.Iterable[Requirement]
preferred_candidates=None, # type: t.Iterable[Candidate]
with_deps=True, # type: bool
with_pre_releases=False, # type: bool
upgrade=False, # type: bool
include_signatures=True, # type: bool
): # type: (...) -> None
r"""Initialize helper attributes.
:param apis: An instance of the multiple Galaxy APIs wrapper.
:param concrete_artifacts_manager: An instance of the caching \
concrete artifacts manager.
:param with_deps: A flag specifying whether the resolver \
should attempt to pull-in the deps of the \
requested requirements. On by default.
:param with_pre_releases: A flag specifying whether the \
resolver should include pre-releases. \
Off by default.
:param upgrade: A flag specifying whether the resolver should \
skip matching versions that are not upgrades. \
Off by default.
:param include_signatures: A flag to determine whether to retrieve \
signatures from the Galaxy APIs and \
include signatures in matching Candidates. \
On by default.
"""
self._api_proxy = apis
self._make_req_from_dict = functools.partial(
Requirement.from_requirement_dict,
art_mgr=concrete_artifacts_manager,
)
self._pinned_candidate_requests = PinnedCandidateRequests(
# NOTE: User-provided signatures are supplemental, so signatures
# NOTE: are not used to determine if a candidate is user-requested
Candidate(req.fqcn, req.ver, req.src, req.type, None)
for req in (user_requirements or ())
if req.is_concrete_artifact or (
req.ver != '*' and
not req.ver.startswith(('<', '>', '!='))
)
)
self._preferred_candidates = set(preferred_candidates or ())
self._with_deps = with_deps
self._with_pre_releases = with_pre_releases
self._upgrade = upgrade
self._include_signatures = include_signatures
def _is_user_requested(self, candidate): # type: (Candidate) -> bool
"""Check if the candidate is requested by the user."""
if candidate in self._pinned_candidate_requests:
return True
if candidate.is_online_index_pointer and candidate.src is not None:
# NOTE: Candidate is a namedtuple, it has a source server set
# NOTE: to a specific GalaxyAPI instance or `None`. When the
# NOTE: user runs
# NOTE:
# NOTE: $ ansible-galaxy collection install ns.coll
# NOTE:
# NOTE: then it's saved in `self._pinned_candidate_requests`
# NOTE: as `('ns.coll', '*', None, 'galaxy')` but then
# NOTE: `self.find_matches()` calls `self.is_satisfied_by()`
# NOTE: with Candidate instances bound to each specific
# NOTE: server available, those look like
# NOTE: `('ns.coll', '*', GalaxyAPI(...), 'galaxy')` and
# NOTE: wouldn't match the user requests saved in
# NOTE: `self._pinned_candidate_requests`. This is why we
# NOTE: normalize the collection to have `src=None` and try
# NOTE: again.
# NOTE:
# NOTE: When the user request comes from `requirements.yml`
# NOTE: with the `source:` set, it'll match the first check
# NOTE: but it still can have entries with `src=None` so this
# NOTE: normalized check is still necessary.
# NOTE:
# NOTE: User-provided signatures are supplemental, so signatures
# NOTE: are not used to determine if a candidate is user-requested
return Candidate(
candidate.fqcn, candidate.ver, None, candidate.type, None
) in self._pinned_candidate_requests
return False
def identify(self, requirement_or_candidate):
# type: (t.Union[Candidate, Requirement]) -> str
"""Given requirement or candidate, return an identifier for it.
This is used to identify a requirement or candidate, e.g.
whether two requirements should have their specifier parts
(version ranges or pins) merged, whether two candidates would
conflict with each other (because they have the same name but
different versions).
"""
return requirement_or_candidate.canonical_package_id
def get_preference(self, *args, **kwargs):
# type: (t.Any, t.Any) -> t.Union[float, int]
"""Return sort key function return value for given requirement.
This result should be based on preference that is defined as
"I think this requirement should be resolved first".
The lower the return value is, the more preferred this
group of arguments is.
resolvelib >=0.5.3, <0.7.0
:param resolution: Currently pinned candidate, or ``None``.
:param candidates: A list of possible candidates.
:param information: A list of requirement information.
Each ``information`` instance is a named tuple with two entries:
* ``requirement`` specifies a requirement contributing to
the current candidate list
* ``parent`` specifies the candidate that provides
(depended on) the requirement, or `None`
to indicate a root requirement.
resolvelib >=0.7.0, < 0.8.0
:param identifier: The value returned by ``identify()``.
:param resolutions: Mapping of identifier, candidate pairs.
:param candidates: Possible candidates for the identifier.
Mapping of identifier, list of candidate pairs.
:param information: Requirement information of each package.
Mapping of identifier, list of named tuple pairs.
The named tuples have the entries ``requirement`` and ``parent``.
resolvelib >=0.8.0, <= 1.0.1
:param identifier: The value returned by ``identify()``.
:param resolutions: Mapping of identifier, candidate pairs.
:param candidates: Possible candidates for the identifier.
Mapping of identifier, list of candidate pairs.
:param information: Requirement information of each package.
Mapping of identifier, list of named tuple pairs.
The named tuples have the entries ``requirement`` and ``parent``.
:param backtrack_causes: Sequence of requirement information that were
the requirements that caused the resolver to most recently backtrack.
The preference could depend on a variety of issues, including
(not necessarily in this order):
* Is this package pinned in the current resolution result?
* How relaxed is the requirement? Stricter ones should
probably be worked on first? (I don't know, actually.)
* How many possibilities are there to satisfy this
requirement? Those with few left should likely be worked on
first, I guess?
* Are there any known conflicts for this requirement?
We should probably work on those with the most
known conflicts.
A sortable value should be returned (this will be used as the
`key` parameter of the built-in sorting function). The smaller
the value is, the more preferred this requirement is (i.e. the
sorting function is called with ``reverse=False``).
"""
raise NotImplementedError
def _get_preference(self, candidates):
# type: (list[Candidate]) -> t.Union[float, int]
if any(
candidate in self._preferred_candidates
for candidate in candidates
):
# NOTE: Prefer pre-installed candidates over newer versions
# NOTE: available from Galaxy or other sources.
return float('-inf')
return len(candidates)
def find_matches(self, *args, **kwargs):
# type: (t.Any, t.Any) -> list[Candidate]
r"""Find all possible candidates satisfying given requirements.
This tries to get candidates based on the requirements' types.
For concrete requirements (SCM, dir, namespace dir, local or
remote archives), the one-and-only match is returned.
For a "named" requirement, Galaxy-compatible APIs are consulted
to find concrete candidates for this requirement. If there's a
pre-installed candidate, it's prepended in front of the others.
resolvelib >=0.5.3, <0.6.0
:param requirements: A collection of requirements which all of \
the returned candidates must match. \
All requirements are guaranteed to have \
the same identifier. \
The collection is never empty.
resolvelib >=0.6.0
:param identifier: The value returned by ``identify()``.
:param requirements: The requirements all returned candidates must satisfy.
Mapping of identifier, iterator of requirement pairs.
:param incompatibilities: Incompatible versions that must be excluded
from the returned list.
:returns: An iterable that orders candidates by preference, \
e.g. the most preferred candidate comes first.
"""
raise NotImplementedError
def _find_matches(self, requirements):
# type: (list[Requirement]) -> list[Candidate]
# FIXME: The first requirement may be a Git repo followed by
# FIXME: its cloned tmp dir. Using only the first one creates
# FIXME: loops that prevent any further dependency exploration.
# FIXME: We need to figure out how to prevent this.
first_req = requirements[0]
fqcn = first_req.fqcn
# The fqcn is guaranteed to be the same
version_req = "A SemVer-compliant version or '*' is required. See https://semver.org to learn how to compose it correctly. "
version_req += "This is an issue with the collection."
# If we're upgrading collections, we can't calculate preinstalled_candidates until the latest matches are found.
# Otherwise, we can potentially avoid a Galaxy API call by doing this first.
preinstalled_candidates = set()
if not self._upgrade and first_req.type == 'galaxy':
preinstalled_candidates = {
candidate for candidate in self._preferred_candidates
if candidate.fqcn == fqcn and
all(self.is_satisfied_by(requirement, candidate) for requirement in requirements)
}
try:
coll_versions = [] if preinstalled_candidates else self._api_proxy.get_collection_versions(first_req) # type: t.Iterable[t.Tuple[str, GalaxyAPI]]
except TypeError as exc:
if first_req.is_concrete_artifact:
# Non hashable versions will cause a TypeError
raise ValueError(
f"Invalid version found for the collection '{first_req}'. {version_req}"
) from exc
# Unexpected error from a Galaxy server
raise
if first_req.is_concrete_artifact:
# FIXME: do we assume that all the following artifacts are also concrete?
# FIXME: does using fqcn==None cause us problems here?
# Ensure the version found in the concrete artifact is SemVer-compliant
for version, req_src in coll_versions:
version_err = f"Invalid version found for the collection '{first_req}': {version} ({type(version)}). {version_req}"
# NOTE: The known cases causing the version to be a non-string object come from
# NOTE: the differences in how the YAML parser normalizes ambiguous values and
# NOTE: how the end-users sometimes expect them to be parsed. Unless the users
# NOTE: explicitly use the double quotes of one of the multiline string syntaxes
# NOTE: in the collection metadata file, PyYAML will parse a value containing
# NOTE: two dot-separated integers as `float`, a single integer as `int`, and 3+
# NOTE: integers as a `str`. In some cases, they may also use an empty value
# NOTE: which is normalized as `null` and turned into `None` in the Python-land.
# NOTE: Another known mistake is setting a minor part of the SemVer notation
# NOTE: skipping the "patch" bit like "1.0" which is assumed non-compliant even
# NOTE: after the conversion to string.
if not isinstance(version, string_types):
raise ValueError(version_err)
elif version != '*':
try:
SemanticVersion(version)
except ValueError as ex:
raise ValueError(version_err) from ex
return [
Candidate(fqcn, version, _none_src_server, first_req.type, None)
for version, _none_src_server in coll_versions
]
latest_matches = []
signatures = []
extra_signature_sources = [] # type: list[str]
for version, src_server in coll_versions:
tmp_candidate = Candidate(fqcn, version, src_server, 'galaxy', None)
unsatisfied = False
for requirement in requirements:
unsatisfied |= not self.is_satisfied_by(requirement, tmp_candidate)
# FIXME
# unsatisfied |= not self.is_satisfied_by(requirement, tmp_candidate) or not (
# requirement.src is None or # if this is true for some candidates but not all it will break key param - Nonetype can't be compared to str
# or requirement.src == candidate.src
# )
if unsatisfied:
break
if not self._include_signatures:
continue
extra_signature_sources.extend(requirement.signature_sources or [])
if not unsatisfied:
if self._include_signatures:
for extra_source in extra_signature_sources:
signatures.append(get_signature_from_source(extra_source))
latest_matches.append(
Candidate(fqcn, version, src_server, 'galaxy', frozenset(signatures))
)
latest_matches.sort(
key=lambda candidate: (
SemanticVersion(candidate.ver), candidate.src,
),
reverse=True, # prefer newer versions over older ones
)
if not preinstalled_candidates:
preinstalled_candidates = {
candidate for candidate in self._preferred_candidates
if candidate.fqcn == fqcn and
(
# check if an upgrade is necessary
all(self.is_satisfied_by(requirement, candidate) for requirement in requirements) and
(
not self._upgrade or
# check if an upgrade is preferred
all(SemanticVersion(latest.ver) <= SemanticVersion(candidate.ver) for latest in latest_matches)
)
)
}
return list(preinstalled_candidates) + latest_matches
def is_satisfied_by(self, requirement, candidate):
# type: (Requirement, Candidate) -> bool
r"""Whether the given requirement is satisfiable by a candidate.
:param requirement: A requirement that produced the `candidate`.
:param candidate: A pinned candidate supposedly matching the \
`requirement` specifier. It is guaranteed to \
have been generated from the `requirement`.
:returns: Indication whether the `candidate` is a viable \
solution to the `requirement`.
"""
# NOTE: Only allow pre-release candidates if we want pre-releases
# NOTE: or the req ver was an exact match with the pre-release
# NOTE: version. Another case where we'd want to allow
# NOTE: pre-releases is when there are several user requirements
# NOTE: and one of them is a pre-release that also matches a
# NOTE: transitive dependency of another requirement.
allow_pre_release = self._with_pre_releases or not (
requirement.ver == '*' or
requirement.ver.startswith('<') or
requirement.ver.startswith('>') or
requirement.ver.startswith('!=')
) or self._is_user_requested(candidate)
if is_pre_release(candidate.ver) and not allow_pre_release:
return False
# NOTE: This is a set of Pipenv-inspired optimizations. Ref:
# https://github.com/sarugaku/passa/blob/2ac00f1/src/passa/models/providers.py#L58-L74
if (
requirement.is_virtual or
candidate.is_virtual or
requirement.ver == '*'
):
return True
return meets_requirements(
version=candidate.ver,
requirements=requirement.ver,
)
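# Illustrative note (not part of the original module): the pre-release guard
# above means, assuming no --pre flag and no matching user pin:
#
#   requirement.ver == '*'         and candidate.ver == '2.0.0-rc1' -> False
#   requirement.ver == '2.0.0-rc1' and candidate.ver == '2.0.0-rc1' -> True
#
# i.e. pre-releases are only matched when explicitly pinned or requested.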
def get_dependencies(self, candidate):
# type: (Candidate) -> list[Candidate]
r"""Get direct dependencies of a candidate.
:returns: A collection of requirements that `candidate` \
specifies as its dependencies.
"""
# FIXME: If there's several galaxy servers set, there may be a
# FIXME: situation when the metadata of the same collection
# FIXME: differs. So how do we resolve this case? Priority?
# FIXME: Taking into account a pinned hash? Exploding on
# FIXME: any differences?
# NOTE: The underlying implementation currently uses the first found
req_map = self._api_proxy.get_collection_dependencies(candidate)
# NOTE: This guard expression MUST perform an early exit only
# NOTE: after the `get_collection_dependencies()` call because
# NOTE: internally it populates the artifact URL of the candidate,
# NOTE: its SHA hash and the Galaxy API token. These are still
# NOTE: necessary with `--no-deps` because even with the disabled
# NOTE: dependency resolution the outer layer will still need to
# NOTE: know how to download and validate the artifact.
#
# NOTE: Virtual candidates should always return dependencies
# NOTE: because they are ephemeral and non-installable.
if not self._with_deps and not candidate.is_virtual:
return []
return [
self._make_req_from_dict({'name': dep_name, 'version': dep_req})
for dep_name, dep_req in req_map.items()
]
# Classes to handle resolvelib API changes between minor versions for 0.X
class CollectionDependencyProvider050(CollectionDependencyProviderBase):
def find_matches(self, requirements): # type: ignore[override]
# type: (list[Requirement]) -> list[Candidate]
return self._find_matches(requirements)
def get_preference(self, resolution, candidates, information): # type: ignore[override]
# type: (t.Optional[Candidate], list[Candidate], list[t.NamedTuple]) -> t.Union[float, int]
return self._get_preference(candidates)
class CollectionDependencyProvider060(CollectionDependencyProviderBase):
def find_matches(self, identifier, requirements, incompatibilities): # type: ignore[override]
# type: (str, t.Mapping[str, t.Iterator[Requirement]], t.Mapping[str, t.Iterator[Requirement]]) -> list[Candidate]
return [
match for match in self._find_matches(list(requirements[identifier]))
if not any(match.ver == incompat.ver for incompat in incompatibilities[identifier])
]
def get_preference(self, resolution, candidates, information): # type: ignore[override]
# type: (t.Optional[Candidate], list[Candidate], list[t.NamedTuple]) -> t.Union[float, int]
return self._get_preference(candidates)
class CollectionDependencyProvider070(CollectionDependencyProvider060):
def get_preference(self, identifier, resolutions, candidates, information): # type: ignore[override]
# type: (str, t.Mapping[str, Candidate], t.Mapping[str, t.Iterator[Candidate]], t.Iterator[t.NamedTuple]) -> t.Union[float, int]
return self._get_preference(list(candidates[identifier]))
class CollectionDependencyProvider080(CollectionDependencyProvider060):
def get_preference(self, identifier, resolutions, candidates, information, backtrack_causes): # type: ignore[override]
# type: (str, t.Mapping[str, Candidate], t.Mapping[str, t.Iterator[Candidate]], t.Iterator[t.NamedTuple], t.Sequence) -> t.Union[float, int]
return self._get_preference(list(candidates[identifier]))
def _get_provider(): # type: () -> CollectionDependencyProviderBase
if RESOLVELIB_VERSION >= SemanticVersion("0.8.0"):
return CollectionDependencyProvider080
if RESOLVELIB_VERSION >= SemanticVersion("0.7.0"):
return CollectionDependencyProvider070
if RESOLVELIB_VERSION >= SemanticVersion("0.6.0"):
return CollectionDependencyProvider060
return CollectionDependencyProvider050
CollectionDependencyProvider = _get_provider()
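# Illustrative note (not part of the original module): the alias above is
# resolved once at import time from the installed resolvelib version, e.g.
# resolvelib 0.8.x through 1.0.x selects CollectionDependencyProvider080,
# while 0.5.3 falls back to CollectionDependencyProvider050.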
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,434 |
OpenBSD service_flags do not get set
|
##### SUMMARY
The "service" module somehow utterly fails to handle the "arguments" parameter on OpenBSD targets, never detecting any change, and so never doing anything.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
service module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 99
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = smart
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible.log
DISPLAY_ARGS_TO_STDOUT(/etc/ansible/ansible.cfg) = True
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
SHOW_CUSTOM_STATS(/etc/ansible/ansible.cfg) = True
```
##### OS / ENVIRONMENT
Ansible controller: CentOS 7.7.1908
Target: OpenBSD 6.6-STABLE
##### STEPS TO REPRODUCE
```yaml
[root@ansible playbooks]# cat openbsd-syslog.yml
---
- hosts: openbsd
remote_user: root
tasks:
- name: edit
lineinfile:
path: /etc/syslog.conf
line: '*,mark.* @{{ log_host }}:55514'
- name: adjust syslogd settings
service:
name: syslogd
state: started
arguments: -Zhr
- name: adjust syslogd settings and restart
service:
name: syslogd
state: restarted
# vim:set ts=2 sw=2 nu cursorcolumn:
```
##### EXPECTED RESULTS
Should set syslogd_flags=-Zhr in /etc/rc.conf.local by invoking "rcctl set syslogd flags -Zhr".
##### ACTUAL RESULTS
```paste below
[root@ansible playbooks]# ansible-playbook -vvvv openbsd-syslog.yml -l rancid.merlinoffice.local
ansible-playbook 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
/usr/lib/python2.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.24.1) or chardet (2.2.1) doesn't match a supported version!
RequestsDependencyWarning)
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/default.pyc
PLAYBOOK: openbsd-syslog.yml *********************************************************************************************************************************************************************
Positional arguments: openbsd-syslog.yml
subset: rancid.merlinoffice.local
become_method: sudo
inventory: (u'/etc/ansible/hosts',)
forks: 99
tags: (u'all',)
verbosity: 4
connection: smart
timeout: 10
1 plays in openbsd-syslog.yml
PLAY [openbsd] ***********************************************************************************************************************************************************************************
TASK [Gathering Facts gather_subset=['all'], gather_timeout=10] **********************************************************************************************************************************
task path: /etc/ansible/playbooks/openbsd-syslog.yml:2
Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py
Pipelining is enabled.
<rancid.merlinoffice.local> ESTABLISH SSH CONNECTION FOR USER: root
<rancid.merlinoffice.local> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/6a9df0150f rancid.merlinoffice.local '/bin/sh -c '"'"'/usr/local/bin/python2.7 && sleep 0'"'"''
<rancid.merlinoffice.local> (0, '\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"ansible_product_serial": "VMware-42 1d 80 7b 59 37 e8 e3-7e c6 4c d0 83 10 25 94", "ansible_product_version": "None", "ansible_fips": false, "ansible_service_mgr": "bsdinit", "ansible_user_id": "root", "ansible_selinux_python_present": false, "ansible_memtotal_mb": 2031, "gather_subset": ["all"], "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEDCENHjwCqvucund+mo7WlTqg1JeF9rGwkY+R5My/6aF2FVi20+zSo81jA+HQRcdEyfYCAry7zhL1BNI5qDm/Q=", "ansible_kernel_version": "GENERIC.MP#3", "ansible_distribution_version": "6.6", "ansible_domain": "merlinoffice.local", "ansible_virtualization_type": "VMware", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIB8EEog+UQDB9WhZybSi3C1OaL2CXkjUS/M8rR+ZV0pZ", "ansible_processor_cores": "2", "ansible_virtualization_role": "guest", "ansible_dns": {"nameservers": ["10.1.1.5", "10.1.1.6"], "domain": "merlin.mb.ca", "search": ["merlin.mb.ca", "merlin.ca", "merlinoffice.local"]}, "ansible_effective_group_id": 0, "ansible_processor": ["Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz", "Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz"], "ansible_date_time": {"weekday_number": "1", "iso8601_basic_short": "20200113T140923", "tz": "CST", "weeknumber": "02", "hour": "14", "year": "2020", "minute": "09", "tz_offset": "-0600", "month": "01", "epoch": "1578946163", "iso8601_micro": "2020-01-13T20:09:23.896760Z", "weekday": "Monday", "time": "14:09:23", "date": "2020-01-13", "iso8601": "2020-01-13T20:09:23Z", "day": "13", "iso8601_basic": "20200113T140923896480", "second": "23"}, "ansible_userspace_bits": "64", "ansible_architecture": "amd64", "ansible_real_user_id": 0, "ansible_default_ipv4": {"status": "active", "macaddress": "00:50:56:9d:5b:e2", "network": "10.1.1.0", "media": "Ethernet", "mtu": "1500", "broadcast": "10.1.1.255", "interface": "vic0", "netmask": "255.255.255.0", "flags": ["UP", "BROADCAST", "RUNNING", "SIMPLEX", "MULTICAST"], "address": "10.1.1.23", "device": "vic0", "media_select": "autoselect", "type": "ether", "gateway": "10.1.1.1"}, "ansible_default_ipv6": {"status": "active", "macaddress": "00:50:56:9d:5b:e2", "media": "Ethernet", "mtu": "1500", "interface": "vic0", "prefix": "64", "flags": ["UP", "BROADCAST", "RUNNING", "SIMPLEX", "MULTICAST"], "address": "fe80::250:56ff:fe9d:5be2%vic0", "device": "vic0", "scope": "0x1", "media_select": "autoselect", "type": "ether", "gateway": "2620:b0:0:3::1"}, "ansible_user_gid": 0, "ansible_system_vendor": "VMware, Inc.", "ansible_apparmor": {"status": "disabled"}, "ansible_effective_user_id": 0, "ansible_distribution_release": "release", "ansible_mounts": [{"block_used": 2147744, "size_total": 21132113920, "block_total": 10318415, "mount": "/", "block_available": 8170671, "size_available": 16733534208, "fstype": "ffs", "inode_total": 2650366, "options": "rw,wxallowed,sync", "device": "cd72b15d8f2e1e30.a", "inode_used": 303019, "block_size": 16384, "inode_available": 2347347}, {"block_used": 8590976, "size_total": 21132113920, "block_total": 41273660, "mount": "/var/www/rancid", "block_available": 32682684, "size_available": 16733534208, "fstype": "nfs", "inode_total": 2650366, "options": "auto,ro,soft,intr", "device": "localhost:/var/rancid", "inode_used": 303019, "block_size": 8192, "inode_available": 2347347}], "ansible_selinux": {"status": "Missing selinux Python library"}, 
"ansible_os_family": "OpenBSD", "ansible_vic0": {"status": "active", "macaddress": "00:50:56:9d:5b:e2", "media": "Ethernet", "mtu": "1500", "flags": ["UP", "BROADCAST", "RUNNING", "SIMPLEX", "MULTICAST"], "ipv4": [{"broadcast": "10.1.1.255", "netmask": "255.255.255.0", "network": "10.1.1.0", "address": "10.1.1.23"}], "ipv6": [{"scope": "0x1", "prefix": "64", "address": "fe80::250:56ff:fe9d:5be2%vic0"}, {"prefix": "64", "address": "2620:b0:0:3::23"}], "device": "vic0", "media_select": "autoselect", "type": "ether"}, "ansible_product_uuid": "421d807b-5937-e8e3-7ec6-4cd083102594", "ansible_fqdn": "rancid.merlinoffice.local", "ansible_product_name": "VMware Virtual Platform", "ansible_pkg_mgr": "openbsd_pkg", "ansible_memfree_mb": 0, "ansible_devices": ["sd0:cd72b15d8f2e1e30"], "ansible_user_uid": 0, "ansible_fibre_channel_wwn": [], "ansible_distribution": "OpenBSD", "ansible_user_dir": "/root", "ansible_env": {"SHELL": "/bin/ksh", "SSH_CLIENT": "2620:b0:0:3::41 40830 22", "PWD": "/root", "LOGNAME": "root", "USER": "root", "PATH": "/usr/bin:/bin:/usr/sbin:/sbin:/usr/X11R6/bin:/usr/local/bin:/usr/local/sbin", "MAIL": "/var/mail/root", "SSH_CONNECTION": "2620:b0:0:3::41 40830 2620:b0:0:3::23 22", "HOME": "/root", "_": "/bin/sh"}, "ansible_ssh_host_key_dsa_public": "AAAAB3NzaC1kc3MAAACBAMLzajIzlt2GEKtoldLCZKFz65LUgTb3auiqJMewPTu10gwYrAUkt3XUX1lqXxw9114eMe0Adema30fUL6cOtJ8SWVEMaBaL9Sj7dogXRKNllGP8neebFYzOwE6Sx6GeROfJ+YxAmLSrot1mi3QzxUuKzggi3W8+1HefV+aIQB/DAAAAFQDJfqp7krfqsLxJMwZdjDKkndNIdwAAAIEApdkWCeKvKWqWIRHxA5t+dfzDYYba7QQ/rwu4WKS484PBxClUHOQtEVlkr9Y7VEuFWieDNJeqGDMR72h6j4biQgQsguA6bifhMk4+7A8FyeN/0jjH1m++tFyddwaFecXKnFQof6S/oazAbZMGsazCj3QZHKvgPrLxlQgIbXZ9sIUAAACBAK5Kf1jqJUYCwc5GV6LbPJC6bu1hFM3mRbvsEbZdYhCJFGVaQCe4SdM5EUEb9KogTctQlnGV6ZFuyOZ7s7zgnGAYrr62vi2tYpcbdy5EPZOlLdz0jFT+03P/CqCC8QKcesC5Te1i8IwbjXt2XpKRCm8ToCc469eG8Lo3UiDKXIJT", "module_setup": true, "ansible_iscsi_iqn": "", "ansible_hostname": "rancid", "ansible_processor_count": "2", "ansible_lo0": {"macaddress": "unknown", "mtu": "32768", "flags": ["UP", "LOOPBACK", "RUNNING", "MULTICAST"], "ipv4": [{"broadcast": "127.255.255.255", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}], "ipv6": [{"prefix": "128", "address": "::1"}, {"scope": "0x3", "prefix": "64", "address": "fe80::1%lo0"}], "device": "lo0", "type": "loopback"}, "ansible_real_group_id": 0, "ansible_lsb": {}, "ansible_all_ipv6_addresses": ["fe80::250:56ff:fe9d:5be2%vic0", "2620:b0:0:3::23"], "ansible_interfaces": ["lo0", "vic0"], "ansible_local": {}, "ansible_machine": "amd64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDpn65VAyOKf9FRdOILke6D+6pEp2WDtlXcQobXh4w50Zid/pG4B3t1cGjaPelycQBMOPyT8qnpDnv0QxPvaQdLjgRsl44+jObWT+pakM+7r+TW8o/5st9W/Dkfz/fQeQrvuDpGeiV443V9CvOcl2j/fA/J4yh/JCzSmXFFO+kp/6BUKD0e6eldyw7iaS6LkxwGbZNLsDx3OHD7WmLRva8BMgUHQN0HoELdPSHI3q9Mqy5dJcsLjZwn0hV4kgP3RKWEvJq9Jm5VgsRbHl8Z1R+P74zBjBZupGCM/Iqu1SIUESfNtZ1btYU/BtScYyj2P4/nFC5eo472x0jAdMdjbXrP", "ansible_user_gecos": "Charlie &", "ansible_python": {"executable": "/usr/local/bin/python2.7", "version": {"micro": 16, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 16, "final", 0]}, "ansible_kernel": "6.6", "ansible_is_chroot": false, "ansible_hostnqn": "", "ansible_nodename": "rancid.merlinoffice.local", "ansible_system": "OpenBSD", "ansible_user_shell": "/bin/ksh", "ansible_all_ipv4_addresses": ["10.1.1.23"], "ansible_python_version": "2.7.16"}}\n', 'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 
Jan 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 19690\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
ok: [rancid.merlinoffice.local]
META: ran handlers
TASK [edit path=/etc/syslog.conf, line=*,mark.* @{{ log_host }}:55514] ***************************************************************************************************************************
task path: /etc/ansible/playbooks/openbsd-syslog.yml:6
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/lineinfile.py
Pipelining is enabled.
<rancid.merlinoffice.local> ESTABLISH SSH CONNECTION FOR USER: root
<rancid.merlinoffice.local> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/6a9df0150f rancid.merlinoffice.local '/bin/sh -c '"'"'/usr/local/bin/python2.7 && sleep 0'"'"''
<rancid.merlinoffice.local> (0, '\n{"msg": "", "diff": [{"after": "", "before_header": "/etc/syslog.conf (content)", "after_header": "/etc/syslog.conf (content)", "before": ""}, {"before_header": "/etc/syslog.conf (file attributes)", "after_header": "/etc/syslog.conf (file attributes)"}], "changed": false, "backup": "", "invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "backrefs": false, "insertafter": null, "path": "/etc/syslog.conf", "owner": null, "follow": false, "validate": null, "group": null, "insertbefore": null, "unsafe_writes": null, "create": false, "setype": null, "content": null, "serole": null, "state": "present", "selevel": null, "regexp": null, "line": "*,mark.* @graylog.merlinoffice.local:55514", "src": null, "seuser": null, "delimiter": null, "mode": null, "firstmatch": false, "attributes": null, "backup": false}}}\n', 'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 19690\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
ok: [rancid.merlinoffice.local] => {
"backup": "",
"changed": false,
"diff": [
{
"after": "",
"after_header": "/etc/syslog.conf (content)",
"before": "",
"before_header": "/etc/syslog.conf (content)"
},
{
"after_header": "/etc/syslog.conf (file attributes)",
"before_header": "/etc/syslog.conf (file attributes)"
}
],
"invocation": {
"module_args": {
"attributes": null,
"backrefs": false,
"backup": false,
"content": null,
"create": false,
"delimiter": null,
"directory_mode": null,
"firstmatch": false,
"follow": false,
"force": null,
"group": null,
"insertafter": null,
"insertbefore": null,
"line": "*,mark.* @graylog.merlinoffice.local:55514",
"mode": null,
"owner": null,
"path": "/etc/syslog.conf",
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"state": "present",
"unsafe_writes": null,
"validate": null
}
},
"msg": ""
}
TASK [adjust syslogd settings state=started, name=syslogd, arguments=-Zhr] ***********************************************************************************************************************
task path: /etc/ansible/playbooks/openbsd-syslog.yml:11
Running service
Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/service.py
Pipelining is enabled.
<rancid.merlinoffice.local> ESTABLISH SSH CONNECTION FOR USER: root
<rancid.merlinoffice.local> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/6a9df0150f rancid.merlinoffice.local '/bin/sh -c '"'"'/usr/local/bin/python2.7 && sleep 0'"'"''
<rancid.merlinoffice.local> (0, '\n{"invocation": {"module_args": {"name": "syslogd", "pattern": null, "enabled": null, "state": "started", "sleep": null, "arguments": "-Zhr", "runlevel": "default"}}, "state": "started", "changed": false, "name": "syslogd"}\n', 'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 19690\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
ok: [rancid.merlinoffice.local] => {
"changed": false,
"invocation": {
"module_args": {
"arguments": "-Zhr",
"enabled": null,
"name": "syslogd",
"pattern": null,
"runlevel": "default",
"sleep": null,
"state": "started"
}
},
"name": "syslogd",
"state": "started"
}
TASK [adjust syslogd settings and restart state=restarted, name=syslogd] *************************************************************************************************************************
task path: /etc/ansible/playbooks/openbsd-syslog.yml:17
Running service
Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/service.py
Pipelining is enabled.
<rancid.merlinoffice.local> ESTABLISH SSH CONNECTION FOR USER: root
<rancid.merlinoffice.local> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/6a9df0150f rancid.merlinoffice.local '/bin/sh -c '"'"'/usr/local/bin/python2.7 && sleep 0'"'"''
<rancid.merlinoffice.local> (0, '\n{"invocation": {"module_args": {"name": "syslogd", "pattern": null, "enabled": null, "state": "restarted", "sleep": null, "arguments": "", "runlevel": "default"}}, "state": "started", "changed": true, "name": "syslogd"}\n', 'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 19690\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
changed: [rancid.merlinoffice.local] => {
"changed": true,
"invocation": {
"module_args": {
"arguments": "",
"enabled": null,
"name": "syslogd",
"pattern": null,
"runlevel": "default",
"sleep": null,
"state": "restarted"
}
},
"name": "syslogd",
"state": "started"
}
META: ran handlers
META: ran handlers
PLAY RECAP ***************************************************************************************************************************************************************************************
rancid.merlinoffice.local : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/66434
|
https://github.com/ansible/ansible/pull/80628
|
d5e2e7a0a8ca9017a091922648430374539f878b
|
9bd698b3a78ad9abc9d0b1775d8f67747a13b295
| 2020-01-13T20:09:45Z |
python
| 2023-04-27T20:15:24Z |
changelogs/fragments/service_fix_obsd.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,434 |
OpenBSD service_flags do not get set
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The "service" module somehow utterly fails to handle the "arguments" parameter on OpenBSD targets, never detecting any change, and so never doing anything.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
service module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 99
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = smart
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible.log
DISPLAY_ARGS_TO_STDOUT(/etc/ansible/ansible.cfg) = True
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
SHOW_CUSTOM_STATS(/etc/ansible/ansible.cfg) = True
```
##### OS / ENVIRONMENT
Ansible controller: CentOS 7.7.1908
Target: OpenBSD 6.6-STABLE
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
[root@ansible playbooks]# cat openbsd-syslog.yml
---
- hosts: openbsd
remote_user: root
tasks:
- name: edit
lineinfile:
path: /etc/syslog.conf
line: '*,mark.* @{{ log_host }}:55514'
- name: adjust syslogd settings
service:
name: syslogd
state: started
arguments: -Zhr
- name: adjust syslogd settings and restart
service:
name: syslogd
state: restarted
# vim:set ts=2 sw=2 nu cursorcolumn:
```
##### EXPECTED RESULTS
Should change /etc/rc.conf.local : syslogd_flags=-Zhr, by invoking "rcctl set syslogd flags -Zhr"
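On OpenBSD this amounts to an `rcctl` round trip; a minimal illustrative sketch (standalone helper names assumed for clarity, not the module's actual code) of the expected behaviour:
```python
import subprocess

def get_service_flags(name):
    # 'rcctl get <name> flags' prints the currently configured flags.
    out = subprocess.check_output(['rcctl', 'get', name, 'flags'],
                                  universal_newlines=True)
    return out.strip()

def set_service_flags(name, flags):
    # 'rcctl set <name> flags <flags>' persists the flags, which end up
    # as <name>_flags=<flags> in /etc/rc.conf.local.
    subprocess.check_call(['rcctl', 'set', name, 'flags', flags])

# Idempotent update: only write (and report a change) when flags differ.
if get_service_flags('syslogd') != '-Zhr':
    set_service_flags('syslogd', '-Zhr')
```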
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
[root@ansible playbooks]# ansible-playbook -vvvv openbsd-syslog.yml -l rancid.merlinoffice.local
ansible-playbook 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
/usr/lib/python2.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.24.1) or chardet (2.2.1) doesn't match a supported version!
RequestsDependencyWarning)
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/default.pyc
PLAYBOOK: openbsd-syslog.yml *********************************************************************************************************************************************************************
Positional arguments: openbsd-syslog.yml
subset: rancid.merlinoffice.local
become_method: sudo
inventory: (u'/etc/ansible/hosts',)
forks: 99
tags: (u'all',)
verbosity: 4
connection: smart
timeout: 10
1 plays in openbsd-syslog.yml
PLAY [openbsd] ***********************************************************************************************************************************************************************************
TASK [Gathering Facts gather_subset=['all'], gather_timeout=10] **********************************************************************************************************************************
task path: /etc/ansible/playbooks/openbsd-syslog.yml:2
Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py
Pipelining is enabled.
<rancid.merlinoffice.local> ESTABLISH SSH CONNECTION FOR USER: root
<rancid.merlinoffice.local> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/6a9df0150f rancid.merlinoffice.local '/bin/sh -c '"'"'/usr/local/bin/python2.7 && sleep 0'"'"''
<rancid.merlinoffice.local> (0, '\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"ansible_product_serial": "VMware-42 1d 80 7b 59 37 e8 e3-7e c6 4c d0 83 10 25 94", "ansible_product_version": "None", "ansible_fips": false, "ansible_service_mgr": "bsdinit", "ansible_user_id": "root", "ansible_selinux_python_present": false, "ansible_memtotal_mb": 2031, "gather_subset": ["all"], "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEDCENHjwCqvucund+mo7WlTqg1JeF9rGwkY+R5My/6aF2FVi20+zSo81jA+HQRcdEyfYCAry7zhL1BNI5qDm/Q=", "ansible_kernel_version": "GENERIC.MP#3", "ansible_distribution_version": "6.6", "ansible_domain": "merlinoffice.local", "ansible_virtualization_type": "VMware", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIB8EEog+UQDB9WhZybSi3C1OaL2CXkjUS/M8rR+ZV0pZ", "ansible_processor_cores": "2", "ansible_virtualization_role": "guest", "ansible_dns": {"nameservers": ["10.1.1.5", "10.1.1.6"], "domain": "merlin.mb.ca", "search": ["merlin.mb.ca", "merlin.ca", "merlinoffice.local"]}, "ansible_effective_group_id": 0, "ansible_processor": ["Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz", "Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz"], "ansible_date_time": {"weekday_number": "1", "iso8601_basic_short": "20200113T140923", "tz": "CST", "weeknumber": "02", "hour": "14", "year": "2020", "minute": "09", "tz_offset": "-0600", "month": "01", "epoch": "1578946163", "iso8601_micro": "2020-01-13T20:09:23.896760Z", "weekday": "Monday", "time": "14:09:23", "date": "2020-01-13", "iso8601": "2020-01-13T20:09:23Z", "day": "13", "iso8601_basic": "20200113T140923896480", "second": "23"}, "ansible_userspace_bits": "64", "ansible_architecture": "amd64", "ansible_real_user_id": 0, "ansible_default_ipv4": {"status": "active", "macaddress": "00:50:56:9d:5b:e2", "network": "10.1.1.0", "media": "Ethernet", "mtu": "1500", "broadcast": "10.1.1.255", "interface": "vic0", "netmask": "255.255.255.0", "flags": ["UP", "BROADCAST", "RUNNING", "SIMPLEX", "MULTICAST"], "address": "10.1.1.23", "device": "vic0", "media_select": "autoselect", "type": "ether", "gateway": "10.1.1.1"}, "ansible_default_ipv6": {"status": "active", "macaddress": "00:50:56:9d:5b:e2", "media": "Ethernet", "mtu": "1500", "interface": "vic0", "prefix": "64", "flags": ["UP", "BROADCAST", "RUNNING", "SIMPLEX", "MULTICAST"], "address": "fe80::250:56ff:fe9d:5be2%vic0", "device": "vic0", "scope": "0x1", "media_select": "autoselect", "type": "ether", "gateway": "2620:b0:0:3::1"}, "ansible_user_gid": 0, "ansible_system_vendor": "VMware, Inc.", "ansible_apparmor": {"status": "disabled"}, "ansible_effective_user_id": 0, "ansible_distribution_release": "release", "ansible_mounts": [{"block_used": 2147744, "size_total": 21132113920, "block_total": 10318415, "mount": "/", "block_available": 8170671, "size_available": 16733534208, "fstype": "ffs", "inode_total": 2650366, "options": "rw,wxallowed,sync", "device": "cd72b15d8f2e1e30.a", "inode_used": 303019, "block_size": 16384, "inode_available": 2347347}, {"block_used": 8590976, "size_total": 21132113920, "block_total": 41273660, "mount": "/var/www/rancid", "block_available": 32682684, "size_available": 16733534208, "fstype": "nfs", "inode_total": 2650366, "options": "auto,ro,soft,intr", "device": "localhost:/var/rancid", "inode_used": 303019, "block_size": 8192, "inode_available": 2347347}], "ansible_selinux": {"status": "Missing selinux Python library"}, 
"ansible_os_family": "OpenBSD", "ansible_vic0": {"status": "active", "macaddress": "00:50:56:9d:5b:e2", "media": "Ethernet", "mtu": "1500", "flags": ["UP", "BROADCAST", "RUNNING", "SIMPLEX", "MULTICAST"], "ipv4": [{"broadcast": "10.1.1.255", "netmask": "255.255.255.0", "network": "10.1.1.0", "address": "10.1.1.23"}], "ipv6": [{"scope": "0x1", "prefix": "64", "address": "fe80::250:56ff:fe9d:5be2%vic0"}, {"prefix": "64", "address": "2620:b0:0:3::23"}], "device": "vic0", "media_select": "autoselect", "type": "ether"}, "ansible_product_uuid": "421d807b-5937-e8e3-7ec6-4cd083102594", "ansible_fqdn": "rancid.merlinoffice.local", "ansible_product_name": "VMware Virtual Platform", "ansible_pkg_mgr": "openbsd_pkg", "ansible_memfree_mb": 0, "ansible_devices": ["sd0:cd72b15d8f2e1e30"], "ansible_user_uid": 0, "ansible_fibre_channel_wwn": [], "ansible_distribution": "OpenBSD", "ansible_user_dir": "/root", "ansible_env": {"SHELL": "/bin/ksh", "SSH_CLIENT": "2620:b0:0:3::41 40830 22", "PWD": "/root", "LOGNAME": "root", "USER": "root", "PATH": "/usr/bin:/bin:/usr/sbin:/sbin:/usr/X11R6/bin:/usr/local/bin:/usr/local/sbin", "MAIL": "/var/mail/root", "SSH_CONNECTION": "2620:b0:0:3::41 40830 2620:b0:0:3::23 22", "HOME": "/root", "_": "/bin/sh"}, "ansible_ssh_host_key_dsa_public": "AAAAB3NzaC1kc3MAAACBAMLzajIzlt2GEKtoldLCZKFz65LUgTb3auiqJMewPTu10gwYrAUkt3XUX1lqXxw9114eMe0Adema30fUL6cOtJ8SWVEMaBaL9Sj7dogXRKNllGP8neebFYzOwE6Sx6GeROfJ+YxAmLSrot1mi3QzxUuKzggi3W8+1HefV+aIQB/DAAAAFQDJfqp7krfqsLxJMwZdjDKkndNIdwAAAIEApdkWCeKvKWqWIRHxA5t+dfzDYYba7QQ/rwu4WKS484PBxClUHOQtEVlkr9Y7VEuFWieDNJeqGDMR72h6j4biQgQsguA6bifhMk4+7A8FyeN/0jjH1m++tFyddwaFecXKnFQof6S/oazAbZMGsazCj3QZHKvgPrLxlQgIbXZ9sIUAAACBAK5Kf1jqJUYCwc5GV6LbPJC6bu1hFM3mRbvsEbZdYhCJFGVaQCe4SdM5EUEb9KogTctQlnGV6ZFuyOZ7s7zgnGAYrr62vi2tYpcbdy5EPZOlLdz0jFT+03P/CqCC8QKcesC5Te1i8IwbjXt2XpKRCm8ToCc469eG8Lo3UiDKXIJT", "module_setup": true, "ansible_iscsi_iqn": "", "ansible_hostname": "rancid", "ansible_processor_count": "2", "ansible_lo0": {"macaddress": "unknown", "mtu": "32768", "flags": ["UP", "LOOPBACK", "RUNNING", "MULTICAST"], "ipv4": [{"broadcast": "127.255.255.255", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}], "ipv6": [{"prefix": "128", "address": "::1"}, {"scope": "0x3", "prefix": "64", "address": "fe80::1%lo0"}], "device": "lo0", "type": "loopback"}, "ansible_real_group_id": 0, "ansible_lsb": {}, "ansible_all_ipv6_addresses": ["fe80::250:56ff:fe9d:5be2%vic0", "2620:b0:0:3::23"], "ansible_interfaces": ["lo0", "vic0"], "ansible_local": {}, "ansible_machine": "amd64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDpn65VAyOKf9FRdOILke6D+6pEp2WDtlXcQobXh4w50Zid/pG4B3t1cGjaPelycQBMOPyT8qnpDnv0QxPvaQdLjgRsl44+jObWT+pakM+7r+TW8o/5st9W/Dkfz/fQeQrvuDpGeiV443V9CvOcl2j/fA/J4yh/JCzSmXFFO+kp/6BUKD0e6eldyw7iaS6LkxwGbZNLsDx3OHD7WmLRva8BMgUHQN0HoELdPSHI3q9Mqy5dJcsLjZwn0hV4kgP3RKWEvJq9Jm5VgsRbHl8Z1R+P74zBjBZupGCM/Iqu1SIUESfNtZ1btYU/BtScYyj2P4/nFC5eo472x0jAdMdjbXrP", "ansible_user_gecos": "Charlie &", "ansible_python": {"executable": "/usr/local/bin/python2.7", "version": {"micro": 16, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 16, "final", 0]}, "ansible_kernel": "6.6", "ansible_is_chroot": false, "ansible_hostnqn": "", "ansible_nodename": "rancid.merlinoffice.local", "ansible_system": "OpenBSD", "ansible_user_shell": "/bin/ksh", "ansible_all_ipv4_addresses": ["10.1.1.23"], "ansible_python_version": "2.7.16"}}\n', 'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 
Jan 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 19690\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
ok: [rancid.merlinoffice.local]
META: ran handlers
TASK [edit path=/etc/syslog.conf, line=*,mark.* @{{ log_host }}:55514] ***************************************************************************************************************************
task path: /etc/ansible/playbooks/openbsd-syslog.yml:6
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/lineinfile.py
Pipelining is enabled.
<rancid.merlinoffice.local> ESTABLISH SSH CONNECTION FOR USER: root
<rancid.merlinoffice.local> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/6a9df0150f rancid.merlinoffice.local '/bin/sh -c '"'"'/usr/local/bin/python2.7 && sleep 0'"'"''
<rancid.merlinoffice.local> (0, '\n{"msg": "", "diff": [{"after": "", "before_header": "/etc/syslog.conf (content)", "after_header": "/etc/syslog.conf (content)", "before": ""}, {"before_header": "/etc/syslog.conf (file attributes)", "after_header": "/etc/syslog.conf (file attributes)"}], "changed": false, "backup": "", "invocation": {"module_args": {"directory_mode": null, "force": null, "remote_src": null, "backrefs": false, "insertafter": null, "path": "/etc/syslog.conf", "owner": null, "follow": false, "validate": null, "group": null, "insertbefore": null, "unsafe_writes": null, "create": false, "setype": null, "content": null, "serole": null, "state": "present", "selevel": null, "regexp": null, "line": "*,mark.* @graylog.merlinoffice.local:55514", "src": null, "seuser": null, "delimiter": null, "mode": null, "firstmatch": false, "attributes": null, "backup": false}}}\n', 'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 19690\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
ok: [rancid.merlinoffice.local] => {
"backup": "",
"changed": false,
"diff": [
{
"after": "",
"after_header": "/etc/syslog.conf (content)",
"before": "",
"before_header": "/etc/syslog.conf (content)"
},
{
"after_header": "/etc/syslog.conf (file attributes)",
"before_header": "/etc/syslog.conf (file attributes)"
}
],
"invocation": {
"module_args": {
"attributes": null,
"backrefs": false,
"backup": false,
"content": null,
"create": false,
"delimiter": null,
"directory_mode": null,
"firstmatch": false,
"follow": false,
"force": null,
"group": null,
"insertafter": null,
"insertbefore": null,
"line": "*,mark.* @graylog.merlinoffice.local:55514",
"mode": null,
"owner": null,
"path": "/etc/syslog.conf",
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"state": "present",
"unsafe_writes": null,
"validate": null
}
},
"msg": ""
}
TASK [adjust syslogd settings state=started, name=syslogd, arguments=-Zhr] ***********************************************************************************************************************
task path: /etc/ansible/playbooks/openbsd-syslog.yml:11
Running service
Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/service.py
Pipelining is enabled.
<rancid.merlinoffice.local> ESTABLISH SSH CONNECTION FOR USER: root
<rancid.merlinoffice.local> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/6a9df0150f rancid.merlinoffice.local '/bin/sh -c '"'"'/usr/local/bin/python2.7 && sleep 0'"'"''
<rancid.merlinoffice.local> (0, '\n{"invocation": {"module_args": {"name": "syslogd", "pattern": null, "enabled": null, "state": "started", "sleep": null, "arguments": "-Zhr", "runlevel": "default"}}, "state": "started", "changed": false, "name": "syslogd"}\n', 'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 19690\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
ok: [rancid.merlinoffice.local] => {
"changed": false,
"invocation": {
"module_args": {
"arguments": "-Zhr",
"enabled": null,
"name": "syslogd",
"pattern": null,
"runlevel": "default",
"sleep": null,
"state": "started"
}
},
"name": "syslogd",
"state": "started"
}
TASK [adjust syslogd settings and restart state=restarted, name=syslogd] *************************************************************************************************************************
task path: /etc/ansible/playbooks/openbsd-syslog.yml:17
Running service
Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/service.py
Pipelining is enabled.
<rancid.merlinoffice.local> ESTABLISH SSH CONNECTION FOR USER: root
<rancid.merlinoffice.local> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="root"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/6a9df0150f rancid.merlinoffice.local '/bin/sh -c '"'"'/usr/local/bin/python2.7 && sleep 0'"'"''
<rancid.merlinoffice.local> (0, '\n{"invocation": {"module_args": {"name": "syslogd", "pattern": null, "enabled": null, "state": "restarted", "sleep": null, "arguments": "", "runlevel": "default"}}, "state": "started", "changed": true, "name": "syslogd"}\n', 'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 19690\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
changed: [rancid.merlinoffice.local] => {
"changed": true,
"invocation": {
"module_args": {
"arguments": "",
"enabled": null,
"name": "syslogd",
"pattern": null,
"runlevel": "default",
"sleep": null,
"state": "restarted"
}
},
"name": "syslogd",
"state": "started"
}
META: ran handlers
META: ran handlers
PLAY RECAP ***************************************************************************************************************************************************************************************
rancid.merlinoffice.local : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/66434
|
https://github.com/ansible/ansible/pull/80628
|
d5e2e7a0a8ca9017a091922648430374539f878b
|
9bd698b3a78ad9abc9d0b1775d8f67747a13b295
| 2020-01-13T20:09:45Z |
python
| 2023-04-27T20:15:24Z |
lib/ansible/modules/service.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: service
version_added: "0.1"
short_description: Manage services
description:
- Controls services on remote hosts. Supported init systems include BSD init,
OpenRC, SysV, Solaris SMF, systemd, upstart.
- This module acts as a proxy to the underlying service manager module. While all arguments will be passed to the
underlying module, not all modules support the same arguments. This documentation only covers the minimum intersection
of module arguments that all service manager modules support.
- This module is a proxy for multiple more specific service manager modules
(such as M(ansible.builtin.systemd) and M(ansible.builtin.sysvinit)).
This allows management of a heterogeneous environment of machines without creating a specific task for
each service manager. The module to be executed is determined by the I(use) option, which defaults to the
service manager discovered by M(ansible.builtin.setup). If C(setup) was not yet run, this module may run it.
- For Windows targets, use the M(ansible.windows.win_service) module instead.
options:
name:
description:
- Name of the service.
type: str
required: true
state:
description:
- C(started)/C(stopped) are idempotent actions that will not run
commands unless necessary.
- C(restarted) will always bounce the service.
- C(reloaded) will always reload.
      - B(At least one of state and enabled is required.)
- Note that reloaded will start the service if it is not already started,
even if your chosen init system wouldn't normally.
type: str
choices: [ reloaded, restarted, started, stopped ]
sleep:
description:
- If the service is being C(restarted) then sleep this many seconds
between the stop and start command.
- This helps to work around badly-behaving init scripts that exit immediately
after signaling a process to stop.
      - Not all service managers support sleep, e.g. when using systemd this setting will be ignored.
type: int
version_added: "1.3"
pattern:
description:
- If the service does not respond to the status command, name a
substring to look for as would be found in the output of the I(ps)
command as a stand-in for a status result.
- If the string is found, the service will be assumed to be started.
      - When targeting remote hosts that use systemd, this setting will be ignored.
type: str
version_added: "0.7"
enabled:
description:
- Whether the service should start on boot.
      - B(At least one of state and enabled is required.)
type: bool
runlevel:
description:
- For OpenRC init scripts (e.g. Gentoo) only.
- The runlevel that this service belongs to.
      - When targeting remote hosts that use systemd, this setting will be ignored.
type: str
default: default
arguments:
description:
- Additional arguments provided on the command line.
      - When targeting remote hosts that use systemd, this setting will be ignored.
type: str
default: ''
aliases: [ args ]
use:
description:
      - The service module actually uses system-specific modules, normally through auto-detection; this setting can force a specific module.
      - Normally it uses the value of the 'ansible_service_mgr' fact and falls back to the old 'service' module when no matching module is found.
type: str
default: auto
version_added: 2.2
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.flow
attributes:
action:
support: full
async:
support: full
bypass_host_loop:
support: none
check_mode:
details: support depends on the underlying plugin invoked
support: N/A
diff_mode:
details: support depends on the underlying plugin invoked
support: N/A
platform:
details: The support depends on the availability for the specific plugin for each platform and if fact gathering is able to detect it
platforms: all
notes:
- For AIX, group subsystem names can be used.
seealso:
- module: ansible.windows.win_service
author:
- Ansible Core Team
- Michael DeHaan
'''
EXAMPLES = r'''
- name: Start service httpd, if not started
ansible.builtin.service:
name: httpd
state: started
- name: Stop service httpd, if started
ansible.builtin.service:
name: httpd
state: stopped
- name: Restart service httpd, in all cases
ansible.builtin.service:
name: httpd
state: restarted
- name: Reload service httpd, in all cases
ansible.builtin.service:
name: httpd
state: reloaded
- name: Enable service httpd, and not touch the state
ansible.builtin.service:
name: httpd
enabled: yes
- name: Start service foo, based on running process /usr/bin/foo
ansible.builtin.service:
name: foo
pattern: /usr/bin/foo
state: started
- name: Restart network service for interface eth0
ansible.builtin.service:
name: network
state: restarted
args: eth0
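# Illustrative addition (not among the original examples): bypass
# auto-detection and force a specific backend service module
- name: Start service httpd via the systemd backend explicitly
  ansible.builtin.service:
    name: httpd
    state: started
    use: systemd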
'''
RETURN = r'''#'''
import glob
import json
import os
import platform
import re
import select
import shlex
import subprocess
import tempfile
import time
# The distutils module is not shipped with SUNWPython on Solaris.
# It's in the SUNWPython-devel package which also contains development files
# that don't belong on production boxes. Since our Solaris code doesn't
# depend on LooseVersion, do not import it on Solaris.
if platform.system() != 'SunOS':
from ansible.module_utils.compat.version import LooseVersion
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.sys_info import get_platform_subclass
from ansible.module_utils.service import fail_if_missing
from ansible.module_utils.six import PY2, b
class Service(object):
"""
This is the generic Service manipulation class that is subclassed
based on platform.
    A subclass should override the following action methods:
- get_service_tools
- service_enable
- get_service_status
- service_control
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(Service)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.name = module.params['name']
self.state = module.params['state']
self.sleep = module.params['sleep']
self.pattern = module.params['pattern']
self.enable = module.params['enabled']
self.runlevel = module.params['runlevel']
self.changed = False
self.running = None
self.crashed = None
self.action = None
self.svc_cmd = None
self.svc_initscript = None
self.svc_initctl = None
self.enable_cmd = None
self.arguments = module.params.get('arguments', '')
self.rcconf_file = None
self.rcconf_key = None
self.rcconf_value = None
self.svc_change = False
# ===========================================
# Platform specific methods (must be replaced by subclass).
def get_service_tools(self):
self.module.fail_json(msg="get_service_tools not implemented on target platform")
def service_enable(self):
self.module.fail_json(msg="service_enable not implemented on target platform")
def get_service_status(self):
self.module.fail_json(msg="get_service_status not implemented on target platform")
def service_control(self):
self.module.fail_json(msg="service_control not implemented on target platform")
# ===========================================
# Generic methods that should be used on all platforms.
def execute_command(self, cmd, daemonize=False):
locale = get_best_parsable_locale(self.module)
lang_env = dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale)
# Most things don't need to be daemonized
if not daemonize:
# chkconfig localizes messages and we're screen scraping so make
# sure we use the C locale
return self.module.run_command(cmd, environ_update=lang_env)
# This is complex because daemonization is hard for people.
# What we do is daemonize a part of this module, the daemon runs the
# command, picks up the return code and output, and returns it to the
# main process.
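        # Protocol: fork a child that double-forks into a true daemon, run
        # the command there, and send a JSON-encoded
        # [returncode, stdout, stderr] triple back over a pipe.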
pipe = os.pipe()
pid = os.fork()
if pid == 0:
os.close(pipe[0])
# Set stdin/stdout/stderr to /dev/null
fd = os.open(os.devnull, os.O_RDWR)
if fd != 0:
os.dup2(fd, 0)
if fd != 1:
os.dup2(fd, 1)
if fd != 2:
os.dup2(fd, 2)
if fd not in (0, 1, 2):
os.close(fd)
# Make us a daemon. Yes, that's all it takes.
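            # Classic double fork: fork + setsid() detaches us from the
            # controlling terminal, and the second fork ensures the daemon
            # can never reacquire one.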
pid = os.fork()
if pid > 0:
os._exit(0)
os.setsid()
os.chdir("/")
pid = os.fork()
if pid > 0:
os._exit(0)
# Start the command
if PY2:
# Python 2.6's shlex.split can't handle text strings correctly
cmd = to_bytes(cmd, errors='surrogate_or_strict')
cmd = shlex.split(cmd)
else:
                # Python 3.x's shlex.split can't handle bytes, so split the
                # text form and re-encode the pieces afterwards.
cmd = to_text(cmd, errors='surrogate_or_strict')
cmd = [to_bytes(c, errors='surrogate_or_strict') for c in shlex.split(cmd)]
# In either of the above cases, pass a list of byte strings to Popen
# chkconfig localizes messages and we're screen scraping so make
# sure we use the C locale
p = subprocess.Popen(cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=lang_env, preexec_fn=lambda: os.close(pipe[1]))
stdout = b("")
stderr = b("")
fds = [p.stdout, p.stderr]
# Wait for all output, or until the main process is dead and its output is done.
while fds:
rfd, wfd, efd = select.select(fds, [], fds, 1)
if not (rfd + wfd + efd) and p.poll() is not None:
break
if p.stdout in rfd:
dat = os.read(p.stdout.fileno(), 4096)
if not dat:
fds.remove(p.stdout)
stdout += dat
if p.stderr in rfd:
dat = os.read(p.stderr.fileno(), 4096)
if not dat:
fds.remove(p.stderr)
stderr += dat
p.wait()
# Return a JSON blob to parent
blob = json.dumps([p.returncode, to_text(stdout), to_text(stderr)])
os.write(pipe[1], to_bytes(blob, errors='surrogate_or_strict'))
os.close(pipe[1])
os._exit(0)
elif pid == -1:
self.module.fail_json(msg="unable to fork")
else:
os.close(pipe[1])
os.waitpid(pid, 0)
# Wait for data from daemon process and process it.
data = b("")
while True:
rfd, wfd, efd = select.select([pipe[0]], [], [pipe[0]])
if pipe[0] in rfd:
dat = os.read(pipe[0], 4096)
if not dat:
break
data += dat
return json.loads(to_text(data, errors='surrogate_or_strict'))
def check_ps(self):
# Set ps flags
if platform.system() == 'SunOS':
psflags = '-ef'
else:
psflags = 'auxww'
# Find ps binary
psbin = self.module.get_bin_path('ps', True)
(rc, psout, pserr) = self.execute_command('%s %s' % (psbin, psflags))
# If rc is 0, set running as appropriate
if rc == 0:
self.running = False
lines = psout.split("\n")
for line in lines:
if self.pattern in line and "pattern=" not in line:
# so as to not confuse ./hacking/test-module.py
self.running = True
break
def check_service_changed(self):
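        # A state change is needed when starting/reloading a stopped
        # service, stopping/reloading a running one, or always for
        # state=restarted.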
if self.state and self.running is None:
self.module.fail_json(msg="failed determining service state, possible typo of service name?")
# Find out if state has changed
if not self.running and self.state in ["reloaded", "started"]:
self.svc_change = True
elif self.running and self.state in ["reloaded", "stopped"]:
self.svc_change = True
elif self.state == "restarted":
self.svc_change = True
if self.module.check_mode and self.svc_change:
self.module.exit_json(changed=True, msg='service state changed')
def modify_service_state(self):
# Only do something if state will change
if self.svc_change:
# Control service
if self.state in ['started']:
self.action = "start"
elif not self.running and self.state == 'reloaded':
self.action = "start"
elif self.state == 'stopped':
self.action = "stop"
elif self.state == 'reloaded':
self.action = "reload"
elif self.state == 'restarted':
self.action = "restart"
if self.module.check_mode:
self.module.exit_json(changed=True, msg='changing service state')
return self.service_control()
else:
# If nothing needs to change just say all is well
rc = 0
err = ''
out = ''
return rc, out, err
def service_enable_rcconf(self):
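        # Shared helper for the *BSD subclasses: make sure self.rcconf_file
        # (e.g. /etc/rc.conf on FreeBSD) contains the line
        #     <rcconf_key>="<rcconf_value>"    e.g. syslogd_flags="-Zhr"
        # replacing or appending the entry, then moving the rewritten file
        # into place atomically.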
if self.rcconf_file is None or self.rcconf_key is None or self.rcconf_value is None:
self.module.fail_json(msg="service_enable_rcconf() requires rcconf_file, rcconf_key and rcconf_value")
self.changed = None
entry = '%s="%s"\n' % (self.rcconf_key, self.rcconf_value)
with open(self.rcconf_file, "r") as RCFILE:
new_rc_conf = []
# Build a list containing the possibly modified file.
for rcline in RCFILE:
# Parse line removing whitespaces, quotes, etc.
rcarray = shlex.split(rcline, comments=True)
if len(rcarray) >= 1 and '=' in rcarray[0]:
(key, value) = rcarray[0].split("=", 1)
if key == self.rcconf_key:
if value.upper() == self.rcconf_value:
# Since the proper entry already exists we can stop iterating.
self.changed = False
break
else:
# We found the key but the value is wrong, replace with new entry.
rcline = entry
self.changed = True
# Add line to the list.
new_rc_conf.append(rcline.strip() + '\n')
# If we did not see any trace of our entry we need to add it.
if self.changed is None:
new_rc_conf.append(entry)
self.changed = True
if self.changed is True:
if self.module.check_mode:
self.module.exit_json(changed=True, msg="changing service enablement")
# Create a temporary file next to the current rc.conf (so we stay on the same filesystem).
# This way the replacement operation is atomic.
rcconf_dir = os.path.dirname(self.rcconf_file)
rcconf_base = os.path.basename(self.rcconf_file)
(TMP_RCCONF, tmp_rcconf_file) = tempfile.mkstemp(dir=rcconf_dir, prefix="%s-" % rcconf_base)
# Write out the contents of the list into our temporary file.
for rcline in new_rc_conf:
os.write(TMP_RCCONF, rcline.encode())
# Close temporary file.
os.close(TMP_RCCONF)
# Replace previous rc.conf.
self.module.atomic_move(tmp_rcconf_file, self.rcconf_file)
class LinuxService(Service):
"""
This is the Linux Service manipulation class - it is currently supporting
a mixture of binaries and init scripts for controlling services started at
boot, as well as for controlling the current state.
"""
platform = 'Linux'
distribution = None
def get_service_tools(self):
paths = ['/sbin', '/usr/sbin', '/bin', '/usr/bin']
binaries = ['service', 'chkconfig', 'update-rc.d', 'rc-service', 'rc-update', 'initctl', 'systemctl', 'start', 'stop', 'restart', 'insserv']
initpaths = ['/etc/init.d']
location = dict()
for binary in binaries:
location[binary] = self.module.get_bin_path(binary, opt_dirs=paths)
for initdir in initpaths:
initscript = "%s/%s" % (initdir, self.name)
if os.path.isfile(initscript):
self.svc_initscript = initscript
def check_systemd():
# tools must be installed
if location.get('systemctl', False):
# this should show if systemd is the boot init system
# these mirror systemd's own sd_boot test http://www.freedesktop.org/software/systemd/man/sd_booted.html
for canary in ["/run/systemd/system/", "/dev/.run/systemd/", "/dev/.systemd/"]:
if os.path.exists(canary):
return True
            # If all else fails, check if init is the systemd command; use
            # comm, since cmdline could be a symlink.
            try:
                with open('/proc/1/comm', 'r') as f:
                    for line in f:
                        if 'systemd' in line:
                            return True
            except IOError:
                # If comm doesn't exist, old kernel, no systemd
                return False

            return False
# Locate a tool to enable/disable a service
if check_systemd():
# service is managed by systemd
self.__systemd_unit = self.name
self.svc_cmd = location['systemctl']
self.enable_cmd = location['systemctl']
elif location.get('initctl', False) and os.path.exists("/etc/init/%s.conf" % self.name):
# service is managed by upstart
self.enable_cmd = location['initctl']
# set the upstart version based on the output of 'initctl version'
self.upstart_version = LooseVersion('0.0.0')
try:
version_re = re.compile(r'\(upstart (.*)\)')
rc, stdout, stderr = self.module.run_command('%s version' % location['initctl'])
if rc == 0:
res = version_re.search(stdout)
if res:
self.upstart_version = LooseVersion(res.groups()[0])
except Exception:
pass # we'll use the default of 0.0.0
self.svc_cmd = location['initctl']
elif location.get('rc-service', False):
# service is managed by OpenRC
self.svc_cmd = location['rc-service']
self.enable_cmd = location['rc-update']
return # already have service start/stop tool too!
elif self.svc_initscript:
            # service is managed with SysV init scripts
if location.get('update-rc.d', False):
# and uses update-rc.d
self.enable_cmd = location['update-rc.d']
elif location.get('insserv', None):
# and uses insserv
self.enable_cmd = location['insserv']
elif location.get('chkconfig', False):
# and uses chkconfig
self.enable_cmd = location['chkconfig']
if self.enable_cmd is None:
fail_if_missing(self.module, False, self.name, msg='host')
# If no service control tool selected yet, try to see if 'service' is available
if self.svc_cmd is None and location.get('service', False):
self.svc_cmd = location['service']
# couldn't find anything yet
if self.svc_cmd is None and not self.svc_initscript:
            self.module.fail_json(msg='cannot find \'service\' binary or init script for service, possible typo in service name? Aborting')
if location.get('initctl', False):
self.svc_initctl = location['initctl']
def get_systemd_service_enabled(self):
def sysv_exists(name):
script = '/etc/init.d/' + name
return os.access(script, os.X_OK)
def sysv_is_enabled(name):
return bool(glob.glob('/etc/rc?.d/S??' + name))
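        # When systemctl cannot answer (non-zero exit, output not starting
        # with 'disabled'), fall back to SysV semantics: an executable init
        # script with an /etc/rc?.d/S?? start link counts as enabled.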
service_name = self.__systemd_unit
(rc, out, err) = self.execute_command("%s is-enabled %s" % (self.enable_cmd, service_name,))
if rc == 0:
return True
elif out.startswith('disabled'):
return False
elif sysv_exists(service_name):
return sysv_is_enabled(service_name)
else:
return False
def get_systemd_status_dict(self):
# Check status first as show will not fail if service does not exist
(rc, out, err) = self.execute_command("%s show '%s'" % (self.enable_cmd, self.__systemd_unit,))
if rc != 0:
self.module.fail_json(msg='failure %d running systemctl show for %r: %s' % (rc, self.__systemd_unit, err))
elif 'LoadState=not-found' in out:
self.module.fail_json(msg='systemd could not find the requested service "%r": %s' % (self.__systemd_unit, err))
key = None
value_buffer = []
status_dict = {}
for line in out.splitlines():
if '=' in line:
if not key:
key, value = line.split('=', 1)
# systemd fields that are shell commands can be multi-line
# We take a value that begins with a "{" as the start of
# a shell command and a line that ends with "}" as the end of
# the command
if value.lstrip().startswith('{'):
if value.rstrip().endswith('}'):
status_dict[key] = value
key = None
else:
value_buffer.append(value)
else:
status_dict[key] = value
key = None
else:
if line.rstrip().endswith('}'):
status_dict[key] = '\n'.join(value_buffer)
key = None
else:
                        value_buffer.append(line)
else:
                value_buffer.append(line)
return status_dict
def get_systemd_service_status(self):
d = self.get_systemd_status_dict()
if d.get('ActiveState') == 'active':
# run-once services (for which a single successful exit indicates
# that they are running as designed) should not be restarted here.
# Thus, we are not checking d['SubState'].
self.running = True
self.crashed = False
elif d.get('ActiveState') == 'failed':
self.running = False
self.crashed = True
elif d.get('ActiveState') is None:
self.module.fail_json(msg='No ActiveState value in systemctl show output for %r' % (self.__systemd_unit,))
else:
self.running = False
self.crashed = False
return self.running
def get_service_status(self):
if self.svc_cmd and self.svc_cmd.endswith('systemctl'):
return self.get_systemd_service_status()
self.action = "status"
rc, status_stdout, status_stderr = self.service_control()
# if we have decided the service is managed by upstart, we check for some additional output...
if self.svc_initctl and self.running is None:
# check the job status by upstart response
initctl_rc, initctl_status_stdout, initctl_status_stderr = self.execute_command("%s status %s %s" % (self.svc_initctl, self.name, self.arguments))
if "stop/waiting" in initctl_status_stdout:
self.running = False
elif "start/running" in initctl_status_stdout:
self.running = True
if self.svc_cmd and self.svc_cmd.endswith("rc-service") and self.running is None:
openrc_rc, openrc_status_stdout, openrc_status_stderr = self.execute_command("%s %s status" % (self.svc_cmd, self.name))
self.running = "started" in openrc_status_stdout
self.crashed = "crashed" in openrc_status_stderr
# Prefer a non-zero return code. For reference, see:
# http://refspecs.linuxbase.org/LSB_4.1.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html
if self.running is None and rc in [1, 2, 3, 4, 69]:
self.running = False
# if the job status is still not known check it by status output keywords
# Only check keywords if there's only one line of output (some init
# scripts will output verbosely in case of error and those can emit
        # keywords that are picked up as false positives)
if self.running is None and status_stdout.count('\n') <= 1:
            # first strip the service name from the output so it cannot trip the keyword matching below
cleanout = status_stdout.lower().replace(self.name.lower(), '')
if "stop" in cleanout:
self.running = False
elif "run" in cleanout:
self.running = not ("not " in cleanout)
elif "start" in cleanout and "not " not in cleanout:
self.running = True
elif 'could not access pid file' in cleanout:
self.running = False
elif 'is dead and pid file exists' in cleanout:
self.running = False
elif 'dead but subsys locked' in cleanout:
self.running = False
elif 'dead but pid file exists' in cleanout:
self.running = False
# if the job status is still not known and we got a zero for the
# return code, assume here that the service is running
if self.running is None and rc == 0:
self.running = True
# if the job status is still not known check it by special conditions
if self.running is None:
if self.name == 'iptables' and "ACCEPT" in status_stdout:
# iptables status command output is lame
# TODO: lookup if we can use a return code for this instead?
self.running = True
return self.running
def service_enable(self):
if self.enable_cmd is None:
self.module.fail_json(msg='cannot detect command to enable service %s, typo or init system potentially unknown' % self.name)
self.changed = True
action = None
#
# Upstart's initctl
#
if self.enable_cmd.endswith("initctl"):
            def write_to_override_file(file_name, file_contents):
                with open(file_name, 'w') as override_file:
                    override_file.write(file_contents)
initpath = '/etc/init'
if self.upstart_version >= LooseVersion('0.6.7'):
manreg = re.compile(r'^manual\s*$', re.M | re.I)
config_line = 'manual\n'
else:
manreg = re.compile(r'^start on manual\s*$', re.M | re.I)
config_line = 'start on manual\n'
conf_file_name = "%s/%s.conf" % (initpath, self.name)
override_file_name = "%s/%s.override" % (initpath, self.name)
# Check to see if files contain the manual line in .conf and fail if True
with open(conf_file_name) as conf_file_fh:
conf_file_content = conf_file_fh.read()
if manreg.search(conf_file_content):
self.module.fail_json(msg="manual stanza not supported in a .conf file")
self.changed = False
if os.path.exists(override_file_name):
with open(override_file_name) as override_fh:
override_file_contents = override_fh.read()
# Remove manual stanza if present and service enabled
if self.enable and manreg.search(override_file_contents):
self.changed = True
override_state = manreg.sub('', override_file_contents)
# Add manual stanza if not present and service disabled
elif not (self.enable) and not (manreg.search(override_file_contents)):
self.changed = True
override_state = '\n'.join((override_file_contents, config_line))
# service already in desired state
else:
pass
# Add file with manual stanza if service disabled
elif not (self.enable):
self.changed = True
override_state = config_line
else:
# service already in desired state
pass
if self.module.check_mode:
self.module.exit_json(changed=self.changed)
# The initctl method of enabling and disabling services is much
# different than for the other service methods. So actually
# committing the change is done in this conditional and then we
# skip the boilerplate at the bottom of the method
if self.changed:
try:
write_to_override_file(override_file_name, override_state)
except Exception:
self.module.fail_json(msg='Could not modify override file')
return
#
# SysV's chkconfig
#
if self.enable_cmd.endswith("chkconfig"):
if self.enable:
action = 'on'
else:
action = 'off'
(rc, out, err) = self.execute_command("%s --list %s" % (self.enable_cmd, self.name))
if 'chkconfig --add %s' % self.name in err:
self.execute_command("%s --add %s" % (self.enable_cmd, self.name))
(rc, out, err) = self.execute_command("%s --list %s" % (self.enable_cmd, self.name))
if self.name not in out:
self.module.fail_json(msg="service %s does not support chkconfig" % self.name)
# TODO: look back on why this is here
# state = out.split()[-1]
# Check if we're already in the correct state
if "3:%s" % action in out and "5:%s" % action in out:
self.changed = False
return
#
# Systemd's systemctl
#
if self.enable_cmd.endswith("systemctl"):
if self.enable:
action = 'enable'
else:
action = 'disable'
# Check if we're already in the correct state
service_enabled = self.get_systemd_service_enabled()
# self.changed should already be true
if self.enable == service_enabled:
self.changed = False
return
#
# OpenRC's rc-update
#
if self.enable_cmd.endswith("rc-update"):
if self.enable:
action = 'add'
else:
action = 'delete'
(rc, out, err) = self.execute_command("%s show" % self.enable_cmd)
for line in out.splitlines():
service_name, runlevels = line.split('|')
service_name = service_name.strip()
if service_name != self.name:
continue
runlevels = re.split(r'\s+', runlevels)
# service already enabled for the runlevel
if self.enable and self.runlevel in runlevels:
self.changed = False
# service already disabled for the runlevel
elif not self.enable and self.runlevel not in runlevels:
self.changed = False
break
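# for/else: the else clause runs only when the loop finished without a
# break, i.e. the service name never appeared in the 'rc-update show' output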
else:
# service already disabled altogether
if not self.enable:
self.changed = False
if not self.changed:
return
#
# update-rc.d style
#
if self.enable_cmd.endswith("update-rc.d"):
enabled = False
slinks = glob.glob('/etc/rc?.d/S??' + self.name)
if slinks:
enabled = True
if self.enable != enabled:
self.changed = True
if self.enable:
action = 'enable'
klinks = glob.glob('/etc/rc?.d/K??' + self.name)
if not klinks:
if not self.module.check_mode:
(rc, out, err) = self.execute_command("%s %s defaults" % (self.enable_cmd, self.name))
if rc != 0:
if err:
self.module.fail_json(msg=err)
else:
self.module.fail_json(msg=out)
else:
action = 'disable'
if not self.module.check_mode:
(rc, out, err) = self.execute_command("%s %s %s" % (self.enable_cmd, self.name, action))
if rc != 0:
if err:
self.module.fail_json(msg=err)
else:
self.module.fail_json(msg=out)
else:
self.changed = False
return
#
# insserv (Debian <=7, SLES, others)
#
if self.enable_cmd.endswith("insserv"):
if self.enable:
(rc, out, err) = self.execute_command("%s -n -v %s" % (self.enable_cmd, self.name))
else:
(rc, out, err) = self.execute_command("%s -n -r -v %s" % (self.enable_cmd, self.name))
self.changed = False
for line in err.splitlines():
if self.enable and line.find('enable service') != -1:
self.changed = True
break
if not self.enable and line.find('remove service') != -1:
self.changed = True
break
if self.module.check_mode:
self.module.exit_json(changed=self.changed)
if not self.changed:
return
if self.enable:
(rc, out, err) = self.execute_command("%s %s" % (self.enable_cmd, self.name))
if (rc != 0) or (err != ''):
self.module.fail_json(msg=("Failed to install service. rc: %s, out: %s, err: %s" % (rc, out, err)))
return (rc, out, err)
else:
(rc, out, err) = self.execute_command("%s -r %s" % (self.enable_cmd, self.name))
if (rc != 0) or (err != ''):
self.module.fail_json(msg=("Failed to remove service. rc: %s, out: %s, err: %s" % (rc, out, err)))
return (rc, out, err)
#
# If we've gotten to the end, the service needs to be updated
#
self.changed = True
# we change argument order depending on real binary used:
# rc-update and systemctl need the argument order reversed
if self.enable_cmd.endswith("rc-update"):
args = (self.enable_cmd, action, self.name + " " + self.runlevel)
elif self.enable_cmd.endswith("systemctl"):
args = (self.enable_cmd, action, self.__systemd_unit)
else:
args = (self.enable_cmd, self.name, action)
if self.module.check_mode:
self.module.exit_json(changed=self.changed)
(rc, out, err) = self.execute_command("%s %s %s" % args)
if rc != 0:
if err:
self.module.fail_json(msg="Error when trying to %s %s: rc=%s %s" % (action, self.name, rc, err))
else:
self.module.fail_json(msg="Failure for %s %s: rc=%s %s" % (action, self.name, rc, out))
return (rc, out, err)
def service_control(self):
# Decide what command to run
svc_cmd = ''
arguments = self.arguments
if self.svc_cmd:
if not self.svc_cmd.endswith("systemctl"):
if self.svc_cmd.endswith("initctl"):
# initctl commands take the form <cmd> <action> <name>
svc_cmd = self.svc_cmd
arguments = "%s %s" % (self.name, arguments)
else:
# SysV and OpenRC take the form <cmd> <name> <action>
svc_cmd = "%s %s" % (self.svc_cmd, self.name)
else:
# systemd commands take the form <cmd> <action> <name>
svc_cmd = self.svc_cmd
arguments = "%s %s" % (self.__systemd_unit, arguments)
elif self.svc_cmd is None and self.svc_initscript:
# upstart
svc_cmd = "%s" % self.svc_initscript
# In OpenRC, if a service crashed, we need to reset its status to
# stopped with the zap command, before we can start it back.
if self.svc_cmd and self.svc_cmd.endswith('rc-service') and self.action == 'start' and self.crashed:
self.execute_command("%s zap" % svc_cmd, daemonize=True)
if self.action != "restart":
if svc_cmd != '':
# upstart or systemd or OpenRC
rc_state, stdout, stderr = self.execute_command("%s %s %s" % (svc_cmd, self.action, arguments), daemonize=True)
else:
# SysV
rc_state, stdout, stderr = self.execute_command("%s %s %s" % (self.action, self.name, arguments), daemonize=True)
elif self.svc_cmd and self.svc_cmd.endswith('rc-service'):
# All services in OpenRC support restart.
rc_state, stdout, stderr = self.execute_command("%s %s %s" % (svc_cmd, self.action, arguments), daemonize=True)
else:
# In other systems, not all services support restart. Do it the hard way.
if svc_cmd != '':
# upstart or systemd
rc1, stdout1, stderr1 = self.execute_command("%s %s %s" % (svc_cmd, 'stop', arguments), daemonize=True)
else:
# SysV
rc1, stdout1, stderr1 = self.execute_command("%s %s %s" % ('stop', self.name, arguments), daemonize=True)
if self.sleep:
time.sleep(self.sleep)
if svc_cmd != '':
# upstart or systemd
rc2, stdout2, stderr2 = self.execute_command("%s %s %s" % (svc_cmd, 'start', arguments), daemonize=True)
else:
# SysV
rc2, stdout2, stderr2 = self.execute_command("%s %s %s" % ('start', self.name, arguments), daemonize=True)
# merge return information
if rc1 != 0 and rc2 == 0:
rc_state = rc2
stdout = stdout2
stderr = stderr2
else:
rc_state = rc1 + rc2
stdout = stdout1 + stdout2
stderr = stderr1 + stderr2
return (rc_state, stdout, stderr)
class FreeBsdService(Service):
"""
This is the FreeBSD Service manipulation class - it uses the /etc/rc.conf
file for controlling services started at boot and the 'service' binary to
check status and perform direct service manipulation.
"""
platform = 'FreeBSD'
distribution = None
def get_service_tools(self):
self.svc_cmd = self.module.get_bin_path('service', True)
if not self.svc_cmd:
self.module.fail_json(msg='unable to find service binary')
self.sysrc_cmd = self.module.get_bin_path('sysrc')
def get_service_status(self):
rc, stdout, stderr = self.execute_command("%s %s %s %s" % (self.svc_cmd, self.name, 'onestatus', self.arguments))
if self.name == "pf":
self.running = "Enabled" in stdout
else:
if rc == 1:
self.running = False
elif rc == 0:
self.running = True
def service_enable(self):
if self.enable:
self.rcconf_value = "YES"
else:
self.rcconf_value = "NO"
rcfiles = ['/etc/rc.conf', '/etc/rc.conf.local', '/usr/local/etc/rc.conf']
for rcfile in rcfiles:
if os.path.isfile(rcfile):
self.rcconf_file = rcfile
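# when more than one rc file exists, the last one found takes precedence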
rc, stdout, stderr = self.execute_command("%s %s %s %s" % (self.svc_cmd, self.name, 'rcvar', self.arguments))
rcvars = None
try:
rcvars = shlex.split(stdout, comments=True)
except Exception:
# TODO: add a warning to the output with the failure
pass
if not rcvars:
self.module.fail_json(msg="unable to determine rcvar", stdout=stdout, stderr=stderr)
# In rare cases, i.e. sendmail, rcvar can return several key=value pairs
# Usually there is just one, however. In other rare cases, i.e. uwsgi,
# rcvar can return extra uncommented data that is not at all related to
# the rcvar. We will just take the first key=value pair we come across
# and hope for the best.
for rcvar in rcvars:
if '=' in rcvar:
self.rcconf_key, default_rcconf_value = rcvar.split('=', 1)
break
if self.rcconf_key is None:
self.module.fail_json(msg="unable to determine rcvar", stdout=stdout, stderr=stderr)
if self.sysrc_cmd: # FreeBSD >= 9.2
rc, current_rcconf_value, stderr = self.execute_command("%s -n %s" % (self.sysrc_cmd, self.rcconf_key))
# it can happen that rcvar is not set (case of a system coming from the ports collection)
# so we will fallback on the default
if rc != 0:
current_rcconf_value = default_rcconf_value
if current_rcconf_value.strip().upper() != self.rcconf_value:
self.changed = True
if self.module.check_mode:
self.module.exit_json(changed=True, msg="changing service enablement")
rc, change_stdout, change_stderr = self.execute_command("%s %s=\"%s\"" % (self.sysrc_cmd, self.rcconf_key, self.rcconf_value))
if rc != 0:
self.module.fail_json(msg="unable to set rcvar using sysrc", stdout=change_stdout, stderr=change_stderr)
# sysrc does not exit with code 1 on permission error => validate successful change using service(8)
rc, check_stdout, check_stderr = self.execute_command("%s %s %s" % (self.svc_cmd, self.name, "enabled"))
if self.enable != (rc == 0): # rc = 0 indicates enabled service, rc = 1 indicates disabled service
self.module.fail_json(msg="unable to set rcvar: sysrc did not change value", stdout=change_stdout, stderr=change_stderr)
else:
self.changed = False
else: # Legacy (FreeBSD < 9.2)
try:
return self.service_enable_rcconf()
except Exception:
self.module.fail_json(msg='unable to set rcvar')
def service_control(self):
if self.action == "start":
self.action = "onestart"
if self.action == "stop":
self.action = "onestop"
if self.action == "reload":
self.action = "onereload"
ret = self.execute_command("%s %s %s %s" % (self.svc_cmd, self.name, self.action, self.arguments))
if self.sleep:
time.sleep(self.sleep)
return ret
class DragonFlyBsdService(FreeBsdService):
"""
This is the DragonFly BSD Service manipulation class - it uses the /etc/rc.conf
file for controlling services started at boot and the 'service' binary to
check status and perform direct service manipulation.
"""
platform = 'DragonFly'
distribution = None
def service_enable(self):
if self.enable:
self.rcconf_value = "YES"
else:
self.rcconf_value = "NO"
rcfiles = ['/etc/rc.conf'] # Overkill?
for rcfile in rcfiles:
if os.path.isfile(rcfile):
self.rcconf_file = rcfile
self.rcconf_key = "%s" % self.name.replace("-", "_")
return self.service_enable_rcconf()
class OpenBsdService(Service):
"""
This is the OpenBSD Service manipulation class - it uses rcctl(8) or
/etc/rc.d scripts for service control. Enabling a service is
only supported if rcctl is present.
"""
platform = 'OpenBSD'
distribution = None
def get_service_tools(self):
self.enable_cmd = self.module.get_bin_path('rcctl')
if self.enable_cmd:
self.svc_cmd = self.enable_cmd
else:
rcdir = '/etc/rc.d'
rc_script = "%s/%s" % (rcdir, self.name)
if os.path.isfile(rc_script):
self.svc_cmd = rc_script
if not self.svc_cmd:
self.module.fail_json(msg='unable to find svc_cmd')
def get_service_status(self):
if self.enable_cmd:
rc, stdout, stderr = self.execute_command("%s %s %s" % (self.svc_cmd, 'check', self.name))
else:
rc, stdout, stderr = self.execute_command("%s %s" % (self.svc_cmd, 'check'))
if stderr:
self.module.fail_json(msg=stderr)
if rc == 1:
self.running = False
elif rc == 0:
self.running = True
def service_control(self):
if self.enable_cmd:
return self.execute_command("%s -f %s %s" % (self.svc_cmd, self.action, self.name), daemonize=True)
else:
return self.execute_command("%s -f %s" % (self.svc_cmd, self.action))
def service_enable(self):
if not self.enable_cmd:
return super(OpenBsdService, self).service_enable()
rc, stdout, stderr = self.execute_command("%s %s %s %s" % (self.enable_cmd, 'getdef', self.name, 'flags'))
if stderr:
self.module.fail_json(msg=stderr)
getdef_string = stdout.rstrip()
# Depending on the service the string returned from 'getdef' may be
# either a set of flags or the boolean YES/NO
if getdef_string == "YES" or getdef_string == "NO":
default_flags = ''
else:
default_flags = getdef_string
rc, stdout, stderr = self.execute_command("%s %s %s %s" % (self.enable_cmd, 'get', self.name, 'flags'))
if stderr:
self.module.fail_json(msg=stderr)
get_string = stdout.rstrip()
# Depending on the service the string returned from 'get' may be
# either a set of flags or the boolean YES/NO
if get_string == "YES" or get_string == "NO":
current_flags = ''
else:
current_flags = get_string
# If there are arguments from the user we use these as flags unless
# they are already set.
if self.arguments and self.arguments != current_flags:
changed_flags = self.arguments
# If the user has not supplied any arguments and the current flags
# differ from the default we reset them.
elif not self.arguments and current_flags != default_flags:
changed_flags = ' '
# Otherwise there is no need to modify flags.
else:
changed_flags = ''
rc, stdout, stderr = self.execute_command("%s %s %s %s" % (self.enable_cmd, 'get', self.name, 'status'))
if self.enable:
if rc == 0 and not changed_flags:
return
if rc != 0:
status_action = "set %s status on" % (self.name)
else:
status_action = ''
if changed_flags:
flags_action = "set %s flags %s" % (self.name, changed_flags)
else:
flags_action = ''
else:
if rc == 1:
return
status_action = "set %s status off" % self.name
flags_action = ''
# Verify state assumption
if not status_action and not flags_action:
self.module.fail_json(msg="neither status_action or status_flags is set, this should never happen")
if self.module.check_mode:
self.module.exit_json(changed=True, msg="changing service enablement")
status_modified = 0
if status_action:
rc, stdout, stderr = self.execute_command("%s %s" % (self.enable_cmd, status_action))
if rc != 0:
if stderr:
self.module.fail_json(msg=stderr)
else:
self.module.fail_json(msg="rcctl failed to modify service status")
status_modified = 1
if flags_action:
rc, stdout, stderr = self.execute_command("%s %s" % (self.enable_cmd, flags_action))
if rc != 0:
if stderr:
if status_modified:
error_message = "rcctl modified service status but failed to set flags: " + stderr
else:
error_message = stderr
else:
if status_modified:
error_message = "rcctl modified service status but failed to set flags"
else:
error_message = "rcctl failed to modify service flags"
self.module.fail_json(msg=error_message)
self.changed = True
class NetBsdService(Service):
"""
This is the NetBSD Service manipulation class - it uses the /etc/rc.conf
file for controlling services started at boot, check status and perform
direct service manipulation. Init scripts in /etc/rc.d are used for
controlling services (start/stop) as well as for controlling the current
state.
"""
platform = 'NetBSD'
distribution = None
def get_service_tools(self):
initpaths = ['/etc/rc.d'] # better: $rc_directories - how to get in here? Run: sh -c '. /etc/rc.conf ; echo $rc_directories'
for initdir in initpaths:
initscript = "%s/%s" % (initdir, self.name)
if os.path.isfile(initscript):
self.svc_initscript = initscript
if not self.svc_initscript:
self.module.fail_json(msg='unable to find rc.d script')
def service_enable(self):
if self.enable:
self.rcconf_value = "YES"
else:
self.rcconf_value = "NO"
rcfiles = ['/etc/rc.conf'] # Overkill?
for rcfile in rcfiles:
if os.path.isfile(rcfile):
self.rcconf_file = rcfile
self.rcconf_key = "%s" % self.name.replace("-", "_")
return self.service_enable_rcconf()
def get_service_status(self):
self.svc_cmd = "%s" % self.svc_initscript
rc, stdout, stderr = self.execute_command("%s %s" % (self.svc_cmd, 'onestatus'))
if rc == 1:
self.running = False
elif rc == 0:
self.running = True
def service_control(self):
if self.action == "start":
self.action = "onestart"
if self.action == "stop":
self.action = "onestop"
self.svc_cmd = "%s" % self.svc_initscript
return self.execute_command("%s %s" % (self.svc_cmd, self.action), daemonize=True)
class SunOSService(Service):
"""
This is the SunOS Service manipulation class - it uses the svcadm
command for controlling services, and svcs command for checking status.
It also tries to be smart about taking the service out of maintenance
state if necessary.
"""
platform = 'SunOS'
distribution = None
def get_service_tools(self):
self.svcs_cmd = self.module.get_bin_path('svcs', True)
if not self.svcs_cmd:
self.module.fail_json(msg='unable to find svcs binary')
self.svcadm_cmd = self.module.get_bin_path('svcadm', True)
if not self.svcadm_cmd:
self.module.fail_json(msg='unable to find svcadm binary')
if self.svcadm_supports_sync():
self.svcadm_sync = '-s'
else:
self.svcadm_sync = ''
def svcadm_supports_sync(self):
# Support for synchronous restart/refresh is only supported on
# Oracle Solaris >= 11.2
for line in open('/etc/release', 'r').readlines():
m = re.match(r'\s+Oracle Solaris (\d+)\.(\d+).*', line.rstrip())
# compare numerically: comparing tuples of strings is lexicographic and
# would mis-order versions such as 11.10 vs 11.2
if m and tuple(int(v) for v in m.groups()) >= (11, 2):
return True
def get_service_status(self):
status = self.get_sunos_svcs_status()
# Only 'online' is considered properly running. Everything else is off
# or has some sort of problem.
if status == 'online':
self.running = True
else:
self.running = False
def get_sunos_svcs_status(self):
rc, stdout, stderr = self.execute_command("%s %s" % (self.svcs_cmd, self.name))
if rc == 1:
if stderr:
self.module.fail_json(msg=stderr)
else:
self.module.fail_json(msg=stdout)
lines = stdout.rstrip("\n").split("\n")
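# default 'svcs' output columns are STATE, STIME and FMRI; the state is
# the first field of the last line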
status = lines[-1].split(" ")[0]
# status is one of: online, offline, degraded, disabled, maintenance, uninitialized
# see man svcs(1)
return status
def service_enable(self):
# Get current service enablement status
rc, stdout, stderr = self.execute_command("%s -l %s" % (self.svcs_cmd, self.name))
if rc != 0:
if stderr:
self.module.fail_json(msg=stderr)
else:
self.module.fail_json(msg=stdout)
enabled = False
temporary = False
# look for enabled line, which could be one of:
# enabled true (temporary)
# enabled false (temporary)
# enabled true
# enabled false
for line in stdout.split("\n"):
if line.startswith("enabled"):
if "true" in line:
enabled = True
if "temporary" in line:
temporary = True
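# the boot-time (permanent) state is the reported state XOR'd with the
# temporary flag: a temporary override inverts whatever is configured
# for startup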
startup_enabled = (enabled and not temporary) or (not enabled and temporary)
if self.enable and startup_enabled:
return
elif (not self.enable) and (not startup_enabled):
return
if not self.module.check_mode:
# Mark service as started or stopped (this will have the side effect of
# actually stopping or starting the service)
if self.enable:
subcmd = "enable -rs"
else:
subcmd = "disable -s"
rc, stdout, stderr = self.execute_command("%s %s %s" % (self.svcadm_cmd, subcmd, self.name))
if rc != 0:
if stderr:
self.module.fail_json(msg=stderr)
else:
self.module.fail_json(msg=stdout)
self.changed = True
def service_control(self):
status = self.get_sunos_svcs_status()
# if starting or reloading, clear maintenance states
if self.action in ['start', 'reload', 'restart'] and status in ['maintenance', 'degraded']:
rc, stdout, stderr = self.execute_command("%s clear %s" % (self.svcadm_cmd, self.name))
if rc != 0:
return rc, stdout, stderr
status = self.get_sunos_svcs_status()
if status in ['maintenance', 'degraded']:
self.module.fail_json(msg="Failed to bring service out of %s status." % status)
if self.action == 'start':
subcmd = "enable -rst"
elif self.action == 'stop':
subcmd = "disable -st"
elif self.action == 'reload':
subcmd = "refresh %s" % (self.svcadm_sync)
elif self.action == 'restart' and status == 'online':
subcmd = "restart %s" % (self.svcadm_sync)
elif self.action == 'restart' and status != 'online':
subcmd = "enable -rst"
return self.execute_command("%s %s %s" % (self.svcadm_cmd, subcmd, self.name))
class AIX(Service):
"""
This is the AIX Service (SRC) manipulation class - it uses lssrc, startsrc, stopsrc
and refresh for service control. Enabling a service is currently not supported.
Would require to add an entry in the /etc/inittab file (mkitab, chitab and rmitab
commands)
"""
platform = 'AIX'
distribution = None
def get_service_tools(self):
self.lssrc_cmd = self.module.get_bin_path('lssrc', True)
if not self.lssrc_cmd:
self.module.fail_json(msg='unable to find lssrc binary')
self.startsrc_cmd = self.module.get_bin_path('startsrc', True)
if not self.startsrc_cmd:
self.module.fail_json(msg='unable to find startsrc binary')
self.stopsrc_cmd = self.module.get_bin_path('stopsrc', True)
if not self.stopsrc_cmd:
self.module.fail_json(msg='unable to find stopsrc binary')
self.refresh_cmd = self.module.get_bin_path('refresh', True)
if not self.refresh_cmd:
self.module.fail_json(msg='unable to find refresh binary')
def get_service_status(self):
status = self.get_aix_src_status()
# Only 'active' is considered properly running. Everything else is off
# or has some sort of problem.
if status == 'active':
self.running = True
else:
self.running = False
def get_aix_src_status(self):
# Check subsystem status
rc, stdout, stderr = self.execute_command("%s -s %s" % (self.lssrc_cmd, self.name))
if rc == 1:
# If check for subsystem is not ok, check if service name is a
# group subsystem
rc, stdout, stderr = self.execute_command("%s -g %s" % (self.lssrc_cmd, self.name))
if rc == 1:
if stderr:
self.module.fail_json(msg=stderr)
else:
self.module.fail_json(msg=stdout)
else:
# Check all subsystem status, if one subsystem is not active
# the group is considered not active.
lines = stdout.splitlines()
for state in lines[1:]:
if state.split()[-1].strip() != "active":
status = state.split()[-1].strip()
break
else:
status = "active"
# status is one of: active, inoperative
return status
else:
lines = stdout.rstrip("\n").split("\n")
status = lines[-1].split(" ")[-1]
# status is one of: active, inoperative
return status
def service_control(self):
# Check if service name is a subsystem of a group subsystem
rc, stdout, stderr = self.execute_command("%s -a" % (self.lssrc_cmd))
if rc == 1:
if stderr:
self.module.fail_json(msg=stderr)
else:
self.module.fail_json(msg=stdout)
else:
lines = stdout.splitlines()
subsystems = []
groups = []
for line in lines[1:]:
subsystem = line.split()[0].strip()
group = line.split()[1].strip()
subsystems.append(subsystem)
if group:
groups.append(group)
# Define if service name parameter:
# -s subsystem or -g group subsystem
if self.name in subsystems:
srccmd_parameter = "-s"
elif self.name in groups:
srccmd_parameter = "-g"
else:
self.module.fail_json(msg="unable to determine how to address %s: not a subsystem or group subsystem" % self.name)
if self.action == 'start':
srccmd = self.startsrc_cmd
elif self.action == 'stop':
srccmd = self.stopsrc_cmd
elif self.action == 'reload':
srccmd = self.refresh_cmd
elif self.action == 'restart':
self.execute_command("%s %s %s" % (self.stopsrc_cmd, srccmd_parameter, self.name))
if self.sleep:
time.sleep(self.sleep)
srccmd = self.startsrc_cmd
if self.arguments and self.action in ('start', 'restart'):
return self.execute_command("%s -a \"%s\" %s %s" % (srccmd, self.arguments, srccmd_parameter, self.name))
else:
return self.execute_command("%s %s %s" % (srccmd, srccmd_parameter, self.name))
# ===========================================
# Main control flow
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str', required=True),
state=dict(type='str', choices=['started', 'stopped', 'reloaded', 'restarted']),
sleep=dict(type='int'),
pattern=dict(type='str'),
enabled=dict(type='bool'),
runlevel=dict(type='str', default='default'),
arguments=dict(type='str', default='', aliases=['args']),
),
supports_check_mode=True,
required_one_of=[['state', 'enabled']],
)
service = Service(module)
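# Service() dispatches to the platform-specific subclass (FreeBsdService,
# OpenBsdService, SunOSService, AIX, ...) selected via the platform and
# distribution class attributes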
module.debug('Service instantiated - platform %s' % service.platform)
if service.distribution:
module.debug('Service instantiated - distribution %s' % service.distribution)
rc = 0
out = ''
err = ''
result = {}
result['name'] = service.name
# Find service management tools
service.get_service_tools()
# Enable/disable service startup at boot if requested
if service.module.params['enabled'] is not None:
# FIXME: ideally this should detect if we need to toggle the enablement state, though
# it's unlikely the changed handler would need to fire in this case so it's a minor thing.
service.service_enable()
result['enabled'] = service.enable
if module.params['state'] is None:
# Not changing the running state, so bail out now.
result['changed'] = service.changed
module.exit_json(**result)
result['state'] = service.state
# Collect service status
if service.pattern:
service.check_ps()
else:
service.get_service_status()
# Calculate if request will change service state
service.check_service_changed()
# Modify service state if necessary
(rc, out, err) = service.modify_service_state()
if rc != 0:
if err and "Job is already running" in err:
# upstart got confused, one such possibility is MySQL on Ubuntu 12.04
# where status may report it has no start/stop links and we could
# not get accurate status
pass
else:
if err:
module.fail_json(msg=err)
else:
module.fail_json(msg=out)
result['changed'] = service.changed | service.svc_change
if service.module.params['enabled'] is not None:
result['enabled'] = service.module.params['enabled']
if not service.module.params['state']:
status = service.get_service_status()
if status is None:
result['state'] = 'absent'
elif status is False:
result['state'] = 'stopped'
else:
result['state'] = 'started'
else:
# as we may have just bounced the service the service command may not
# report accurate state at this moment so just show what we ran
if service.module.params['state'] in ['reloaded', 'restarted', 'started']:
result['state'] = 'started'
else:
result['state'] = 'stopped'
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,670 |
Documentation for "skipped" plugin contains errors
|
### Summary
In the Examples section of this page https://docs.ansible.com/ansible/latest/collections/ansible/builtin/skipped_test.html
The given example contains a template error:
\# test 'status' to know how to respond
{{ (taskresults is skipped}}
It should actually be:
\# test 'status' to know how to respond
{{ (taskresults is skipped) }}
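For completeness, here is a minimal hypothetical playbook showing the corrected expression in context (the task names and the `taskresults` variable are illustrative only, not taken from the docs page):
```yaml
- name: Task that will be skipped
  ansible.builtin.command: /bin/true
  register: taskresults
  when: false

- name: React to the skipped task
  ansible.builtin.debug:
    msg: "previous task was skipped: {{ (taskresults is skipped) }}"
  when: taskresults is skipped
```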
### Issue Type
Documentation Report
### Component Name
ansible/latest/collections/ansible/builtin/skipped_test.html
### Ansible Version
```console
N/A
```
### Configuration
```console
N/A
```
### OS / Environment
N/A
### Additional Information
Providing a correct example is beneficial to users
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80670
|
https://github.com/ansible/ansible/pull/80671
|
9bd698b3a78ad9abc9d0b1775d8f67747a13b295
|
1568f07b220e70b8c62e844ccb2939da1cd9a90e
| 2023-04-28T13:04:52Z |
python
| 2023-04-28T19:33:02Z |
lib/ansible/plugins/test/skipped.yml
|
DOCUMENTATION:
name: skipped
author: Ansible Core
version_added: "1.9"
short_description: Was task skipped
aliases: [skip]
description:
- Tests if task was skipped
- This test checks for the existence of a C(skipped) key in the input dictionary and that it is C(True) if present
options:
_input:
description: registered result from an Ansible task
type: dictionary
required: True
EXAMPLES: |
# test 'status' to know how to respond
{{ (taskresults is skipped}}
RETURN:
_value:
description: Returns C(True) if the task was skipped, C(False) otherwise.
type: boolean
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,372 |
When using `-m 'venv'` in virtualenv_command, quoting 'venv' breaks it
|
### Summary
When using `ansible.builtin.pip`, if you wish to create a virtualenv using the `venv` module rather than one of the `virtualenv` wrapper scripts, the standard way to do so is `virtualenv_command: "FOO -m venv"`, where FOO is the absolute path to the Python interpreter you wish to use, e.g. `/usr/bin/python3.6`.
However, if you wrap the 'venv' module name in quotes, to protect it as a shell string literal (e.g. `virtualenv_command: "/usr/bin/python3.6 -m 'venv'"`), an error occurs.
Related: https://github.com/ansible/ansible/issues/52275
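For illustration, POSIX-style tokenization strips the quotes before the arguments ever reach `venv`, so both spellings should yield the same argv. A minimal sketch using only the standard library (the command strings are examples, not taken from the module):
```python
import shlex

for cmd in ("/usr/bin/python3.6 -m venv", "/usr/bin/python3.6 -m 'venv'"):
    # shlex.split applies POSIX shell quoting rules, so the single
    # quotes around 'venv' are removed during tokenization
    print(shlex.split(cmd))
# both lines print: ['/usr/bin/python3.6', '-m', 'venv']
```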
### Issue Type
Bug Report
### Component Name
ansible.builtin.pip
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
config file = None
configured module search path = ['/home/votisupport/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/votisupport/test/venv/lib64/python3.6/site-packages/ansible
ansible collection location = /home/votisupport/.ansible/collections:/usr/share/ansible/collections
executable location = /home/votisupport/test/venv/bin/ansible
python version = 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
CentOS Linux release 7.9.2009 (Core)
### Steps to Reproduce
```yaml
- name: "Demonstrate Ansible bug"
hosts: "127.0.0.1"
remote_user: "root"
connection: "local"
become: true
become_user: "root"
gather_facts: false
tasks:
- name: "Update pip in test virtualenv"
ansible.builtin.pip:
name:
- "pip==21.1.2"
virtualenv: "/test_venv"
virtualenv_command: "/usr/bin/python3.6 -m 'venv'"
```
### Expected Results
* The value `/usr/bin/python3.6 -m venv` is treated as an executable path followed by 2 command-line arguments rather than a path which contains literal spaces in it
* One of the example values given in [the documentation for the 'virtualenv_command' parameter](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/pip_module.html#parameter-virtualenv_command) is `~/bin/virtualenv`
* Therefore the value is being passed to and parsed by a shell
* Therefore:
a. `/usr/bin/python3.6 -m 'venv'` should be functionally identical to `/usr/bin/python3.6 -m venv`
b. Per shell scripting best practices, 'venv' should be quoted, even though it doesn't contain any whitespace or special characters, because it is a command-line argument which can take arbitrary string values.
### Actual Results
```console
(venv) [REDACTED@REDACTED test]$ ansible-playbook -vvvv 'playbook.yml'
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version: 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]. This feature will be removed from ansible-core in version 2.12.
Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible-playbook [core 2.11.6]
config file = None
configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/REDACTED/test/venv/lib64/python3.6/site-packages/ansible
ansible collection location = /home/REDACTED/.ansible/collections:/usr/share/ansible/collections
executable location = /home/REDACTED/test/venv/bin/ansible-playbook
python version = 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
jinja version = 3.0.3
libyaml = True
No config file found; using defaults
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Loading callback plugin default of type stdout, v2.0 from /home/REDACTED/test/venv/lib64/python3.6/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: playbook.yml ********************************************************************************************************************************************************************************************************************************************************
Positional arguments: playbook.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in playbook.yml
PLAY [Demonstrate Ansible bug] ************************************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [Update pip in test virtualenv] ******************************************************************************************************************************************************************************************************************************************
task path: /home/REDACTED/test/playbook.yml:12
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: REDACTED
<127.0.0.1> EXEC /bin/sh -c 'echo ~REDACTED && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/REDACTED/.ansible/tmp `"&& mkdir "` echo /home/REDACTED/.ansible/tmp/ansible-tmp-1637837031.3618085-76233-273600869189678 `" && echo ansible-tmp-1637837031.3618085-76233-273600869189678="` echo /home/REDACTED/.ansible/tmp/ansible-tmp-1637837031.3618085-76233-273600869189678 `" ) && sleep 0'
Using module file /home/REDACTED/test/venv/lib64/python3.6/site-packages/ansible/modules/pip.py
<127.0.0.1> PUT /home/REDACTED/.ansible/tmp/ansible-local-76226li2q81vb/tmptxkwmgu5 TO /home/REDACTED/.ansible/tmp/ansible-tmp-1637837031.3618085-76233-273600869189678/AnsiballZ_pip.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/REDACTED/.ansible/tmp/ansible-tmp-1637837031.3618085-76233-273600869189678/ /home/REDACTED/.ansible/tmp/ansible-tmp-1637837031.3618085-76233-273600869189678/AnsiballZ_pip.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-ofcnvczhymkuiiofkpnkkjeubhdkhtym ; /home/REDACTED/test/venv/bin/python3.6 /home/REDACTED/.ansible/tmp/ansible-tmp-1637837031.3618085-76233-273600869189678/AnsiballZ_pip.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/REDACTED/.ansible/tmp/ansible-tmp-1637837031.3618085-76233-273600869189678/ > /dev/null 2>&1 && sleep 0'
fatal: [127.0.0.1]: FAILED! => {
"changed": false,
"cmd": [
"/usr/bin/python3.6",
"-m",
"venv",
"-p/home/REDACTED/test/venv/bin/python3.6",
"/test_venv"
],
"invocation": {
"module_args": {
"chdir": null,
"editable": false,
"executable": null,
"extra_args": null,
"name": [
"pip==21.1.2"
],
"requirements": null,
"state": "present",
"umask": null,
"version": null,
"virtualenv": "/test_venv",
"virtualenv_command": "/usr/bin/python3.6 -m 'venv'",
"virtualenv_python": null,
"virtualenv_site_packages": false
}
},
"msg": "\n:stderr: usage: venv [-h] [--system-site-packages] [--symlinks | --copies] [--clear]\n [--upgrade] [--without-pip] [--prompt PROMPT]\n ENV_DIR [ENV_DIR ...]\nvenv: error: unrecognized arguments: -p/home/REDACTED/test/venv/bin/python3.6\n"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************
127.0.0.1 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76372
|
https://github.com/ansible/ansible/pull/80624
|
251360314d0b385322d36001f3deb9820c3febc8
|
7f48fa01295e85f94437041688fb898e870c5154
| 2021-11-26T01:54:21Z |
python
| 2023-05-02T07:52:11Z |
changelogs/fragments/76372-fix-pip-virtualenv-command-parsing.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,372 |
When using `-m 'venv'` in virtualenv_command, quoting 'venv' breaks it
|
|
https://github.com/ansible/ansible/issues/76372
|
https://github.com/ansible/ansible/pull/80624
|
251360314d0b385322d36001f3deb9820c3febc8
|
7f48fa01295e85f94437041688fb898e870c5154
| 2021-11-26T01:54:21Z |
python
| 2023-05-02T07:52:11Z |
lib/ansible/modules/pip.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Matt Wright <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: pip
short_description: Manages Python library dependencies
description:
- "Manage Python library dependencies. To use this module, one of the following keys is required: C(name)
or C(requirements)."
version_added: "0.7"
options:
name:
description:
- The name of a Python library to install or the url(bzr+,hg+,git+,svn+) of the remote package.
- This can be a list (since 2.2) and contain version specifiers (since 2.7).
type: list
elements: str
version:
description:
- The version number to install of the Python library specified in the I(name) parameter.
type: str
requirements:
description:
- The path to a pip requirements file, which should be local to the remote system.
File can be specified as a relative path if using the chdir option.
type: str
virtualenv:
description:
- An optional path to a I(virtualenv) directory to install into.
It cannot be specified together with the 'executable' parameter
(added in 2.1).
If the virtualenv does not exist, it will be created before installing
packages. The optional virtualenv_site_packages, virtualenv_command,
and virtualenv_python options affect the creation of the virtualenv.
type: path
virtualenv_site_packages:
description:
- Whether the virtual environment will inherit packages from the
global site-packages directory. Note that if this setting is
changed on an already existing virtual environment it will not
have any effect, the environment must be deleted and newly
created.
type: bool
default: "no"
version_added: "1.0"
virtualenv_command:
description:
- The command or a pathname to the command to create the virtual
environment with. For example C(pyvenv), C(virtualenv),
C(virtualenv2), C(~/bin/virtualenv), C(/usr/local/bin/virtualenv).
type: path
default: virtualenv
version_added: "1.1"
virtualenv_python:
description:
- The Python executable used for creating the virtual environment.
For example C(python3.5), C(python2.7). When not specified, the
Python version used to run the ansible module is used. This parameter
should not be used when C(virtualenv_command) is using C(pyvenv) or
the C(-m venv) module.
type: str
version_added: "2.0"
state:
description:
- The state of module
- The 'forcereinstall' option is only available in Ansible 2.1 and above.
type: str
choices: [ absent, forcereinstall, latest, present ]
default: present
extra_args:
description:
- Extra arguments passed to pip.
type: str
version_added: "1.0"
editable:
description:
- Pass the editable flag.
type: bool
default: 'no'
version_added: "2.0"
chdir:
description:
- cd into this directory before running the command
type: path
version_added: "1.3"
executable:
description:
- The explicit executable or pathname for the pip executable,
if different from the Ansible Python interpreter. For
example C(pip3.3), if there are both Python 2.7 and 3.3 installations
in the system and you want to run pip for the Python 3.3 installation.
- Mutually exclusive with I(virtualenv) (added in 2.1).
- Does not affect the Ansible Python interpreter.
- The setuptools package must be installed for both the Ansible Python interpreter
and for the version of Python specified by this option.
type: path
version_added: "1.3"
umask:
description:
- The system umask to apply before installing the pip package. This is
useful, for example, when installing on systems that have a very
restrictive umask by default (e.g., "0077") and you want to pip install
packages which are to be used by all users. Note that this requires you
to specify desired umask mode as an octal string, (e.g., "0022").
type: str
version_added: "2.1"
extends_documentation_fragment:
- action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: posix
notes:
- The virtualenv (U(http://www.virtualenv.org/)) must be
installed on the remote host if the virtualenv parameter is specified and
the virtualenv needs to be created.
- Although it executes using the Ansible Python interpreter, the pip module shells out to
run the actual pip command, so it can use any pip version you specify with I(executable).
By default, it uses the pip version for the Ansible Python interpreter. For example, pip3 on python 3, and pip2 or pip on python 2.
- The interpreter used by Ansible
(see R(ansible_python_interpreter, ansible_python_interpreter))
requires the setuptools package, regardless of the version of pip set with
the I(executable) option.
requirements:
- pip
- virtualenv
- setuptools
author:
- Matt Wright (@mattupstate)
'''
EXAMPLES = '''
- name: Install bottle python package
ansible.builtin.pip:
name: bottle
- name: Install bottle python package on version 0.11
ansible.builtin.pip:
name: bottle==0.11
- name: Install bottle python package with version specifiers
ansible.builtin.pip:
name: bottle>0.10,<0.20,!=0.11
- name: Install multi python packages with version specifiers
ansible.builtin.pip:
name:
- django>1.11.0,<1.12.0
- bottle>0.10,<0.20,!=0.11
- name: Install python package using a proxy
ansible.builtin.pip:
name: six
environment:
http_proxy: 'http://127.0.0.1:8080'
https_proxy: 'https://127.0.0.1:8080'
# You do not have to supply '-e' option in extra_args
- name: Install MyApp using one of the remote protocols (bzr+,hg+,git+,svn+)
ansible.builtin.pip:
name: svn+http://myrepo/svn/MyApp#egg=MyApp
- name: Install MyApp using one of the remote protocols (bzr+,hg+,git+)
ansible.builtin.pip:
name: git+http://myrepo/app/MyApp
- name: Install MyApp from local tarball
ansible.builtin.pip:
name: file:///path/to/MyApp.tar.gz
- name: Install bottle into the specified (virtualenv), inheriting none of the globally installed modules
ansible.builtin.pip:
name: bottle
virtualenv: /my_app/venv
- name: Install bottle into the specified (virtualenv), inheriting globally installed modules
ansible.builtin.pip:
name: bottle
virtualenv: /my_app/venv
virtualenv_site_packages: yes
- name: Install bottle into the specified (virtualenv), using Python 2.7
ansible.builtin.pip:
name: bottle
virtualenv: /my_app/venv
virtualenv_command: virtualenv-2.7
- name: Install bottle within a user home directory
ansible.builtin.pip:
name: bottle
extra_args: --user
- name: Install specified python requirements
ansible.builtin.pip:
requirements: /my_app/requirements.txt
- name: Install specified python requirements in indicated (virtualenv)
ansible.builtin.pip:
requirements: /my_app/requirements.txt
virtualenv: /my_app/venv
- name: Install specified python requirements and custom Index URL
ansible.builtin.pip:
requirements: /my_app/requirements.txt
extra_args: -i https://example.com/pypi/simple
- name: Install specified python requirements offline from a local directory with downloaded packages
ansible.builtin.pip:
requirements: /my_app/requirements.txt
extra_args: "--no-index --find-links=file:///my_downloaded_packages_dir"
- name: Install bottle for Python 3.3 specifically, using the 'pip3.3' executable
ansible.builtin.pip:
name: bottle
executable: pip3.3
- name: Install bottle, forcing reinstallation if it's already installed
ansible.builtin.pip:
name: bottle
state: forcereinstall
- name: Install bottle while ensuring the umask is 0022 (to ensure other users can use it)
ansible.builtin.pip:
name: bottle
umask: "0022"
become: True
'''
RETURN = '''
cmd:
description: pip command used by the module
returned: success
type: str
sample: pip2 install ansible six
name:
description: list of python modules targeted by pip
returned: success
type: list
sample: ['ansible', 'six']
requirements:
description: Path to the requirements file
returned: success, if a requirements file was provided
type: str
sample: "/srv/git/project/requirements.txt"
version:
description: Version of the package specified in 'name'
returned: success, if a name and version were provided
type: str
sample: "2.5.1"
virtualenv:
description: Path to the virtualenv
returned: success, if a virtualenv path was provided
type: str
sample: "/tmp/virtualenv"
'''
import os
import re
import sys
import tempfile
import operator
import shlex
import traceback
from ansible.module_utils.compat.version import LooseVersion
SETUPTOOLS_IMP_ERR = None
try:
from pkg_resources import Requirement
HAS_SETUPTOOLS = True
except ImportError:
HAS_SETUPTOOLS = False
SETUPTOOLS_IMP_ERR = traceback.format_exc()
from ansible.module_utils._text import to_native
from ansible.module_utils.basic import AnsibleModule, is_executable, missing_required_lib
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.six import PY3
#: Python one-liners to be run at the command line that will determine the
# installed version for these special libraries. These are libraries that
# don't end up in the output of pip freeze.
_SPECIAL_PACKAGE_CHECKERS = {'setuptools': 'import setuptools; print(setuptools.__version__)',
'pip': 'import pkg_resources; print(pkg_resources.get_distribution("pip").version)'}
_VCS_RE = re.compile(r'(svn|git|hg|bzr)\+')
op_dict = {">=": operator.ge, "<=": operator.le, ">": operator.gt,
"<": operator.lt, "==": operator.eq, "!=": operator.ne, "~=": operator.ge}
def _is_vcs_url(name):
"""Test whether a name is a vcs url or not."""
return re.match(_VCS_RE, name)
def _is_package_name(name):
"""Test whether the name is a package name or a version specifier."""
return not name.lstrip().startswith(tuple(op_dict.keys()))
def _recover_package_name(names):
"""Recover package names as list from user's raw input.
:input: a mixed and invalid list of names or version specifiers
:return: a list of valid package name
eg.
input: ['django>1.11.1', '<1.11.3', 'ipaddress', 'simpleproject>1.1.0', '<2.0.0']
return: ['django>1.11.1,<1.11.3', 'ipaddress', 'simpleproject>1.1.0,<2.0.0']
input: ['django>1.11.1,<1.11.3,ipaddress', 'simpleproject>1.1.0,<2.0.0']
return: ['django>1.11.1,<1.11.3', 'ipaddress', 'simpleproject>1.1.0,<2.0.0']
"""
# rebuild input name to a flat list so we can tolerate any combination of input
tmp = []
for one_line in names:
tmp.extend(one_line.split(","))
names = tmp
# reconstruct the names
name_parts = []
package_names = []
in_brackets = False
for name in names:
if _is_package_name(name) and not in_brackets:
if name_parts:
package_names.append(",".join(name_parts))
name_parts = []
if "[" in name:
in_brackets = True
if in_brackets and "]" in name:
in_brackets = False
name_parts.append(name)
package_names.append(",".join(name_parts))
return package_names
def _get_cmd_options(module, cmd):
thiscmd = cmd + " --help"
rc, stdout, stderr = module.run_command(thiscmd)
if rc != 0:
module.fail_json(msg="Could not get output from %s: %s" % (thiscmd, stdout + stderr))
words = stdout.strip().split()
cmd_options = [x for x in words if x.startswith('--')]
return cmd_options
def _get_packages(module, pip, chdir):
'''Return results of pip command to get packages.'''
# Try 'pip list' command first.
command = pip + ['list', '--format=freeze']
locale = get_best_parsable_locale(module)
lang_env = {'LANG': locale, 'LC_ALL': locale, 'LC_MESSAGES': locale}
rc, out, err = module.run_command(command, cwd=chdir, environ_update=lang_env)
# If there was an error (pip version too old) then use 'pip freeze'.
if rc != 0:
command = pip + ['freeze']
rc, out, err = module.run_command(command, cwd=chdir)
if rc != 0:
_fail(module, command, out, err)
return ' '.join(command), out, err
def _is_present(module, req, installed_pkgs, pkg_command):
'''Return whether or not package is installed.'''
for pkg in installed_pkgs:
if '==' in pkg:
pkg_name, pkg_version = pkg.split('==')
pkg_name = Package.canonicalize_name(pkg_name)
else:
continue
if pkg_name == req.package_name and req.is_satisfied_by(pkg_version):
return True
return False
def _get_pip(module, env=None, executable=None):
# Older pip only installed under the "/usr/bin/pip" name. Many Linux
# distros install it there.
# By default, we try to use pip required for the current python
# interpreter, so people can use pip to install modules dependencies
candidate_pip_basenames = ('pip2', 'pip')
if PY3:
# pip under python3 installs the "/usr/bin/pip3" name
candidate_pip_basenames = ('pip3',)
pip = None
if executable is not None:
if os.path.isabs(executable):
pip = executable
else:
# If you define your own executable that executable should be the only candidate.
# As noted in the docs, executable doesn't work with virtualenvs.
candidate_pip_basenames = (executable,)
elif executable is None and env is None and _have_pip_module():
# If no executable or virtualenv were specified, use the pip module for the current Python interpreter if available.
# Use of `__main__` is required to support Python 2.6 since support for executing packages with `runpy` was added in Python 2.7.
# Without it Python 2.6 gives the following error: pip is a package and cannot be directly executed
pip = [sys.executable, '-m', 'pip.__main__']
if pip is None:
if env is None:
opt_dirs = []
for basename in candidate_pip_basenames:
pip = module.get_bin_path(basename, False, opt_dirs)
if pip is not None:
break
else:
# For-else: Means that we did not break out of the loop
# (therefore, that pip was not found)
module.fail_json(msg='Unable to find any of %s to use. pip'
' needs to be installed.' % ', '.join(candidate_pip_basenames))
else:
# If we're using a virtualenv we must use the pip from the
# virtualenv
venv_dir = os.path.join(env, 'bin')
candidate_pip_basenames = (candidate_pip_basenames[0], 'pip')
for basename in candidate_pip_basenames:
candidate = os.path.join(venv_dir, basename)
if os.path.exists(candidate) and is_executable(candidate):
pip = candidate
break
else:
# For-else: Means that we did not break out of the loop
# (therefore, that pip was not found)
module.fail_json(msg='Unable to find pip in the virtualenv, %s, ' % env +
'under any of these names: %s. ' % (', '.join(candidate_pip_basenames)) +
'Make sure pip is present in the virtualenv.')
if not isinstance(pip, list):
pip = [pip]
return pip
def _have_pip_module(): # type: () -> bool
"""Return True if the `pip` module can be found using the current Python interpreter, otherwise return False."""
try:
from importlib.util import find_spec
except ImportError:
find_spec = None # type: ignore[assignment] # type: ignore[no-redef]
if find_spec: # type: ignore[truthy-function]
# noinspection PyBroadException
try:
# noinspection PyUnresolvedReferences
found = bool(find_spec('pip'))
except Exception:
found = False
else:
# noinspection PyDeprecation
import imp
# noinspection PyBroadException
try:
# noinspection PyDeprecation
imp.find_module('pip')
except Exception:
found = False
else:
found = True
return found
def _fail(module, cmd, out, err):
msg = ''
if out:
msg += "stdout: %s" % (out, )
if err:
msg += "\n:stderr: %s" % (err, )
module.fail_json(cmd=cmd, msg=msg)
def _get_package_info(module, package, env=None):
"""This is only needed for special packages which do not show up in pip freeze
pip and setuptools fall into this category.
:returns: a string containing the version number if the package is
installed. None if the package is not installed.
"""
if env:
opt_dirs = ['%s/bin' % env]
else:
opt_dirs = []
python_bin = module.get_bin_path('python', False, opt_dirs)
if python_bin is None:
formatted_dep = None
else:
rc, out, err = module.run_command([python_bin, '-c', _SPECIAL_PACKAGE_CHECKERS[package]])
if rc:
formatted_dep = None
else:
formatted_dep = '%s==%s' % (package, out.strip())
return formatted_dep
def setup_virtualenv(module, env, chdir, out, err):
if module.check_mode:
module.exit_json(changed=True)
cmd = shlex.split(module.params['virtualenv_command'])
# Find the binary for the command in the PATH
# and switch the command for the explicit path.
if os.path.basename(cmd[0]) == cmd[0]:
cmd[0] = module.get_bin_path(cmd[0], True)
# Add the system-site-packages option if that
# is enabled, otherwise explicitly set the option
# to not use system-site-packages if that is an
# option provided by the command's help function.
if module.params['virtualenv_site_packages']:
cmd.append('--system-site-packages')
else:
cmd_opts = _get_cmd_options(module, cmd[0])
if '--no-site-packages' in cmd_opts:
cmd.append('--no-site-packages')
virtualenv_python = module.params['virtualenv_python']
# -p is a virtualenv option, not compatible with pyenv or venv
# this conditional validates if the command being used is not any of them
if not any(ex in module.params['virtualenv_command'] for ex in ('pyvenv', '-m venv')):
if virtualenv_python:
cmd.append('-p%s' % virtualenv_python)
elif PY3:
# Ubuntu currently has a patch making virtualenv always
# try to use python2. Since Ubuntu16 works without
# python2 installed, this is a problem. This code mimics
# the upstream behaviour of using the python which invoked
# virtualenv to determine which python is used inside of
# the virtualenv (when none are specified).
cmd.append('-p%s' % sys.executable)
# if venv or pyvenv are used and virtualenv_python is defined, then
# virtualenv_python is ignored, this has to be acknowledged
elif module.params['virtualenv_python']:
module.fail_json(
msg='virtualenv_python should not be used when'
' using the venv module or pyvenv as virtualenv_command'
)
cmd.append(env)
rc, out_venv, err_venv = module.run_command(cmd, cwd=chdir)
out += out_venv
err += err_venv
if rc != 0:
_fail(module, cmd, out, err)
return out, err
class Package:
"""Python distribution package metadata wrapper.
A wrapper class for Requirement, which provides
API to parse package name, version specifier,
test whether a package is already satisfied.
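    Illustrative usage (hypothetical values, shown only as a sketch):
        Package('django>1.11.1,<1.11.3').package_name           # -> 'django'
        Package('django', '1.11.2').is_satisfied_by('1.11.2')   # -> True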
"""
_CANONICALIZE_RE = re.compile(r'[-_.]+')
def __init__(self, name_string, version_string=None):
self._plain_package = False
self.package_name = name_string
self._requirement = None
if version_string:
version_string = version_string.lstrip()
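            # a bare version such as "1.0" is joined as "name==1.0"; a string
            # that already starts with an operator (e.g. ">=1.0") is appended
            # after a space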
separator = '==' if version_string[0].isdigit() else ' '
name_string = separator.join((name_string, version_string))
try:
self._requirement = Requirement.parse(name_string)
            # old pkg_resources will replace 'setuptools' with 'distribute' when it's already installed
if self._requirement.project_name == "distribute" and "setuptools" in name_string:
self.package_name = "setuptools"
self._requirement.project_name = "setuptools"
else:
self.package_name = Package.canonicalize_name(self._requirement.project_name)
self._plain_package = True
        except ValueError:
pass
@property
def has_version_specifier(self):
if self._plain_package:
return bool(self._requirement.specs)
return False
def is_satisfied_by(self, version_to_test):
if not self._plain_package:
return False
try:
return self._requirement.specifier.contains(version_to_test, prereleases=True)
except AttributeError:
            # old setuptools has no specifier, fall back to LooseVersion comparison
version_to_test = LooseVersion(version_to_test)
return all(
op_dict[op](version_to_test, LooseVersion(ver))
for op, ver in self._requirement.specs
)
@staticmethod
def canonicalize_name(name):
# This is taken from PEP 503.
return Package._CANONICALIZE_RE.sub("-", name).lower()
def __str__(self):
if self._plain_package:
return to_native(self._requirement)
return self.package_name
def main():
state_map = dict(
present=['install'],
absent=['uninstall', '-y'],
latest=['install', '-U'],
forcereinstall=['install', '-U', '--force-reinstall'],
)
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=list(state_map.keys())),
name=dict(type='list', elements='str'),
version=dict(type='str'),
requirements=dict(type='str'),
virtualenv=dict(type='path'),
virtualenv_site_packages=dict(type='bool', default=False),
virtualenv_command=dict(type='path', default='virtualenv'),
virtualenv_python=dict(type='str'),
extra_args=dict(type='str'),
editable=dict(type='bool', default=False),
chdir=dict(type='path'),
executable=dict(type='path'),
umask=dict(type='str'),
),
required_one_of=[['name', 'requirements']],
mutually_exclusive=[['name', 'requirements'], ['executable', 'virtualenv']],
supports_check_mode=True,
)
if not HAS_SETUPTOOLS:
module.fail_json(msg=missing_required_lib("setuptools"),
exception=SETUPTOOLS_IMP_ERR)
state = module.params['state']
name = module.params['name']
version = module.params['version']
requirements = module.params['requirements']
extra_args = module.params['extra_args']
chdir = module.params['chdir']
umask = module.params['umask']
env = module.params['virtualenv']
venv_created = False
if env and chdir:
env = os.path.join(chdir, env)
if umask and not isinstance(umask, int):
try:
umask = int(umask, 8)
except Exception:
module.fail_json(msg="umask must be an octal integer",
details=to_native(sys.exc_info()[1]))
old_umask = None
if umask is not None:
old_umask = os.umask(umask)
try:
if state == 'latest' and version is not None:
module.fail_json(msg='version is incompatible with state=latest')
if chdir is None:
# this is done to avoid permissions issues with privilege escalation and virtualenvs
chdir = tempfile.gettempdir()
err = ''
out = ''
if env:
if not os.path.exists(os.path.join(env, 'bin', 'activate')):
venv_created = True
out, err = setup_virtualenv(module, env, chdir, out, err)
pip = _get_pip(module, env, module.params['executable'])
cmd = pip + state_map[state]
# If there's a virtualenv we want things we install to be able to use other
# installations that exist as binaries within this virtualenv. Example: we
# install cython and then gevent -- gevent needs to use the cython binary,
# not just a python package that will be found by calling the right python.
# So if there's a virtualenv, we add that bin/ to the beginning of the PATH
# in run_command by setting path_prefix here.
path_prefix = None
if env:
path_prefix = os.path.join(env, 'bin')
# Automatically apply -e option to extra_args when source is a VCS url. VCS
# includes those beginning with svn+, git+, hg+ or bzr+
has_vcs = False
if name:
for pkg in name:
if pkg and _is_vcs_url(pkg):
has_vcs = True
break
# convert raw input package names to Package instances
packages = [Package(pkg) for pkg in _recover_package_name(name)]
# check invalid combination of arguments
if version is not None:
if len(packages) > 1:
module.fail_json(
msg="'version' argument is ambiguous when installing multiple package distributions. "
"Please specify version restrictions next to each package in 'name' argument."
)
if packages[0].has_version_specifier:
module.fail_json(
msg="The 'version' argument conflicts with any version specifier provided along with a package name. "
"Please keep the version specifier, but remove the 'version' argument."
)
# if the version specifier is provided by version, append that into the package
packages[0] = Package(to_native(packages[0]), version)
if module.params['editable']:
args_list = [] # used if extra_args is not used at all
if extra_args:
args_list = extra_args.split(' ')
if '-e' not in args_list:
args_list.append('-e')
# Ok, we will reconstruct the option string
extra_args = ' '.join(args_list)
if extra_args:
cmd.extend(shlex.split(extra_args))
if name:
cmd.extend(to_native(p) for p in packages)
elif requirements:
cmd.extend(['-r', requirements])
else:
module.exit_json(
changed=False,
warnings=["No valid name or requirements file found."],
)
if module.check_mode:
if extra_args or requirements or state == 'latest' or not name:
module.exit_json(changed=True)
pkg_cmd, out_pip, err_pip = _get_packages(module, pip, chdir)
out += out_pip
err += err_pip
changed = False
if name:
pkg_list = [p for p in out.split('\n') if not p.startswith('You are using') and not p.startswith('You should consider') and p]
if pkg_cmd.endswith(' freeze') and ('pip' in name or 'setuptools' in name):
# Older versions of pip (pre-1.3) do not have pip list.
# pip freeze does not list setuptools or pip in its output
                # So we need to get those via a special case
for pkg in ('setuptools', 'pip'):
if pkg in name:
formatted_dep = _get_package_info(module, pkg, env)
if formatted_dep is not None:
pkg_list.append(formatted_dep)
out += '%s\n' % formatted_dep
for package in packages:
is_present = _is_present(module, package, pkg_list, pkg_cmd)
if (state == 'present' and not is_present) or (state == 'absent' and is_present):
changed = True
break
module.exit_json(changed=changed, cmd=pkg_cmd, stdout=out, stderr=err)
out_freeze_before = None
if requirements or has_vcs:
_, out_freeze_before, _ = _get_packages(module, pip, chdir)
rc, out_pip, err_pip = module.run_command(cmd, path_prefix=path_prefix, cwd=chdir)
out += out_pip
err += err_pip
if rc == 1 and state == 'absent' and \
('not installed' in out_pip or 'not installed' in err_pip):
pass # rc is 1 when attempting to uninstall non-installed package
elif rc != 0:
_fail(module, cmd, out, err)
if state == 'absent':
changed = 'Successfully uninstalled' in out_pip
else:
if out_freeze_before is None:
changed = 'Successfully installed' in out_pip
else:
_, out_freeze_after, _ = _get_packages(module, pip, chdir)
changed = out_freeze_before != out_freeze_after
changed = changed or venv_created
module.exit_json(changed=changed, cmd=cmd, name=name, version=version,
state=state, requirements=requirements, virtualenv=env,
stdout=out, stderr=err)
finally:
if old_umask is not None:
os.umask(old_umask)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,372 |
When using `-m 'venv'` in virtualenv_command, quoting 'venv' breaks it
|
### Summary
When using `ansible.builtin.pip`, if you wish to create a virtualenv using the `venv` module rather than one of the `virtualenv` wrapper scripts, the standard way to do so is `virtualenv_command: "FOO -m venv"`, where FOO is the absolute path to the Python interpreter you wish to use, e.g. `/usr/bin/python3.6`.
However, if you wrap the 'venv' module name in quotes, to protect it as a shell string literal (e.g. `virtualenv_command: "/usr/bin/python3.6 -m 'venv'"`), an error occurs.
Related: https://github.com/ansible/ansible/issues/52275
### Issue Type
Bug Report
### Component Name
ansible.builtin.pip
### Ansible Version
```console
$ ansible --version
ansible [core 2.11.6]
config file = None
configured module search path = ['/home/votisupport/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/votisupport/test/venv/lib64/python3.6/site-packages/ansible
ansible collection location = /home/votisupport/.ansible/collections:/usr/share/ansible/collections
executable location = /home/votisupport/test/venv/bin/ansible
python version = 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
CentOS Linux release 7.9.2009 (Core)
### Steps to Reproduce
```yaml
- name: "Demonstrate Ansible bug"
hosts: "127.0.0.1"
remote_user: "root"
connection: "local"
become: true
become_user: "root"
gather_facts: false
tasks:
- name: "Update pip in test virtualenv"
ansible.builtin.pip:
name:
- "pip==21.1.2"
virtualenv: "/test_venv"
virtualenv_command: "/usr/bin/python3.6 -m 'venv'"
```
### Expected Results
* The value `/usr/bin/python3.6 -m venv` is treated as an executable path followed by 2 command-line arguments rather than a path which contains literal spaces in it
* One of the example values given in [the documentation for the 'virtualenv_command' parameter](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/pip_module.html#parameter-virtualenv_command) is `~/bin/virtualenv`
* Therefore the value is being passed to and parsed by a shell
* Therefore:
  a. `/usr/bin/python3.6 -m 'venv'` should be functionally identical to `/usr/bin/python3.6 -m venv`
  b. Per shell scripting best practices, 'venv' should be quoted, even though it doesn't contain any whitespace or special characters, because it is a command-line argument which can take arbitrary string values (see the sketch below).
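
A minimal sketch of the expected parsing. The module already splits `virtualenv_command` with `shlex.split()` (see `setup_virtualenv` above), so the quoted and unquoted forms yield the same argument vector; the failure instead appears to come from the literal substring check for `'-m venv'`, which `-m 'venv'` does not match:

```python
import shlex

# Quoted and unquoted forms parse to the same argv under shell rules:
print(shlex.split("/usr/bin/python3.6 -m venv"))    # ['/usr/bin/python3.6', '-m', 'venv']
print(shlex.split("/usr/bin/python3.6 -m 'venv'"))  # ['/usr/bin/python3.6', '-m', 'venv']

# ...but the module's plain substring test misses the quoted form,
# so it appends the virtualenv-only '-p' option to the venv command:
print('-m venv' in "/usr/bin/python3.6 -m 'venv'")  # False
```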
### Actual Results
```console
(venv) [REDACTED@REDACTED test]$ ansible-playbook -vvvv 'playbook.yml'
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version: 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]. This feature will be removed from ansible-core in version 2.12.
Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible-playbook [core 2.11.6]
config file = None
configured module search path = ['/home/REDACTED/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/REDACTED/test/venv/lib64/python3.6/site-packages/ansible
ansible collection location = /home/REDACTED/.ansible/collections:/usr/share/ansible/collections
executable location = /home/REDACTED/test/venv/bin/ansible-playbook
python version = 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
jinja version = 3.0.3
libyaml = True
No config file found; using defaults
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Loading callback plugin default of type stdout, v2.0 from /home/REDACTED/test/venv/lib64/python3.6/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: playbook.yml ********************************************************************************************************************************************************************************************************************************************************
Positional arguments: playbook.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in playbook.yml
PLAY [Demonstrate Ansible bug] ************************************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [Update pip in test virtualenv] ******************************************************************************************************************************************************************************************************************************************
task path: /home/REDACTED/test/playbook.yml:12
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: REDACTED
<127.0.0.1> EXEC /bin/sh -c 'echo ~REDACTED && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/REDACTED/.ansible/tmp `"&& mkdir "` echo /home/REDACTED/.ansible/tmp/ansible-tmp-1637837031.3618085-76233-273600869189678 `" && echo ansible-tmp-1637837031.3618085-76233-273600869189678="` echo /home/REDACTED/.ansible/tmp/ansible-tmp-1637837031.3618085-76233-273600869189678 `" ) && sleep 0'
Using module file /home/REDACTED/test/venv/lib64/python3.6/site-packages/ansible/modules/pip.py
<127.0.0.1> PUT /home/REDACTED/.ansible/tmp/ansible-local-76226li2q81vb/tmptxkwmgu5 TO /home/REDACTED/.ansible/tmp/ansible-tmp-1637837031.3618085-76233-273600869189678/AnsiballZ_pip.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/REDACTED/.ansible/tmp/ansible-tmp-1637837031.3618085-76233-273600869189678/ /home/REDACTED/.ansible/tmp/ansible-tmp-1637837031.3618085-76233-273600869189678/AnsiballZ_pip.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-ofcnvczhymkuiiofkpnkkjeubhdkhtym ; /home/REDACTED/test/venv/bin/python3.6 /home/REDACTED/.ansible/tmp/ansible-tmp-1637837031.3618085-76233-273600869189678/AnsiballZ_pip.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/REDACTED/.ansible/tmp/ansible-tmp-1637837031.3618085-76233-273600869189678/ > /dev/null 2>&1 && sleep 0'
fatal: [127.0.0.1]: FAILED! => {
"changed": false,
"cmd": [
"/usr/bin/python3.6",
"-m",
"venv",
"-p/home/REDACTED/test/venv/bin/python3.6",
"/test_venv"
],
"invocation": {
"module_args": {
"chdir": null,
"editable": false,
"executable": null,
"extra_args": null,
"name": [
"pip==21.1.2"
],
"requirements": null,
"state": "present",
"umask": null,
"version": null,
"virtualenv": "/test_venv",
"virtualenv_command": "/usr/bin/python3.6 -m 'venv'",
"virtualenv_python": null,
"virtualenv_site_packages": false
}
},
"msg": "\n:stderr: usage: venv [-h] [--system-site-packages] [--symlinks | --copies] [--clear]\n [--upgrade] [--without-pip] [--prompt PROMPT]\n ENV_DIR [ENV_DIR ...]\nvenv: error: unrecognized arguments: -p/home/REDACTED/test/venv/bin/python3.6\n"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************
127.0.0.1 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76372
|
https://github.com/ansible/ansible/pull/80624
|
251360314d0b385322d36001f3deb9820c3febc8
|
7f48fa01295e85f94437041688fb898e870c5154
| 2021-11-26T01:54:21Z |
python
| 2023-05-02T07:52:11Z |
test/integration/targets/pip/tasks/pip.yml
|
# test code for the pip module
# (c) 2014, Michael DeHaan <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# FIXME: replace the python test package
# first some tests installed system-wide
# verify things were not installed to start with
- name: ensure packages are not installed (precondition setup)
pip:
name: "{{ pip_test_packages }}"
state: absent
# verify that setting a package that is not installed to absent
# results in an unchanged state and that the test packages are not
# installed
- name: ensure packages are not installed
pip:
name: "{{ pip_test_packages }}"
state: absent
register: uninstall_result
- name: removing unremoved packages should return unchanged
assert:
that:
- "not (uninstall_result is changed)"
- command: "{{ ansible_python.executable }} -c 'import {{ item }}'"
register: absent_result
failed_when: "absent_result.rc == 0"
loop: '{{ pip_test_modules }}'
# now we're going to install the test package knowing it is uninstalled
# and check that installation was ok
- name: ensure packages are installed
pip:
name: "{{ pip_test_packages }}"
state: present
register: install_result
- name: verify we recorded a change
assert:
that:
- "install_result is changed"
- command: "{{ ansible_python.executable }} -c 'import {{ item }}'"
loop: '{{ pip_test_modules }}'
# now remove it to test uninstallation of a package we are sure is installed
- name: now uninstall so we can see that a change occurred
pip:
name: "{{ pip_test_packages }}"
state: absent
register: absent2
- name: assert a change occurred on uninstallation
assert:
that:
- "absent2 is changed"
# put the test packages back
- name: now put it back in case someone wanted it (like us!)
pip:
name: "{{ pip_test_packages }}"
state: present
# Test virtualenv installations
- name: "make sure the test env doesn't exist"
file:
state: absent
name: "{{ remote_tmp_dir }}/pipenv"
- name: create a requirement file with an vcs url
copy:
dest: "{{ remote_tmp_dir }}/pipreq.txt"
content: "-e git+https://github.com/dvarrazzo/pyiso8601#egg=iso8601"
- name: install the requirement file in a virtualenv
pip:
requirements: "{{ remote_tmp_dir}}/pipreq.txt"
virtualenv: "{{ remote_tmp_dir }}/pipenv"
register: req_installed
- name: check that a change occurred
assert:
that:
- "req_installed is changed"
- name: "repeat installation to check status didn't change"
pip:
requirements: "{{ remote_tmp_dir}}/pipreq.txt"
virtualenv: "{{ remote_tmp_dir }}/pipenv"
register: req_installed
- name: "check that a change didn't occurr this time (bug ansible#1705)"
assert:
that:
- "not (req_installed is changed)"
- name: install the same module from url
pip:
name: "git+https://github.com/dvarrazzo/pyiso8601#egg=iso8601"
virtualenv: "{{ remote_tmp_dir }}/pipenv"
editable: True
register: url_installed
- name: "check that a change didn't occurr (bug ansible-modules-core#1645)"
assert:
that:
- "not (url_installed is changed)"
# Test pip package in check mode doesn't always report changed.
# Special case for pip
- name: check for pip package
pip:
name: pip
virtualenv: "{{ remote_tmp_dir }}/pipenv"
state: present
- name: check for pip package in check_mode
pip:
name: pip
virtualenv: "{{ remote_tmp_dir }}/pipenv"
state: present
check_mode: True
register: pip_check_mode
- name: make sure pip in check_mode doesn't report changed
assert:
that:
- "not (pip_check_mode is changed)"
# Special case for setuptools
- name: check for setuptools package
pip:
name: setuptools
virtualenv: "{{ remote_tmp_dir }}/pipenv"
state: present
- name: check for setuptools package in check_mode
pip:
name: setuptools
virtualenv: "{{ remote_tmp_dir }}/pipenv"
state: present
check_mode: True
register: setuptools_check_mode
- name: make sure setuptools in check_mode doesn't report changed
assert:
that:
- "not (setuptools_check_mode is changed)"
# Normal case
- name: check for q package
pip:
name: q
virtualenv: "{{ remote_tmp_dir }}/pipenv"
state: present
- name: check for q package in check_mode
pip:
name: q
virtualenv: "{{ remote_tmp_dir }}/pipenv"
state: present
check_mode: True
register: q_check_mode
- name: make sure q in check_mode doesn't report changed
assert:
that:
- "not (q_check_mode is changed)"
# Case with package name that has a different package name case and an
# underscore instead of a hyphen
- name: check for Junit-XML package
pip:
name: Junit-XML
virtualenv: "{{ remote_tmp_dir }}/pipenv"
state: present
- name: check for Junit-XML package in check_mode
pip:
name: Junit-XML
virtualenv: "{{ remote_tmp_dir }}/pipenv"
state: present
check_mode: True
register: diff_case_check_mode
- name: make sure Junit-XML in check_mode doesn't report changed
assert:
that:
- "diff_case_check_mode is not changed"
# ansible#23204
- name: ensure is a fresh virtualenv
file:
state: absent
name: "{{ remote_tmp_dir }}/pipenv"
- name: install pip through pip into fresh virtualenv
pip:
name: pip
virtualenv: "{{ remote_tmp_dir }}/pipenv"
register: pip_install_venv
- name: make sure pip in fresh virtualenv reports changed
assert:
that:
- "pip_install_venv is changed"
# https://github.com/ansible/ansible/issues/37912
# support chdir without virtualenv
- name: create chdir test directories
file:
state: directory
name: "{{ remote_tmp_dir }}/{{ item }}"
loop:
- pip_module
- pip_root
- pip_module/ansible_test_pip_chdir
- name: copy test module
copy:
src: "{{ item }}"
dest: "{{ remote_tmp_dir }}/pip_module/{{ item }}"
loop:
- setup.py
- ansible_test_pip_chdir/__init__.py
- name: install test module
pip:
name: .
chdir: "{{ remote_tmp_dir }}/pip_module"
extra_args: --user --upgrade --root {{ remote_tmp_dir }}/pip_root
- name: register python_site_lib
command: '{{ ansible_python.executable }} -c "import site; print(site.USER_SITE)"'
register: pip_python_site_lib
- name: register python_user_base
command: '{{ ansible_python.executable }} -c "import site; print(site.USER_BASE)"'
register: pip_python_user_base
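# compose the --root prefix with the interpreter's user site/base paths so
# the entry point installed above can be found and executed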
- name: run test module
shell: "PYTHONPATH=$(echo {{ remote_tmp_dir }}/pip_root{{ pip_python_site_lib.stdout }}) {{ remote_tmp_dir }}/pip_root{{ pip_python_user_base.stdout }}/bin/ansible_test_pip_chdir"
register: pip_chdir_command
- name: make sure command ran
assert:
that:
- pip_chdir_command.stdout == "success"
# https://github.com/ansible/ansible/issues/25122
- name: ensure is a fresh virtualenv
file:
state: absent
name: "{{ remote_tmp_dir }}/pipenv"
- name: install package into fresh virtualenv + chdir
pip:
name: q
chdir: "{{ remote_tmp_dir }}/"
virtualenv: "pipenv"
state: present
register: venv_chdir
- name: make sure fresh virtualenv + chdir reports changed
assert:
that:
- "venv_chdir is changed"
# ansible#38785
- name: allow empty list of packages
pip:
name: []
register: pip_install_empty
- name: ensure empty install is successful
assert:
that:
- "not (pip_install_empty is changed)"
# https://github.com/ansible/ansible/issues/41043
- block:
- name: Ensure previous virtualenv no longer exists
file:
state: absent
name: "{{ remote_tmp_dir }}/pipenv"
- name: do not consider an empty string as a version
pip:
name: q
state: present
version: ""
virtualenv: "{{ remote_tmp_dir }}/pipenv"
register: pip_empty_version_string
- name: test idempotency with empty string
pip:
name: q
state: present
version: ""
virtualenv: "{{ remote_tmp_dir }}/pipenv"
register: pip_empty_version_string_idempotency
- name: test idempotency without empty string
pip:
name: q
state: present
virtualenv: "{{ remote_tmp_dir }}/pipenv"
register: pip_no_empty_version_string_idempotency
# 'present' and version=="" is analogous to latest when first installed
- name: ensure we installed the latest version
pip:
name: q
state: latest
virtualenv: "{{ remote_tmp_dir }}/pipenv"
register: pip_empty_version_idempotency
- name: ensure that installation worked and is idempotent
assert:
that:
- pip_empty_version_string is changed
- pip_empty_version_string is successful
- pip_empty_version_idempotency is not changed
- pip_no_empty_version_string_idempotency is not changed
- pip_empty_version_string_idempotency is not changed
# test version specifiers
- name: make sure no test_package installed now
pip:
name: "{{ pip_test_packages }}"
state: absent
- name: install package with version specifiers
pip:
name: "{{ pip_test_package }}"
version: "<100,!=1.0,>0.0.0"
register: version
- name: assert package installed correctly
assert:
that: "version.changed"
- name: reinstall package
pip:
name: "{{ pip_test_package }}"
version: "<100,!=1.0,>0.0.0"
register: version2
- name: assert no changes occurred
assert:
that: "not version2.changed"
- name: test the check_mode
pip:
name: "{{ pip_test_package }}"
version: "<100,!=1.0,>0.0.0"
check_mode: yes
register: version3
- name: assert no changes
assert:
that: "not version3.changed"
- name: test the check_mode with unsatisfied version
pip:
name: "{{ pip_test_package }}"
version: ">100.0.0"
check_mode: yes
register: version4
- name: assert changed
assert:
that: "version4.changed"
- name: uninstall test packages for next test
pip:
name: "{{ pip_test_packages }}"
state: absent
- name: test invalid combination of arguments
pip:
name: "{{ pip_test_pkg_ver }}"
version: "1.11.1"
ignore_errors: yes
register: version5
- name: assert the invalid combination should fail
assert:
that: "version5 is failed"
- name: another invalid combination of arguments
pip:
name: "{{ pip_test_pkg_ver[0] }}"
version: "<100.0.0"
ignore_errors: yes
register: version6
- name: assert invalid combination should fail
assert:
that: "version6 is failed"
- name: try to install invalid package
pip:
name: "{{ pip_test_pkg_ver_unsatisfied }}"
ignore_errors: yes
register: version7
- name: assert install should fail
assert:
that: "version7 is failed"
- name: test install multi-packages with version specifiers
pip:
name: "{{ pip_test_pkg_ver }}"
register: version8
- name: assert packages installed correctly
assert:
that: "version8.changed"
- name: test install multi-packages with check_mode
pip:
name: "{{ pip_test_pkg_ver }}"
check_mode: yes
register: version9
- name: assert no change
assert:
that: "not version9.changed"
- name: test install unsatisfied multi-packages with check_mode
pip:
name: "{{ pip_test_pkg_ver_unsatisfied }}"
check_mode: yes
register: version10
- name: assert changes needed
assert:
that: "version10.changed"
- name: uninstall packages for next test
pip:
name: "{{ pip_test_packages }}"
state: absent
- name: test install of multiple packages provided as one single string
pip:
name: "{{pip_test_pkg_ver[0]}},{{pip_test_pkg_ver[1]}}"
register: version11
- name: assert the install ran correctly
assert:
that: "version11.changed"
- name: test install of multiple packages provided as one single string with check_mode
pip:
name: "{{pip_test_pkg_ver[0]}},{{pip_test_pkg_ver[1]}}"
check_mode: yes
register: version12
- name: assert no changes needed
assert:
that: "not version12.changed"
- name: test module can parse a combination of multi-package one-liners and a git url
pip:
name:
- git+https://github.com/dvarrazzo/pyiso8601#egg=iso8601
- "{{pip_test_pkg_ver[0]}},{{pip_test_pkg_ver[1]}}"
- name: test the invalid package name
pip:
name: djan=+-~!@#$go>1.11.1,<1.11.3
ignore_errors: yes
register: version13
- name: the invalid package should make module failed
assert:
that: "version13 is failed"
- name: try install package with setuptools extras
pip:
name:
- "{{pip_test_package}}[test]"
- name: clean up
pip:
name: "{{ pip_test_packages }}"
state: absent
# https://github.com/ansible/ansible/issues/47198
# distribute is a legacy package that will fail on newer Python 3 versions
- block:
- name: make sure the virtualenv does not exist
file:
state: absent
name: "{{ remote_tmp_dir }}/pipenv"
- name: install distribute in the virtualenv
pip:
# using -c for constraints is not supported as long as tests are executed using the centos6 container
# since the pip version in the venv is not upgraded and is too old (6.0.8)
name:
- distribute
- setuptools<45 # setuptools 45 and later require python 3.5 or later
virtualenv: "{{ remote_tmp_dir }}/pipenv"
state: present
- name: try to remove distribute
pip:
state: "absent"
name: "distribute"
virtualenv: "{{ remote_tmp_dir }}/pipenv"
ignore_errors: yes
register: remove_distribute
- name: inspect the cmd
assert:
that: "'distribute' in remove_distribute.cmd"
when: ansible_python.version.major == 2
### test virtualenv_command begin ###
- name: Test virtualenv command with arguments
when: ansible_python.version.major == 2
block:
- name: make sure the virtualenv does not exist
file:
state: absent
name: "{{ remote_tmp_dir }}/pipenv"
# ref: https://github.com/ansible/ansible/issues/52275
- name: install using virtualenv_command with arguments
pip:
name: "{{ pip_test_package }}"
virtualenv: "{{ remote_tmp_dir }}/pipenv"
virtualenv_command: "{{ command.stdout_lines[0] | basename }} --verbose"
state: present
register: version13
- name: ensure install using virtualenv_command with arguments was successful
assert:
that:
- "version13 is success"
### test virtualenv_command end ###
# https://github.com/ansible/ansible/issues/68592
# Handle pre-release version numbers in check_mode for already-installed
# packages.
- block:
- name: Install a pre-release version of a package
pip:
name: fallible
version: 0.0.1a2
state: present
- name: Use check_mode and ensure that the package is shown as installed
check_mode: true
pip:
name: fallible
state: present
register: pip_prereleases
- name: Uninstall the pre-release package if we need to
pip:
name: fallible
version: 0.0.1a2
state: absent
when: pip_prereleases is changed
- assert:
that:
- pip_prereleases is successful
- pip_prereleases is not changed
- '"fallible==0.0.1a2" in pip_prereleases.stdout_lines'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,410 |
Add FreeBSD 13.2 to ansible-test
|
### Summary
FreeBSD 13.2 is [expected](https://www.freebsd.org/releases/13.2R/schedule/) to be released on April 11th. This is a remote VM addition.
FreeBSD 13.2 has been [released](https://www.freebsd.org/releases/13.2R/announce/).
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80410
|
https://github.com/ansible/ansible/pull/80698
|
2cd1744be3d96f1eec674e1b66433c3730caa24f
|
d12aa7f69cefddf8b849a93186d4afd8e6615bc5
| 2023-04-05T21:27:37Z |
python
| 2023-05-03T17:24:53Z |
.azure-pipelines/azure-pipelines.yml
|
trigger:
batch: true
branches:
include:
- devel
- stable-*
pr:
autoCancel: true
branches:
include:
- devel
- stable-*
schedules:
- cron: 0 7 * * *
displayName: Nightly
always: true
branches:
include:
- devel
- stable-*
variables:
- name: checkoutPath
value: ansible
- name: coverageBranches
value: devel
- name: entryPoint
value: .azure-pipelines/commands/entry-point.sh
- name: fetchDepth
value: 500
- name: defaultContainer
value: quay.io/ansible/azure-pipelines-test-container:3.0.0
pool: Standard
stages:
- stage: Sanity
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Test {0}
testFormat: sanity/{0}
targets:
- test: 1
- test: 2
- test: 3
- test: 4
- test: 5
- stage: Units
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: units/{0}
targets:
- test: 2.7
- test: 3.5
- test: 3.6
- test: 3.7
- test: 3.8
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Windows
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Server {0}
testFormat: windows/{0}/1
targets:
- test: 2012
- test: 2012-R2
- test: 2016
- test: 2019
- test: 2022
- stage: Remote
dependsOn: []
jobs:
- template: templates/matrix.yml # context/target
parameters:
targets:
- name: macOS 13.2
test: macos/13.2
- name: RHEL 7.9
test: rhel/7.9
- name: RHEL 8.7 py36
test: rhel/[email protected]
- name: RHEL 8.7 py39
test: rhel/[email protected]
- name: RHEL 9.1
test: rhel/9.1
- name: FreeBSD 12.4
test: freebsd/12.4
- name: FreeBSD 13.1
test: freebsd/13.1
groups:
- 1
- 2
- template: templates/matrix.yml # context/controller
parameters:
targets:
- name: macOS 13.2
test: macos/13.2
- name: RHEL 8.7
test: rhel/8.7
- name: RHEL 9.1
test: rhel/9.1
- name: FreeBSD 13.1
test: freebsd/13.1
groups:
- 3
- 4
- 5
- template: templates/matrix.yml # context/controller (ansible-test container management)
parameters:
targets:
- name: Alpine 3.17
test: alpine/3.17
- name: Fedora 37
test: fedora/37
- name: RHEL 8.7
test: rhel/8.7
- name: RHEL 9.1
test: rhel/9.1
- name: Ubuntu 20.04
test: ubuntu/20.04
- name: Ubuntu 22.04
test: ubuntu/22.04
groups:
- 6
- stage: Docker
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: linux/{0}
targets:
- name: Alpine 3
test: alpine3
- name: CentOS 7
test: centos7
- name: Fedora 37
test: fedora37
- name: openSUSE 15
test: opensuse15
- name: Ubuntu 20.04
test: ubuntu2004
- name: Ubuntu 22.04
test: ubuntu2204
groups:
- 1
- 2
- template: templates/matrix.yml
parameters:
testFormat: linux/{0}
targets:
- name: Alpine 3
test: alpine3
- name: Fedora 37
test: fedora37
- name: Ubuntu 22.04
test: ubuntu2204
groups:
- 3
- 4
- 5
- stage: Galaxy
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: galaxy/{0}/1
targets:
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Generic
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: generic/{0}/1
targets:
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Incidental_Windows
displayName: Incidental Windows
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Server {0}
testFormat: i/windows/{0}
targets:
- test: 2012
- test: 2012-R2
- test: 2016
- test: 2019
- test: 2022
- stage: Incidental
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: i/{0}/1
targets:
- name: IOS Python
test: ios/csr1000v/
- name: VyOS Python
test: vyos/1.1.8/
- stage: Summary
condition: succeededOrFailed()
dependsOn:
- Sanity
- Units
- Windows
- Remote
- Docker
- Galaxy
- Generic
- Incidental_Windows
- Incidental
jobs:
- template: templates/coverage.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,410 |
Add FreeBSD 13.2 to ansible-test
|
### Summary
FreeBSD 13.2 is [expected](https://www.freebsd.org/releases/13.2R/schedule/) to be released on April 11th. This is a remote VM addition.
FreeBSD 13.2 has been [released](https://www.freebsd.org/releases/13.2R/announce/).
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80410
|
https://github.com/ansible/ansible/pull/80698
|
2cd1744be3d96f1eec674e1b66433c3730caa24f
|
d12aa7f69cefddf8b849a93186d4afd8e6615bc5
| 2023-04-05T21:27:37Z |
python
| 2023-05-03T17:24:53Z |
changelogs/fragments/ci_freebsd_new.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,410 |
Add FreeBSD 13.2 to ansible-test
|
### Summary
FreeBSD 13.2 is [expected](https://www.freebsd.org/releases/13.2R/schedule/) to be released on April 11th. This is a remote VM addition.
FreeBSD 13.2 has been [released](https://www.freebsd.org/releases/13.2R/announce/).
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80410
|
https://github.com/ansible/ansible/pull/80698
|
2cd1744be3d96f1eec674e1b66433c3730caa24f
|
d12aa7f69cefddf8b849a93186d4afd8e6615bc5
| 2023-04-05T21:27:37Z |
python
| 2023-05-03T17:24:53Z |
test/lib/ansible_test/_data/completion/remote.txt
|
alpine/3.17 python=3.10 become=doas_sudo provider=aws arch=x86_64
alpine become=doas_sudo provider=aws arch=x86_64
fedora/37 python=3.11 become=sudo provider=aws arch=x86_64
fedora become=sudo provider=aws arch=x86_64
freebsd/12.4 python=3.9 python_dir=/usr/local/bin become=su_sudo provider=aws arch=x86_64
freebsd/13.1 python=3.8,3.7,3.9,3.10 python_dir=/usr/local/bin become=su_sudo provider=aws arch=x86_64
freebsd python_dir=/usr/local/bin become=su_sudo provider=aws arch=x86_64
macos/13.2 python=3.11 python_dir=/usr/local/bin become=sudo provider=parallels arch=x86_64
macos python_dir=/usr/local/bin become=sudo provider=parallels arch=x86_64
rhel/7.9 python=2.7 become=sudo provider=aws arch=x86_64
rhel/8.7 python=3.6,3.8,3.9 become=sudo provider=aws arch=x86_64
rhel/9.1 python=3.9 become=sudo provider=aws arch=x86_64
rhel become=sudo provider=aws arch=x86_64
ubuntu/20.04 python=3.8,3.9 become=sudo provider=aws arch=x86_64
ubuntu/22.04 python=3.10 become=sudo provider=aws arch=x86_64
ubuntu become=sudo provider=aws arch=x86_64
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,605 |
Quoted strings get concatenated when using jinja call block with callback
|
### Summary
Under certain conditions, using a `template` task with a [jinja call block and a callback](https://jinja.palletsprojects.com/en/3.1.x/templates/#call) that returns a list of quoted strings, the inner quotes are collapsed/removed and the list of quoted strings is turned into a single quoted concatenated string. So instead of `"a" "b"` the resulting file contains `"ab"`.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = /home/user/exchange/repro/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/sbin/ansible
python version = 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /home/user/repro/ansible.cfg
DEFAULT_JINJA2_NATIVE(/home/user/repro/ansible.cfg) = True
```
### OS / Environment
Initially encountered on ArchLinux with the distribution-packages. Later confirmed using the example below on ArchLinux with the version from PyPI and the `devel` branch.
### Steps to Reproduce
ansible.cfg
```ini
[defaults]
jinja2_native = yes
```
playbook.yml
```yaml
---
- hosts: localhost
tasks:
- template:
src: "{{ 'template.j2' }}"
dest: "/tmp/output"
```
template.j2
```jinja
#jinja2: foo:True
{% macro my_macro() %}
{{ caller() }}
{% endmacro %}
{% call my_macro() -%}
"a" "b"
{% endcall %}
```
In a directory with those three files, execute
```
$ ansible-playbook playbook.yml
```
A few observations I made while reducing the original setup to the minimal example above:
* If you turn off the jinja native mode by setting `jinja2_native = no` in `ansible.cfg`, it works as expected. This is contrary to the [template module docs](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/template_module.html#notes) which state "The `jinja2_native` setting has no effect."
* If the definition of the `template` task in the playbook does not contain a jinja expression, it works as expected. This is the reason why in the example `src` is an expression with a constant instead of just a plaintext value.
* If the template does not contain a jinja override (the first `#jinja2` line), it works as expected. You get an error if you don't set at least one override, so in the example I set a unused `foo` value to make it effectively a no-op.
* If the `-` whitespace modifier is removed from the end-delimiter of the call start-statement in the template, it works as expected.
* It does not matter what type of quotes (single or double), what whitespace (spaces or tabs) sits between them, or how many elements there are: they all get collapsed. `"a" "b" "c" "d" 'e' 'f' "g" 'h'` in the template results in `"abcdefgh"` in the file. It basically behaves like [python's string literal concatenation](https://docs.python.org/3/reference/lexical_analysis.html#string-literal-concatenation); see the sketch after this list.
* Putting anything but whitespace between the quoted strings makes it work as expected, e.g. `"a"+ "b"` in the template will be written exactly like that into the file.
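
A minimal sketch of a plausible mechanism (this is an assumption on my part): Jinja2's native mode concatenates rendered chunks with `jinja2.nativetypes.native_concat`, which tries `ast.literal_eval` on the result, and Python's literal parser merges adjacent string literals:

```python
import ast

# Adjacent string literals form a single Python string literal, which
# matches the observed "a" "b" -> "ab" collapse:
print(ast.literal_eval('"a" "b"'))  # ab
print(ast.literal_eval("'e' 'f'"))  # ef
```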
### Expected Results
A file `/tmp/output` is created that contains `"a" "b"` (plus some whitespace around it, but that's not really important here).
```
$ ansible-playbook playbook.yml -D
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ********************************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [template] *********************************************************************************************************************************************************************************************************************************************************************
--- before
+++ after: /home/user/.ansible/tmp/ansible-local-209txmb3l0y/tmp4vuswx84/template.j2
@@ -0,0 +1,4 @@
+
+
+ "a" "b"
+
changed: [localhost]
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook playbook.yml -D -vvvv
ansible-playbook [core 2.14.4]
config file = /home/user/repro/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/sbin/ansible-playbook
python version = 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
Using /home/user/repro/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python3.10/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: playbook.yml **************************************************************************************************************************************************************************************************************************************************************
Positional arguments: playbook.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
diff: True
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in playbook.yml
PLAY [localhost] ********************************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************************************************************************************************************
task path: /home/user/repro/playbook.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: user
<127.0.0.1> EXEC /bin/sh -c 'echo ~user && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/user/.ansible/tmp `"&& mkdir "` echo /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029 `" && echo ansible-tmp-1682203193.5547385-136-66384164090029="` echo /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029 `" ) && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmp3htqbzr4 TO /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/ /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [template] *********************************************************************************************************************************************************************************************************************************************************************
task path: /home/user/repro/playbook.yml:4
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: user
<127.0.0.1> EXEC /bin/sh -c 'echo ~user && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/user/.ansible/tmp `"&& mkdir "` echo /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156 `" && echo ansible-tmp-1682203194.4804294-179-10085099603156="` echo /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156 `" ) && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/stat.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpal8urkxn TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_stat.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_stat.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_stat.py && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/file.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpd272p4fp TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_file.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_file.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_file.py && sleep 0'
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpwxlecmmr/template.j2 TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/copy.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpmvc5jmva TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_copy.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_copy.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_copy.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ > /dev/null 2>&1 && sleep 0'
--- before
+++ after: /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpwxlecmmr/template.j2
@@ -0,0 +1,3 @@
+
+
+ "ab"
changed: [localhost] => {
"changed": true,
"checksum": "c8d2785230875caee6e9935c9fe1e63788783d8f",
"dest": "/tmp/output",
"diff": [
{
"after": "\n\n \"ab\"\n",
"after_header": "/home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpwxlecmmr/template.j2",
"before": ""
}
],
"gid": 1000,
"group": "user",
"invocation": {
"module_args": {
"_original_basename": "template.j2",
"attributes": null,
"backup": false,
"checksum": "c8d2785230875caee6e9935c9fe1e63788783d8f",
"content": null,
"dest": "/tmp/output",
"directory_mode": null,
"follow": false,
"force": true,
"group": null,
"local_follow": null,
"mode": null,
"owner": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "/home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source",
"unsafe_writes": false,
"validate": null
}
},
"md5sum": "daa70962b93278078ee3b9b6825bd9fd",
"mode": "0644",
"owner": "user",
"size": 9,
"src": "/home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source",
"state": "file",
"uid": 1000
}
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$ cat /tmp/output
"ab"
$
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80605
|
https://github.com/ansible/ansible/pull/80705
|
7eada15d1e9470e010f1c13b52450b01d8e46930
|
8cd95a8e664ccd634dc3a95642ef7ad41f007169
| 2023-04-22T23:12:32Z |
python
| 2023-05-04T12:55:27Z |
changelogs/fragments/80605-template-overlay-native-jinja.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,605 |
Quoted strings get concatenated when using jinja call block with callback
|
### Summary
Under certain conditions, when a `template` task uses a [jinja call block and a callback](https://jinja.palletsprojects.com/en/3.1.x/templates/#call) whose body is a list of quoted strings, the inner quotes are collapsed/removed and the list of quoted strings is turned into a single quoted concatenated string. So instead of `"a" "b"` the resulting file contains `"ab"`.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = /home/user/exchange/repro/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/sbin/ansible
python version = 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /home/user/repro/ansible.cfg
DEFAULT_JINJA2_NATIVE(/home/user/repro/ansible.cfg) = True
```
### OS / Environment
Initially encountered on ArchLinux with the distribution packages. Later confirmed using the example below on ArchLinux with the version from PyPI and the `devel` branch.
### Steps to Reproduce
ansible.cfg
```ini
[defaults]
jinja2_native = yes
```
playbook.yml
```yaml
---
- hosts: localhost
tasks:
- template:
src: "{{ 'template.j2' }}"
dest: "/tmp/output"
```
template.j2
```jinja
#jinja2: foo:True
{% macro my_macro() %}
{{ caller() }}
{% endmacro %}
{% call my_macro() -%}
"a" "b"
{% endcall %}
```
In a directory with those three files, execute
```
$ ansible-playbook playbook.yml
```
A few observations I made while reducing the original setup to the minimal example above:
* If you turn off the jinja native mode by setting `jinja2_native = no` in `ansible.cfg`, it works as expected. This is contrary to the [template module docs](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/template_module.html#notes) which state "The `jinja2_native` setting has no effect."
* If the definition of the `template` task in the playbook does not contain a jinja expression, it works as expected. This is the reason why in the example `src` is an expression with a constant instead of just a plaintext value.
* If the template does not contain a jinja override (the first `#jinja2` line), it works as expected. You get an error if you don't set at least one override, so in the example I set an unused `foo` value to make it effectively a no-op.
* If the `-` whitespace modifier is removed from the end-delimiter of the call start-statement in the template, it works as expected.
* It does not matter what type of quotes (single or double), what whitespace (spaces or tabs) between them, or how many elements or whitespace between elements: they all get collapsed. `"a" "b" "c" "d" 'e' 'f' "g" 'h'` in the template results in `"abcdefgh"` in the file. It basically behaves like [Python's string literal concatenation](https://docs.python.org/3/reference/lexical_analysis.html#string-literal-concatenation) (see the sketch after this list).
* Putting anything but whitespace between the quoted strings makes it work as expected, e.g. `"a"+ "b"` in the template will be written exactly like that into the file.
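As an illustration of the string-literal-concatenation point above (a sketch of the suspected mechanism, not a confirmed trace through Ansible's internals): with `jinja2_native` enabled, rendered chunks can end up going through `ast.literal_eval`, and Python's literal parser concatenates adjacent string literals exactly like the observed output:
```python
import ast

rendered = '"a" "b"'  # what the call block yields after whitespace trimming
print(ast.literal_eval(rendered))         # -> ab   (adjacent literals concatenated)
print(ast.literal_eval('"a" "b" \'c\''))  # -> abc
```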
### Expected Results
A file `/tmp/output` is created that contains `"a" "b"` (plus some whitespace around it, but that's not really important here).
```
$ ansible-playbook playbook.yml -D
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ********************************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [template] *********************************************************************************************************************************************************************************************************************************************************************
--- before
+++ after: /home/user/.ansible/tmp/ansible-local-209txmb3l0y/tmp4vuswx84/template.j2
@@ -0,0 +1,4 @@
+
+
+ "a" "b"
+
changed: [localhost]
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook playbook.yml -D -vvvv
ansible-playbook [core 2.14.4]
config file = /home/user/repro/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/sbin/ansible-playbook
python version = 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
Using /home/user/repro/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python3.10/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: playbook.yml **************************************************************************************************************************************************************************************************************************************************************
Positional arguments: playbook.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
diff: True
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in playbook.yml
PLAY [localhost] ********************************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************************************************************************************************************
task path: /home/user/repro/playbook.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: user
<127.0.0.1> EXEC /bin/sh -c 'echo ~user && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/user/.ansible/tmp `"&& mkdir "` echo /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029 `" && echo ansible-tmp-1682203193.5547385-136-66384164090029="` echo /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029 `" ) && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmp3htqbzr4 TO /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/ /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [template] *********************************************************************************************************************************************************************************************************************************************************************
task path: /home/user/repro/playbook.yml:4
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: user
<127.0.0.1> EXEC /bin/sh -c 'echo ~user && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/user/.ansible/tmp `"&& mkdir "` echo /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156 `" && echo ansible-tmp-1682203194.4804294-179-10085099603156="` echo /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156 `" ) && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/stat.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpal8urkxn TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_stat.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_stat.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_stat.py && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/file.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpd272p4fp TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_file.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_file.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_file.py && sleep 0'
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpwxlecmmr/template.j2 TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/copy.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpmvc5jmva TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_copy.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_copy.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_copy.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ > /dev/null 2>&1 && sleep 0'
--- before
+++ after: /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpwxlecmmr/template.j2
@@ -0,0 +1,3 @@
+
+
+ "ab"
changed: [localhost] => {
"changed": true,
"checksum": "c8d2785230875caee6e9935c9fe1e63788783d8f",
"dest": "/tmp/output",
"diff": [
{
"after": "\n\n \"ab\"\n",
"after_header": "/home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpwxlecmmr/template.j2",
"before": ""
}
],
"gid": 1000,
"group": "user",
"invocation": {
"module_args": {
"_original_basename": "template.j2",
"attributes": null,
"backup": false,
"checksum": "c8d2785230875caee6e9935c9fe1e63788783d8f",
"content": null,
"dest": "/tmp/output",
"directory_mode": null,
"follow": false,
"force": true,
"group": null,
"local_follow": null,
"mode": null,
"owner": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "/home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source",
"unsafe_writes": false,
"validate": null
}
},
"md5sum": "daa70962b93278078ee3b9b6825bd9fd",
"mode": "0644",
"owner": "user",
"size": 9,
"src": "/home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source",
"state": "file",
"uid": 1000
}
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$ cat /tmp/output
"ab"
$
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80605
|
https://github.com/ansible/ansible/pull/80705
|
7eada15d1e9470e010f1c13b52450b01d8e46930
|
8cd95a8e664ccd634dc3a95642ef7ad41f007169
| 2023-04-22T23:12:32Z |
python
| 2023-05-04T12:55:27Z |
lib/ansible/template/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
import datetime
import os
import pwd
import re
import time
from collections.abc import Iterator, Sequence, Mapping, MappingView, MutableMapping
from contextlib import contextmanager
from numbers import Number
from traceback import format_exc
from jinja2.exceptions import TemplateSyntaxError, UndefinedError
from jinja2.loaders import FileSystemLoader
from jinja2.nativetypes import NativeEnvironment
from jinja2.runtime import Context, StrictUndefined
from ansible import constants as C
from ansible.errors import (
AnsibleAssertionError,
AnsibleError,
AnsibleFilterError,
AnsibleLookupError,
AnsibleOptionsError,
AnsibleUndefinedVariable,
)
from ansible.module_utils.six import string_types
from ansible.module_utils.common.text.converters import to_native, to_text, to_bytes
from ansible.module_utils.common.collections import is_sequence
from ansible.plugins.loader import filter_loader, lookup_loader, test_loader
from ansible.template.native_helpers import ansible_native_concat, ansible_eval_concat, ansible_concat
from ansible.template.template import AnsibleJ2Template
from ansible.template.vars import AnsibleJ2Vars
from ansible.utils.display import Display
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.native_jinja import NativeJinjaText
from ansible.utils.unsafe_proxy import wrap_var
display = Display()
__all__ = ['Templar', 'generate_ansible_template_vars']
# Primitive Types which we don't want Jinja to convert to strings.
NON_TEMPLATED_TYPES = (bool, Number)
JINJA2_OVERRIDE = '#jinja2:'
JINJA2_BEGIN_TOKENS = frozenset(('variable_begin', 'block_begin', 'comment_begin', 'raw_begin'))
JINJA2_END_TOKENS = frozenset(('variable_end', 'block_end', 'comment_end', 'raw_end'))
RANGE_TYPE = type(range(0))
def generate_ansible_template_vars(path, fullpath=None, dest_path=None):
if fullpath is None:
b_path = to_bytes(path)
else:
b_path = to_bytes(fullpath)
try:
template_uid = pwd.getpwuid(os.stat(b_path).st_uid).pw_name
except (KeyError, TypeError):
template_uid = os.stat(b_path).st_uid
temp_vars = {
'template_host': to_text(os.uname()[1]),
'template_path': path,
'template_mtime': datetime.datetime.fromtimestamp(os.path.getmtime(b_path)),
'template_uid': to_text(template_uid),
'template_run_date': datetime.datetime.now(),
'template_destpath': to_native(dest_path) if dest_path else None,
}
if fullpath is None:
temp_vars['template_fullpath'] = os.path.abspath(path)
else:
temp_vars['template_fullpath'] = fullpath
managed_default = C.DEFAULT_MANAGED_STR
managed_str = managed_default.format(
host=temp_vars['template_host'],
uid=temp_vars['template_uid'],
file=temp_vars['template_path'],
)
temp_vars['ansible_managed'] = to_text(time.strftime(to_native(managed_str), time.localtime(os.path.getmtime(b_path))))
return temp_vars
def _escape_backslashes(data, jinja_env):
"""Double backslashes within jinja2 expressions
A user may enter something like this in a playbook::
debug:
msg: "Test Case 1\\3; {{ test1_name | regex_replace('^(.*)_name$', '\\1')}}"
The string inside of the {{ gets interpreted multiple times. First by yaml.
Then by python. And finally by jinja2 as part of its variable. Because
it is processed by both python and jinja2, the backslash escaped
characters get unescaped twice. This means that we'd normally have to use
four backslashes to escape that. This is painful for playbook authors as
they have to remember different rules for inside vs outside of a jinja2
expression (The backslashes outside of the "{{ }}" only get processed by
yaml and python. So they only need to be escaped once). The following
code fixes this by automatically performing the extra quoting of
backslashes inside of a jinja2 expression.
"""
if '\\' in data and '{{' in data:
new_data = []
d2 = jinja_env.preprocess(data)
in_var = False
for token in jinja_env.lex(d2):
if token[1] == 'variable_begin':
in_var = True
new_data.append(token[2])
elif token[1] == 'variable_end':
in_var = False
new_data.append(token[2])
elif in_var and token[1] == 'string':
# Double backslashes only if we're inside of a jinja2 variable
new_data.append(token[2].replace('\\', '\\\\'))
else:
new_data.append(token[2])
data = ''.join(new_data)
return data
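# Illustrative sketch (assuming a default environment): a raw input of
# '{{ "\1" }}' comes back as '{{ "\\1" }}' -- backslashes are doubled only
# inside string tokens within {{ }}, while backslashes outside the
# delimiters are left untouched.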
def is_possibly_template(data, jinja_env):
"""Determines if a string looks like a template, by seeing if it
contains a jinja2 start delimiter. Does not guarantee that the string
is actually a template.
This is different than ``is_template`` which is more strict.
This method may return ``True`` on a string that is not templatable.
Useful when guarding passing a string for templating, but when
you want to allow the templating engine to make the final
assessment which may result in ``TemplateSyntaxError``.
"""
if isinstance(data, string_types):
for marker in (jinja_env.block_start_string, jinja_env.variable_start_string, jinja_env.comment_start_string):
if marker in data:
return True
return False
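# Note that this check is intentionally permissive: a plain string that
# merely contains '{{' also returns True, which is why callers treat it
# only as a cheap guard before handing the string to the real engine.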
def is_template(data, jinja_env):
"""This function attempts to quickly detect whether a value is a jinja2
template. To do so, we look for the first 2 matching jinja2 tokens for
start and end delimiters.
"""
found = None
start = True
comment = False
d2 = jinja_env.preprocess(data)
# Quick check to see if this is remotely like a template before doing
# more expensive investigation.
if not is_possibly_template(d2, jinja_env):
return False
# This wraps a lot of code, but this is due to lex returning a generator
# so we may get an exception at any part of the loop
try:
for token in jinja_env.lex(d2):
if token[1] in JINJA2_BEGIN_TOKENS:
if start and token[1] == 'comment_begin':
# Comments can wrap other token types
comment = True
start = False
# Example: variable_end -> variable
found = token[1].split('_')[0]
elif token[1] in JINJA2_END_TOKENS:
if token[1].split('_')[0] == found:
return True
elif comment:
continue
return False
except TemplateSyntaxError:
return False
return False
def _count_newlines_from_end(in_str):
'''
Counts the number of newlines at the end of a string. This is used during
the jinja2 templating to ensure the count matches the input, since some newlines
may be thrown away during the templating.
'''
try:
i = len(in_str)
j = i - 1
while in_str[j] == '\n':
j -= 1
return i - 1 - j
except IndexError:
# Uncommon cases: zero length string and string containing only newlines
return i
def recursive_check_defined(item):
from jinja2.runtime import Undefined
if isinstance(item, MutableMapping):
for key in item:
recursive_check_defined(item[key])
elif isinstance(item, list):
for i in item:
recursive_check_defined(i)
else:
if isinstance(item, Undefined):
raise AnsibleFilterError("{0} is undefined".format(item))
def _is_rolled(value):
"""Helper method to determine if something is an unrolled generator,
iterator, or similar object
"""
return (
isinstance(value, Iterator) or
isinstance(value, MappingView) or
isinstance(value, RANGE_TYPE)
)
def _unroll_iterator(func):
"""Wrapper function, that intercepts the result of a templating
and auto unrolls a generator, so that users are not required to
explicitly use ``|list`` to unroll.
"""
def wrapper(*args, **kwargs):
ret = func(*args, **kwargs)
if _is_rolled(ret):
return list(ret)
return ret
return _update_wrapper(wrapper, func)
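# Illustrative example: once wrapped here, a function returning range(3)
# is delivered as [0, 1, 2] rather than a lazy range object, so template
# authors do not need a trailing `| list`.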
def _update_wrapper(wrapper, func):
# This code is duplicated from ``functools.update_wrapper`` from Py3.7.
# ``functools.update_wrapper`` was failing when the func was ``functools.partial``
for attr in ('__module__', '__name__', '__qualname__', '__doc__', '__annotations__'):
try:
value = getattr(func, attr)
except AttributeError:
pass
else:
setattr(wrapper, attr, value)
for attr in ('__dict__',):
getattr(wrapper, attr).update(getattr(func, attr, {}))
wrapper.__wrapped__ = func
return wrapper
def _wrap_native_text(func):
"""Wrapper function, that intercepts the result of a filter
and wraps it into NativeJinjaText which is then used
in ``ansible_native_concat`` to indicate that it is a text
which should not be passed into ``literal_eval``.
"""
def wrapper(*args, **kwargs):
ret = func(*args, **kwargs)
return NativeJinjaText(ret)
return _update_wrapper(wrapper, func)
class AnsibleUndefined(StrictUndefined):
'''
A custom Undefined class, which returns further Undefined objects on access,
rather than throwing an exception.
'''
def __getattr__(self, name):
if name == '__UNSAFE__':
# AnsibleUndefined should never be assumed to be unsafe
# This prevents ``hasattr(val, '__UNSAFE__')`` from evaluating to ``True``
raise AttributeError(name)
# Return original Undefined object to preserve the first failure context
return self
def __getitem__(self, key):
# Return original Undefined object to preserve the first failure context
return self
def __repr__(self):
return 'AnsibleUndefined(hint={0!r}, obj={1!r}, name={2!r})'.format(
self._undefined_hint,
self._undefined_obj,
self._undefined_name
)
def __contains__(self, item):
# Return original Undefined object to preserve the first failure context
return self
class AnsibleContext(Context):
'''
A custom context, which intercepts resolve_or_missing() calls and sets a flag
internally if any variable lookup returns an AnsibleUnsafe value. This
flag is checked post-templating, and (when set) will result in the
final templated result being wrapped in AnsibleUnsafe.
'''
def __init__(self, *args, **kwargs):
super(AnsibleContext, self).__init__(*args, **kwargs)
self.unsafe = False
def _is_unsafe(self, val):
'''
Our helper function, which will also recursively check dict and
list entries due to the fact that they may be repr'd and contain
a key or value which contains jinja2 syntax and would otherwise
lose the AnsibleUnsafe value.
'''
if isinstance(val, dict):
for key in val.keys():
if self._is_unsafe(val[key]):
return True
elif isinstance(val, list):
for item in val:
if self._is_unsafe(item):
return True
elif getattr(val, '__UNSAFE__', False) is True:
return True
return False
def _update_unsafe(self, val):
if val is not None and not self.unsafe and self._is_unsafe(val):
self.unsafe = True
def resolve_or_missing(self, key):
val = super(AnsibleContext, self).resolve_or_missing(key)
self._update_unsafe(val)
return val
def get_all(self):
"""Return the complete context as a dict including the exported
variables. For optimization reasons this might not return an
actual copy so be careful with using it.
This is to prevent from running ``AnsibleJ2Vars`` through dict():
``dict(self.parent, **self.vars)``
In Ansible this means that ALL variables would be templated in the
process of re-creating the parent because ``AnsibleJ2Vars`` templates
each variable in its ``__getitem__`` method. Instead we re-create the
parent via ``AnsibleJ2Vars.add_locals`` that creates a new
``AnsibleJ2Vars`` copy without templating each variable.
This will prevent unnecessarily templating unused variables in cases
like setting a local variable and passing it to {% include %}
in a template.
Also see ``AnsibleJ2Template`` and
https://github.com/pallets/jinja/commit/d67f0fd4cc2a4af08f51f4466150d49da7798729
"""
if not self.vars:
return self.parent
if not self.parent:
return self.vars
if isinstance(self.parent, AnsibleJ2Vars):
return self.parent.add_locals(self.vars)
else:
# can this happen in Ansible?
return dict(self.parent, **self.vars)
class JinjaPluginIntercept(MutableMapping):
''' Simulated dict class that loads Jinja2Plugins at request,
otherwise all plugins would need to be loaded a priori.
NOTE: plugin_loader still loads all 'builtin/legacy' at
start so only collection plugins are really at request.
'''
def __init__(self, delegatee, pluginloader, *args, **kwargs):
super(JinjaPluginIntercept, self).__init__(*args, **kwargs)
self._pluginloader = pluginloader
# cache of resolved plugins
self._delegatee = delegatee
# track loaded plugins here as cache above includes 'jinja2' filters but ours should override
self._loaded_builtins = set()
def __getitem__(self, key):
if not isinstance(key, string_types):
raise ValueError('key must be a string, got %s instead' % type(key))
original_exc = None
if key not in self._loaded_builtins:
plugin = None
try:
plugin = self._pluginloader.get(key)
except (AnsibleError, KeyError) as e:
original_exc = e
except Exception as e:
display.vvvv('Unexpected plugin load (%s) exception: %s' % (key, to_native(e)))
raise e
# if a plugin was found/loaded
if plugin:
# set in filter cache and avoid expensive plugin load
self._delegatee[key] = plugin.j2_function
self._loaded_builtins.add(key)
# raise template syntax error if we could not find ours or jinja2 one
try:
func = self._delegatee[key]
except KeyError as e:
raise TemplateSyntaxError('Could not load "%s": %s' % (key, to_native(original_exc or e)), 0)
# if I do have func and it is a filter, it needs wrapping
if self._pluginloader.type == 'filter':
# filters need wrapping
if key in C.STRING_TYPE_FILTERS:
# avoid literal_eval when you WANT strings
func = _wrap_native_text(func)
else:
# conditionally unroll iterators/generators to avoid having to use `|list` after every filter
func = _unroll_iterator(func)
return func
def __setitem__(self, key, value):
return self._delegatee.__setitem__(key, value)
def __delitem__(self, key):
raise NotImplementedError()
def __iter__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return iter(self._delegatee)
def __len__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return len(self._delegatee)
def _fail_on_undefined(data):
"""Recursively find an undefined value in a nested data structure
and properly raise the undefined exception.
"""
if isinstance(data, Mapping):
for value in data.values():
_fail_on_undefined(value)
elif is_sequence(data):
for item in data:
_fail_on_undefined(item)
else:
if isinstance(data, StrictUndefined):
# To actually raise the undefined exception we need to
# access the undefined object otherwise the exception would
# be raised on the next access which might not be properly
# handled.
# See https://github.com/ansible/ansible/issues/52158
# and StrictUndefined implementation in upstream Jinja2.
str(data)
return data
@_unroll_iterator
def _ansible_finalize(thing):
"""A custom finalize function for jinja2, which prevents None from being
returned. This avoids a string of ``"None"`` as ``None`` has no
importance in YAML.
The function is decorated with ``_unroll_iterator`` so that users are not
required to explicitly use ``|list`` to unroll a generator. This only
affects the scenario where the final result of templating
is a generator, e.g. ``range``, ``dict.items()`` and so on. Filters
which can produce a generator in the middle of a template are already
wrapped with ``_unroll_iterator`` in ``JinjaPluginIntercept``.
"""
return thing if _fail_on_undefined(thing) is not None else ''
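# Illustrative example: finalize(None) yields '', so an expression that
# evaluates to None renders as empty output instead of the literal
# string "None".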
class AnsibleEnvironment(NativeEnvironment):
'''
Our custom environment, which simply allows us to override the class-level
values for the Template and Context classes used by jinja2 internally.
'''
context_class = AnsibleContext
template_class = AnsibleJ2Template
concat = staticmethod(ansible_eval_concat) # type: ignore[assignment]
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.filters = JinjaPluginIntercept(self.filters, filter_loader)
self.tests = JinjaPluginIntercept(self.tests, test_loader)
self.trim_blocks = True
self.undefined = AnsibleUndefined
self.finalize = _ansible_finalize
class AnsibleNativeEnvironment(AnsibleEnvironment):
concat = staticmethod(ansible_native_concat) # type: ignore[assignment]
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.finalize = _unroll_iterator(_fail_on_undefined)
class Templar:
'''
The main class for templating, with template() as its primary entry point.
'''
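# Minimal usage sketch (assuming DataLoader from ansible.parsing.dataloader;
# values are illustrative):
#   templar = Templar(loader=DataLoader(), variables={'x': 1})
#   templar.template('{{ x + 1 }}')  # -> 2 under the native environment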
def __init__(self, loader, variables=None):
self._loader = loader
self._available_variables = {} if variables is None else variables
self._fail_on_undefined_errors = C.DEFAULT_UNDEFINED_VAR_BEHAVIOR
environment_class = AnsibleNativeEnvironment if C.DEFAULT_JINJA2_NATIVE else AnsibleEnvironment
self.environment = environment_class(
extensions=self._get_extensions(),
loader=FileSystemLoader(loader.get_basedir() if loader else '.'),
)
self.environment.template_class.environment_class = environment_class
# Custom globals
self.environment.globals['lookup'] = self._lookup
self.environment.globals['query'] = self.environment.globals['q'] = self._query_lookup
self.environment.globals['now'] = self._now_datetime
self.environment.globals['undef'] = self._make_undefined
# the current rendering context under which the templar class is working
self.cur_context = None
# FIXME this regex should be re-compiled each time variable_start_string and variable_end_string are changed
self.SINGLE_VAR = re.compile(r"^%s\s*(\w*)\s*%s$" % (self.environment.variable_start_string, self.environment.variable_end_string))
self.jinja2_native = C.DEFAULT_JINJA2_NATIVE
def copy_with_new_env(self, environment_class=AnsibleEnvironment, **kwargs):
r"""Creates a new copy of Templar with a new environment.
:kwarg environment_class: Environment class used for creating a new environment.
:kwarg \*\*kwargs: Optional arguments for the new environment that override existing
environment attributes.
:returns: Copy of Templar with updated environment.
"""
# We need to use __new__ to skip __init__, mainly not to create a new
# environment there only to override it below
new_env = object.__new__(environment_class)
new_env.__dict__.update(self.environment.__dict__)
new_templar = object.__new__(Templar)
new_templar.__dict__.update(self.__dict__)
new_templar.environment = new_env
new_templar.jinja2_native = environment_class is AnsibleNativeEnvironment
mapping = {
'available_variables': new_templar,
'searchpath': new_env.loader,
}
for key, value in kwargs.items():
obj = mapping.get(key, new_env)
try:
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs
pass
return new_templar
def _get_extensions(self):
'''
Return jinja2 extensions to load.
If some extensions are set via jinja_extensions in ansible.cfg, we try
to load them with the jinja environment.
'''
jinja_exts = []
if C.DEFAULT_JINJA2_EXTENSIONS:
# make sure the configuration directive doesn't contain spaces
# and split extensions in an array
jinja_exts = C.DEFAULT_JINJA2_EXTENSIONS.replace(" ", "").split(',')
return jinja_exts
@property
def available_variables(self):
return self._available_variables
@available_variables.setter
def available_variables(self, variables):
'''
Sets the list of template variables this Templar instance will use
to template things, so we don't have to pass them around between
internal methods. We also clear the template cache here, as the variables
are being changed.
'''
if not isinstance(variables, Mapping):
raise AnsibleAssertionError("the type of 'variables' should be a Mapping but was a %s" % (type(variables)))
self._available_variables = variables
@contextmanager
def set_temporary_context(self, **kwargs):
"""Context manager used to set temporary templating context, without having to worry about resetting
original values afterward
Use a keyword that maps to the attr you are setting. Applies to ``self.environment`` by default, to
set context on another object, it must be in ``mapping``.
"""
mapping = {
'available_variables': self,
'searchpath': self.environment.loader,
}
original = {}
for key, value in kwargs.items():
obj = mapping.get(key, self.environment)
try:
original[key] = getattr(obj, key)
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs
pass
yield
for key in original:
obj = mapping.get(key, self.environment)
setattr(obj, key, original[key])
def template(self, variable, convert_bare=False, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None,
convert_data=True, static_vars=None, cache=None, disable_lookups=False):
'''
Templates (possibly recursively) any given data as input. If convert_bare is
set to True, the given data will be wrapped as a jinja2 variable ('{{foo}}')
before being sent through the template engine.
'''
static_vars = [] if static_vars is None else static_vars
if cache is not None:
display.deprecated("The `cache` option to `Templar.template` is no longer functional, and will be removed in a future release.", version='2.18')
# Don't template unsafe variables, just return them.
if hasattr(variable, '__UNSAFE__'):
return variable
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
if convert_bare:
variable = self._convert_bare_variable(variable)
if isinstance(variable, string_types):
if not self.is_possibly_template(variable):
return variable
# Check to see if the string we are trying to render is just referencing a single
# var. In this case we don't want to accidentally change the type of the variable
# to a string by using the jinja template renderer. We just want to pass it.
only_one = self.SINGLE_VAR.match(variable)
if only_one:
var_name = only_one.group(1)
if var_name in self._available_variables:
resolved_val = self._available_variables[var_name]
if isinstance(resolved_val, NON_TEMPLATED_TYPES):
return resolved_val
elif resolved_val is None:
return C.DEFAULT_NULL_REPRESENTATION
result = self.do_template(
variable,
preserve_trailing_newlines=preserve_trailing_newlines,
escape_backslashes=escape_backslashes,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
convert_data=convert_data,
)
return result
elif is_sequence(variable):
return [self.template(
v,
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
) for v in variable]
elif isinstance(variable, Mapping):
d = {}
# we don't use iteritems() here to avoid problems if the underlying dict
# changes size due to the templating, which can happen with hostvars
for k in variable.keys():
if k not in static_vars:
d[k] = self.template(
variable[k],
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
)
else:
d[k] = variable[k]
return d
else:
return variable
def is_template(self, data):
'''lets us know if data has a template'''
if isinstance(data, string_types):
return is_template(data, self.environment)
elif isinstance(data, (list, tuple)):
for v in data:
if self.is_template(v):
return True
elif isinstance(data, dict):
for k in data:
if self.is_template(k) or self.is_template(data[k]):
return True
return False
templatable = is_template
def is_possibly_template(self, data):
return is_possibly_template(data, self.environment)
def _convert_bare_variable(self, variable):
'''
Wraps a bare string, which may have an attribute portion (ie. foo.bar)
in jinja2 variable braces so that it is evaluated properly.
'''
if isinstance(variable, string_types):
contains_filters = "|" in variable
first_part = variable.split("|")[0].split(".")[0].split("[")[0]
if (contains_filters or first_part in self._available_variables) and self.environment.variable_start_string not in variable:
return "%s%s%s" % (self.environment.variable_start_string, variable, self.environment.variable_end_string)
# the variable didn't meet the conditions to be converted,
# so just return it as-is
return variable
def _fail_lookup(self, name, *args, **kwargs):
raise AnsibleError("The lookup `%s` was found, however lookups were disabled from templating" % name)
def _now_datetime(self, utc=False, fmt=None):
'''jinja2 global function to return current datetime, potentially formatted via strftime'''
if utc:
now = datetime.datetime.utcnow()
else:
now = datetime.datetime.now()
if fmt:
return now.strftime(fmt)
return now
def _query_lookup(self, name, /, *args, **kwargs):
''' wrapper for lookup, force wantlist true'''
kwargs['wantlist'] = True
return self._lookup(name, *args, **kwargs)
def _lookup(self, name, /, *args, **kwargs):
instance = lookup_loader.get(name, loader=self._loader, templar=self)
if instance is None:
raise AnsibleError("lookup plugin (%s) not found" % name)
wantlist = kwargs.pop('wantlist', False)
allow_unsafe = kwargs.pop('allow_unsafe', C.DEFAULT_ALLOW_UNSAFE_LOOKUPS)
errors = kwargs.pop('errors', 'strict')
loop_terms = listify_lookup_plugin_terms(terms=args, templar=self, fail_on_undefined=True, convert_bare=False)
# safely catch run failures per #5059
try:
ran = instance.run(loop_terms, variables=self._available_variables, **kwargs)
except (AnsibleUndefinedVariable, UndefinedError) as e:
raise AnsibleUndefinedVariable(e)
except AnsibleOptionsError as e:
# invalid options given to lookup, just reraise
raise e
except AnsibleLookupError as e:
# lookup handled error but still decided to bail
msg = 'Lookup failed but the error is being ignored: %s' % to_native(e)
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
raise e
return [] if wantlist else None
except Exception as e:
# errors not handled by lookup
msg = u"An unhandled exception occurred while running the lookup plugin '%s'. Error was a %s, original message: %s" % \
(name, type(e), to_text(e))
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
display.vvv('exception during Jinja2 execution: {0}'.format(format_exc()))
raise AnsibleError(to_native(msg), orig_exc=e)
return [] if wantlist else None
if not is_sequence(ran):
display.deprecated(
f'The lookup plugin \'{name}\' was expected to return a list, got \'{type(ran)}\' instead. '
f'The lookup plugin \'{name}\' needs to be changed to return a list. '
'This will be an error in Ansible 2.18',
version='2.18'
)
if ran and allow_unsafe is False:
if self.cur_context:
self.cur_context.unsafe = True
if wantlist:
return wrap_var(ran)
try:
if isinstance(ran[0], NativeJinjaText):
ran = wrap_var(NativeJinjaText(",".join(ran)))
else:
ran = wrap_var(",".join(ran))
except TypeError:
# Lookup Plugins should always return lists. Throw an error if that's not
# the case:
if not isinstance(ran, Sequence):
raise AnsibleError("The lookup plugin '%s' did not return a list."
% name)
# The TypeError we can recover from is when the value *inside* of the list
# is not a string
if len(ran) == 1:
ran = wrap_var(ran[0])
else:
ran = wrap_var(ran)
except KeyError:
# Lookup Plugin returned a dict. Return comma-separated string of keys
# for backwards compat.
# FIXME this can be removed when support for non-list return types is removed.
# See https://github.com/ansible/ansible/pull/77789
ran = wrap_var(",".join(ran))
return ran
def _make_undefined(self, hint=None):
from jinja2.runtime import Undefined
if hint is None or isinstance(hint, Undefined) or hint == '':
hint = "Mandatory variable has not been overridden"
return AnsibleUndefined(hint)
def do_template(self, data, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None, disable_lookups=False,
convert_data=False):
if self.jinja2_native and not isinstance(data, string_types):
return data
# For preserving the number of input newlines in the output (used
# later in this method)
data_newlines = _count_newlines_from_end(data)
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
has_template_overrides = data.startswith(JINJA2_OVERRIDE)
try:
# NOTE Creating an overlay that lives only inside do_template means that overrides are not applied
# when templating nested variables in AnsibleJ2Vars where Templar.environment is used, not the overlay.
# This is historic behavior that is kept for backwards compatibility.
if overrides:
myenv = self.environment.overlay(overrides)
elif has_template_overrides:
myenv = self.environment.overlay()
else:
myenv = self.environment
# Get jinja env overrides from template
if has_template_overrides:
eol = data.find('\n')
line = data[len(JINJA2_OVERRIDE):eol]
data = data[eol + 1:]
for pair in line.split(','):
if ':' not in pair:
raise AnsibleError("failed to parse jinja2 override '%s'."
" Did you use something different from colon as key-value separator?" % pair.strip())
(key, val) = pair.split(':', 1)
key = key.strip()
setattr(myenv, key, ast.literal_eval(val.strip()))
if escape_backslashes:
# Allow users to specify backslashes in playbooks as "\\" instead of as "\\\\".
data = _escape_backslashes(data, myenv)
try:
t = myenv.from_string(data)
except TemplateSyntaxError as e:
raise AnsibleError("template error while templating string: %s. String: %s" % (to_native(e), to_native(data)), orig_exc=e)
except Exception as e:
if 'recursion' in to_native(e):
raise AnsibleError("recursive loop detected in template string: %s" % to_native(data), orig_exc=e)
else:
return data
if disable_lookups:
t.globals['query'] = t.globals['q'] = t.globals['lookup'] = self._fail_lookup
jvars = AnsibleJ2Vars(self, t.globals)
# In case this is a recursive call to do_template we need to
# save/restore cur_context to prevent overriding __UNSAFE__.
cached_context = self.cur_context
# In case this is a recursive call and we set different concat
# function up the stack, reset it in case the value of convert_data
# changed in this call
self.environment.concat = self.environment.__class__.concat
# the concat function is set for each Ansible environment,
# however for convert_data=False we need to use the concat
# function that avoids any evaluation and set it temporarily
# on the environment so it is used correctly even when
# the concat function is called internally in Jinja,
# most notably for macro execution
if not self.jinja2_native and not convert_data:
self.environment.concat = ansible_concat
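# Note: the assignments above target self.environment, while the template
# may have been compiled from an overlay (myenv) when overrides or a
# '#jinja2:' header are present; code running inside the template (macros,
# {% call %} blocks) consults the overlay's concat, so a native-mode
# overlay can still literal_eval rendered chunks (see issue #80605 above).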
self.cur_context = t.new_context(jvars, shared=True)
rf = t.root_render_func(self.cur_context)
try:
res = self.environment.concat(rf)
unsafe = getattr(self.cur_context, 'unsafe', False)
if unsafe:
res = wrap_var(res)
except TypeError as te:
if 'AnsibleUndefined' in to_native(te):
errmsg = "Unable to look up a name or access an attribute in template string (%s).\n" % to_native(data)
errmsg += "Make sure your variable name does not contain invalid characters like '-': %s" % to_native(te)
raise AnsibleUndefinedVariable(errmsg, orig_exc=te)
else:
display.debug("failing because of a type error, template data is: %s" % to_text(data))
raise AnsibleError("Unexpected templating type error occurred on (%s): %s" % (to_native(data), to_native(te)), orig_exc=te)
finally:
self.cur_context = cached_context
if isinstance(res, string_types) and preserve_trailing_newlines:
# The low level calls above do not preserve the newline
# characters at the end of the input data, so we calculate the
# difference in newlines and append them
# to the resulting output for parity
#
# Using Environment's keep_trailing_newline instead would
# result in change in behavior when trailing newlines
# would be kept also for included templates, for example:
# "Hello {% include 'world.txt' %}!" would render as
# "Hello world\n!\n" instead of "Hello world!\n".
res_newlines = _count_newlines_from_end(res)
if data_newlines > res_newlines:
res += self.environment.newline_sequence * (data_newlines - res_newlines)
if unsafe:
res = wrap_var(res)
return res
except (UndefinedError, AnsibleUndefinedVariable) as e:
if fail_on_undefined:
raise AnsibleUndefinedVariable(e, orig_exc=e)
else:
display.debug("Ignoring undefined failure: %s" % to_text(e))
return data
# for backwards compatibility in case anyone is using old private method directly
_do_template = do_template
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,605 |
Quoted strings get concatenated when using jinja call block with callback
|
### Summary
Under certain conditions, when a `template` task uses a [jinja call block and a callback](https://jinja.palletsprojects.com/en/3.1.x/templates/#call) whose body is a list of quoted strings, the inner quotes are collapsed/removed and the list of quoted strings is turned into a single quoted concatenated string. So instead of `"a" "b"` the resulting file contains `"ab"`.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = /home/user/exchange/repro/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/sbin/ansible
python version = 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /home/user/repro/ansible.cfg
DEFAULT_JINJA2_NATIVE(/home/user/repro/ansible.cfg) = True
```
### OS / Environment
Initially encountered on ArchLinux with the distribution packages. Later confirmed using the example below on ArchLinux with the version from PyPI and the `devel` branch.
### Steps to Reproduce
ansible.cfg
```ini
[defaults]
jinja2_native = yes
```
playbook.yml
```yaml
---
- hosts: localhost
tasks:
- template:
src: "{{ 'template.j2' }}"
dest: "/tmp/output"
```
template.j2
```jinja
#jinja2: foo:True
{% macro my_macro() %}
{{ caller() }}
{% endmacro %}
{% call my_macro() -%}
"a" "b"
{% endcall %}
```
In a directory with those three files, execute
```
$ ansible-playbook playbook.yml
```
A few observations I made while reducing the original setup to the minimal example above:
* If you turn off the jinja native mode by setting `jinja2_native = no` in `ansible.cfg`, it works as expected. This is contrary to the [template module docs](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/template_module.html#notes) which state "The `jinja2_native` setting has no effect."
* If the definition of the `template` task in the playbook does not contain a jinja expression, it works as expected. This is the reason why in the example `src` is an expression with a constant instead of just a plaintext value.
* If the template does not contain a jinja override (the first `#jinja2` line), it works as expected. You get an error if you don't set at least one override, so in the example I set an unused `foo` value to make it effectively a no-op.
* If the `-` whitespace modifier is removed from the end-delimiter of the call start-statement in the template, it works as expected.
* It does not matter what type of quotes (single or double), what whitespace (spaces or tabs) between them, or how many elements or whitespace between elements: they all get collapsed. `"a" "b" "c" "d" 'e' 'f' "g" 'h'` in the template results in `"abcdefgh"` in the file. It basically behaves like [Python's string literal concatenation](https://docs.python.org/3/reference/lexical_analysis.html#string-literal-concatenation).
* Putting anything but whitespace between the quoted strings makes it work as expected, e.g. `"a"+ "b"` in the template will be written exactly like that into the file.
### Expected Results
A file `/tmp/output` is created that contains `"a" "b"` (plus some whitespace around it, but that's not really important here).
```
$ ansible-playbook playbook.yml -D
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ********************************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [template] *********************************************************************************************************************************************************************************************************************************************************************
--- before
+++ after: /home/user/.ansible/tmp/ansible-local-209txmb3l0y/tmp4vuswx84/template.j2
@@ -0,0 +1,4 @@
+
+
+ "a" "b"
+
changed: [localhost]
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook playbook.yml -D -vvvv
ansible-playbook [core 2.14.4]
config file = /home/user/repro/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/sbin/ansible-playbook
python version = 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
Using /home/user/repro/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python3.10/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: playbook.yml **************************************************************************************************************************************************************************************************************************************************************
Positional arguments: playbook.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
diff: True
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in playbook.yml
PLAY [localhost] ********************************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************************************************************************************************************
task path: /home/user/repro/playbook.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: user
<127.0.0.1> EXEC /bin/sh -c 'echo ~user && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/user/.ansible/tmp `"&& mkdir "` echo /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029 `" && echo ansible-tmp-1682203193.5547385-136-66384164090029="` echo /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029 `" ) && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmp3htqbzr4 TO /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/ /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [template] *********************************************************************************************************************************************************************************************************************************************************************
task path: /home/user/repro/playbook.yml:4
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: user
<127.0.0.1> EXEC /bin/sh -c 'echo ~user && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/user/.ansible/tmp `"&& mkdir "` echo /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156 `" && echo ansible-tmp-1682203194.4804294-179-10085099603156="` echo /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156 `" ) && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/stat.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpal8urkxn TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_stat.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_stat.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_stat.py && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/file.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpd272p4fp TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_file.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_file.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_file.py && sleep 0'
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpwxlecmmr/template.j2 TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/copy.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpmvc5jmva TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_copy.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_copy.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_copy.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ > /dev/null 2>&1 && sleep 0'
--- before
+++ after: /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpwxlecmmr/template.j2
@@ -0,0 +1,3 @@
+
+
+ "ab"
changed: [localhost] => {
"changed": true,
"checksum": "c8d2785230875caee6e9935c9fe1e63788783d8f",
"dest": "/tmp/output",
"diff": [
{
"after": "\n\n \"ab\"\n",
"after_header": "/home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpwxlecmmr/template.j2",
"before": ""
}
],
"gid": 1000,
"group": "user",
"invocation": {
"module_args": {
"_original_basename": "template.j2",
"attributes": null,
"backup": false,
"checksum": "c8d2785230875caee6e9935c9fe1e63788783d8f",
"content": null,
"dest": "/tmp/output",
"directory_mode": null,
"follow": false,
"force": true,
"group": null,
"local_follow": null,
"mode": null,
"owner": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "/home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source",
"unsafe_writes": false,
"validate": null
}
},
"md5sum": "daa70962b93278078ee3b9b6825bd9fd",
"mode": "0644",
"owner": "user",
"size": 9,
"src": "/home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source",
"state": "file",
"uid": 1000
}
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$ cat /tmp/output
"ab"
$
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80605
|
https://github.com/ansible/ansible/pull/80705
|
7eada15d1e9470e010f1c13b52450b01d8e46930
|
8cd95a8e664ccd634dc3a95642ef7ad41f007169
| 2023-04-22T23:12:32Z |
python
| 2023-05-04T12:55:27Z |
test/integration/targets/template_jinja2_non_native/macro_override.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,605 |
Quoted strings get concatenated when using jinja call block with callback
|
### Summary
Under certain conditions, when a `template` task uses a [jinja call block and a callback](https://jinja.palletsprojects.com/en/3.1.x/templates/#call) that returns a list of quoted strings, the inner quotes are collapsed/removed and the list of quoted strings is turned into a single quoted, concatenated string. So instead of `"a" "b"` the resulting file contains `"ab"`.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = /home/user/exchange/repro/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/sbin/ansible
python version = 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /home/user/repro/ansible.cfg
DEFAULT_JINJA2_NATIVE(/home/user/repro/ansible.cfg) = True
```
### OS / Environment
Initially encountered on ArchLinux with the distribution-packages. Later confirmed using the example below on ArchLinux with the version from PyPI and the `devel` branch.
### Steps to Reproduce
ansible.cfg
```ini
[defaults]
jinja2_native = yes
```
playbook.yml
```yaml
---
- hosts: localhost
tasks:
- template:
src: "{{ 'template.j2' }}"
dest: "/tmp/output"
```
template.j2
```jinja
#jinja2: foo:True
{% macro my_macro() %}
{{ caller() }}
{% endmacro %}
{% call my_macro() -%}
"a" "b"
{% endcall %}
```
In a directory with those three files, execute
```
$ ansible-playbook playbook.yml
```
A few observations I made while reducing the original setup to the minimal example above:
* If you turn off the jinja native mode by setting `jinja2_native = no` in `ansible.cfg`, it works as expected. This is contrary to the [template module docs](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/template_module.html#notes) which state "The `jinja2_native` setting has no effect."
* If the definition of the `template` task in the playbook does not contain a jinja expression, it works as expected. This is the reason why in the example `src` is an expression with a constant instead of just a plaintext value.
* If the template does not contain a jinja override (the first `#jinja2` line), it works as expected. You get an error if you don't set at least one override, so in the example I set an unused `foo` value to make it effectively a no-op.
* If the `-` whitespace modifier is removed from the end-delimiter of the call start-statement in the template, it works as expected.
* It does not matter what type of quotes (single or double), what whitespace (space or tabs) between them, or how many elements or whitespace between elements: they all get collapsed. `"a" "b" "c" "d" 'e' 'f' "g" 'h'` in the template results in `"abcdefgh"` in the file. It basically behaves like [python's string literal concatenation](https://docs.python.org/3/reference/lexical_analysis.html#string-literal-concatenation).
* Putting anything but whitespace between the quoted strings makes it work as expected, e.g. `"a"+ "b"` in the template will be written exactly like that into the file.
### Expected Results
A file `/tmp/output` is created that contains `"a" "b"` (plus some whitespace around it, but that's not really important here).
```
$ ansible-playbook playbook.yml -D
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ********************************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [template] *********************************************************************************************************************************************************************************************************************************************************************
--- before
+++ after: /home/user/.ansible/tmp/ansible-local-209txmb3l0y/tmp4vuswx84/template.j2
@@ -0,0 +1,4 @@
+
+
+ "a" "b"
+
changed: [localhost]
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook playbook.yml -D -vvvv
ansible-playbook [core 2.14.4]
config file = /home/user/repro/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/sbin/ansible-playbook
python version = 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
Using /home/user/repro/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python3.10/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: playbook.yml **************************************************************************************************************************************************************************************************************************************************************
Positional arguments: playbook.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
diff: True
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in playbook.yml
PLAY [localhost] ********************************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************************************************************************************************************
task path: /home/user/repro/playbook.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: user
<127.0.0.1> EXEC /bin/sh -c 'echo ~user && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/user/.ansible/tmp `"&& mkdir "` echo /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029 `" && echo ansible-tmp-1682203193.5547385-136-66384164090029="` echo /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029 `" ) && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmp3htqbzr4 TO /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/ /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [template] *********************************************************************************************************************************************************************************************************************************************************************
task path: /home/user/repro/playbook.yml:4
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: user
<127.0.0.1> EXEC /bin/sh -c 'echo ~user && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/user/.ansible/tmp `"&& mkdir "` echo /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156 `" && echo ansible-tmp-1682203194.4804294-179-10085099603156="` echo /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156 `" ) && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/stat.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpal8urkxn TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_stat.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_stat.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_stat.py && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/file.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpd272p4fp TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_file.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_file.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_file.py && sleep 0'
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpwxlecmmr/template.j2 TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/copy.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpmvc5jmva TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_copy.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_copy.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_copy.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ > /dev/null 2>&1 && sleep 0'
--- before
+++ after: /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpwxlecmmr/template.j2
@@ -0,0 +1,3 @@
+
+
+ "ab"
changed: [localhost] => {
"changed": true,
"checksum": "c8d2785230875caee6e9935c9fe1e63788783d8f",
"dest": "/tmp/output",
"diff": [
{
"after": "\n\n \"ab\"\n",
"after_header": "/home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpwxlecmmr/template.j2",
"before": ""
}
],
"gid": 1000,
"group": "user",
"invocation": {
"module_args": {
"_original_basename": "template.j2",
"attributes": null,
"backup": false,
"checksum": "c8d2785230875caee6e9935c9fe1e63788783d8f",
"content": null,
"dest": "/tmp/output",
"directory_mode": null,
"follow": false,
"force": true,
"group": null,
"local_follow": null,
"mode": null,
"owner": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "/home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source",
"unsafe_writes": false,
"validate": null
}
},
"md5sum": "daa70962b93278078ee3b9b6825bd9fd",
"mode": "0644",
"owner": "user",
"size": 9,
"src": "/home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source",
"state": "file",
"uid": 1000
}
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$ cat /tmp/output
"ab"
$
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80605
|
https://github.com/ansible/ansible/pull/80705
|
7eada15d1e9470e010f1c13b52450b01d8e46930
|
8cd95a8e664ccd634dc3a95642ef7ad41f007169
| 2023-04-22T23:12:32Z |
python
| 2023-05-04T12:55:27Z |
test/integration/targets/template_jinja2_non_native/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_JINJA2_NATIVE=1
ansible-playbook 46169.yml -v "$@"
unset ANSIBLE_JINJA2_NATIVE
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,605 |
Quoted strings get concatenated when using jinja call block with callback
|
### Summary
Under certain conditions, when a `template` task uses a [jinja call block and a callback](https://jinja.palletsprojects.com/en/3.1.x/templates/#call) that returns a list of quoted strings, the inner quotes are collapsed/removed and the list of quoted strings is turned into a single quoted, concatenated string. So instead of `"a" "b"` the resulting file contains `"ab"`.
### Issue Type
Bug Report
### Component Name
template
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = /home/user/exchange/repro/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/sbin/ansible
python version = 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /home/user/repro/ansible.cfg
DEFAULT_JINJA2_NATIVE(/home/user/repro/ansible.cfg) = True
```
### OS / Environment
Initially encountered on ArchLinux with the distribution-packages. Later confirmed using the example below on ArchLinux with the version from PyPI and the `devel` branch.
### Steps to Reproduce
ansible.cfg
```ini
[defaults]
jinja2_native = yes
```
playbook.yml
```yaml
---
- hosts: localhost
tasks:
- template:
src: "{{ 'template.j2' }}"
dest: "/tmp/output"
```
template.j2
```jinja
#jinja2: foo:True
{% macro my_macro() %}
{{ caller() }}
{% endmacro %}
{% call my_macro() -%}
"a" "b"
{% endcall %}
```
In a directory with those three files, execute
```
$ ansible-playbook playbook.yml
```
A few observations I made while reducing the original setup to the minimal example above:
* If you turn off the jinja native mode by setting `jinja2_native = no` in `ansible.cfg`, it works as expected. This is contrary to the [template module docs](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/template_module.html#notes) which state "The `jinja2_native` setting has no effect."
* If the definition of the `template` task in the playbook does not contain a jinja expression, it works as expected. This is the reason why in the example `src` is an expression with a constant instead of just a plaintext value.
* If the template does not contain a jinja override (the first `#jinja2` line), it works as expected. You get an error if you don't set at least one override, so in the example I set an unused `foo` value to make it effectively a no-op.
* If the `-` whitespace modifier is removed from the end-delimiter of the call start-statement in the template, it works as expected.
* It does not matter what type of quotes (single or double), what whitespace (space or tabs) between them, or how many elements or whitespace between elements: they all get collapsed. `"a" "b" "c" "d" 'e' 'f' "g" 'h'` in the template results in `"abcdefgh"` in the file. It basically behaves like [python's string literal concatenation](https://docs.python.org/3/reference/lexical_analysis.html#string-literal-concatenation).
* Putting anything but whitespace between the quoted strings makes it work as expected, e.g. `"a"+ "b"` in the template will be written exactly like that into the file.
### Expected Results
A file `/tmp/output` is created that contains `"a" "b"` (plus some whitespace around it, but that's not really important here).
```
$ ansible-playbook playbook.yml -D
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] ********************************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [template] *********************************************************************************************************************************************************************************************************************************************************************
--- before
+++ after: /home/user/.ansible/tmp/ansible-local-209txmb3l0y/tmp4vuswx84/template.j2
@@ -0,0 +1,4 @@
+
+
+ "a" "b"
+
changed: [localhost]
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook playbook.yml -D -vvvv
ansible-playbook [core 2.14.4]
config file = /home/user/repro/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/sbin/ansible-playbook
python version = 3.10.10 (main, Mar 5 2023, 22:26:53) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
Using /home/user/repro/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python3.10/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: playbook.yml **************************************************************************************************************************************************************************************************************************************************************
Positional arguments: playbook.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
diff: True
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in playbook.yml
PLAY [localhost] ********************************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************************************************************************************************************
task path: /home/user/repro/playbook.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: user
<127.0.0.1> EXEC /bin/sh -c 'echo ~user && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/user/.ansible/tmp `"&& mkdir "` echo /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029 `" && echo ansible-tmp-1682203193.5547385-136-66384164090029="` echo /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029 `" ) && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmp3htqbzr4 TO /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/ /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/user/.ansible/tmp/ansible-tmp-1682203193.5547385-136-66384164090029/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [template] *********************************************************************************************************************************************************************************************************************************************************************
task path: /home/user/repro/playbook.yml:4
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: user
<127.0.0.1> EXEC /bin/sh -c 'echo ~user && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/user/.ansible/tmp `"&& mkdir "` echo /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156 `" && echo ansible-tmp-1682203194.4804294-179-10085099603156="` echo /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156 `" ) && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/stat.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpal8urkxn TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_stat.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_stat.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_stat.py && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/file.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpd272p4fp TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_file.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_file.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_file.py && sleep 0'
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpwxlecmmr/template.j2 TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source && sleep 0'
Using module file /usr/lib/python3.10/site-packages/ansible/modules/copy.py
<127.0.0.1> PUT /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpmvc5jmva TO /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_copy.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_copy.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/AnsiballZ_copy.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/ > /dev/null 2>&1 && sleep 0'
--- before
+++ after: /home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpwxlecmmr/template.j2
@@ -0,0 +1,3 @@
+
+
+ "ab"
changed: [localhost] => {
"changed": true,
"checksum": "c8d2785230875caee6e9935c9fe1e63788783d8f",
"dest": "/tmp/output",
"diff": [
{
"after": "\n\n \"ab\"\n",
"after_header": "/home/user/.ansible/tmp/ansible-local-1323lqb200c/tmpwxlecmmr/template.j2",
"before": ""
}
],
"gid": 1000,
"group": "user",
"invocation": {
"module_args": {
"_original_basename": "template.j2",
"attributes": null,
"backup": false,
"checksum": "c8d2785230875caee6e9935c9fe1e63788783d8f",
"content": null,
"dest": "/tmp/output",
"directory_mode": null,
"follow": false,
"force": true,
"group": null,
"local_follow": null,
"mode": null,
"owner": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "/home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source",
"unsafe_writes": false,
"validate": null
}
},
"md5sum": "daa70962b93278078ee3b9b6825bd9fd",
"mode": "0644",
"owner": "user",
"size": 9,
"src": "/home/user/.ansible/tmp/ansible-tmp-1682203194.4804294-179-10085099603156/source",
"state": "file",
"uid": 1000
}
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$ cat /tmp/output
"ab"
$
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80605
|
https://github.com/ansible/ansible/pull/80705
|
7eada15d1e9470e010f1c13b52450b01d8e46930
|
8cd95a8e664ccd634dc3a95642ef7ad41f007169
| 2023-04-22T23:12:32Z |
python
| 2023-05-04T12:55:27Z |
test/integration/targets/template_jinja2_non_native/templates/macro_override.j2
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,444 |
unit test test_real_path_dev_null fails on OpenIndiana
|
### Summary
The `test_real_path_dev_null` test fails on OpenIndiana (a Solaris clone) because on illumos (and Solaris) `/dev/null` is a symlink:
```
$ ls -l /dev/null
lrwxrwxrwx 1 root root 27 Mar 31 2015 /dev/null -> ../devices/pseudo/mm@0:null
$
```
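A minimal sketch of one possible remedy, skipping the hard-coded expectation on platforms where `/dev/null` is itself a symlink (hypothetical; the merged fix may take a different approach, and `os.path.realpath` stands in for the editor's `_real_path()`):
```python
import os

import pytest

# Skip where /dev/null is a symlink, as on illumos/Solaris.
@pytest.mark.skipif(os.path.islink('/dev/null'),
                    reason="/dev/null is a symlink on this platform")
def test_real_path_dev_null():
    # realpath() leaves a non-symlink /dev/null untouched.
    assert os.path.realpath('/dev/null') == '/dev/null'
```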
### Issue Type
Bug Report
### Component Name
test_vault_editor.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/marcel/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/vendor-packages/ansible
ansible collection location = /home/marcel/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.16 (main, Feb 19 2023, 15:42:40) [GCC 10.4.0] (/usr/bin/python3.9)
jinja version = 3.0.3
libyaml = True
$
```
### Configuration
```console
NA. Tests are run directly in the source directory after the source tarball is unpacked.
```
### OS / Environment
OpenIndiana
### Steps to Reproduce
```
$ bin/ansible-test units --python 3.9 --python-interpreter /usr/bin/python3.9 --local --color no --verbose
```
### Expected Results
The `test_real_path_dev_null` test either passes or is skipped.
### Actual Results
```console
___________________ TestVaultEditor.test_real_path_dev_null ____________________
[gw0] sunos5 -- Python 3.9.16 /usr/bin/python3.9
self = <units.parsing.vault.test_vault_editor.TestVaultEditor testMethod=test_real_path_dev_null>
def test_real_path_dev_null(self):
filename = '/dev/null'
ve = self._vault_editor()
res = ve._real_path(filename)
> self.assertEqual(res, '/dev/null')
test/units/parsing/vault/test_vault_editor.py:509:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3.9/vendor-packages/teamcity/diff_tools.py:33: in _patched_equals
old(self, first, second, msg)
E AssertionError: '/devices/pseudo/mm@0:null' != '/dev/null'
E - /devices/pseudo/mm@0:null
E + /dev/null
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80444
|
https://github.com/ansible/ansible/pull/80741
|
4b0d014d5840333457bd118c5fae5cf58325a877
|
7ef8e0e102388ae422b214eccffc381deeecadf1
| 2023-04-06T15:13:04Z |
python
| 2023-05-09T15:22:41Z |
test/units/parsing/vault/test_vault_editor.py
|
# (c) 2014, James Tanner <[email protected]>
# (c) 2014, James Cammarata, <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import tempfile
from io import BytesIO, StringIO
import pytest
from units.compat import unittest
from unittest.mock import patch
from ansible import errors
from ansible.parsing import vault
from ansible.parsing.vault import VaultLib, VaultEditor, match_encrypt_secret
from ansible.module_utils.six import PY3
from ansible.module_utils.common.text.converters import to_bytes, to_text
from units.mock.vault_helper import TextVaultSecret
v11_data = """$ANSIBLE_VAULT;1.1;AES256
62303130653266653331306264616235333735323636616539316433666463323964623162386137
3961616263373033353631316333623566303532663065310a393036623466376263393961326530
64336561613965383835646464623865663966323464653236343638373165343863623638316664
3631633031323837340a396530313963373030343933616133393566366137363761373930663833
3739"""
@pytest.mark.skipif(not vault.HAS_CRYPTOGRAPHY,
reason="Skipping cryptography tests because cryptography is not installed")
class TestVaultEditor(unittest.TestCase):
def setUp(self):
self._test_dir = None
self.vault_password = "test-vault-password"
vault_secret = TextVaultSecret(self.vault_password)
self.vault_secrets = [('vault_secret', vault_secret),
('default', vault_secret)]
@property
def vault_secret(self):
return match_encrypt_secret(self.vault_secrets)[1]
def tearDown(self):
if self._test_dir:
pass
# shutil.rmtree(self._test_dir)
self._test_dir = None
def _secrets(self, password):
vault_secret = TextVaultSecret(password)
vault_secrets = [('default', vault_secret)]
return vault_secrets
def test_methods_exist(self):
v = vault.VaultEditor(None)
slots = ['create_file',
'decrypt_file',
'edit_file',
'encrypt_file',
'rekey_file',
'read_data',
'write_data']
for slot in slots:
assert hasattr(v, slot), "VaultLib is missing the %s method" % slot
def _create_test_dir(self):
suffix = '_ansible_unit_test_%s_' % (self.__class__.__name__)
return tempfile.mkdtemp(suffix=suffix)
def _create_file(self, test_dir, name, content=None, symlink=False):
file_path = os.path.join(test_dir, name)
with open(file_path, 'wb') as opened_file:
if content:
opened_file.write(content)
return file_path
def _vault_editor(self, vault_secrets=None):
if vault_secrets is None:
vault_secrets = self._secrets(self.vault_password)
return VaultEditor(VaultLib(vault_secrets))
@patch('ansible.parsing.vault.subprocess.call')
def test_edit_file_helper_empty_target(self, mock_sp_call):
self._test_dir = self._create_test_dir()
src_contents = to_bytes("some info in a file\nyup.")
src_file_path = self._create_file(self._test_dir, 'src_file', content=src_contents)
mock_sp_call.side_effect = self._faux_command
ve = self._vault_editor()
b_ciphertext = ve._edit_file_helper(src_file_path, self.vault_secret)
self.assertNotEqual(src_contents, b_ciphertext)
def test_stdin_binary(self):
stdin_data = '\0'
if PY3:
fake_stream = StringIO(stdin_data)
fake_stream.buffer = BytesIO(to_bytes(stdin_data))
else:
fake_stream = BytesIO(to_bytes(stdin_data))
with patch('sys.stdin', fake_stream):
ve = self._vault_editor()
data = ve.read_data('-')
self.assertEqual(data, b'\0')
@patch('ansible.parsing.vault.subprocess.call')
def test_edit_file_helper_call_exception(self, mock_sp_call):
self._test_dir = self._create_test_dir()
src_contents = to_bytes("some info in a file\nyup.")
src_file_path = self._create_file(self._test_dir, 'src_file', content=src_contents)
error_txt = 'calling editor raised an exception'
mock_sp_call.side_effect = errors.AnsibleError(error_txt)
ve = self._vault_editor()
self.assertRaisesRegex(errors.AnsibleError,
error_txt,
ve._edit_file_helper,
src_file_path,
self.vault_secret)
@patch('ansible.parsing.vault.subprocess.call')
def test_edit_file_helper_symlink_target(self, mock_sp_call):
self._test_dir = self._create_test_dir()
src_file_contents = to_bytes("some info in a file\nyup.")
src_file_path = self._create_file(self._test_dir, 'src_file', content=src_file_contents)
src_file_link_path = os.path.join(self._test_dir, 'a_link_to_dest_file')
os.symlink(src_file_path, src_file_link_path)
mock_sp_call.side_effect = self._faux_command
ve = self._vault_editor()
b_ciphertext = ve._edit_file_helper(src_file_link_path, self.vault_secret)
self.assertNotEqual(src_file_contents, b_ciphertext,
'b_ciphertext should be encrypted and not equal to src_contents')
def _faux_editor(self, editor_args, new_src_contents=None):
if editor_args[0] == 'shred':
return
tmp_path = editor_args[-1]
        # simulate the tmp file being edited
with open(tmp_path, 'wb') as tmp_file:
if new_src_contents:
tmp_file.write(new_src_contents)
def _faux_command(self, tmp_path):
pass
@patch('ansible.parsing.vault.subprocess.call')
def test_edit_file_helper_no_change(self, mock_sp_call):
self._test_dir = self._create_test_dir()
src_file_contents = to_bytes("some info in a file\nyup.")
src_file_path = self._create_file(self._test_dir, 'src_file', content=src_file_contents)
# editor invocation doesn't change anything
def faux_editor(editor_args):
self._faux_editor(editor_args, src_file_contents)
mock_sp_call.side_effect = faux_editor
ve = self._vault_editor()
ve._edit_file_helper(src_file_path, self.vault_secret, existing_data=src_file_contents)
with open(src_file_path, 'rb') as new_target_file:
new_target_file_contents = new_target_file.read()
self.assertEqual(src_file_contents, new_target_file_contents)
def _assert_file_is_encrypted(self, vault_editor, src_file_path, src_contents):
with open(src_file_path, 'rb') as new_src_file:
new_src_file_contents = new_src_file.read()
# TODO: assert that it is encrypted
self.assertTrue(vault.is_encrypted(new_src_file_contents))
src_file_plaintext = vault_editor.vault.decrypt(new_src_file_contents)
# the plaintext should not be encrypted
self.assertFalse(vault.is_encrypted(src_file_plaintext))
# and the new plaintext should match the original
self.assertEqual(src_file_plaintext, src_contents)
def _assert_file_is_link(self, src_file_link_path, src_file_path):
self.assertTrue(os.path.islink(src_file_link_path),
'The dest path (%s) should be a symlink to (%s) but is not' % (src_file_link_path, src_file_path))
def test_rekey_file(self):
self._test_dir = self._create_test_dir()
src_file_contents = to_bytes("some info in a file\nyup.")
src_file_path = self._create_file(self._test_dir, 'src_file', content=src_file_contents)
ve = self._vault_editor()
ve.encrypt_file(src_file_path, self.vault_secret)
# FIXME: update to just set self._secrets or just a new vault secret id
new_password = 'password2:electricbugaloo'
new_vault_secret = TextVaultSecret(new_password)
new_vault_secrets = [('default', new_vault_secret)]
ve.rekey_file(src_file_path, vault.match_encrypt_secret(new_vault_secrets)[1])
# FIXME: can just update self._secrets here
new_ve = vault.VaultEditor(VaultLib(new_vault_secrets))
self._assert_file_is_encrypted(new_ve, src_file_path, src_file_contents)
def test_rekey_file_no_new_password(self):
self._test_dir = self._create_test_dir()
src_file_contents = to_bytes("some info in a file\nyup.")
src_file_path = self._create_file(self._test_dir, 'src_file', content=src_file_contents)
ve = self._vault_editor()
ve.encrypt_file(src_file_path, self.vault_secret)
self.assertRaisesRegex(errors.AnsibleError,
'The value for the new_password to rekey',
ve.rekey_file,
src_file_path,
None)
def test_rekey_file_not_encrypted(self):
self._test_dir = self._create_test_dir()
src_file_contents = to_bytes("some info in a file\nyup.")
src_file_path = self._create_file(self._test_dir, 'src_file', content=src_file_contents)
ve = self._vault_editor()
new_password = 'password2:electricbugaloo'
self.assertRaisesRegex(errors.AnsibleError,
'input is not vault encrypted data',
ve.rekey_file,
src_file_path, new_password)
def test_plaintext(self):
self._test_dir = self._create_test_dir()
src_file_contents = to_bytes("some info in a file\nyup.")
src_file_path = self._create_file(self._test_dir, 'src_file', content=src_file_contents)
ve = self._vault_editor()
ve.encrypt_file(src_file_path, self.vault_secret)
res = ve.plaintext(src_file_path)
self.assertEqual(src_file_contents, res)
def test_plaintext_not_encrypted(self):
self._test_dir = self._create_test_dir()
src_file_contents = to_bytes("some info in a file\nyup.")
src_file_path = self._create_file(self._test_dir, 'src_file', content=src_file_contents)
ve = self._vault_editor()
self.assertRaisesRegex(errors.AnsibleError,
'input is not vault encrypted data',
ve.plaintext,
src_file_path)
def test_encrypt_file(self):
self._test_dir = self._create_test_dir()
src_file_contents = to_bytes("some info in a file\nyup.")
src_file_path = self._create_file(self._test_dir, 'src_file', content=src_file_contents)
ve = self._vault_editor()
ve.encrypt_file(src_file_path, self.vault_secret)
self._assert_file_is_encrypted(ve, src_file_path, src_file_contents)
def test_encrypt_file_symlink(self):
self._test_dir = self._create_test_dir()
src_file_contents = to_bytes("some info in a file\nyup.")
src_file_path = self._create_file(self._test_dir, 'src_file', content=src_file_contents)
src_file_link_path = os.path.join(self._test_dir, 'a_link_to_dest_file')
os.symlink(src_file_path, src_file_link_path)
ve = self._vault_editor()
ve.encrypt_file(src_file_link_path, self.vault_secret)
self._assert_file_is_encrypted(ve, src_file_path, src_file_contents)
self._assert_file_is_encrypted(ve, src_file_link_path, src_file_contents)
self._assert_file_is_link(src_file_link_path, src_file_path)
@patch('ansible.parsing.vault.subprocess.call')
def test_edit_file_no_vault_id(self, mock_sp_call):
self._test_dir = self._create_test_dir()
src_contents = to_bytes("some info in a file\nyup.")
src_file_path = self._create_file(self._test_dir, 'src_file', content=src_contents)
new_src_contents = to_bytes("The info is different now.")
def faux_editor(editor_args):
self._faux_editor(editor_args, new_src_contents)
mock_sp_call.side_effect = faux_editor
ve = self._vault_editor()
ve.encrypt_file(src_file_path, self.vault_secret)
ve.edit_file(src_file_path)
with open(src_file_path, 'rb') as new_src_file:
new_src_file_contents = new_src_file.read()
self.assertTrue(b'$ANSIBLE_VAULT;1.1;AES256' in new_src_file_contents)
src_file_plaintext = ve.vault.decrypt(new_src_file_contents)
self.assertEqual(src_file_plaintext, new_src_contents)
@patch('ansible.parsing.vault.subprocess.call')
def test_edit_file_with_vault_id(self, mock_sp_call):
self._test_dir = self._create_test_dir()
src_contents = to_bytes("some info in a file\nyup.")
src_file_path = self._create_file(self._test_dir, 'src_file', content=src_contents)
new_src_contents = to_bytes("The info is different now.")
def faux_editor(editor_args):
self._faux_editor(editor_args, new_src_contents)
mock_sp_call.side_effect = faux_editor
ve = self._vault_editor()
ve.encrypt_file(src_file_path, self.vault_secret,
vault_id='vault_secrets')
ve.edit_file(src_file_path)
with open(src_file_path, 'rb') as new_src_file:
new_src_file_contents = new_src_file.read()
self.assertTrue(b'$ANSIBLE_VAULT;1.2;AES256;vault_secrets' in new_src_file_contents)
src_file_plaintext = ve.vault.decrypt(new_src_file_contents)
self.assertEqual(src_file_plaintext, new_src_contents)
@patch('ansible.parsing.vault.subprocess.call')
def test_edit_file_symlink(self, mock_sp_call):
self._test_dir = self._create_test_dir()
src_contents = to_bytes("some info in a file\nyup.")
src_file_path = self._create_file(self._test_dir, 'src_file', content=src_contents)
new_src_contents = to_bytes("The info is different now.")
def faux_editor(editor_args):
self._faux_editor(editor_args, new_src_contents)
mock_sp_call.side_effect = faux_editor
ve = self._vault_editor()
ve.encrypt_file(src_file_path, self.vault_secret)
src_file_link_path = os.path.join(self._test_dir, 'a_link_to_dest_file')
os.symlink(src_file_path, src_file_link_path)
ve.edit_file(src_file_link_path)
with open(src_file_path, 'rb') as new_src_file:
new_src_file_contents = new_src_file.read()
src_file_plaintext = ve.vault.decrypt(new_src_file_contents)
self._assert_file_is_link(src_file_link_path, src_file_path)
self.assertEqual(src_file_plaintext, new_src_contents)
# self.assertEqual(src_file_plaintext, new_src_contents,
        # 'The decrypted plaintext of the edited file is not the expected contents.')
@patch('ansible.parsing.vault.subprocess.call')
def test_edit_file_not_encrypted(self, mock_sp_call):
self._test_dir = self._create_test_dir()
src_contents = to_bytes("some info in a file\nyup.")
src_file_path = self._create_file(self._test_dir, 'src_file', content=src_contents)
new_src_contents = to_bytes("The info is different now.")
def faux_editor(editor_args):
self._faux_editor(editor_args, new_src_contents)
mock_sp_call.side_effect = faux_editor
ve = self._vault_editor()
self.assertRaisesRegex(errors.AnsibleError,
'input is not vault encrypted data',
ve.edit_file,
src_file_path)
def test_create_file_exists(self):
self._test_dir = self._create_test_dir()
src_contents = to_bytes("some info in a file\nyup.")
src_file_path = self._create_file(self._test_dir, 'src_file', content=src_contents)
ve = self._vault_editor()
self.assertRaisesRegex(errors.AnsibleError,
'please use .edit. instead',
ve.create_file,
src_file_path,
self.vault_secret)
def test_decrypt_file_exception(self):
self._test_dir = self._create_test_dir()
src_contents = to_bytes("some info in a file\nyup.")
src_file_path = self._create_file(self._test_dir, 'src_file', content=src_contents)
ve = self._vault_editor()
self.assertRaisesRegex(errors.AnsibleError,
'input is not vault encrypted data',
ve.decrypt_file,
src_file_path)
@patch.object(vault.VaultEditor, '_editor_shell_command')
def test_create_file(self, mock_editor_shell_command):
def sc_side_effect(filename):
return ['touch', filename]
mock_editor_shell_command.side_effect = sc_side_effect
tmp_file = tempfile.NamedTemporaryFile()
os.unlink(tmp_file.name)
_secrets = self._secrets('ansible')
ve = self._vault_editor(_secrets)
ve.create_file(tmp_file.name, vault.match_encrypt_secret(_secrets)[1])
self.assertTrue(os.path.exists(tmp_file.name))
def test_decrypt_1_1(self):
v11_file = tempfile.NamedTemporaryFile(delete=False)
with v11_file as f:
f.write(to_bytes(v11_data))
ve = self._vault_editor(self._secrets("ansible"))
# make sure the password functions for the cipher
error_hit = False
try:
ve.decrypt_file(v11_file.name)
except errors.AnsibleError:
error_hit = True
# verify decrypted content
with open(v11_file.name, "rb") as f:
fdata = to_text(f.read())
os.unlink(v11_file.name)
assert error_hit is False, "error decrypting 1.1 file"
assert fdata.strip() == "foo", "incorrect decryption of 1.1 file: %s" % fdata.strip()
def test_real_path_dash(self):
filename = '-'
ve = self._vault_editor()
res = ve._real_path(filename)
self.assertEqual(res, '-')
def test_real_path_dev_null(self):
filename = '/dev/null'
ve = self._vault_editor()
res = ve._real_path(filename)
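        # NOTE: assumes /dev/null is not a symlink; on illumos/Solaris it is,
        # so realpath() resolves it and the literal comparison below fails.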
self.assertEqual(res, '/dev/null')
def test_real_path_symlink(self):
self._test_dir = os.path.realpath(self._create_test_dir())
file_path = self._create_file(self._test_dir, 'test_file', content=b'this is a test file')
file_link_path = os.path.join(self._test_dir, 'a_link_to_test_file')
os.symlink(file_path, file_link_path)
ve = self._vault_editor()
res = ve._real_path(file_link_path)
self.assertEqual(res, file_path)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,128 |
Module fetch fails on windows host with wildcards in filename
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Using fetch to get files from Windows hosts fails if the filename can be interpreted as a PowerShell wildcard, for example `new[12].txt`.
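A small Python analogy of the wildcard class at work (illustrative only; on the remote side it is PowerShell's `Test-Path -Path` doing the globbing, for which `-LiteralPath` is the non-wildcard counterpart):
```python
import glob
import os
import tempfile

tmp = tempfile.mkdtemp()
open(os.path.join(tmp, 'new[12].txt'), 'w').close()

# '[12]' is a character-set wildcard: the pattern matches only 'new1.txt'
# or 'new2.txt', never the literal file name, so the lookup comes up empty.
print(glob.glob(os.path.join(tmp, 'new[12].txt')))               # []

# Escaping the metacharacters (the -LiteralPath equivalent) finds the file.
print(glob.glob(glob.escape(os.path.join(tmp, 'new[12].txt'))))  # ['.../new[12].txt']
```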
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
connection winrm.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.4
config file = /home/aniess/ansible/inhouse/ansible.cfg
configured module search path = ['/home/aniess/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/aniess/.local/lib/python3.6/site-packages/ansible
executable location = /home/aniess/.local/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
CACHE_PLUGIN(/home/aniess/ansible/inhouse/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/aniess/ansible/inhouse/ansible.cfg) = ./tmp
DEFAULT_GATHERING(/home/aniess/ansible/inhouse/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/aniess/ansible/inhouse/ansible.cfg) = ['/home/aniess/ansible/inhouse/inventory/inventory.yml']
DEFAULT_STDOUT_CALLBACK(/home/aniess/ansible/inhouse/ansible.cfg) = yaml
DEFAULT_VAULT_PASSWORD_FILE(/home/aniess/ansible/inhouse/ansible.cfg) = /home/aniess/ansible/inhouse/pw_vault_prod
DISPLAY_ARGS_TO_STDOUT(/home/aniess/ansible/inhouse/ansible.cfg) = True
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ansible host is Ubuntu 18.04. Target is Windows Server 2019 (1809)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a file named "C:\www\new[12].txt" on the Windows host and run the task below
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Fetch test
fetch:
src: C:\www\new[12].txt
dest: test
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
File is retrieved from Windows host
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Ansible aborts with the error below.
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Fetch test src=C:\www\new[12].txt, dest=test] *********************************************************************************************************************
Traceback (most recent call last):
File "/home/aniess/.local/lib/python3.6/site-packages/ansible/plugins/connection/winrm.py", line 682, in fetch_file
raise IOError(to_native(result.std_err))
OSError: #< CLIXML
<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj><Obj S="progress" RefId="1"><TNRef RefId="0" /><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj><S S="Error">Set-StrictMode -Version Latest_x000D__x000A_</S><S S="Error">$path = 'C:\www\new[12].txt'_x000D__x000A_</S><S S="Error">If (Test-Path -Path $path -PathType Leaf)_x000D__x000A_</S><S S="Error">{_x000D__x000A_</S><S S="Error">$buffer_size = 524288_x000D__x000A_</S><S S="Error">$offset = 0_x000D__x000A_</S><S S="Error">$stream = New-Object -TypeName IO.FileStream($path, [IO.FileMode]::Open, [IO.FileAccess]::Read, _x000D__x000A_</S><S S="Error">[IO.FileShare]::ReadWrite)_x000D__x000A_</S><S S="Error">$stream.Seek($offset, [System.IO.SeekOrigin]::Begin) > $null_x000D__x000A_</S><S S="Error">$buffer = New-Object -TypeName byte[] $buffer_size_x000D__x000A_</S><S S="Error">$bytes_read = $stream.Read($buffer, 0, $buffer_size)_x000D__x000A_</S><S S="Error">if ($bytes_read -gt 0) {_x000D__x000A_</S><S S="Error">$bytes = $buffer[0..($bytes_read - 1)]_x000D__x000A_</S><S S="Error">[System.Convert]::ToBase64String($bytes)_x000D__x000A_</S><S S="Error">}_x000D__x000A_</S><S S="Error">$stream.Close() > $null_x000D__x000A_</S><S S="Error">}_x000D__x000A_</S><S S="Error">ElseIf (Test-Path -Path $path -PathType Container)_x000D__x000A_</S><S S="Error">{_x000D__x000A_</S><S S="Error">Write-Host "[DIR]";_x000D__x000A_</S><S S="Error">}_x000D__x000A_</S><S S="Error">Else_x000D__x000A_</S><S S="Error">{_x000D__x000A_</S><S S="Error">Write-Error "$path does not exist";_x000D__x000A_</S><S S="Error">Exit 1;_x000D__x000A_</S><S S="Error">} : C:\www\new[12].txt does not exist_x000D__x000A_</S><S S="Error"> + CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S></Objs>
fatal: [www]: FAILED! =>
msg: failed to transfer file to "/home/aniess/ansible/inhouse/test/www/C:/www/new[12].txt"
```
##### FIX
Using -LiteralPath to prevent Test-Path from evaluating wildcards fixes the issue for me:
``` diff
diff --git a/lib/ansible/plugins/connection/winrm.py b/lib/ansible/plugins/connection/winrm.py
index 6ab6ca7bc4..85de3fad2e 100644
--- a/lib/ansible/plugins/connection/winrm.py
+++ b/lib/ansible/plugins/connection/winrm.py
@@ -646,7 +646,7 @@ class Connection(ConnectionBase):
try:
script = '''
$path = "%(path)s"
- If (Test-Path -Path $path -PathType Leaf)
+ If (Test-Path -LiteralPath $path -PathType Leaf)
{
$buffer_size = %(buffer_size)d
$offset = %(offset)d
```
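For readers who want to see the wildcard semantics in a runnable form, Python's `glob` module behaves analogously to PowerShell's `-Path`: a bracket pair forms a character class, so a filename that literally contains `[12]` never matches its own name. This is a minimal illustrative sketch only, not part of the reported fix:
```python
import glob
import pathlib

# create a file whose name looks like a glob character class
pathlib.Path("new[12].txt").write_text("demo")

# "[12]" is read as "either '1' or '2'", so the literal file is missed --
# the same trap Test-Path -Path falls into on the Windows side
print(glob.glob("new[12].txt"))               # []

# escaping the metacharacters restores literal matching, which is the
# role -LiteralPath plays in the diff above
print(glob.glob(glob.escape("new[12].txt")))  # ['new[12].txt']
```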
|
https://github.com/ansible/ansible/issues/73128
|
https://github.com/ansible/ansible/pull/74723
|
7ef8e0e102388ae422b214eccffc381deeecadf1
|
b576f0cda7aad938d1eab032608a79a30a6a4968
| 2021-01-06T15:06:41Z |
python
| 2023-05-09T22:58:22Z |
changelogs/fragments/74723-support-wildcard-win_fetch.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,128 |
Module fetch fails on windows host with wildcards in filename
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Using fetch to get files from Windows hosts fails if the filename can be interpreted as a PowerShell wildcard, for example "new[12].txt".
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
connection winrm.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.4
config file = /home/aniess/ansible/inhouse/ansible.cfg
configured module search path = ['/home/aniess/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/aniess/.local/lib/python3.6/site-packages/ansible
executable location = /home/aniess/.local/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
CACHE_PLUGIN(/home/aniess/ansible/inhouse/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/aniess/ansible/inhouse/ansible.cfg) = ./tmp
DEFAULT_GATHERING(/home/aniess/ansible/inhouse/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/aniess/ansible/inhouse/ansible.cfg) = ['/home/aniess/ansible/inhouse/inventory/inventory.yml']
DEFAULT_STDOUT_CALLBACK(/home/aniess/ansible/inhouse/ansible.cfg) = yaml
DEFAULT_VAULT_PASSWORD_FILE(/home/aniess/ansible/inhouse/ansible.cfg) = /home/aniess/ansible/inhouse/pw_vault_prod
DISPLAY_ARGS_TO_STDOUT(/home/aniess/ansible/inhouse/ansible.cfg) = True
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ansible host is Ubuntu 18.04. Target is Windows Server 2019 (1809)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a file named "C:\www\new[12].txt" on the Windows host and run the task below
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Fetch test
fetch:
src: C:\www\new[12].txt
dest: test
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
File is retrieved from Windows host
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Ansible aborts with the error below.
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Fetch test src=C:\www\new[12].txt, dest=test] *********************************************************************************************************************
Traceback (most recent call last):
File "/home/aniess/.local/lib/python3.6/site-packages/ansible/plugins/connection/winrm.py", line 682, in fetch_file
raise IOError(to_native(result.std_err))
OSError: #< CLIXML
<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj><Obj S="progress" RefId="1"><TNRef RefId="0" /><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj><S S="Error">Set-StrictMode -Version Latest_x000D__x000A_</S><S S="Error">$path = 'C:\www\new[12].txt'_x000D__x000A_</S><S S="Error">If (Test-Path -Path $path -PathType Leaf)_x000D__x000A_</S><S S="Error">{_x000D__x000A_</S><S S="Error">$buffer_size = 524288_x000D__x000A_</S><S S="Error">$offset = 0_x000D__x000A_</S><S S="Error">$stream = New-Object -TypeName IO.FileStream($path, [IO.FileMode]::Open, [IO.FileAccess]::Read, _x000D__x000A_</S><S S="Error">[IO.FileShare]::ReadWrite)_x000D__x000A_</S><S S="Error">$stream.Seek($offset, [System.IO.SeekOrigin]::Begin) > $null_x000D__x000A_</S><S S="Error">$buffer = New-Object -TypeName byte[] $buffer_size_x000D__x000A_</S><S S="Error">$bytes_read = $stream.Read($buffer, 0, $buffer_size)_x000D__x000A_</S><S S="Error">if ($bytes_read -gt 0) {_x000D__x000A_</S><S S="Error">$bytes = $buffer[0..($bytes_read - 1)]_x000D__x000A_</S><S S="Error">[System.Convert]::ToBase64String($bytes)_x000D__x000A_</S><S S="Error">}_x000D__x000A_</S><S S="Error">$stream.Close() > $null_x000D__x000A_</S><S S="Error">}_x000D__x000A_</S><S S="Error">ElseIf (Test-Path -Path $path -PathType Container)_x000D__x000A_</S><S S="Error">{_x000D__x000A_</S><S S="Error">Write-Host "[DIR]";_x000D__x000A_</S><S S="Error">}_x000D__x000A_</S><S S="Error">Else_x000D__x000A_</S><S S="Error">{_x000D__x000A_</S><S S="Error">Write-Error "$path does not exist";_x000D__x000A_</S><S S="Error">Exit 1;_x000D__x000A_</S><S S="Error">} : C:\www\new[12].txt does not exist_x000D__x000A_</S><S S="Error"> + CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S></Objs>
fatal: [www]: FAILED! =>
msg: failed to transfer file to "/home/aniess/ansible/inhouse/test/www/C:/www/new[12].txt"
```
##### FIX
Using -LiteralPath to prevent Test-Path from evaluating wildcards fixes the issue for me:
``` diff
diff --git a/lib/ansible/plugins/connection/winrm.py b/lib/ansible/plugins/connection/winrm.py
index 6ab6ca7bc4..85de3fad2e 100644
--- a/lib/ansible/plugins/connection/winrm.py
+++ b/lib/ansible/plugins/connection/winrm.py
@@ -646,7 +646,7 @@ class Connection(ConnectionBase):
try:
script = '''
$path = "%(path)s"
- If (Test-Path -Path $path -PathType Leaf)
+ If (Test-Path -LiteralPath $path -PathType Leaf)
{
$buffer_size = %(buffer_size)d
$offset = %(offset)d
```
|
https://github.com/ansible/ansible/issues/73128
|
https://github.com/ansible/ansible/pull/74723
|
7ef8e0e102388ae422b214eccffc381deeecadf1
|
b576f0cda7aad938d1eab032608a79a30a6a4968
| 2021-01-06T15:06:41Z |
python
| 2023-05-09T22:58:22Z |
lib/ansible/plugins/connection/winrm.py
|
# (c) 2014, Chris Church <[email protected]>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
author: Ansible Core Team
name: winrm
short_description: Run tasks over Microsoft's WinRM
description:
- Run commands or put/fetch on a target via WinRM
- This plugin allows extra arguments to be passed that are supported by the protocol but not explicitly defined here.
They should take the form of variables declared with the following pattern C(ansible_winrm_<option>).
version_added: "2.0"
extends_documentation_fragment:
- connection_pipelining
requirements:
- pywinrm (python library)
options:
# figure out more elegant 'delegation'
remote_addr:
description:
- Address of the windows machine
default: inventory_hostname
vars:
- name: inventory_hostname
- name: ansible_host
- name: ansible_winrm_host
type: str
remote_user:
description:
- The user to log in as to the Windows machine
vars:
- name: ansible_user
- name: ansible_winrm_user
keyword:
- name: remote_user
type: str
remote_password:
description: Authentication password for the C(remote_user). Can be supplied as CLI option.
vars:
- name: ansible_password
- name: ansible_winrm_pass
- name: ansible_winrm_password
type: str
aliases:
- password # Needed for --ask-pass to come through on delegation
port:
description:
- port for winrm to connect on remote target
- The default is the https (5986) port, if using http it should be 5985
vars:
- name: ansible_port
- name: ansible_winrm_port
default: 5986
keyword:
- name: port
type: integer
scheme:
description:
- URI scheme to use
- If not set, it will default to C(https), or C(http) if I(port) is
C(5985).
choices: [http, https]
vars:
- name: ansible_winrm_scheme
type: str
path:
description: URI path to connect to
default: '/wsman'
vars:
- name: ansible_winrm_path
type: str
transport:
description:
- List of winrm transports to attempt to use (ssl, plaintext, kerberos, etc)
- If None (the default) the plugin will try to automatically guess the correct list
- The choices available depend on your version of pywinrm
type: list
elements: string
vars:
- name: ansible_winrm_transport
kerberos_command:
description: Kerberos command to use to request an authentication ticket
default: kinit
vars:
- name: ansible_winrm_kinit_cmd
type: str
kinit_args:
description:
- Extra arguments to pass to C(kinit) when getting the Kerberos authentication ticket.
- By default no extra arguments are passed into C(kinit) unless I(ansible_winrm_kerberos_delegation) is also
set. In that case C(-f) is added to the C(kinit) args so a forwardable ticket is retrieved.
- If set, the args will overwrite any existing defaults for C(kinit), including C(-f) for a delegated ticket.
type: str
vars:
- name: ansible_winrm_kinit_args
version_added: '2.11'
kinit_env_vars:
description:
- A list of environment variables to pass through to C(kinit) when getting the Kerberos authentication ticket.
- By default no environment variables are passed through and C(kinit) is run with a blank slate.
- The environment variable C(KRB5CCNAME) cannot be specified here as it's used to store the temp Kerberos
ticket used by WinRM.
type: list
elements: str
default: []
ini:
- section: winrm
key: kinit_env_vars
vars:
- name: ansible_winrm_kinit_env_vars
version_added: '2.12'
kerberos_mode:
description:
- Kerberos usage mode.
- The managed option means Ansible will obtain a Kerberos ticket.
- The manual option means a ticket must already have been obtained by the user.
- If having issues with Ansible freezing when trying to obtain the
Kerberos ticket, you can either set this to C(manual) and obtain
it outside Ansible or install C(pexpect) through pip and try
again.
choices: [managed, manual]
vars:
- name: ansible_winrm_kinit_mode
type: str
connection_timeout:
description:
- Despite its name, sets both the 'operation' and 'read' timeout settings for the WinRM
connection.
- The operation timeout belongs to the WS-Man layer and runs on the winRM-service on the
managed windows host.
- The read timeout belongs to the underlying python Request call (http-layer) and runs
on the ansible controller.
- The operation timeout sets the WS-Man 'Operation timeout' that runs on the managed
windows host. The operation timeout specifies how long a command will run on the
winRM-service before it sends the message 'WinRMOperationTimeoutError' back to the
client. The client (silently) ignores this message and starts a new instance of the
operation timeout, waiting for the command to finish (long running commands).
- The read timeout sets the client HTTP-request timeout and specifies how long the
client (ansible controller) will wait for data from the server to come back over
the HTTP-connection (timeout for waiting for in-between messages from the server).
When this timer expires, an exception will be thrown and the ansible connection
will be terminated with the error message 'Read timed out'
- To avoid the above exception to be thrown, the read timeout will be set to 10
seconds higher than the WS-Man operation timeout, thus make the connection more
robust on networks with long latency and/or many hops between server and client
network wise.
- Setting the difference between the operation and the read timeout to 10 seconds
aligns it with the defaults used in the winrm-module and the PSRP-module, which also
use a 10 second difference (30 seconds for read timeout and 20 seconds for operation timeout)
- Corresponds to the C(operation_timeout_sec) and
C(read_timeout_sec) args in pywinrm so avoid setting these vars
with this one.
- The default value is whatever is set in the installed version of
pywinrm.
vars:
- name: ansible_winrm_connection_timeout
type: int
"""
import base64
import logging
import os
import re
import traceback
import json
import tempfile
import shlex
import subprocess
from inspect import getfullargspec
from urllib.parse import urlunsplit
HAVE_KERBEROS = False
try:
import kerberos # pylint: disable=unused-import
HAVE_KERBEROS = True
except ImportError:
pass
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleConnectionFailure
from ansible.errors import AnsibleFileNotFound
from ansible.module_utils.json_utils import _filter_non_json_lines
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.module_utils.six import binary_type
from ansible.plugins.connection import ConnectionBase
from ansible.plugins.shell.powershell import _parse_clixml
from ansible.utils.hashing import secure_hash
from ansible.utils.display import Display
try:
import winrm
from winrm import Response
from winrm.protocol import Protocol
import requests.exceptions
HAS_WINRM = True
WINRM_IMPORT_ERR = None
except ImportError as e:
HAS_WINRM = False
WINRM_IMPORT_ERR = e
try:
import xmltodict
HAS_XMLTODICT = True
XMLTODICT_IMPORT_ERR = None
except ImportError as e:
HAS_XMLTODICT = False
XMLTODICT_IMPORT_ERR = e
HAS_PEXPECT = False
try:
import pexpect
# echo was added in pexpect 3.3+ which is newer than the RHEL package
# we can only use pexpect for kerb auth if echo is a valid kwarg
# https://github.com/ansible/ansible/issues/43462
if hasattr(pexpect, 'spawn'):
argspec = getfullargspec(pexpect.spawn.__init__)
if 'echo' in argspec.args:
HAS_PEXPECT = True
except ImportError as e:
pass
# used to try and parse the hostname and detect if IPv6 is being used
try:
import ipaddress
HAS_IPADDRESS = True
except ImportError:
HAS_IPADDRESS = False
display = Display()
class Connection(ConnectionBase):
'''WinRM connections over HTTP/HTTPS.'''
transport = 'winrm'
module_implementation_preferences = ('.ps1', '.exe', '')
allow_executable = False
has_pipelining = True
allow_extras = True
def __init__(self, *args, **kwargs):
self.always_pipeline_modules = True
self.has_native_async = True
self.protocol = None
self.shell_id = None
self.delegate = None
self._shell_type = 'powershell'
super(Connection, self).__init__(*args, **kwargs)
if not C.DEFAULT_DEBUG:
logging.getLogger('requests_credssp').setLevel(logging.INFO)
logging.getLogger('requests_kerberos').setLevel(logging.INFO)
logging.getLogger('urllib3').setLevel(logging.INFO)
def _build_winrm_kwargs(self):
# this used to be in set_options, as win_reboot needs to be able to
# override the conn timeout, we need to be able to build the args
# after setting individual options. This is called by _connect before
# starting the WinRM connection
self._winrm_host = self.get_option('remote_addr')
self._winrm_user = self.get_option('remote_user')
self._winrm_pass = self.get_option('remote_password')
self._winrm_port = self.get_option('port')
self._winrm_scheme = self.get_option('scheme')
# old behaviour, scheme should default to http if not set and the port
# is 5985 otherwise https
if self._winrm_scheme is None:
self._winrm_scheme = 'http' if self._winrm_port == 5985 else 'https'
self._winrm_path = self.get_option('path')
self._kinit_cmd = self.get_option('kerberos_command')
self._winrm_transport = self.get_option('transport')
self._winrm_connection_timeout = self.get_option('connection_timeout')
if hasattr(winrm, 'FEATURE_SUPPORTED_AUTHTYPES'):
self._winrm_supported_authtypes = set(winrm.FEATURE_SUPPORTED_AUTHTYPES)
else:
# for legacy versions of pywinrm, use the values we know are supported
self._winrm_supported_authtypes = set(['plaintext', 'ssl', 'kerberos'])
# calculate transport if needed
if self._winrm_transport is None or self._winrm_transport[0] is None:
# TODO: figure out what we want to do with auto-transport selection in the face of NTLM/Kerb/CredSSP/Cert/Basic
transport_selector = ['ssl'] if self._winrm_scheme == 'https' else ['plaintext']
if HAVE_KERBEROS and ((self._winrm_user and '@' in self._winrm_user)):
self._winrm_transport = ['kerberos'] + transport_selector
else:
self._winrm_transport = transport_selector
unsupported_transports = set(self._winrm_transport).difference(self._winrm_supported_authtypes)
if unsupported_transports:
raise AnsibleError('The installed version of WinRM does not support transport(s) %s' %
to_native(list(unsupported_transports), nonstring='simplerepr'))
# if kerberos is among our transports and there's a password specified, we're managing the tickets
kinit_mode = self.get_option('kerberos_mode')
if kinit_mode is None:
# HACK: ideally, remove multi-transport stuff
self._kerb_managed = "kerberos" in self._winrm_transport and (self._winrm_pass is not None and self._winrm_pass != "")
elif kinit_mode == "managed":
self._kerb_managed = True
elif kinit_mode == "manual":
self._kerb_managed = False
# arg names we're going to pass directly
internal_kwarg_mask = {'self', 'endpoint', 'transport', 'username', 'password', 'scheme', 'path', 'kinit_mode', 'kinit_cmd'}
self._winrm_kwargs = dict(username=self._winrm_user, password=self._winrm_pass)
argspec = getfullargspec(Protocol.__init__)
supported_winrm_args = set(argspec.args)
supported_winrm_args.update(internal_kwarg_mask)
passed_winrm_args = {v.replace('ansible_winrm_', '') for v in self.get_option('_extras')}
unsupported_args = passed_winrm_args.difference(supported_winrm_args)
# warn for kwargs unsupported by the installed version of pywinrm
for arg in unsupported_args:
display.warning("ansible_winrm_{0} unsupported by pywinrm (is an up-to-date version of pywinrm installed?)".format(arg))
# pass through matching extras, excluding the list we want to treat specially
for arg in passed_winrm_args.difference(internal_kwarg_mask).intersection(supported_winrm_args):
self._winrm_kwargs[arg] = self.get_option('_extras')['ansible_winrm_%s' % arg]
# Until pykerberos has enough goodies to implement a rudimentary kinit/klist, simplest way is to let each connection
# auth itself with a private CCACHE.
def _kerb_auth(self, principal, password):
if password is None:
password = ""
self._kerb_ccache = tempfile.NamedTemporaryFile()
display.vvvvv("creating Kerberos CC at %s" % self._kerb_ccache.name)
krb5ccname = "FILE:%s" % self._kerb_ccache.name
os.environ["KRB5CCNAME"] = krb5ccname
krb5env = dict(PATH=os.environ["PATH"], KRB5CCNAME=krb5ccname)
# Add any explicit environment vars into the krb5env block
kinit_env_vars = self.get_option('kinit_env_vars')
for var in kinit_env_vars:
if var not in krb5env and var in os.environ:
krb5env[var] = os.environ[var]
# Stores various flags to call with kinit, these could be explicit args set by 'ansible_winrm_kinit_args' OR
# '-f' if kerberos delegation is requested (ansible_winrm_kerberos_delegation).
kinit_cmdline = [self._kinit_cmd]
kinit_args = self.get_option('kinit_args')
if kinit_args:
kinit_args = [to_text(a) for a in shlex.split(kinit_args) if a.strip()]
kinit_cmdline.extend(kinit_args)
elif boolean(self.get_option('_extras').get('ansible_winrm_kerberos_delegation', False)):
kinit_cmdline.append('-f')
kinit_cmdline.append(principal)
# pexpect runs the process in its own pty so it can correctly send
# the password as input even on MacOS which blocks subprocess from
# doing so. Unfortunately it is not available on the built in Python
# so we can only use it if someone has installed it
if HAS_PEXPECT:
proc_mechanism = "pexpect"
command = kinit_cmdline.pop(0)
password = to_text(password, encoding='utf-8',
errors='surrogate_or_strict')
display.vvvv("calling kinit with pexpect for principal %s"
% principal)
try:
child = pexpect.spawn(command, kinit_cmdline, timeout=60,
env=krb5env, echo=False)
except pexpect.ExceptionPexpect as err:
err_msg = "Kerberos auth failure when calling kinit cmd " \
"'%s': %s" % (command, to_native(err))
raise AnsibleConnectionFailure(err_msg)
try:
child.expect(".*:")
child.sendline(password)
except OSError as err:
# child exited before the password was sent, Ansible will raise
# an error based on the rc below, just display the error here
display.vvvv("kinit with pexpect raised OSError: %s"
% to_native(err))
# technically this is the stdout + stderr but to match the
# subprocess error checking behaviour, we will call it stderr
stderr = child.read()
child.wait()
rc = child.exitstatus
else:
proc_mechanism = "subprocess"
password = to_bytes(password, encoding='utf-8',
errors='surrogate_or_strict')
display.vvvv("calling kinit with subprocess for principal %s"
% principal)
try:
p = subprocess.Popen(kinit_cmdline, stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
env=krb5env)
except OSError as err:
err_msg = "Kerberos auth failure when calling kinit cmd " \
"'%s': %s" % (self._kinit_cmd, to_native(err))
raise AnsibleConnectionFailure(err_msg)
stdout, stderr = p.communicate(password + b'\n')
rc = p.returncode
if rc != 0:
# one last attempt at making sure the password does not exist
# in the output
exp_msg = to_native(stderr.strip())
exp_msg = exp_msg.replace(to_native(password), "<redacted>")
err_msg = "Kerberos auth failure for principal %s with %s: %s" \
% (principal, proc_mechanism, exp_msg)
raise AnsibleConnectionFailure(err_msg)
display.vvvvv("kinit succeeded for principal %s" % principal)
def _winrm_connect(self):
'''
Establish a WinRM connection over HTTP/HTTPS.
'''
display.vvv("ESTABLISH WINRM CONNECTION FOR USER: %s on PORT %s TO %s" %
(self._winrm_user, self._winrm_port, self._winrm_host), host=self._winrm_host)
winrm_host = self._winrm_host
if HAS_IPADDRESS:
display.debug("checking if winrm_host %s is an IPv6 address" % winrm_host)
try:
ipaddress.IPv6Address(winrm_host)
except ipaddress.AddressValueError:
pass
else:
winrm_host = "[%s]" % winrm_host
netloc = '%s:%d' % (winrm_host, self._winrm_port)
endpoint = urlunsplit((self._winrm_scheme, netloc, self._winrm_path, '', ''))
errors = []
for transport in self._winrm_transport:
if transport == 'kerberos':
if not HAVE_KERBEROS:
errors.append('kerberos: the python kerberos library is not installed')
continue
if self._kerb_managed:
self._kerb_auth(self._winrm_user, self._winrm_pass)
display.vvvvv('WINRM CONNECT: transport=%s endpoint=%s' % (transport, endpoint), host=self._winrm_host)
try:
winrm_kwargs = self._winrm_kwargs.copy()
if self._winrm_connection_timeout:
winrm_kwargs['operation_timeout_sec'] = self._winrm_connection_timeout
winrm_kwargs['read_timeout_sec'] = self._winrm_connection_timeout + 10
protocol = Protocol(endpoint, transport=transport, **winrm_kwargs)
# open the shell from connect so we know we're able to talk to the server
if not self.shell_id:
self.shell_id = protocol.open_shell(codepage=65001) # UTF-8
display.vvvvv('WINRM OPEN SHELL: %s' % self.shell_id, host=self._winrm_host)
return protocol
except Exception as e:
err_msg = to_text(e).strip()
if re.search(to_text(r'Operation\s+?timed\s+?out'), err_msg, re.I):
raise AnsibleError('the connection attempt timed out')
m = re.search(to_text(r'Code\s+?(\d{3})'), err_msg)
if m:
code = int(m.groups()[0])
if code == 401:
err_msg = 'the specified credentials were rejected by the server'
elif code == 411:
return protocol
errors.append(u'%s: %s' % (transport, err_msg))
display.vvvvv(u'WINRM CONNECTION ERROR: %s\n%s' % (err_msg, to_text(traceback.format_exc())), host=self._winrm_host)
if errors:
raise AnsibleConnectionFailure(', '.join(map(to_native, errors)))
else:
raise AnsibleError('No transport found for WinRM connection')
def _winrm_send_input(self, protocol, shell_id, command_id, stdin, eof=False):
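# wraps one chunk of stdin in a WS-Man Send message for the given command;
# eof=True marks the stream as ended so the remote shell stops waiting for input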
rq = {'env:Envelope': protocol._get_soap_header(
resource_uri='http://schemas.microsoft.com/wbem/wsman/1/windows/shell/cmd',
action='http://schemas.microsoft.com/wbem/wsman/1/windows/shell/Send',
shell_id=shell_id)}
stream = rq['env:Envelope'].setdefault('env:Body', {}).setdefault('rsp:Send', {})\
.setdefault('rsp:Stream', {})
stream['@Name'] = 'stdin'
stream['@CommandId'] = command_id
stream['#text'] = base64.b64encode(to_bytes(stdin))
if eof:
stream['@End'] = 'true'
protocol.send_message(xmltodict.unparse(rq))
def _winrm_exec(self, command, args=(), from_exec=False, stdin_iterator=None):
if not self.protocol:
self.protocol = self._winrm_connect()
self._connected = True
if from_exec:
display.vvvvv("WINRM EXEC %r %r" % (command, args), host=self._winrm_host)
else:
display.vvvvvv("WINRM EXEC %r %r" % (command, args), host=self._winrm_host)
command_id = None
try:
stdin_push_failed = False
command_id = self.protocol.run_command(self.shell_id, to_bytes(command), map(to_bytes, args), console_mode_stdin=(stdin_iterator is None))
try:
if stdin_iterator:
for (data, is_last) in stdin_iterator:
self._winrm_send_input(self.protocol, self.shell_id, command_id, data, eof=is_last)
except Exception as ex:
display.warning("ERROR DURING WINRM SEND INPUT - attempting to recover: %s %s"
% (type(ex).__name__, to_text(ex)))
display.debug(traceback.format_exc())
stdin_push_failed = True
# NB: this can hang if the receiver is still running (eg, network failed a Send request but the server's still happy).
# FUTURE: Consider adding pywinrm status check/abort operations to see if the target is still running after a failure.
resptuple = self.protocol.get_command_output(self.shell_id, command_id)
# ensure stdout/stderr are text for py3
# FUTURE: this should probably be done internally by pywinrm
response = Response(tuple(to_text(v) if isinstance(v, binary_type) else v for v in resptuple))
# TODO: check result from response and set stdin_push_failed if we have nonzero
if from_exec:
display.vvvvv('WINRM RESULT %r' % to_text(response), host=self._winrm_host)
else:
display.vvvvvv('WINRM RESULT %r' % to_text(response), host=self._winrm_host)
display.vvvvvv('WINRM STDOUT %s' % to_text(response.std_out), host=self._winrm_host)
display.vvvvvv('WINRM STDERR %s' % to_text(response.std_err), host=self._winrm_host)
if stdin_push_failed:
# There are cases where the stdin input failed but the WinRM service still processed it. We attempt to
# see if stdout contains a valid json return value so we can ignore this error
try:
filtered_output, dummy = _filter_non_json_lines(response.std_out)
json.loads(filtered_output)
except ValueError:
# stdout does not contain a return response, stdin input was a fatal error
stderr = to_bytes(response.std_err, encoding='utf-8')
if stderr.startswith(b"#< CLIXML"):
stderr = _parse_clixml(stderr)
raise AnsibleError('winrm send_input failed; \nstdout: %s\nstderr %s'
% (to_native(response.std_out), to_native(stderr)))
return response
except requests.exceptions.Timeout as exc:
raise AnsibleConnectionFailure('winrm connection error: %s' % to_native(exc))
finally:
if command_id:
self.protocol.cleanup_command(self.shell_id, command_id)
def _connect(self):
if not HAS_WINRM:
raise AnsibleError("winrm or requests is not installed: %s" % to_native(WINRM_IMPORT_ERR))
elif not HAS_XMLTODICT:
raise AnsibleError("xmltodict is not installed: %s" % to_native(XMLTODICT_IMPORT_ERR))
super(Connection, self)._connect()
if not self.protocol:
self._build_winrm_kwargs() # build the kwargs from the options set
self.protocol = self._winrm_connect()
self._connected = True
return self
def reset(self):
if not self._connected:
return
self.protocol = None
self.shell_id = None
self._connect()
def _wrapper_payload_stream(self, payload, buffer_size=200000):
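# chunk the pipelined module payload into buffer_size pieces; the boolean in
# each yielded tuple flags the final chunk so the WinRM Send message can set
# the End attribute (see _winrm_send_input)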
payload_bytes = to_bytes(payload)
byte_count = len(payload_bytes)
for i in range(0, byte_count, buffer_size):
yield payload_bytes[i:i + buffer_size], i + buffer_size >= byte_count
def exec_command(self, cmd, in_data=None, sudoable=True):
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
cmd_parts = self._shell._encode_script(cmd, as_list=True, strict_mode=False, preserve_rc=False)
# TODO: display something meaningful here
display.vvv("EXEC (via pipeline wrapper)")
stdin_iterator = None
if in_data:
stdin_iterator = self._wrapper_payload_stream(in_data)
result = self._winrm_exec(cmd_parts[0], cmd_parts[1:], from_exec=True, stdin_iterator=stdin_iterator)
result.std_out = to_bytes(result.std_out)
result.std_err = to_bytes(result.std_err)
# parse just stderr from CLIXML output
if result.std_err.startswith(b"#< CLIXML"):
try:
result.std_err = _parse_clixml(result.std_err)
except Exception:
# unsure if we're guaranteed a valid xml doc- use raw output in case of error
pass
return (result.status_code, result.std_out, result.std_err)
# FUTURE: determine buffer size at runtime via remote winrm config?
def _put_file_stdin_iterator(self, in_path, out_path, buffer_size=250000):
in_size = os.path.getsize(to_bytes(in_path, errors='surrogate_or_strict'))
offset = 0
with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as in_file:
for out_data in iter((lambda: in_file.read(buffer_size)), b''):
offset += len(out_data)
self._display.vvvvv('WINRM PUT "%s" to "%s" (offset=%d size=%d)' % (in_path, out_path, offset, len(out_data)), host=self._winrm_host)
# yes, we're double-encoding over the wire in this case- we want to ensure that the data shipped to the end PS pipeline is still b64-encoded
b64_data = base64.b64encode(out_data) + b'\r\n'
# cough up the data, as well as an indicator if this is the last chunk so winrm_send knows to set the End signal
yield b64_data, (in_file.tell() == in_size)
if offset == 0: # empty file, return an empty buffer + eof to close it
yield "", True
def put_file(self, in_path, out_path):
super(Connection, self).put_file(in_path, out_path)
out_path = self._shell._unquote(out_path)
display.vvv('PUT "%s" TO "%s"' % (in_path, out_path), host=self._winrm_host)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound('file or module does not exist: "%s"' % to_native(in_path))
script_template = u'''
begin {{
$path = '{0}'
$DebugPreference = "Continue"
$ErrorActionPreference = "Stop"
Set-StrictMode -Version 2
$fd = [System.IO.File]::Create($path)
$sha1 = [System.Security.Cryptography.SHA1CryptoServiceProvider]::Create()
$bytes = @() #initialize for empty file case
}}
process {{
$bytes = [System.Convert]::FromBase64String($input)
$sha1.TransformBlock($bytes, 0, $bytes.Length, $bytes, 0) | Out-Null
$fd.Write($bytes, 0, $bytes.Length)
}}
end {{
$sha1.TransformFinalBlock($bytes, 0, 0) | Out-Null
$hash = [System.BitConverter]::ToString($sha1.Hash).Replace("-", "").ToLowerInvariant()
$fd.Close()
Write-Output "{{""sha1"":""$hash""}}"
}}
'''
script = script_template.format(self._shell._escape(out_path))
cmd_parts = self._shell._encode_script(script, as_list=True, strict_mode=False, preserve_rc=False)
result = self._winrm_exec(cmd_parts[0], cmd_parts[1:], stdin_iterator=self._put_file_stdin_iterator(in_path, out_path))
# TODO: improve error handling
if result.status_code != 0:
raise AnsibleError(to_native(result.std_err))
try:
put_output = json.loads(result.std_out)
except ValueError:
# stdout does not contain a valid response
stderr = to_bytes(result.std_err, encoding='utf-8')
if stderr.startswith(b"#< CLIXML"):
stderr = _parse_clixml(stderr)
raise AnsibleError('winrm put_file failed; \nstdout: %s\nstderr %s' % (to_native(result.std_out), to_native(stderr)))
remote_sha1 = put_output.get("sha1")
if not remote_sha1:
raise AnsibleError("Remote sha1 was not returned")
local_sha1 = secure_hash(in_path)
if not remote_sha1 == local_sha1:
raise AnsibleError("Remote sha1 hash {0} does not match local hash {1}".format(to_native(remote_sha1), to_native(local_sha1)))
def fetch_file(self, in_path, out_path):
super(Connection, self).fetch_file(in_path, out_path)
in_path = self._shell._unquote(in_path)
out_path = out_path.replace('\\', '/')
# consistent with other connection plugins, we assume the caller has created the target dir
display.vvv('FETCH "%s" TO "%s"' % (in_path, out_path), host=self._winrm_host)
buffer_size = 2**19 # 0.5MB chunks
out_file = None
try:
offset = 0
while True:
try:
script = '''
$path = '%(path)s'
If (Test-Path -Path $path -PathType Leaf)
{
$buffer_size = %(buffer_size)d
$offset = %(offset)d
$stream = New-Object -TypeName IO.FileStream($path, [IO.FileMode]::Open, [IO.FileAccess]::Read, [IO.FileShare]::ReadWrite)
$stream.Seek($offset, [System.IO.SeekOrigin]::Begin) > $null
$buffer = New-Object -TypeName byte[] $buffer_size
$bytes_read = $stream.Read($buffer, 0, $buffer_size)
if ($bytes_read -gt 0) {
$bytes = $buffer[0..($bytes_read - 1)]
[System.Convert]::ToBase64String($bytes)
}
$stream.Close() > $null
}
ElseIf (Test-Path -Path $path -PathType Container)
{
Write-Host "[DIR]";
}
Else
{
Write-Error "$path does not exist";
Exit 1;
}
''' % dict(buffer_size=buffer_size, path=self._shell._escape(in_path), offset=offset)
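# NB: the script above tests the path with Test-Path -Path, which expands
# PowerShell wildcard metacharacters such as '[', ']', '*' and '?'; a
# literal filename like new[12].txt is therefore reported as missing,
# which the -LiteralPath switch described in this issue avoids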
display.vvvvv('WINRM FETCH "%s" to "%s" (offset=%d)' % (in_path, out_path, offset), host=self._winrm_host)
cmd_parts = self._shell._encode_script(script, as_list=True, preserve_rc=False)
result = self._winrm_exec(cmd_parts[0], cmd_parts[1:])
if result.status_code != 0:
raise IOError(to_native(result.std_err))
if result.std_out.strip() == '[DIR]':
data = None
else:
data = base64.b64decode(result.std_out.strip())
if data is None:
break
else:
if not out_file:
# If out_path is a directory and we're expecting a file, bail out now.
if os.path.isdir(to_bytes(out_path, errors='surrogate_or_strict')):
break
out_file = open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb')
out_file.write(data)
if len(data) < buffer_size:
break
offset += len(data)
except Exception:
traceback.print_exc()
raise AnsibleError('failed to transfer file to "%s"' % to_native(out_path))
finally:
if out_file:
out_file.close()
def close(self):
if self.protocol and self.shell_id:
display.vvvvv('WINRM CLOSE SHELL: %s' % self.shell_id, host=self._winrm_host)
self.protocol.close_shell(self.shell_id)
self.shell_id = None
self.protocol = None
self._connected = False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,128 |
Module fetch fails on windows host with wildcards in filename
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Using fetch to get files from Windows hosts fails if the filename can be interpreted as a PowerShell wildcard, for example "new[12].txt".
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
connection winrm.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.4
config file = /home/aniess/ansible/inhouse/ansible.cfg
configured module search path = ['/home/aniess/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/aniess/.local/lib/python3.6/site-packages/ansible
executable location = /home/aniess/.local/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
CACHE_PLUGIN(/home/aniess/ansible/inhouse/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/aniess/ansible/inhouse/ansible.cfg) = ./tmp
DEFAULT_GATHERING(/home/aniess/ansible/inhouse/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/aniess/ansible/inhouse/ansible.cfg) = ['/home/aniess/ansible/inhouse/inventory/inventory.yml']
DEFAULT_STDOUT_CALLBACK(/home/aniess/ansible/inhouse/ansible.cfg) = yaml
DEFAULT_VAULT_PASSWORD_FILE(/home/aniess/ansible/inhouse/ansible.cfg) = /home/aniess/ansible/inhouse/pw_vault_prod
DISPLAY_ARGS_TO_STDOUT(/home/aniess/ansible/inhouse/ansible.cfg) = True
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ansible host is Ubuntu 18.04. Target is Windows Server 2019 (1809)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a file named "C:\www\new[12].txt" on the Windows host and run the task below
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Fetch test
fetch:
src: C:\www\new[12].txt
dest: test
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
File is retrieved from Windows host
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Ansible aborts with the error below.
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Fetch test src=C:\www\new[12].txt, dest=test] *********************************************************************************************************************
Traceback (most recent call last):
File "/home/aniess/.local/lib/python3.6/site-packages/ansible/plugins/connection/winrm.py", line 682, in fetch_file
raise IOError(to_native(result.std_err))
OSError: #< CLIXML
<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj><Obj S="progress" RefId="1"><TNRef RefId="0" /><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj><S S="Error">Set-StrictMode -Version Latest_x000D__x000A_</S><S S="Error">$path = 'C:\www\new[12].txt'_x000D__x000A_</S><S S="Error">If (Test-Path -Path $path -PathType Leaf)_x000D__x000A_</S><S S="Error">{_x000D__x000A_</S><S S="Error">$buffer_size = 524288_x000D__x000A_</S><S S="Error">$offset = 0_x000D__x000A_</S><S S="Error">$stream = New-Object -TypeName IO.FileStream($path, [IO.FileMode]::Open, [IO.FileAccess]::Read, _x000D__x000A_</S><S S="Error">[IO.FileShare]::ReadWrite)_x000D__x000A_</S><S S="Error">$stream.Seek($offset, [System.IO.SeekOrigin]::Begin) > $null_x000D__x000A_</S><S S="Error">$buffer = New-Object -TypeName byte[] $buffer_size_x000D__x000A_</S><S S="Error">$bytes_read = $stream.Read($buffer, 0, $buffer_size)_x000D__x000A_</S><S S="Error">if ($bytes_read -gt 0) {_x000D__x000A_</S><S S="Error">$bytes = $buffer[0..($bytes_read - 1)]_x000D__x000A_</S><S S="Error">[System.Convert]::ToBase64String($bytes)_x000D__x000A_</S><S S="Error">}_x000D__x000A_</S><S S="Error">$stream.Close() > $null_x000D__x000A_</S><S S="Error">}_x000D__x000A_</S><S S="Error">ElseIf (Test-Path -Path $path -PathType Container)_x000D__x000A_</S><S S="Error">{_x000D__x000A_</S><S S="Error">Write-Host "[DIR]";_x000D__x000A_</S><S S="Error">}_x000D__x000A_</S><S S="Error">Else_x000D__x000A_</S><S S="Error">{_x000D__x000A_</S><S S="Error">Write-Error "$path does not exist";_x000D__x000A_</S><S S="Error">Exit 1;_x000D__x000A_</S><S S="Error">} : C:\www\new[12].txt does not exist_x000D__x000A_</S><S S="Error"> + CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException_x000D__x000A_</S><S S="Error"> + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException_x000D__x000A_</S><S S="Error"> _x000D__x000A_</S></Objs>
fatal: [www]: FAILED! =>
msg: failed to transfer file to "/home/aniess/ansible/inhouse/test/www/C:/www/new[12].txt"
```
##### FIX
Using -LiteralPath to prevent Test-Path from evaluating wildcards fixes the issue for me:
``` diff
diff --git a/lib/ansible/plugins/connection/winrm.py b/lib/ansible/plugins/connection/winrm.py
index 6ab6ca7bc4..85de3fad2e 100644
--- a/lib/ansible/plugins/connection/winrm.py
+++ b/lib/ansible/plugins/connection/winrm.py
@@ -646,7 +646,7 @@ class Connection(ConnectionBase):
try:
script = '''
$path = "%(path)s"
- If (Test-Path -Path $path -PathType Leaf)
+ If (Test-Path -LiteralPath $path -PathType Leaf)
{
$buffer_size = %(buffer_size)d
$offset = %(offset)d
```
|
https://github.com/ansible/ansible/issues/73128
|
https://github.com/ansible/ansible/pull/74723
|
7ef8e0e102388ae422b214eccffc381deeecadf1
|
b576f0cda7aad938d1eab032608a79a30a6a4968
| 2021-01-06T15:06:41Z |
python
| 2023-05-09T22:58:22Z |
test/integration/targets/win_fetch/tasks/main.yml
|
# test code for the fetch module when using winrm connection
# (c) 2014, Chris Church <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
- name: define host-specific host_output_dir
set_fact:
host_output_dir: "{{ output_dir }}/{{ inventory_hostname }}"
- name: clean out the test directory
file: name={{ host_output_dir|mandatory }} state=absent
delegate_to: localhost
run_once: true
- name: create the test directory
file: name={{ host_output_dir }} state=directory
delegate_to: localhost
run_once: true
- name: fetch a small file
fetch: src="C:/Windows/win.ini" dest={{ host_output_dir }}
register: fetch_small
- name: check fetch small result
assert:
that:
- "fetch_small.changed"
- name: check file created by fetch small
stat: path={{ fetch_small.dest }}
delegate_to: localhost
register: fetch_small_stat
- name: verify fetched small file exists locally
assert:
that:
- "fetch_small_stat.stat.exists"
- "fetch_small_stat.stat.isreg"
- "fetch_small_stat.stat.checksum == fetch_small.checksum"
- name: fetch the same small file
fetch: src="C:/Windows/win.ini" dest={{ host_output_dir }}
register: fetch_small_again
- name: check fetch small result again
assert:
that:
- "not fetch_small_again.changed"
- name: fetch a small file to flat namespace
fetch: src="C:/Windows/win.ini" dest="{{ host_output_dir }}/" flat=yes
register: fetch_flat
- name: check fetch flat result
assert:
that:
- "fetch_flat.changed"
- name: check file created by fetch flat
stat: path="{{ host_output_dir }}/win.ini"
delegate_to: localhost
register: fetch_flat_stat
- name: verify fetched file exists locally in host_output_dir
assert:
that:
- "fetch_flat_stat.stat.exists"
- "fetch_flat_stat.stat.isreg"
- "fetch_flat_stat.stat.checksum == fetch_flat.checksum"
#- name: fetch a small file to flat directory (without trailing slash)
# fetch: src="C:/Windows/win.ini" dest="{{ host_output_dir }}" flat=yes
# register: fetch_flat_dir
#- name: check fetch flat to directory result
# assert:
# that:
# - "fetch_flat_dir is not changed"
- name: fetch a large binary file
fetch: src="C:/Windows/explorer.exe" dest={{ host_output_dir }}
register: fetch_large
- name: check fetch large binary file result
assert:
that:
- "fetch_large.changed"
- name: check file created by fetch large binary
stat: path={{ fetch_large.dest }}
delegate_to: localhost
register: fetch_large_stat
- name: verify fetched large file exists locally
assert:
that:
- "fetch_large_stat.stat.exists"
- "fetch_large_stat.stat.isreg"
- "fetch_large_stat.stat.checksum == fetch_large.checksum"
- name: fetch a large binary file again
fetch: src="C:/Windows/explorer.exe" dest={{ host_output_dir }}
register: fetch_large_again
- name: check fetch large binary file result again
assert:
that:
- "not fetch_large_again.changed"
- name: fetch a small file using backslashes in src path
fetch: src="C:\\Windows\\system.ini" dest={{ host_output_dir }}
register: fetch_small_bs
- name: check fetch small result with backslashes
assert:
that:
- "fetch_small_bs.changed"
- name: check file created by fetch small with backslashes
stat: path={{ fetch_small_bs.dest }}
delegate_to: localhost
register: fetch_small_bs_stat
- name: verify fetched small file with backslashes exists locally
assert:
that:
- "fetch_small_bs_stat.stat.exists"
- "fetch_small_bs_stat.stat.isreg"
- "fetch_small_bs_stat.stat.checksum == fetch_small_bs.checksum"
- name: attempt to fetch a non-existent file - do not fail on missing
fetch: src="C:/this_file_should_not_exist.txt" dest={{ host_output_dir }} fail_on_missing=no
register: fetch_missing_nofail
- name: check fetch missing no fail result
assert:
that:
- "fetch_missing_nofail is not failed"
- "fetch_missing_nofail.msg"
- "fetch_missing_nofail is not changed"
- name: attempt to fetch a non-existent file - fail on missing
fetch: src="~/this_file_should_not_exist.txt" dest={{ host_output_dir }} fail_on_missing=yes
register: fetch_missing
ignore_errors: true
- name: check fetch missing with failure
assert:
that:
- "fetch_missing is failed"
- "fetch_missing.msg"
- "fetch_missing is not changed"
- name: attempt to fetch a non-existent file - fail on missing implicit
fetch: src="~/this_file_should_not_exist.txt" dest={{ host_output_dir }}
register: fetch_missing_implicit
ignore_errors: true
- name: check fetch missing with failure on implicit
assert:
that:
- "fetch_missing_implicit is failed"
- "fetch_missing_implicit.msg"
- "fetch_missing_implicit is not changed"
- name: attempt to fetch a directory
fetch: src="C:\\Windows" dest={{ host_output_dir }}
register: fetch_dir
ignore_errors: true
- name: check fetch directory result
assert:
that:
# Doesn't fail anymore, only returns a message.
- "fetch_dir is not changed"
- "fetch_dir.msg"
- name: create file with special characters
raw: Set-Content -LiteralPath '{{ remote_tmp_dir }}\abc$not var''quote‘' -Value 'abc'
- name: fetch file with special characters
fetch:
src: '{{ remote_tmp_dir }}\abc$not var''quote‘'
dest: '{{ host_output_dir }}/'
flat: yes
register: fetch_special_file
- name: get content of fetched file
command: cat {{ (host_output_dir ~ "/abc$not var'quote‘") | quote }}
register: fetch_special_file_actual
delegate_to: localhost
- name: assert fetch file with special characters
assert:
that:
- fetch_special_file is changed
- fetch_special_file.checksum == '34d4150adc3347f1dd8ce19fdf65b74d971ab602'
- fetch_special_file.dest == host_output_dir + "/abc$not var'quote‘"
- fetch_special_file_actual.stdout == 'abc'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,808 |
test_importlib_resources fails with ansible_collections installed
|
### Summary
I'm packaging ansible-core for OpenIndiana. During construction of the 2.15.0 package I noticed that the `test_importlib_resources` test fails if ansible/ansible_collections is already installed.
### Issue Type
Bug Report
### Component Name
test_collection_loader.py
### Ansible Version
```console
2.15.0
```
### Configuration
```console
NA
```
### OS / Environment
OpenIndiana Hipster
### Steps to Reproduce
Run tests in a working directory with `ansible` already installed.
### Expected Results
All tests pass.
### Actual Results
```console
___________________________ test_importlib_resources ___________________________
[gw2] sunos5 -- Python 3.9.16 /usr/bin/python3.9
@pytest.mark.skipif(not PY3, reason='importlib.resources only supported for py3')
def test_importlib_resources():
if sys.version_info < (3, 10):
from importlib_resources import files
else:
from importlib.resources import files
from pathlib import Path
f = get_default_finder()
reset_collections_loader_state(f)
ansible_collections_ns = files('ansible_collections')
ansible_ns = files('ansible_collections.ansible')
testns = files('ansible_collections.testns')
testcoll = files('ansible_collections.testns.testcoll')
testcoll2 = files('ansible_collections.testns.testcoll2')
module_utils = files('ansible_collections.testns.testcoll.plugins.module_utils')
assert isinstance(ansible_collections_ns, _AnsibleNSTraversable)
assert isinstance(ansible_ns, _AnsibleNSTraversable)
assert isinstance(testcoll, Path)
assert isinstance(module_utils, Path)
assert ansible_collections_ns.is_dir()
assert ansible_ns.is_dir()
assert testcoll.is_dir()
assert module_utils.is_dir()
first_path = Path(default_test_collection_paths[0])
second_path = Path(default_test_collection_paths[1])
testns_paths = []
ansible_ns_paths = []
for path in default_test_collection_paths[:2]:
ansible_ns_paths.append(Path(path) / 'ansible_collections' / 'ansible')
testns_paths.append(Path(path) / 'ansible_collections' / 'testns')
assert testns._paths == testns_paths
> assert ansible_ns._paths == ansible_ns_paths
E AssertionError: assert [PosixPath('/...ons/ansible')] == [PosixPath('/...ons/ansible')]
E Left contains one more item: PosixPath('/usr/lib/python3.9/vendor-packages/ansible_collections/ansible')
E Full diff:
E [
E PosixPath('$(BUILD_DIR)/test/units/utils/collection_loader/fixtures/collections/ansible_collections/ansible'),
E PosixPath('$(BUILD_DIR)/test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/ansible'),
E + PosixPath('/usr/lib/python3.9/vendor-packages/ansible_collections/ansible'),
E ]
test/units/utils/collection_loader/test_collection_loader.py:868: AssertionError
```
The package is being built in a working directory - `$(BUILD_DIR)` - while there is already [ansible_collections](https://pypi.org/project/ansible/) and a previous version of `ansible-core` (version 2.14.5) installed in `/usr/lib/python3.9/vendor-packages` via the package manager.
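A rough way to see the isolation problem (the helper below is an assumption for illustration, not the change made in the linked PR): the collection finder appends any `sys.path` entry that contains an `ansible_collections` directory to its search paths, so a system-wide install leaks into the fixture-only expectations of the test. Masking those entries restores isolation:
```python
import os
import sys

# hypothetical helper: drop sys.path entries that carry a system-wide
# ansible_collections tree so only the test fixture paths remain visible
def masked_sys_path():
    return [
        p for p in sys.path
        if not os.path.isdir(os.path.join(p, 'ansible_collections'))
    ]

# e.g. wrap the finder setup in the test:
# with unittest.mock.patch.object(sys, 'path', masked_sys_path()):
#     f = get_default_finder()
#     reset_collections_loader_state(f)
```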
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80808
|
https://github.com/ansible/ansible/pull/80812
|
c1f2a9ea6c7533d761b8c4bf31397d86d84ed997
|
2ba24957dd373ef191455b34058ba7f65705cfd3
| 2023-05-16T09:03:23Z |
python
| 2023-05-16T17:29:46Z |
test/units/utils/collection_loader/test_collection_loader.py
|
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import pkgutil
import pytest
import re
import sys
from ansible.module_utils.six import PY3, string_types
from ansible.module_utils.compat.importlib import import_module
from ansible.modules import ping as ping_module
from ansible.utils.collection_loader import AnsibleCollectionConfig, AnsibleCollectionRef
from ansible.utils.collection_loader._collection_finder import (
_AnsibleCollectionFinder, _AnsibleCollectionLoader, _AnsibleCollectionNSPkgLoader, _AnsibleCollectionPkgLoader,
_AnsibleCollectionPkgLoaderBase, _AnsibleCollectionRootPkgLoader, _AnsibleNSTraversable, _AnsiblePathHookFinder,
_get_collection_name_from_path, _get_collection_role_path, _get_collection_metadata, _iter_modules_impl
)
from ansible.utils.collection_loader._collection_config import _EventSource
from unittest.mock import MagicMock, NonCallableMagicMock, patch
# fixture to ensure we always clean up the import stuff when we're done
@pytest.fixture(autouse=True, scope='function')
def teardown(*args, **kwargs):
yield
reset_collections_loader_state()
# BEGIN STANDALONE TESTS - these exercise behaviors of the individual components without the import machinery
@pytest.mark.skipif(not PY3, reason='Testing Python 2 codepath (find_module) on Python 3')
def test_find_module_py3():
dir_to_a_file = os.path.dirname(ping_module.__file__)
path_hook_finder = _AnsiblePathHookFinder(_AnsibleCollectionFinder(), dir_to_a_file)
# setuptools may fall back to find_module on Python 3 if find_spec returns None
# see https://github.com/pypa/setuptools/pull/2918
assert path_hook_finder.find_spec('missing') is None
assert path_hook_finder.find_module('missing') is None
def test_finder_setup():
# ensure scalar path is listified
f = _AnsibleCollectionFinder(paths='/bogus/bogus')
assert isinstance(f._n_collection_paths, list)
# ensure sys.path paths that have an ansible_collections dir are added to the end of the collections paths
with patch.object(sys, 'path', ['/bogus', default_test_collection_paths[1], '/morebogus', default_test_collection_paths[0]]):
with patch('os.path.isdir', side_effect=lambda x: b'bogus' not in x):
f = _AnsibleCollectionFinder(paths=['/explicit', '/other'])
assert f._n_collection_paths == ['/explicit', '/other', default_test_collection_paths[1], default_test_collection_paths[0]]
configured_paths = ['/bogus']
playbook_paths = ['/playbookdir']
with patch.object(sys, 'path', ['/bogus', '/playbookdir']) and patch('os.path.isdir', side_effect=lambda x: b'bogus' in x):
f = _AnsibleCollectionFinder(paths=configured_paths)
assert f._n_collection_paths == configured_paths
f.set_playbook_paths(playbook_paths)
assert f._n_collection_paths == extend_paths(playbook_paths, 'collections') + configured_paths
# ensure scalar playbook_paths gets listified
f.set_playbook_paths(playbook_paths[0])
assert f._n_collection_paths == extend_paths(playbook_paths, 'collections') + configured_paths
def test_finder_not_interested():
f = get_default_finder()
assert f.find_module('nothanks') is None
assert f.find_module('nothanks.sub', path=['/bogus/dir']) is None
def test_finder_ns():
# ensure we can still load ansible_collections and ansible_collections.ansible when they don't exist on disk
f = _AnsibleCollectionFinder(paths=['/bogus/bogus'])
loader = f.find_module('ansible_collections')
assert isinstance(loader, _AnsibleCollectionRootPkgLoader)
loader = f.find_module('ansible_collections.ansible', path=['/bogus/bogus'])
assert isinstance(loader, _AnsibleCollectionNSPkgLoader)
f = get_default_finder()
loader = f.find_module('ansible_collections')
assert isinstance(loader, _AnsibleCollectionRootPkgLoader)
# path is not allowed for top-level
with pytest.raises(ValueError):
f.find_module('ansible_collections', path=['whatever'])
# path is required for subpackages
with pytest.raises(ValueError):
f.find_module('ansible_collections.whatever', path=None)
paths = [os.path.join(p, 'ansible_collections/nonexistns') for p in default_test_collection_paths]
# test missing
loader = f.find_module('ansible_collections.nonexistns', paths)
assert loader is None
# keep these up top to make sure the loader install/remove are working, since we rely on them heavily in the tests
def test_loader_remove():
fake_mp = [MagicMock(), _AnsibleCollectionFinder(), MagicMock(), _AnsibleCollectionFinder()]
fake_ph = [MagicMock().m1, MagicMock().m2, _AnsibleCollectionFinder()._ansible_collection_path_hook, NonCallableMagicMock]
# must nest until 2.6 compilation is totally donezo
with patch.object(sys, 'meta_path', fake_mp):
with patch.object(sys, 'path_hooks', fake_ph):
_AnsibleCollectionFinder()._remove()
assert len(sys.meta_path) == 2
# no AnsibleCollectionFinders on the meta path after remove is called
assert all((not isinstance(mpf, _AnsibleCollectionFinder) for mpf in sys.meta_path))
assert len(sys.path_hooks) == 3
# none of the remaining path hooks should point at an AnsibleCollectionFinder
assert all((not isinstance(ph.__self__, _AnsibleCollectionFinder) for ph in sys.path_hooks if hasattr(ph, '__self__')))
assert AnsibleCollectionConfig.collection_finder is None
def test_loader_install():
fake_mp = [MagicMock(), _AnsibleCollectionFinder(), MagicMock(), _AnsibleCollectionFinder()]
fake_ph = [MagicMock().m1, MagicMock().m2, _AnsibleCollectionFinder()._ansible_collection_path_hook, NonCallableMagicMock]
# must nest until 2.6 compilation is totally donezo
with patch.object(sys, 'meta_path', fake_mp):
with patch.object(sys, 'path_hooks', fake_ph):
f = _AnsibleCollectionFinder()
f._install()
assert len(sys.meta_path) == 3 # should have removed the existing ACFs and installed a new one
assert sys.meta_path[0] is f # at the front
# the rest of the meta_path should not be AnsibleCollectionFinders
assert all((not isinstance(mpf, _AnsibleCollectionFinder) for mpf in sys.meta_path[1:]))
assert len(sys.path_hooks) == 4 # should have removed the existing ACF path hooks and installed a new one
# the first path hook should be ours, make sure it's pointing at the right instance
assert hasattr(sys.path_hooks[0], '__self__') and sys.path_hooks[0].__self__ is f
# the rest of the path_hooks should not point at an AnsibleCollectionFinder
assert all((not isinstance(ph.__self__, _AnsibleCollectionFinder) for ph in sys.path_hooks[1:] if hasattr(ph, '__self__')))
assert AnsibleCollectionConfig.collection_finder is f
with pytest.raises(ValueError):
AnsibleCollectionConfig.collection_finder = f
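# --- illustrative sketch (not part of the original file) ---
# The install semantics asserted above reduce to: drop any previously
# installed collection finders, then claim the front of both lookup chains
# so this finder is consulted before the stdlib import machinery:
#
#     finder._remove()
#     sys.meta_path.insert(0, finder)
#     sys.path_hooks.insert(0, finder._ansible_collection_path_hook)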
def test_finder_coll():
f = get_default_finder()
tests = [
{'name': 'ansible_collections.testns.testcoll', 'test_paths': [default_test_collection_paths]},
{'name': 'ansible_collections.ansible.builtin', 'test_paths': [['/bogus'], default_test_collection_paths]},
]
# ensure finder works for legit paths and bogus paths
for test_dict in tests:
# splat the dict values to our locals
globals().update(test_dict)
parent_pkg = name.rpartition('.')[0]
for paths in test_paths:
paths = [os.path.join(p, parent_pkg.replace('.', '/')) for p in paths]
loader = f.find_module(name, path=paths)
assert isinstance(loader, _AnsibleCollectionPkgLoader)
def test_root_loader_not_interested():
with pytest.raises(ImportError):
_AnsibleCollectionRootPkgLoader('not_ansible_collections_toplevel', path_list=[])
with pytest.raises(ImportError):
_AnsibleCollectionRootPkgLoader('ansible_collections.somens', path_list=['/bogus'])
def test_root_loader():
name = 'ansible_collections'
# ensure this works even when ansible_collections doesn't exist on disk
for paths in [], default_test_collection_paths:
if name in sys.modules:
del sys.modules[name]
loader = _AnsibleCollectionRootPkgLoader(name, paths)
assert repr(loader).startswith('_AnsibleCollectionRootPkgLoader(path=')
module = loader.load_module(name)
assert module.__name__ == name
assert module.__path__ == [p for p in extend_paths(paths, name) if os.path.isdir(p)]
# even if the dir exists somewhere, this loader doesn't support get_data, so make __file__ a non-file
assert module.__file__ == '<ansible_synthetic_collection_package>'
assert module.__package__ == name
assert sys.modules.get(name) == module
def test_nspkg_loader_not_interested():
with pytest.raises(ImportError):
_AnsibleCollectionNSPkgLoader('not_ansible_collections_toplevel.something', path_list=[])
with pytest.raises(ImportError):
_AnsibleCollectionNSPkgLoader('ansible_collections.somens.somecoll', path_list=[])
def test_nspkg_loader_load_module():
# ensure the loader behaves on the toplevel and ansible packages for both legit and missing/bogus paths
for name in ['ansible_collections.ansible', 'ansible_collections.testns']:
parent_pkg = name.partition('.')[0]
module_to_load = name.rpartition('.')[2]
paths = extend_paths(default_test_collection_paths, parent_pkg)
existing_child_paths = [p for p in extend_paths(paths, module_to_load) if os.path.exists(p)]
if name in sys.modules:
del sys.modules[name]
loader = _AnsibleCollectionNSPkgLoader(name, path_list=paths)
assert repr(loader).startswith('_AnsibleCollectionNSPkgLoader(path=')
module = loader.load_module(name)
assert module.__name__ == name
assert isinstance(module.__loader__, _AnsibleCollectionNSPkgLoader)
assert module.__path__ == existing_child_paths
assert module.__package__ == name
assert module.__file__ == '<ansible_synthetic_collection_package>'
assert sys.modules.get(name) == module
def test_collpkg_loader_not_interested():
with pytest.raises(ImportError):
_AnsibleCollectionPkgLoader('not_ansible_collections', path_list=[])
with pytest.raises(ImportError):
_AnsibleCollectionPkgLoader('ansible_collections.ns', path_list=['/bogus/bogus'])
def test_collpkg_loader_load_module():
reset_collections_loader_state()
with patch('ansible.utils.collection_loader.AnsibleCollectionConfig') as p:
for name in ['ansible_collections.ansible.builtin', 'ansible_collections.testns.testcoll']:
parent_pkg = name.rpartition('.')[0]
module_to_load = name.rpartition('.')[2]
paths = extend_paths(default_test_collection_paths, parent_pkg)
existing_child_paths = [p for p in extend_paths(paths, module_to_load) if os.path.exists(p)]
is_builtin = 'ansible.builtin' in name
if name in sys.modules:
del sys.modules[name]
loader = _AnsibleCollectionPkgLoader(name, path_list=paths)
assert repr(loader).startswith('_AnsibleCollectionPkgLoader(path=')
module = loader.load_module(name)
assert module.__name__ == name
assert isinstance(module.__loader__, _AnsibleCollectionPkgLoader)
if is_builtin:
assert module.__path__ == []
else:
assert module.__path__ == [existing_child_paths[0]]
assert module.__package__ == name
if is_builtin:
assert module.__file__ == '<ansible_synthetic_collection_package>'
else:
assert module.__file__.endswith('__synthetic__') and os.path.isdir(os.path.dirname(module.__file__))
assert sys.modules.get(name) == module
assert hasattr(module, '_collection_meta') and isinstance(module._collection_meta, dict)
# FIXME: validate _collection_meta contents match what's on disk (or not)
# if the module has metadata, try loading it with busted metadata
if module._collection_meta:
_collection_finder = import_module('ansible.utils.collection_loader._collection_finder')
with patch.object(_collection_finder, '_meta_yml_to_dict', side_effect=Exception('bang')):
with pytest.raises(Exception) as ex:
_AnsibleCollectionPkgLoader(name, path_list=paths).load_module(name)
assert 'error parsing collection metadata' in str(ex.value)
def test_coll_loader():
with patch('ansible.utils.collection_loader.AnsibleCollectionConfig'):
with pytest.raises(ValueError):
# not a collection
_AnsibleCollectionLoader('ansible_collections')
with pytest.raises(ValueError):
# bogus paths
_AnsibleCollectionLoader('ansible_collections.testns.testcoll', path_list=[])
# FIXME: more
def test_path_hook_setup():
with patch.object(sys, 'path_hooks', []):
found_hook = None
pathhook_exc = None
try:
found_hook = _AnsiblePathHookFinder._get_filefinder_path_hook()
except Exception as phe:
pathhook_exc = phe
if PY3:
assert str(pathhook_exc) == 'need exactly one FileFinder import hook (found 0)'
else:
assert found_hook is None
assert repr(_AnsiblePathHookFinder(object(), '/bogus/path')) == "_AnsiblePathHookFinder(path='/bogus/path')"
def test_path_hook_importerror():
# ensure that AnsiblePathHookFinder.find_module swallows ImportError from path hook delegation on Py3, eg if the delegated
# path hook gets passed a file on sys.path (python36.zip)
reset_collections_loader_state()
path_to_a_file = os.path.join(default_test_collection_paths[0], 'ansible_collections/testns/testcoll/plugins/action/my_action.py')
# it's a bug if the following pops an ImportError...
assert _AnsiblePathHookFinder(_AnsibleCollectionFinder(), path_to_a_file).find_module('foo.bar.my_action') is None
def test_new_or_existing_module():
module_name = 'blar.test.module'
pkg_name = module_name.rpartition('.')[0]
# create new module case
nuke_module_prefix(module_name)
with _AnsibleCollectionPkgLoaderBase._new_or_existing_module(module_name, __package__=pkg_name) as new_module:
# the module we just created should now exist in sys.modules
assert sys.modules.get(module_name) is new_module
assert new_module.__name__ == module_name
# the module should stick since we didn't raise an exception in the contextmgr
assert sys.modules.get(module_name) is new_module
# reuse existing module case
with _AnsibleCollectionPkgLoaderBase._new_or_existing_module(module_name, __attr1__=42, blar='yo') as existing_module:
assert sys.modules.get(module_name) is new_module # should be the same module we created earlier
assert hasattr(existing_module, '__package__') and existing_module.__package__ == pkg_name
assert hasattr(existing_module, '__attr1__') and existing_module.__attr1__ == 42
assert hasattr(existing_module, 'blar') and existing_module.blar == 'yo'
# exception during update existing shouldn't zap existing module from sys.modules
with pytest.raises(ValueError) as ve:
with _AnsibleCollectionPkgLoaderBase._new_or_existing_module(module_name) as existing_module:
err_to_raise = ValueError('bang')
raise err_to_raise
# make sure we got our error
assert ve.value is err_to_raise
# and that the module still exists
assert sys.modules.get(module_name) is existing_module
# test module removal after exception during creation
nuke_module_prefix(module_name)
with pytest.raises(ValueError) as ve:
with _AnsibleCollectionPkgLoaderBase._new_or_existing_module(module_name) as new_module:
err_to_raise = ValueError('bang')
raise err_to_raise
# make sure we got our error
assert ve.value is err_to_raise
# and that the module was removed
assert sys.modules.get(module_name) is None
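# --- illustrative sketch (not part of the original file) ---
# A minimal approximation of the context manager exercised above: publish the
# module in sys.modules immediately so nested imports can find it, and
# unpublish it only when it was newly created and the body raised.
import types
from contextlib import contextmanager

@contextmanager
def _sketch_new_or_existing_module(name, **attrs):
    created = name not in sys.modules
    module = sys.modules.setdefault(name, types.ModuleType(name))
    for attr, value in attrs.items():
        setattr(module, attr, value)
    try:
        yield module
    except Exception:
        if created:
            sys.modules.pop(name, None)
        raise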
def test_iter_modules_impl():
modules_trailer = 'ansible_collections/testns/testcoll/plugins'
modules_pkg_prefix = modules_trailer.replace('/', '.') + '.'
modules_path = os.path.join(default_test_collection_paths[0], modules_trailer)
modules = list(_iter_modules_impl([modules_path], modules_pkg_prefix))
assert modules
assert set([('ansible_collections.testns.testcoll.plugins.action', True),
('ansible_collections.testns.testcoll.plugins.module_utils', True),
('ansible_collections.testns.testcoll.plugins.modules', True)]) == set(modules)
modules_trailer = 'ansible_collections/testns/testcoll/plugins/modules'
modules_pkg_prefix = modules_trailer.replace('/', '.') + '.'
modules_path = os.path.join(default_test_collection_paths[0], modules_trailer)
modules = list(_iter_modules_impl([modules_path], modules_pkg_prefix))
assert modules
assert len(modules) == 1
assert modules[0][0] == 'ansible_collections.testns.testcoll.plugins.modules.amodule' # name
assert modules[0][1] is False # is_pkg
# FIXME: more
# BEGIN IN-CIRCUIT TESTS - these exercise behaviors of the loader when wired up to the import machinery
def test_import_from_collection(monkeypatch):
collection_root = os.path.join(os.path.dirname(__file__), 'fixtures', 'collections')
collection_path = os.path.join(collection_root, 'ansible_collections/testns/testcoll/plugins/module_utils/my_util.py')
# THIS IS UNSTABLE UNDER A DEBUGGER
# the trace we're expecting to be generated when running the code below:
# answer = question()
expected_trace_log = [
(collection_path, 5, 'call'),
(collection_path, 6, 'line'),
(collection_path, 6, 'return'),
]
# define the collection root before any ansible code has been loaded
# otherwise config will have already been loaded and changing the environment will have no effect
monkeypatch.setenv('ANSIBLE_COLLECTIONS_PATH', collection_root)
finder = _AnsibleCollectionFinder(paths=[collection_root])
reset_collections_loader_state(finder)
from ansible_collections.testns.testcoll.plugins.module_utils.my_util import question
original_trace_function = sys.gettrace()
trace_log = []
if original_trace_function:
# enable tracing while preserving the existing trace function (coverage)
def my_trace_function(frame, event, arg):
trace_log.append((frame.f_code.co_filename, frame.f_lineno, event))
# the original trace function expects to have itself set as the trace function
sys.settrace(original_trace_function)
# call the original trace function
original_trace_function(frame, event, arg)
# restore our trace function
sys.settrace(my_trace_function)
return my_trace_function
else:
# no existing trace function, so our trace function is much simpler
def my_trace_function(frame, event, arg):
trace_log.append((frame.f_code.co_filename, frame.f_lineno, event))
return my_trace_function
sys.settrace(my_trace_function)
try:
# run a minimal amount of code while the trace is running
# adding more code here, including use of a context manager, will add more to our trace
answer = question()
finally:
sys.settrace(original_trace_function)
# make sure 'import ... as ...' works on builtin synthetic collections
# the following import is not supported (it tries to find module_utils in ansible.plugins)
# import ansible_collections.ansible.builtin.plugins.module_utils as c1
import ansible_collections.ansible.builtin.plugins.action as c2
import ansible_collections.ansible.builtin.plugins as c3
import ansible_collections.ansible.builtin as c4
import ansible_collections.ansible as c5
import ansible_collections as c6
# make sure 'import ...' works on builtin synthetic collections
import ansible_collections.ansible.builtin.plugins.module_utils
import ansible_collections.ansible.builtin.plugins.action
assert ansible_collections.ansible.builtin.plugins.action == c3.action == c2
import ansible_collections.ansible.builtin.plugins
assert ansible_collections.ansible.builtin.plugins == c4.plugins == c3
import ansible_collections.ansible.builtin
assert ansible_collections.ansible.builtin == c5.builtin == c4
import ansible_collections.ansible
assert ansible_collections.ansible == c6.ansible == c5
import ansible_collections
assert ansible_collections == c6
# make sure 'from ... import ...' works on builtin synthetic collections
from ansible_collections.ansible import builtin
from ansible_collections.ansible.builtin import plugins
assert builtin.plugins == plugins
from ansible_collections.ansible.builtin.plugins import action
from ansible_collections.ansible.builtin.plugins.action import command
assert action.command == command
from ansible_collections.ansible.builtin.plugins.module_utils import basic
from ansible_collections.ansible.builtin.plugins.module_utils.basic import AnsibleModule
assert basic.AnsibleModule == AnsibleModule
# make sure relative imports work from collections code
# these require __package__ to be set correctly
import ansible_collections.testns.testcoll.plugins.module_utils.my_other_util
import ansible_collections.testns.testcoll.plugins.action.my_action
# verify that code loaded from a collection does not inherit __future__ statements from the collection loader
if sys.version_info[0] == 2:
# if the collection code inherits the division future feature from the collection loader this will fail
assert answer == 1
else:
assert answer == 1.5
# verify that the filename and line number reported by the trace is correct
# this makes sure that collection loading preserves file paths and line numbers
assert trace_log == expected_trace_log
def test_eventsource():
es = _EventSource()
# fire when empty should succeed
es.fire(42)
handler1 = MagicMock()
handler2 = MagicMock()
es += handler1
es.fire(99, my_kwarg='blah')
handler1.assert_called_with(99, my_kwarg='blah')
es += handler2
es.fire(123, foo='bar')
handler1.assert_called_with(123, foo='bar')
handler2.assert_called_with(123, foo='bar')
es -= handler2
handler1.reset_mock()
handler2.reset_mock()
es.fire(123, foo='bar')
handler1.assert_called_with(123, foo='bar')
handler2.assert_not_called()
es -= handler1
handler1.reset_mock()
es.fire('blah', kwarg=None)
handler1.assert_not_called()
handler2.assert_not_called()
es -= handler1 # should succeed silently
handler_bang = MagicMock(side_effect=Exception('bang'))
es += handler_bang
with pytest.raises(Exception) as ex:
es.fire(123)
assert 'bang' in str(ex.value)
handler_bang.assert_called_with(123)
with pytest.raises(ValueError):
es += 42
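# --- illustrative sketch (not part of the original file) ---
# The _EventSource behavior verified above amounts to a callable registry
# with operator overloads; a minimal stand-in looks like this.
class _SketchEventSource:
    def __init__(self):
        self._handlers = []

    def __iadd__(self, handler):
        if not callable(handler):
            raise ValueError('handler must be callable')
        self._handlers.append(handler)
        return self

    def __isub__(self, handler):
        if handler in self._handlers:  # removing a missing handler is a no-op
            self._handlers.remove(handler)
        return self

    def fire(self, *args, **kwargs):
        for handler in self._handlers:  # exceptions propagate to the caller
            handler(*args, **kwargs)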
def test_on_collection_load():
finder = get_default_finder()
reset_collections_loader_state(finder)
load_handler = MagicMock()
AnsibleCollectionConfig.on_collection_load += load_handler
m = import_module('ansible_collections.testns.testcoll')
load_handler.assert_called_once_with(collection_name='testns.testcoll', collection_path=os.path.dirname(m.__file__))
_meta = _get_collection_metadata('testns.testcoll')
assert _meta
# FIXME: compare to disk
finder = get_default_finder()
reset_collections_loader_state(finder)
AnsibleCollectionConfig.on_collection_load += MagicMock(side_effect=Exception('bang'))
with pytest.raises(Exception) as ex:
import_module('ansible_collections.testns.testcoll')
assert 'bang' in str(ex.value)
def test_default_collection_config():
finder = get_default_finder()
reset_collections_loader_state(finder)
assert AnsibleCollectionConfig.default_collection is None
AnsibleCollectionConfig.default_collection = 'foo.bar'
assert AnsibleCollectionConfig.default_collection == 'foo.bar'
def test_default_collection_detection():
finder = get_default_finder()
reset_collections_loader_state(finder)
# we're clearly not under a collection path
assert _get_collection_name_from_path('/') is None
# something that looks like a collection path but isn't importable by our finder
assert _get_collection_name_from_path('/foo/ansible_collections/bogusns/boguscoll/bar') is None
# legit, at the top of the collection
live_collection_path = os.path.join(os.path.dirname(__file__), 'fixtures/collections/ansible_collections/testns/testcoll')
assert _get_collection_name_from_path(live_collection_path) == 'testns.testcoll'
# legit, deeper inside the collection
live_collection_deep_path = os.path.join(live_collection_path, 'plugins/modules')
assert _get_collection_name_from_path(live_collection_deep_path) == 'testns.testcoll'
# this one should be hidden by the real testns.testcoll, so should not resolve
masked_collection_path = os.path.join(os.path.dirname(__file__), 'fixtures/collections_masked/ansible_collections/testns/testcoll')
assert _get_collection_name_from_path(masked_collection_path) is None
@pytest.mark.parametrize(
'role_name,collection_list,expected_collection_name,expected_path_suffix',
[
('some_role', ['testns.testcoll', 'ansible.bogus'], 'testns.testcoll', 'testns/testcoll/roles/some_role'),
('testns.testcoll.some_role', ['ansible.bogus', 'testns.testcoll'], 'testns.testcoll', 'testns/testcoll/roles/some_role'),
('testns.testcoll.some_role', [], 'testns.testcoll', 'testns/testcoll/roles/some_role'),
('testns.testcoll.some_role', None, 'testns.testcoll', 'testns/testcoll/roles/some_role'),
('some_role', [], None, None),
('some_role', None, None, None),
])
def test_collection_role_name_location(role_name, collection_list, expected_collection_name, expected_path_suffix):
finder = get_default_finder()
reset_collections_loader_state(finder)
expected_path = None
if expected_path_suffix:
expected_path = os.path.join(os.path.dirname(__file__), 'fixtures/collections/ansible_collections', expected_path_suffix)
found = _get_collection_role_path(role_name, collection_list)
if found:
assert found[0] == role_name.rpartition('.')[2]
assert found[1] == expected_path
assert found[2] == expected_collection_name
else:
assert expected_collection_name is None and expected_path_suffix is None
def test_bogus_imports():
finder = get_default_finder()
reset_collections_loader_state(finder)
# ensure ImportError on known-bogus imports
bogus_imports = ['bogus_toplevel', 'ansible_collections.bogusns', 'ansible_collections.testns.boguscoll',
'ansible_collections.testns.testcoll.bogussub', 'ansible_collections.ansible.builtin.bogussub']
for bogus_import in bogus_imports:
with pytest.raises(ImportError):
import_module(bogus_import)
def test_empty_vs_no_code():
finder = get_default_finder()
reset_collections_loader_state(finder)
from ansible_collections.testns import testcoll # synthetic package with no code on disk
from ansible_collections.testns.testcoll.plugins import module_utils # real package with empty code file
# ensure synthetic packages have no code object at all (prevent bogus coverage entries)
assert testcoll.__loader__.get_source(testcoll.__name__) is None
assert testcoll.__loader__.get_code(testcoll.__name__) is None
# ensure empty package inits do have a code object
assert module_utils.__loader__.get_source(module_utils.__name__) == b''
assert module_utils.__loader__.get_code(module_utils.__name__) is not None
def test_finder_playbook_paths():
finder = get_default_finder()
reset_collections_loader_state(finder)
import ansible_collections
import ansible_collections.ansible
import ansible_collections.testns
# ensure the package modules look like we expect
assert hasattr(ansible_collections, '__path__') and len(ansible_collections.__path__) > 0
assert hasattr(ansible_collections.ansible, '__path__') and len(ansible_collections.ansible.__path__) > 0
assert hasattr(ansible_collections.testns, '__path__') and len(ansible_collections.testns.__path__) > 0
# these shouldn't be visible yet, since we haven't added the playbook dir
with pytest.raises(ImportError):
import ansible_collections.ansible.playbook_adj_other
with pytest.raises(ImportError):
import ansible_collections.testns.playbook_adj_other
assert AnsibleCollectionConfig.playbook_paths == []
playbook_path_fixture_dir = os.path.join(os.path.dirname(__file__), 'fixtures/playbook_path')
# configure the playbook paths
AnsibleCollectionConfig.playbook_paths = [playbook_path_fixture_dir]
# playbook paths go to the front of the line
assert AnsibleCollectionConfig.collection_paths[0] == os.path.join(playbook_path_fixture_dir, 'collections')
# playbook paths should be updated on the existing root ansible_collections path, as well as on the 'ansible' namespace (but no others!)
assert ansible_collections.__path__[0] == os.path.join(playbook_path_fixture_dir, 'collections/ansible_collections')
assert ansible_collections.ansible.__path__[0] == os.path.join(playbook_path_fixture_dir, 'collections/ansible_collections/ansible')
assert all('playbook_path' not in p for p in ansible_collections.testns.__path__)
# should succeed since we fixed up the package path
import ansible_collections.ansible.playbook_adj_other
# should succeed since we didn't import freshns before hacking in the path
import ansible_collections.freshns.playbook_adj_other
# should fail since we've already imported something from this path and didn't fix up its package path
with pytest.raises(ImportError):
import ansible_collections.testns.playbook_adj_other
def test_toplevel_iter_modules():
finder = get_default_finder()
reset_collections_loader_state(finder)
modules = list(pkgutil.iter_modules(default_test_collection_paths, ''))
assert len(modules) == 1
assert modules[0][1] == 'ansible_collections'
def test_iter_modules_namespaces():
finder = get_default_finder()
reset_collections_loader_state(finder)
paths = extend_paths(default_test_collection_paths, 'ansible_collections')
modules = list(pkgutil.iter_modules(paths, 'ansible_collections.'))
assert len(modules) == 2
assert all(m[2] is True for m in modules)
assert all(isinstance(m[0], _AnsiblePathHookFinder) for m in modules)
assert set(['ansible_collections.testns', 'ansible_collections.ansible']) == set(m[1] for m in modules)
def test_collection_get_data():
finder = get_default_finder()
reset_collections_loader_state(finder)
# something that's there
d = pkgutil.get_data('ansible_collections.testns.testcoll', 'plugins/action/my_action.py')
assert b'hello from my_action.py' in d
# something that's not there
d = pkgutil.get_data('ansible_collections.testns.testcoll', 'bogus/bogus')
assert d is None
with pytest.raises(ValueError):
plugins_pkg = import_module('ansible_collections.ansible.builtin')
assert not os.path.exists(os.path.dirname(plugins_pkg.__file__))
d = pkgutil.get_data('ansible_collections.ansible.builtin', 'plugins/connection/local.py')
@pytest.mark.parametrize(
'ref,ref_type,expected_collection,expected_subdirs,expected_resource,expected_python_pkg_name',
[
('ns.coll.myaction', 'action', 'ns.coll', '', 'myaction', 'ansible_collections.ns.coll.plugins.action'),
('ns.coll.subdir1.subdir2.myaction', 'action', 'ns.coll', 'subdir1.subdir2', 'myaction', 'ansible_collections.ns.coll.plugins.action.subdir1.subdir2'),
('ns.coll.myrole', 'role', 'ns.coll', '', 'myrole', 'ansible_collections.ns.coll.roles.myrole'),
('ns.coll.subdir1.subdir2.myrole', 'role', 'ns.coll', 'subdir1.subdir2', 'myrole', 'ansible_collections.ns.coll.roles.subdir1.subdir2.myrole'),
])
def test_fqcr_parsing_valid(ref, ref_type, expected_collection,
expected_subdirs, expected_resource, expected_python_pkg_name):
assert AnsibleCollectionRef.is_valid_fqcr(ref, ref_type)
r = AnsibleCollectionRef.from_fqcr(ref, ref_type)
assert r.collection == expected_collection
assert r.subdirs == expected_subdirs
assert r.resource == expected_resource
assert r.n_python_package_name == expected_python_pkg_name
r = AnsibleCollectionRef.try_parse_fqcr(ref, ref_type)
assert r.collection == expected_collection
assert r.subdirs == expected_subdirs
assert r.resource == expected_resource
assert r.n_python_package_name == expected_python_pkg_name
@pytest.mark.parametrize(
('fqcn', 'expected'),
(
('ns1.coll2', True),
('ns1#coll2', False),
('def.coll3', False),
('ns4.return', False),
('assert.this', False),
('import.that', False),
('.that', False),
('this.', False),
('.', False),
('', False),
),
)
def test_fqcn_validation(fqcn, expected):
"""Vefiry that is_valid_collection_name validates FQCN correctly."""
assert AnsibleCollectionRef.is_valid_collection_name(fqcn) is expected
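# --- illustrative sketch (not part of the original file) ---
# The FQCN rules checked above amount to: exactly two dot-separated parts,
# each a valid Python identifier that is not a reserved keyword.
import keyword

def _sketch_is_valid_collection_name(fqcn):
    parts = fqcn.split('.')
    return len(parts) == 2 and all(
        part.isidentifier() and not keyword.iskeyword(part) for part in parts
    )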
@pytest.mark.parametrize(
'ref,ref_type,expected_error_type,expected_error_expression',
[
('no_dots_at_all_action', 'action', ValueError, 'is not a valid collection reference'),
('no_nscoll.myaction', 'action', ValueError, 'is not a valid collection reference'),
('no_nscoll%myaction', 'action', ValueError, 'is not a valid collection reference'),
('ns.coll.myaction', 'bogus', ValueError, 'invalid collection ref_type'),
])
def test_fqcr_parsing_invalid(ref, ref_type, expected_error_type, expected_error_expression):
assert not AnsibleCollectionRef.is_valid_fqcr(ref, ref_type)
with pytest.raises(expected_error_type) as curerr:
AnsibleCollectionRef.from_fqcr(ref, ref_type)
assert re.search(expected_error_expression, str(curerr.value))
r = AnsibleCollectionRef.try_parse_fqcr(ref, ref_type)
assert r is None
@pytest.mark.parametrize(
'name,subdirs,resource,ref_type,python_pkg_name',
[
('ns.coll', None, 'res', 'doc_fragments', 'ansible_collections.ns.coll.plugins.doc_fragments'),
('ns.coll', 'subdir1', 'res', 'doc_fragments', 'ansible_collections.ns.coll.plugins.doc_fragments.subdir1'),
('ns.coll', 'subdir1.subdir2', 'res', 'action', 'ansible_collections.ns.coll.plugins.action.subdir1.subdir2'),
])
def test_collectionref_components_valid(name, subdirs, resource, ref_type, python_pkg_name):
x = AnsibleCollectionRef(name, subdirs, resource, ref_type)
assert x.collection == name
if subdirs:
assert x.subdirs == subdirs
else:
assert x.subdirs == ''
assert x.resource == resource
assert x.ref_type == ref_type
assert x.n_python_package_name == python_pkg_name
@pytest.mark.parametrize(
'dirname,expected_result',
[
('become_plugins', 'become'),
('cache_plugins', 'cache'),
('connection_plugins', 'connection'),
('library', 'modules'),
('filter_plugins', 'filter'),
('bogus_plugins', ValueError),
(None, ValueError)
]
)
def test_legacy_plugin_dir_to_plugin_type(dirname, expected_result):
if isinstance(expected_result, string_types):
assert AnsibleCollectionRef.legacy_plugin_dir_to_plugin_type(dirname) == expected_result
else:
with pytest.raises(expected_result):
AnsibleCollectionRef.legacy_plugin_dir_to_plugin_type(dirname)
@pytest.mark.parametrize(
'name,subdirs,resource,ref_type,expected_error_type,expected_error_expression',
[
('bad_ns', '', 'resource', 'action', ValueError, 'invalid collection name'),
('ns.coll.', '', 'resource', 'action', ValueError, 'invalid collection name'),
('ns.coll', 'badsubdir#', 'resource', 'action', ValueError, 'invalid subdirs entry'),
('ns.coll', 'badsubdir.', 'resource', 'action', ValueError, 'invalid subdirs entry'),
('ns.coll', '.badsubdir', 'resource', 'action', ValueError, 'invalid subdirs entry'),
('ns.coll', '', 'resource', 'bogus', ValueError, 'invalid collection ref_type'),
])
def test_collectionref_components_invalid(name, subdirs, resource, ref_type, expected_error_type, expected_error_expression):
with pytest.raises(expected_error_type) as curerr:
AnsibleCollectionRef(name, subdirs, resource, ref_type)
assert re.search(expected_error_expression, str(curerr.value))
@pytest.mark.skipif(not PY3, reason='importlib.resources only supported for py3')
def test_importlib_resources():
if sys.version_info < (3, 10):
from importlib_resources import files
else:
from importlib.resources import files
from pathlib import Path
f = get_default_finder()
reset_collections_loader_state(f)
ansible_collections_ns = files('ansible_collections')
ansible_ns = files('ansible_collections.ansible')
testns = files('ansible_collections.testns')
testcoll = files('ansible_collections.testns.testcoll')
testcoll2 = files('ansible_collections.testns.testcoll2')
module_utils = files('ansible_collections.testns.testcoll.plugins.module_utils')
assert isinstance(ansible_collections_ns, _AnsibleNSTraversable)
assert isinstance(ansible_ns, _AnsibleNSTraversable)
assert isinstance(testcoll, Path)
assert isinstance(module_utils, Path)
assert ansible_collections_ns.is_dir()
assert ansible_ns.is_dir()
assert testcoll.is_dir()
assert module_utils.is_dir()
first_path = Path(default_test_collection_paths[0])
second_path = Path(default_test_collection_paths[1])
testns_paths = []
ansible_ns_paths = []
for path in default_test_collection_paths[:2]:
ansible_ns_paths.append(Path(path) / 'ansible_collections' / 'ansible')
testns_paths.append(Path(path) / 'ansible_collections' / 'testns')
assert testns._paths == testns_paths
assert ansible_ns._paths == ansible_ns_paths
assert ansible_collections_ns._paths == [Path(p) / 'ansible_collections' for p in default_test_collection_paths[:2]]
assert testcoll2 == second_path / 'ansible_collections' / 'testns' / 'testcoll2'
assert {p.name for p in module_utils.glob('*.py')} == {'__init__.py', 'my_other_util.py', 'my_util.py'}
nestcoll_mu_init = first_path / 'ansible_collections' / 'testns' / 'testcoll' / 'plugins' / 'module_utils' / '__init__.py'
assert next(module_utils.glob('__init__.py')) == nestcoll_mu_init
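# --- illustrative sketch (not part of the original file) ---
# The traversable asserted on above fans namespace-level operations out over
# one backing Path per collection root providing the namespace; concrete
# collection packages resolve to a single Path instead.
class _SketchNSTraversable:
    def __init__(self, *paths):
        self._paths = list(paths)

    def is_dir(self):
        return any(path.is_dir() for path in self._paths)

    def glob(self, pattern):
        for path in self._paths:  # yield matches from every backing root
            yield from path.glob(pattern)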
# BEGIN TEST SUPPORT
default_test_collection_paths = [
os.path.join(os.path.dirname(__file__), 'fixtures', 'collections'),
os.path.join(os.path.dirname(__file__), 'fixtures', 'collections_masked'),
'/bogus/bogussub'
]
def get_default_finder():
return _AnsibleCollectionFinder(paths=default_test_collection_paths)
def extend_paths(path_list, suffix):
suffix = suffix.replace('.', '/')
return [os.path.join(p, suffix) for p in path_list]
def nuke_module_prefix(prefix):
for module_to_nuke in [m for m in sys.modules if m.startswith(prefix)]:
sys.modules.pop(module_to_nuke)
def reset_collections_loader_state(metapath_finder=None):
_AnsibleCollectionFinder._remove()
nuke_module_prefix('ansible_collections')
nuke_module_prefix('ansible.modules')
nuke_module_prefix('ansible.plugins')
# FIXME: better to move this someplace else that gets cleaned up automatically?
_AnsibleCollectionLoader._redirected_package_map = {}
AnsibleCollectionConfig._default_collection = None
AnsibleCollectionConfig._on_collection_load = _EventSource()
if metapath_finder:
metapath_finder._install()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80756 |
ansible-test --docker: FileNotFoundError
|
### Summary
When I run sanity/unit/integration tests I get:
```
File "/root/ansible/bin/ansible-test", line 45, in <module>
main()
File "/root/ansible/bin/ansible-test", line 36, in main
cli_main(args)
File "/root/ansible/test/lib/ansible_test/_internal/__init__.py", line 64, in main
args = parse_args(cli_args)
^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/cli/__init__.py", line 58, in parse_args
args.host_settings = HostSettings.deserialize(os.path.join(args.host_path, 'settings.dat'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/host_configs.py", line 533, in deserialize
with open_binary_file(path) as settings_file:
^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/io.py", line 79, in open_binary_file
return io.open(to_bytes(path), mode) # pylint: disable=consider-using-with
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: b'test/results/.tmp/host-92moe40p/settings.dat'
```
### Issue Type
Bug Report
### Component Name
ansible-test sanity/unit/integration
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
ansible [core 2.16.0.dev0]
config file = None
configured module search path = ['/home/bbarbach/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /home/bbarbach/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.6 (default, Aug 11 2021, 06:39:25) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)] (/usr/bin/python3.9)
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
RHEL 8
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible-test units --docker -v yum
```
### Expected Results
I expected the unit tests for the yum module to be run
### Actual Results
```console
Traceback (most recent call last):
File "/root/ansible/bin/ansible-test", line 45, in <module>
main()
File "/root/ansible/bin/ansible-test", line 36, in main
cli_main(args)
File "/root/ansible/test/lib/ansible_test/_internal/__init__.py", line 64, in main
args = parse_args(cli_args)
^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/cli/__init__.py", line 58, in parse_args
args.host_settings = HostSettings.deserialize(os.path.join(args.host_path, 'settings.dat'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/host_configs.py", line 533, in deserialize
with open_binary_file(path) as settings_file:
^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/io.py", line 79, in open_binary_file
return io.open(to_bytes(path), mode) # pylint: disable=consider-using-with
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: b'test/results/.tmp/host-jnbe6ait/settings.dat'
FATAL: Command "docker exec ansible-test-controller-8LbyW7uq /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.11 /root/ansible/bin/ansible-test units -v yum --containers '{}' --requirements-mode only --truncate 211 --color yes --host-path test/results/.tmp/host-jnbe6ait --metadata test/results/.tmp/metadata-e6w9ymfb.json" returned exit status 1.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80756
|
https://github.com/ansible/ansible/pull/80801
|
2fd64161c1ae4a8930e9b6094804ac9976a9f2ad
|
b16041f1a91bb74b7adbf2ad1f1af25603151cb3
| 2023-05-10T18:09:44Z |
python
| 2023-05-17T16:07:04Z |
changelogs/fragments/ansible-test-source-detection.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80756 |
ansible-test --docker: FileNotFoundError
|
### Summary
When I run sanity/unit/integration tests I get:
```
File "/root/ansible/bin/ansible-test", line 45, in <module>
main()
File "/root/ansible/bin/ansible-test", line 36, in main
cli_main(args)
File "/root/ansible/test/lib/ansible_test/_internal/__init__.py", line 64, in main
args = parse_args(cli_args)
^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/cli/__init__.py", line 58, in parse_args
args.host_settings = HostSettings.deserialize(os.path.join(args.host_path, 'settings.dat'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/host_configs.py", line 533, in deserialize
with open_binary_file(path) as settings_file:
^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/io.py", line 79, in open_binary_file
return io.open(to_bytes(path), mode) # pylint: disable=consider-using-with
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: b'test/results/.tmp/host-92moe40p/settings.dat'
```
### Issue Type
Bug Report
### Component Name
ansible-test sanity/unit/integration
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
ansible [core 2.16.0.dev0]
config file = None
configured module search path = ['/home/bbarbach/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /home/bbarbach/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.6 (default, Aug 11 2021, 06:39:25) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)] (/usr/bin/python3.9)
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
RHEL 8
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible-test units --docker -v yum
```
### Expected Results
I expected the unit tests for the yum module to be run
### Actual Results
```console
Traceback (most recent call last):
File "/root/ansible/bin/ansible-test", line 45, in <module>
main()
File "/root/ansible/bin/ansible-test", line 36, in main
cli_main(args)
File "/root/ansible/test/lib/ansible_test/_internal/__init__.py", line 64, in main
args = parse_args(cli_args)
^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/cli/__init__.py", line 58, in parse_args
args.host_settings = HostSettings.deserialize(os.path.join(args.host_path, 'settings.dat'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/host_configs.py", line 533, in deserialize
with open_binary_file(path) as settings_file:
^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/io.py", line 79, in open_binary_file
return io.open(to_bytes(path), mode) # pylint: disable=consider-using-with
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: b'test/results/.tmp/host-jnbe6ait/settings.dat'
FATAL: Command "docker exec ansible-test-controller-8LbyW7uq /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.11 /root/ansible/bin/ansible-test units -v yum --containers '{}' --requirements-mode only --truncate 211 --color yes --host-path test/results/.tmp/host-jnbe6ait --metadata test/results/.tmp/metadata-e6w9ymfb.json" returned exit status 1.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80756
|
https://github.com/ansible/ansible/pull/80801
|
2fd64161c1ae4a8930e9b6094804ac9976a9f2ad
|
b16041f1a91bb74b7adbf2ad1f1af25603151cb3
| 2023-05-10T18:09:44Z |
python
| 2023-05-17T16:07:04Z |
test/lib/ansible_test/_internal/data.py
|
"""Context information for the current invocation of ansible-test."""
from __future__ import annotations
import collections.abc as c
import dataclasses
import os
import typing as t
from .util import (
ApplicationError,
import_plugins,
is_subdir,
is_valid_identifier,
ANSIBLE_LIB_ROOT,
ANSIBLE_TEST_ROOT,
ANSIBLE_SOURCE_ROOT,
display,
cache,
)
from .provider import (
find_path_provider,
get_path_provider_classes,
ProviderNotFoundForPath,
)
from .provider.source import (
SourceProvider,
)
from .provider.source.unversioned import (
UnversionedSource,
)
from .provider.source.installed import (
InstalledSource,
)
from .provider.source.unsupported import (
UnsupportedSource,
)
from .provider.layout import (
ContentLayout,
LayoutProvider,
)
from .provider.layout.unsupported import (
UnsupportedLayout,
)
@dataclasses.dataclass(frozen=True)
class PayloadConfig:
"""Configuration required to build a source tree payload for delegation."""
files: list[tuple[str, str]]
permissions: dict[str, int]
class DataContext:
"""Data context providing details about the current execution environment for ansible-test."""
def __init__(self) -> None:
content_path = os.environ.get('ANSIBLE_TEST_CONTENT_ROOT')
current_path = os.getcwd()
layout_providers = get_path_provider_classes(LayoutProvider)
source_providers = get_path_provider_classes(SourceProvider)
self.__layout_providers = layout_providers
self.__source_providers = source_providers
self.__ansible_source: t.Optional[tuple[tuple[str, str], ...]] = None
self.payload_callbacks: list[c.Callable[[PayloadConfig], None]] = []
if content_path:
content, source_provider = self.__create_content_layout(layout_providers, source_providers, content_path, False)
elif ANSIBLE_SOURCE_ROOT and is_subdir(current_path, ANSIBLE_SOURCE_ROOT):
content, source_provider = self.__create_content_layout(layout_providers, source_providers, ANSIBLE_SOURCE_ROOT, False)
else:
content, source_provider = self.__create_content_layout(layout_providers, source_providers, current_path, True)
self.content: ContentLayout = content
self.source_provider = source_provider
def create_collection_layouts(self) -> list[ContentLayout]:
"""
Return a list of collection layouts, one for each collection in the same collection root as the current collection layout.
An empty list is returned if the current content layout is not a collection layout.
"""
layout = self.content
collection = layout.collection
if not collection:
return []
root_path = os.path.join(collection.root, 'ansible_collections')
display.info('Scanning collection root: %s' % root_path, verbosity=1)
namespace_names = sorted(name for name in os.listdir(root_path) if os.path.isdir(os.path.join(root_path, name)))
collections = []
for namespace_name in namespace_names:
namespace_path = os.path.join(root_path, namespace_name)
collection_names = sorted(name for name in os.listdir(namespace_path) if os.path.isdir(os.path.join(namespace_path, name)))
for collection_name in collection_names:
collection_path = os.path.join(namespace_path, collection_name)
if collection_path == os.path.join(collection.root, collection.directory):
collection_layout = layout
else:
collection_layout = self.__create_content_layout(self.__layout_providers, self.__source_providers, collection_path, False)[0]
file_count = len(collection_layout.all_files())
if not file_count:
continue
display.info('Including collection: %s (%d files)' % (collection_layout.collection.full_name, file_count), verbosity=1)
collections.append(collection_layout)
return collections
@staticmethod
def __create_content_layout(
layout_providers: list[t.Type[LayoutProvider]],
source_providers: list[t.Type[SourceProvider]],
root: str,
walk: bool,
) -> t.Tuple[ContentLayout, SourceProvider]:
"""Create a content layout using the given providers and root path."""
try:
layout_provider = find_path_provider(LayoutProvider, layout_providers, root, walk)
except ProviderNotFoundForPath:
layout_provider = UnsupportedLayout(root)
try:
# Begin the search for the source provider at the layout provider root.
# This intentionally ignores version control within subdirectories of the layout root, a condition which was previously an error.
# Doing so allows support for older git versions for which it is difficult to distinguish between a super project and a sub project.
# It also provides a better user experience, since the solution for the user would effectively be the same -- to remove the nested version control.
if isinstance(layout_provider, UnsupportedLayout):
source_provider: SourceProvider = UnsupportedSource(layout_provider.root)
else:
source_provider = find_path_provider(SourceProvider, source_providers, layout_provider.root, walk)
except ProviderNotFoundForPath:
source_provider = UnversionedSource(layout_provider.root)
layout = layout_provider.create(layout_provider.root, source_provider.get_paths(layout_provider.root))
return layout, source_provider
def __create_ansible_source(self):
"""Return a tuple of Ansible source files with both absolute and relative paths."""
if not ANSIBLE_SOURCE_ROOT:
sources = []
source_provider = InstalledSource(ANSIBLE_LIB_ROOT)
sources.extend((os.path.join(source_provider.root, path), os.path.join('lib', 'ansible', path))
for path in source_provider.get_paths(source_provider.root))
source_provider = InstalledSource(ANSIBLE_TEST_ROOT)
sources.extend((os.path.join(source_provider.root, path), os.path.join('test', 'lib', 'ansible_test', path))
for path in source_provider.get_paths(source_provider.root))
return tuple(sources)
if self.content.is_ansible:
return tuple((os.path.join(self.content.root, path), path) for path in self.content.all_files())
try:
source_provider = find_path_provider(SourceProvider, self.__source_providers, ANSIBLE_SOURCE_ROOT, False)
except ProviderNotFoundForPath:
source_provider = UnversionedSource(ANSIBLE_SOURCE_ROOT)
return tuple((os.path.join(source_provider.root, path), path) for path in source_provider.get_paths(source_provider.root))
@property
def ansible_source(self) -> tuple[tuple[str, str], ...]:
"""Return a tuple of Ansible source files with both absolute and relative paths."""
if not self.__ansible_source:
self.__ansible_source = self.__create_ansible_source()
return self.__ansible_source
def register_payload_callback(self, callback: c.Callable[[PayloadConfig], None]) -> None:
"""Register the given payload callback."""
self.payload_callbacks.append(callback)
def check_layout(self) -> None:
"""Report an error if the layout is unsupported."""
if self.content.unsupported:
raise ApplicationError(self.explain_working_directory())
def explain_working_directory(self) -> str:
"""Return a message explaining the working directory requirements."""
blocks = [
'The current working directory must be within the source tree being tested.',
'',
]
if ANSIBLE_SOURCE_ROOT:
blocks.append(f'Testing Ansible: {ANSIBLE_SOURCE_ROOT}/')
blocks.append('')
cwd = os.getcwd()
blocks.append('Testing an Ansible collection: {...}/ansible_collections/{namespace}/{collection}/')
blocks.append('Example #1: community.general -> ~/code/ansible_collections/community/general/')
blocks.append('Example #2: ansible.util -> ~/.ansible/collections/ansible_collections/ansible/util/')
blocks.append('')
blocks.append(f'Current working directory: {cwd}/')
if os.path.basename(os.path.dirname(cwd)) == 'ansible_collections':
blocks.append(f'Expected parent directory: {os.path.dirname(cwd)}/{{namespace}}/{{collection}}/')
elif os.path.basename(cwd) == 'ansible_collections':
blocks.append(f'Expected parent directory: {cwd}/{{namespace}}/{{collection}}/')
elif 'ansible_collections' not in cwd.split(os.path.sep):
blocks.append('No "ansible_collections" parent directory was found.')
if self.content.collection:
if not is_valid_identifier(self.content.collection.namespace):
blocks.append(f'The namespace "{self.content.collection.namespace}" is an invalid identifier or a reserved keyword.')
if not is_valid_identifier(self.content.collection.name):
blocks.append(f'The name "{self.content.collection.name}" is an invalid identifier or a reserved keyword.')
message = '\n'.join(blocks)
return message
@cache
def data_context() -> DataContext:
"""Initialize provider plugins."""
provider_types = (
'layout',
'source',
)
for provider_type in provider_types:
import_plugins('provider/%s' % provider_type)
context = DataContext()
return context
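# --- illustrative usage sketch (not part of the original module) ---
# Because data_context() is cached, provider plugins are imported once and
# every caller shares the same DataContext:
#
#     layout = data_context().content
#     for path in layout.walk_files('plugins/modules'):
#         ...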
@dataclasses.dataclass(frozen=True)
class PluginInfo:
"""Information about an Ansible plugin."""
plugin_type: str
name: str
paths: list[str]
@cache
def content_plugins() -> dict[str, dict[str, PluginInfo]]:
"""
Analyze content.
The primary purpose of this analysis is to facilitate mapping of integration tests to the plugin(s) they are intended to test.
"""
plugins: dict[str, dict[str, PluginInfo]] = {}
for plugin_type, plugin_directory in data_context().content.plugin_paths.items():
plugin_paths = sorted(data_context().content.walk_files(plugin_directory))
plugin_directory_offset = len(plugin_directory.split(os.path.sep))
plugin_files: dict[str, list[str]] = {}
for plugin_path in plugin_paths:
plugin_filename = os.path.basename(plugin_path)
plugin_parts = plugin_path.split(os.path.sep)[plugin_directory_offset:-1]
if plugin_filename == '__init__.py':
if plugin_type != 'module_utils':
continue
else:
plugin_name = os.path.splitext(plugin_filename)[0]
if data_context().content.is_ansible and plugin_type == 'modules':
plugin_name = plugin_name.lstrip('_')
plugin_parts.append(plugin_name)
plugin_name = '.'.join(plugin_parts)
plugin_files.setdefault(plugin_name, []).append(plugin_filename)
plugins[plugin_type] = {plugin_name: PluginInfo(
plugin_type=plugin_type,
name=plugin_name,
paths=paths,
) for plugin_name, paths in plugin_files.items()}
return plugins
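# --- illustrative sketch (not part of the original module) ---
# The resulting mapping is keyed by plugin type, then plugin name; values
# below are hypothetical:
#
#     {'modules': {'ping': PluginInfo(plugin_type='modules', name='ping',
#                                     paths=['ping.py'])}}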
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80756 |
ansible-test --docker: FileNotFoundError
|
### Summary
When I run sanity/unit/integration tests I get:
```
File "/root/ansible/bin/ansible-test", line 45, in <module>
main()
File "/root/ansible/bin/ansible-test", line 36, in main
cli_main(args)
File "/root/ansible/test/lib/ansible_test/_internal/__init__.py", line 64, in main
args = parse_args(cli_args)
^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/cli/__init__.py", line 58, in parse_args
args.host_settings = HostSettings.deserialize(os.path.join(args.host_path, 'settings.dat'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/host_configs.py", line 533, in deserialize
with open_binary_file(path) as settings_file:
^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/io.py", line 79, in open_binary_file
return io.open(to_bytes(path), mode) # pylint: disable=consider-using-with
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: b'test/results/.tmp/host-92moe40p/settings.dat'
```
### Issue Type
Bug Report
### Component Name
ansible-test sanity/unit/integration
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
ansible [core 2.16.0.dev0]
config file = None
configured module search path = ['/home/bbarbach/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /home/bbarbach/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.6 (default, Aug 11 2021, 06:39:25) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)] (/usr/bin/python3.9)
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
RHEL 8
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible-test units --docker -v yum
```
### Expected Results
I expected the unit tests for the yum module to be run
### Actual Results
```console
Traceback (most recent call last):
File "/root/ansible/bin/ansible-test", line 45, in <module>
main()
File "/root/ansible/bin/ansible-test", line 36, in main
cli_main(args)
File "/root/ansible/test/lib/ansible_test/_internal/__init__.py", line 64, in main
args = parse_args(cli_args)
^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/cli/__init__.py", line 58, in parse_args
args.host_settings = HostSettings.deserialize(os.path.join(args.host_path, 'settings.dat'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/host_configs.py", line 533, in deserialize
with open_binary_file(path) as settings_file:
^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/io.py", line 79, in open_binary_file
return io.open(to_bytes(path), mode) # pylint: disable=consider-using-with
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: b'test/results/.tmp/host-jnbe6ait/settings.dat'
FATAL: Command "docker exec ansible-test-controller-8LbyW7uq /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.11 /root/ansible/bin/ansible-test units -v yum --containers '{}' --requirements-mode only --truncate 211 --color yes --host-path test/results/.tmp/host-jnbe6ait --metadata test/results/.tmp/metadata-e6w9ymfb.json" returned exit status 1.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80756
|
https://github.com/ansible/ansible/pull/80801
|
2fd64161c1ae4a8930e9b6094804ac9976a9f2ad
|
b16041f1a91bb74b7adbf2ad1f1af25603151cb3
| 2023-05-10T18:09:44Z |
python
| 2023-05-17T16:07:04Z |
test/lib/ansible_test/_internal/provider/layout/__init__.py
|
"""Code for finding content."""
from __future__ import annotations
import abc
import collections
import os
import typing as t
from ...util import (
ANSIBLE_SOURCE_ROOT,
)
from .. import (
PathProvider,
)
class Layout:
"""Description of content locations and helper methods to access content."""
def __init__(
self,
root: str,
paths: list[str],
) -> None:
self.root = root
self.__paths = paths # contains both file paths and symlinked directory paths (ending with os.path.sep)
self.__files = [path for path in paths if not path.endswith(os.path.sep)] # contains only file paths
self.__paths_tree = paths_to_tree(self.__paths)
self.__files_tree = paths_to_tree(self.__files)
def all_files(self, include_symlinked_directories: bool = False) -> list[str]:
"""Return a list of all file paths."""
if include_symlinked_directories:
return self.__paths
return self.__files
def walk_files(self, directory: str, include_symlinked_directories: bool = False) -> list[str]:
"""Return a list of file paths found recursively under the given directory."""
if include_symlinked_directories:
tree = self.__paths_tree
else:
tree = self.__files_tree
parts = directory.rstrip(os.path.sep).split(os.path.sep)
item = get_tree_item(tree, parts)
if not item:
return []
directories = collections.deque(item[0].values())
files = list(item[1])
while directories:
item = directories.pop()
directories.extend(item[0].values())
files.extend(item[1])
return files
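# Illustrative usage (a sketch): walk_files('lib/ansible/modules') returns every file
# path recorded under that directory, e.g. ['lib/ansible/modules/ping.py', ...];
# paths of symlinked directories are also returned when include_symlinked_directories=True.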
def get_dirs(self, directory: str) -> list[str]:
"""Return a list directory paths found directly under the given directory."""
parts = directory.rstrip(os.path.sep).split(os.path.sep)
item = get_tree_item(self.__files_tree, parts)
return [os.path.join(directory, key) for key in item[0].keys()] if item else []
def get_files(self, directory: str) -> list[str]:
"""Return a list of file paths found directly under the given directory."""
parts = directory.rstrip(os.path.sep).split(os.path.sep)
item = get_tree_item(self.__files_tree, parts)
return item[1] if item else []
class ContentLayout(Layout):
"""Information about the current Ansible content being tested."""
def __init__(
self,
root: str,
paths: list[str],
plugin_paths: dict[str, str],
collection: t.Optional[CollectionDetail],
test_path: str,
results_path: str,
sanity_path: str,
sanity_messages: t.Optional[LayoutMessages],
integration_path: str,
integration_targets_path: str,
integration_vars_path: str,
integration_messages: t.Optional[LayoutMessages],
unit_path: str,
unit_module_path: str,
unit_module_utils_path: str,
unit_messages: t.Optional[LayoutMessages],
unsupported: bool = False,
) -> None:
super().__init__(root, paths)
self.plugin_paths = plugin_paths
self.collection = collection
self.test_path = test_path
self.results_path = results_path
self.sanity_path = sanity_path
self.sanity_messages = sanity_messages
self.integration_path = integration_path
self.integration_targets_path = integration_targets_path
self.integration_vars_path = integration_vars_path
self.integration_messages = integration_messages
self.unit_path = unit_path
self.unit_module_path = unit_module_path
self.unit_module_utils_path = unit_module_utils_path
self.unit_messages = unit_messages
self.unsupported = unsupported
self.is_ansible = root == ANSIBLE_SOURCE_ROOT
@property
def prefix(self) -> str:
"""Return the collection prefix or an empty string if not a collection."""
if self.collection:
return self.collection.prefix
return ''
@property
def module_path(self) -> t.Optional[str]:
"""Return the path where modules are found, if any."""
return self.plugin_paths.get('modules')
@property
def module_utils_path(self) -> t.Optional[str]:
"""Return the path where module_utils are found, if any."""
return self.plugin_paths.get('module_utils')
@property
def module_utils_powershell_path(self) -> t.Optional[str]:
"""Return the path where powershell module_utils are found, if any."""
if self.is_ansible:
return os.path.join(self.plugin_paths['module_utils'], 'powershell')
return self.plugin_paths.get('module_utils')
@property
def module_utils_csharp_path(self) -> t.Optional[str]:
"""Return the path where csharp module_utils are found, if any."""
if self.is_ansible:
return os.path.join(self.plugin_paths['module_utils'], 'csharp')
return self.plugin_paths.get('module_utils')
class LayoutMessages:
"""Messages generated during layout creation that should be deferred for later display."""
def __init__(self) -> None:
self.info: list[str] = []
self.warning: list[str] = []
self.error: list[str] = []
class CollectionDetail:
"""Details about the layout of the current collection."""
def __init__(
self,
name: str,
namespace: str,
root: str,
) -> None:
self.name = name
self.namespace = namespace
self.root = root
self.full_name = '%s.%s' % (namespace, name)
self.prefix = '%s.' % self.full_name
self.directory = os.path.join('ansible_collections', namespace, name)
class LayoutProvider(PathProvider):
"""Base class for layout providers."""
PLUGIN_TYPES = (
'action',
'become',
'cache',
'callback',
'cliconf',
'connection',
'doc_fragments',
'filter',
'httpapi',
'inventory',
'lookup',
'module_utils',
'modules',
'netconf',
'shell',
'strategy',
'terminal',
'test',
'vars',
# The following are plugin directories not directly supported by ansible-core, but used in collections
# (https://github.com/ansible-collections/overview/blob/main/collection_requirements.rst#modules--plugins)
'plugin_utils',
'sub_plugins',
)
@abc.abstractmethod
def create(self, root: str, paths: list[str]) -> ContentLayout:
"""Create a layout using the given root and paths."""
def paths_to_tree(paths: list[str]) -> tuple[dict[str, t.Any], list[str]]:
"""Return a filesystem tree from the given list of paths."""
tree: tuple[dict[str, t.Any], list[str]] = {}, []
for path in paths:
parts = path.split(os.path.sep)
root = tree
for part in parts[:-1]:
if part not in root[0]:
root[0][part] = {}, []
root = root[0][part]
root[1].append(path)
return tree
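# Illustrative structure (a sketch): each node is a (dirs, files) tuple, where the
# files list holds full paths, e.g.:
#   paths_to_tree(['a/b/c.py']) == ({'a': ({'b': ({}, ['a/b/c.py'])}, [])}, [])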
def get_tree_item(tree: tuple[dict[str, t.Any], list[str]], parts: list[str]) -> t.Optional[tuple[dict[str, t.Any], list[str]]]:
"""Return the portion of the tree found under the path given by parts, or None if it does not exist."""
root = tree
for part in parts:
root = root[0].get(part)
if not root:
return None
return root
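# Illustrative usage (a sketch), continuing the paths_to_tree example above:
#   tree = paths_to_tree(['a/b/c.py'])
#   get_tree_item(tree, ['a', 'b']) == ({}, ['a/b/c.py'])
#   get_tree_item(tree, ['a', 'x']) is None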
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,756 |
ansible-test --docker: FileNotFoundError
|
### Summary
When I run sanity/unit/integration tests I get:
```
File "/root/ansible/bin/ansible-test", line 45, in <module>
main()
File "/root/ansible/bin/ansible-test", line 36, in main
cli_main(args)
File "/root/ansible/test/lib/ansible_test/_internal/__init__.py", line 64, in main
args = parse_args(cli_args)
^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/cli/__init__.py", line 58, in parse_args
args.host_settings = HostSettings.deserialize(os.path.join(args.host_path, 'settings.dat'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/host_configs.py", line 533, in deserialize
with open_binary_file(path) as settings_file:
^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/io.py", line 79, in open_binary_file
return io.open(to_bytes(path), mode) # pylint: disable=consider-using-with
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: b'test/results/.tmp/host-92moe40p/settings.dat'
```
### Issue Type
Bug Report
### Component Name
ansible-test sanity/unit/integration
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
ansible [core 2.16.0.dev0]
config file = None
configured module search path = ['/home/bbarbach/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /home/bbarbach/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.6 (default, Aug 11 2021, 06:39:25) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)] (/usr/bin/python3.9)
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
RHEL 8
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible-test units --docker -v yum
```
### Expected Results
I expected the unit tests for the yum module to be run
### Actual Results
```console
Traceback (most recent call last):
File "/root/ansible/bin/ansible-test", line 45, in <module>
main()
File "/root/ansible/bin/ansible-test", line 36, in main
cli_main(args)
File "/root/ansible/test/lib/ansible_test/_internal/__init__.py", line 64, in main
args = parse_args(cli_args)
^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/cli/__init__.py", line 58, in parse_args
args.host_settings = HostSettings.deserialize(os.path.join(args.host_path, 'settings.dat'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/host_configs.py", line 533, in deserialize
with open_binary_file(path) as settings_file:
^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/io.py", line 79, in open_binary_file
return io.open(to_bytes(path), mode) # pylint: disable=consider-using-with
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: b'test/results/.tmp/host-jnbe6ait/settings.dat'
FATAL: Command "docker exec ansible-test-controller-8LbyW7uq /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.11 /root/ansible/bin/ansible-test units -v yum --containers '{}' --requirements-mode only --truncate 211 --color yes --host-path test/results/.tmp/host-jnbe6ait --metadata test/results/.tmp/metadata-e6w9ymfb.json" returned exit status 1.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80756
|
https://github.com/ansible/ansible/pull/80801
|
2fd64161c1ae4a8930e9b6094804ac9976a9f2ad
|
b16041f1a91bb74b7adbf2ad1f1af25603151cb3
| 2023-05-10T18:09:44Z |
python
| 2023-05-17T16:07:04Z |
test/lib/ansible_test/_internal/provider/layout/ansible.py
|
"""Layout provider for Ansible source."""
from __future__ import annotations
import os
from . import (
ContentLayout,
LayoutProvider,
)
class AnsibleLayout(LayoutProvider):
"""Layout provider for Ansible source."""
@staticmethod
def is_content_root(path: str) -> bool:
"""Return True if the given path is a content root for this provider."""
return os.path.exists(os.path.join(path, 'setup.py')) and os.path.exists(os.path.join(path, 'bin/ansible-test'))
def create(self, root: str, paths: list[str]) -> ContentLayout:
"""Create a Layout using the given root and paths."""
plugin_paths = dict((p, os.path.join('lib/ansible/plugins', p)) for p in self.PLUGIN_TYPES)
plugin_paths.update(
modules='lib/ansible/modules',
module_utils='lib/ansible/module_utils',
)
return ContentLayout(
root,
paths,
plugin_paths=plugin_paths,
collection=None,
test_path='test',
results_path='test/results',
sanity_path='test/sanity',
sanity_messages=None,
integration_path='test/integration',
integration_targets_path='test/integration/targets',
integration_vars_path='test/integration/integration_config.yml',
integration_messages=None,
unit_path='test/units',
unit_module_path='test/units/modules',
unit_module_utils_path='test/units/module_utils',
unit_messages=None,
)
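# Illustrative mapping (a sketch): for the ansible-core tree this yields entries such as
#   plugin_paths['lookup'] == 'lib/ansible/plugins/lookup'
#   plugin_paths['modules'] == 'lib/ansible/modules'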
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,756 |
ansible-test --docker: FileNotFoundError
|
### Summary
When I run sanity/unit/integration tests I get:
```
File "/root/ansible/bin/ansible-test", line 45, in <module>
main()
File "/root/ansible/bin/ansible-test", line 36, in main
cli_main(args)
File "/root/ansible/test/lib/ansible_test/_internal/__init__.py", line 64, in main
args = parse_args(cli_args)
^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/cli/__init__.py", line 58, in parse_args
args.host_settings = HostSettings.deserialize(os.path.join(args.host_path, 'settings.dat'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/host_configs.py", line 533, in deserialize
with open_binary_file(path) as settings_file:
^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/io.py", line 79, in open_binary_file
return io.open(to_bytes(path), mode) # pylint: disable=consider-using-with
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: b'test/results/.tmp/host-92moe40p/settings.dat'
```
### Issue Type
Bug Report
### Component Name
ansible-test sanity/unit/integration
### Ansible Version
```console
$ ansible --version
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
ansible [core 2.16.0.dev0]
config file = None
configured module search path = ['/home/bbarbach/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /home/bbarbach/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.6 (default, Aug 11 2021, 06:39:25) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)] (/usr/bin/python3.9)
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
RHEL 8
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible-test units --docker -v yum
```
### Expected Results
I expected the unit tests for the yum module to be run
### Actual Results
```console
Traceback (most recent call last):
File "/root/ansible/bin/ansible-test", line 45, in <module>
main()
File "/root/ansible/bin/ansible-test", line 36, in main
cli_main(args)
File "/root/ansible/test/lib/ansible_test/_internal/__init__.py", line 64, in main
args = parse_args(cli_args)
^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/cli/__init__.py", line 58, in parse_args
args.host_settings = HostSettings.deserialize(os.path.join(args.host_path, 'settings.dat'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/host_configs.py", line 533, in deserialize
with open_binary_file(path) as settings_file:
^^^^^^^^^^^^^^^^^^^^^^
File "/root/ansible/test/lib/ansible_test/_internal/io.py", line 79, in open_binary_file
return io.open(to_bytes(path), mode) # pylint: disable=consider-using-with
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: b'test/results/.tmp/host-jnbe6ait/settings.dat'
FATAL: Command "docker exec ansible-test-controller-8LbyW7uq /usr/bin/env ANSIBLE_TEST_CONTENT_ROOT=/root/ansible LC_ALL=en_US.UTF-8 /usr/bin/python3.11 /root/ansible/bin/ansible-test units -v yum --containers '{}' --requirements-mode only --truncate 211 --color yes --host-path test/results/.tmp/host-jnbe6ait --metadata test/results/.tmp/metadata-e6w9ymfb.json" returned exit status 1.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80756
|
https://github.com/ansible/ansible/pull/80801
|
2fd64161c1ae4a8930e9b6094804ac9976a9f2ad
|
b16041f1a91bb74b7adbf2ad1f1af25603151cb3
| 2023-05-10T18:09:44Z |
python
| 2023-05-17T16:07:04Z |
test/lib/ansible_test/_internal/provider/layout/collection.py
|
"""Layout provider for Ansible collections."""
from __future__ import annotations
import os
from . import (
ContentLayout,
LayoutProvider,
CollectionDetail,
LayoutMessages,
)
from ...util import (
is_valid_identifier,
)
class CollectionLayout(LayoutProvider):
"""Layout provider for Ansible collections."""
@staticmethod
def is_content_root(path: str) -> bool:
"""Return True if the given path is a content root for this provider."""
if os.path.basename(os.path.dirname(os.path.dirname(path))) == 'ansible_collections':
return True
return False
def create(self, root: str, paths: list[str]) -> ContentLayout:
"""Create a Layout using the given root and paths."""
plugin_paths = dict((p, os.path.join('plugins', p)) for p in self.PLUGIN_TYPES)
collection_root = os.path.dirname(os.path.dirname(root))
collection_dir = os.path.relpath(root, collection_root)
collection_namespace: str
collection_name: str
collection_namespace, collection_name = collection_dir.split(os.path.sep)
collection_root = os.path.dirname(collection_root)
sanity_messages = LayoutMessages()
integration_messages = LayoutMessages()
unit_messages = LayoutMessages()
# these apply to all test commands
self.__check_test_path(paths, sanity_messages)
self.__check_test_path(paths, integration_messages)
self.__check_test_path(paths, unit_messages)
# these apply to specific test commands
integration_targets_path = self.__check_integration_path(paths, integration_messages)
self.__check_unit_path(paths, unit_messages)
return ContentLayout(
root,
paths,
plugin_paths=plugin_paths,
collection=CollectionDetail(
name=collection_name,
namespace=collection_namespace,
root=collection_root,
),
test_path='tests',
results_path='tests/output',
sanity_path='tests/sanity',
sanity_messages=sanity_messages,
integration_path='tests/integration',
integration_targets_path=integration_targets_path.rstrip(os.path.sep),
integration_vars_path='tests/integration/integration_config.yml',
integration_messages=integration_messages,
unit_path='tests/unit',
unit_module_path='tests/unit/plugins/modules',
unit_module_utils_path='tests/unit/plugins/module_utils',
unit_messages=unit_messages,
unsupported=not (is_valid_identifier(collection_namespace) and is_valid_identifier(collection_name)),
)
@staticmethod
def __check_test_path(paths: list[str], messages: LayoutMessages) -> None:
modern_test_path = 'tests/'
modern_test_path_found = any(path.startswith(modern_test_path) for path in paths)
legacy_test_path = 'test/'
legacy_test_path_found = any(path.startswith(legacy_test_path) for path in paths)
if modern_test_path_found and legacy_test_path_found:
messages.warning.append('Ignoring tests in "%s" in favor of "%s".' % (legacy_test_path, modern_test_path))
elif legacy_test_path_found:
messages.warning.append('Ignoring tests in "%s" that should be in "%s".' % (legacy_test_path, modern_test_path))
@staticmethod
def __check_integration_path(paths: list[str], messages: LayoutMessages) -> str:
modern_integration_path = 'roles/test/'
modern_integration_path_found = any(path.startswith(modern_integration_path) for path in paths)
legacy_integration_path = 'tests/integration/targets/'
legacy_integration_path_found = any(path.startswith(legacy_integration_path) for path in paths)
if modern_integration_path_found and legacy_integration_path_found:
messages.warning.append('Ignoring tests in "%s" in favor of "%s".' % (legacy_integration_path, modern_integration_path))
integration_targets_path = modern_integration_path
elif legacy_integration_path_found:
messages.info.append('Falling back to tests in "%s" because "%s" was not found.' % (legacy_integration_path, modern_integration_path))
integration_targets_path = legacy_integration_path
elif modern_integration_path_found:
messages.info.append('Loading tests from "%s".' % modern_integration_path)
integration_targets_path = modern_integration_path
else:
messages.error.append('Cannot run integration tests without "%s" or "%s".' % (modern_integration_path, legacy_integration_path))
integration_targets_path = modern_integration_path
return integration_targets_path
@staticmethod
def __check_unit_path(paths: list[str], messages: LayoutMessages) -> None:
modern_unit_path = 'tests/unit/'
modern_unit_path_found = any(path.startswith(modern_unit_path) for path in paths)
legacy_unit_path = 'tests/units/' # test/units/ will be covered by the warnings for test/ vs tests/
legacy_unit_path_found = any(path.startswith(legacy_unit_path) for path in paths)
if modern_unit_path_found and legacy_unit_path_found:
messages.warning.append('Ignoring tests in "%s" in favor of "%s".' % (legacy_unit_path, modern_unit_path))
elif legacy_unit_path_found:
messages.warning.append('Rename "%s" to "%s" to run unit tests.' % (legacy_unit_path, modern_unit_path))
elif modern_unit_path_found:
pass # unit tests only run from one directory so no message is needed
else:
messages.error.append('Cannot run unit tests without "%s".' % modern_unit_path)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,490 |
Fix use of deprecated parameters in `module_utils/urls.py`
|
### Summary
In Python 3.12 the deprecated `key_file`, `cert_file` and `check_hostname` parameters have been [removed](https://docs.python.org/3.12/library/http.client.html#http.client.HTTPSConnection).
There is code which still attempts to set these, such as:
https://github.com/ansible/ansible/blob/0371ea08d6de55635ffcbf94da5ddec0cd809495/lib/ansible/module_utils/urls.py#L604-L608
Which results in an error under Python 3.12:
```
> return httplib.HTTPSConnection(host, **kwargs)
E TypeError: HTTPSConnection.__init__() got an unexpected keyword argument 'cert_file'
```
### Issue Type
Feature Idea
### Component Name
module_utils/urls.py
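A Python 3.12-compatible sketch (the paths and host below are placeholders, not taken from the ansible code) is to load the client certificate into an `ssl.SSLContext` and pass only `context`:
```python
import http.client
import ssl

# Sketch only: replace the removed cert_file/key_file/check_hostname keyword
# arguments with an SSLContext that carries the client certificate.
context = ssl.create_default_context()
context.load_cert_chain(certfile='/path/to/client.pem', keyfile='/path/to/client.key')

conn = http.client.HTTPSConnection('example.com', context=context)
```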
|
https://github.com/ansible/ansible/issues/80490
|
https://github.com/ansible/ansible/pull/80751
|
b16041f1a91bb74b7adbf2ad1f1af25603151cb3
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
| 2023-04-12T02:28:32Z |
python
| 2023-05-17T22:17:25Z |
changelogs/fragments/urls-client-cert-py12.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,490 |
Fix use of deprecated parameters in `module_utils/urls.py`
|
### Summary
In Python 3.12 the deprecated `key_file`, `cert_file` and `check_hostname` parameters have been [removed](https://docs.python.org/3.12/library/http.client.html#http.client.HTTPSConnection).
There is code which still attempts to set these, such as:
https://github.com/ansible/ansible/blob/0371ea08d6de55635ffcbf94da5ddec0cd809495/lib/ansible/module_utils/urls.py#L604-L608
Which results in an error under Python 3.12:
```
> return httplib.HTTPSConnection(host, **kwargs)
E TypeError: HTTPSConnection.__init__() got an unexpected keyword argument 'cert_file'
```
### Issue Type
Feature Idea
### Component Name
module_utils/urls.py
|
https://github.com/ansible/ansible/issues/80490
|
https://github.com/ansible/ansible/pull/80751
|
b16041f1a91bb74b7adbf2ad1f1af25603151cb3
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
| 2023-04-12T02:28:32Z |
python
| 2023-05-17T22:17:25Z |
lib/ansible/module_utils/urls.py
|
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013
# Copyright (c), Toshio Kuratomi <[email protected]>, 2015
#
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
#
# The match_hostname function and supporting code is under the terms and
# conditions of the Python Software Foundation License. They were taken from
# the Python3 standard library and adapted for use in Python2. See comments in the
# source for which code precisely is under this License.
#
# PSF License (see licenses/PSF-license.txt or https://opensource.org/licenses/Python-2.0)
'''
The **urls** utils module offers a replacement for the urllib2 python library.
urllib2 is the python stdlib way to retrieve files from the Internet but it
lacks some security features (around verifying SSL certificates) that users
should care about in most situations. Using the functions in this module corrects
deficiencies in the urllib2 module wherever possible.
There are also third-party libraries (for instance, requests) which can be used
to replace urllib2 with a more secure library. However, all third party libraries
require that the library be installed on the managed machine. That is an extra step
for users making use of a module. If possible, avoid third party libraries by using
this code instead.
'''
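# Illustrative usage (a sketch; open_url is defined later in this module):
#   from ansible.module_utils.urls import open_url
#   response = open_url('https://example.com/resource', validate_certs=True, timeout=10)
#   body = response.read()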
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import atexit
import base64
import email.mime.multipart
import email.mime.nonmultipart
import email.mime.application
import email.parser
import email.utils
import functools
import io
import mimetypes
import netrc
import os
import platform
import re
import socket
import sys
import tempfile
import traceback
import types # pylint: disable=unused-import
from contextlib import contextmanager
try:
import gzip
HAS_GZIP = True
GZIP_IMP_ERR = None
except ImportError:
HAS_GZIP = False
GZIP_IMP_ERR = traceback.format_exc()
GzipFile = object
else:
GzipFile = gzip.GzipFile # type: ignore[assignment,misc]
try:
import email.policy
except ImportError:
# Py2
import email.generator
try:
import httplib
except ImportError:
# Python 3
import http.client as httplib # type: ignore[no-redef]
import ansible.module_utils.compat.typing as t
import ansible.module_utils.six.moves.http_cookiejar as cookiejar
import ansible.module_utils.six.moves.urllib.error as urllib_error
from ansible.module_utils.common.collections import Mapping, is_sequence
from ansible.module_utils.six import PY2, PY3, string_types
from ansible.module_utils.six.moves import cStringIO
from ansible.module_utils.basic import get_distribution, missing_required_lib
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
try:
# python3
import urllib.request as urllib_request
from urllib.request import AbstractHTTPHandler, BaseHandler
except ImportError:
# python2
import urllib2 as urllib_request # type: ignore[no-redef]
from urllib2 import AbstractHTTPHandler, BaseHandler # type: ignore[no-redef]
urllib_request.HTTPRedirectHandler.http_error_308 = urllib_request.HTTPRedirectHandler.http_error_307 # type: ignore[attr-defined,assignment]
try:
from ansible.module_utils.six.moves.urllib.parse import urlparse, urlunparse, unquote
HAS_URLPARSE = True
except Exception:
HAS_URLPARSE = False
try:
import ssl
HAS_SSL = True
except Exception:
HAS_SSL = False
try:
# SNI Handling needs python2.7.9's SSLContext
from ssl import create_default_context, SSLContext # pylint: disable=unused-import
HAS_SSLCONTEXT = True
except ImportError:
HAS_SSLCONTEXT = False
# SNI Handling for python < 2.7.9 with urllib3 support
HAS_URLLIB3_PYOPENSSLCONTEXT = False
HAS_URLLIB3_SSL_WRAP_SOCKET = False
if not HAS_SSLCONTEXT:
try:
# urllib3>=1.15
try:
from urllib3.contrib.pyopenssl import PyOpenSSLContext
except Exception:
from requests.packages.urllib3.contrib.pyopenssl import PyOpenSSLContext
HAS_URLLIB3_PYOPENSSLCONTEXT = True
except Exception:
# urllib3<1.15,>=1.6
try:
try:
from urllib3.contrib.pyopenssl import ssl_wrap_socket
except Exception:
from requests.packages.urllib3.contrib.pyopenssl import ssl_wrap_socket
HAS_URLLIB3_SSL_WRAP_SOCKET = True
except Exception:
pass
# Select a protocol that includes all secure tls protocols
# Exclude insecure ssl protocols if possible
if HAS_SSL:
# If we can't find extra tls methods, ssl.PROTOCOL_TLSv1 is sufficient
PROTOCOL = ssl.PROTOCOL_TLSv1
if not HAS_SSLCONTEXT and HAS_SSL:
try:
import ctypes
import ctypes.util
except ImportError:
# python 2.4 (likely rhel5 which doesn't have tls1.1 support in its openssl)
pass
else:
libssl_name = ctypes.util.find_library('ssl')
libssl = ctypes.CDLL(libssl_name)
for method in ('TLSv1_1_method', 'TLSv1_2_method'):
try:
libssl[method] # pylint: disable=pointless-statement
# Found something - we'll let openssl autonegotiate and hope
# the server has disabled sslv2 and 3. best we can do.
PROTOCOL = ssl.PROTOCOL_SSLv23
break
except AttributeError:
pass
del libssl
# The following makes it easier for us to script updates of the bundled backports.ssl_match_hostname
# The bundled backports.ssl_match_hostname should really be moved into its own file for processing
_BUNDLED_METADATA = {"pypi_name": "backports.ssl_match_hostname", "version": "3.7.0.1"}
LOADED_VERIFY_LOCATIONS = set() # type: t.Set[str]
HAS_MATCH_HOSTNAME = True
try:
from ssl import match_hostname, CertificateError
except ImportError:
try:
from backports.ssl_match_hostname import match_hostname, CertificateError # type: ignore[assignment]
except ImportError:
HAS_MATCH_HOSTNAME = False
HAS_CRYPTOGRAPHY = True
try:
from cryptography import x509
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import UnsupportedAlgorithm
except ImportError:
HAS_CRYPTOGRAPHY = False
# Old import for GSSAPI authentication, this is not used in urls.py but kept for backwards compatibility.
try:
import urllib_gssapi # pylint: disable=unused-import
HAS_GSSAPI = True
except ImportError:
HAS_GSSAPI = False
GSSAPI_IMP_ERR = None
try:
import gssapi
class HTTPGSSAPIAuthHandler(BaseHandler):
""" Handles Negotiate/Kerberos support through the gssapi library. """
AUTH_HEADER_PATTERN = re.compile(r'(?:.*)\s*(Negotiate|Kerberos)\s*([^,]*),?', re.I)
handler_order = 480 # Handle before Digest authentication
def __init__(self, username=None, password=None):
self.username = username
self.password = password
self._context = None
def get_auth_value(self, headers):
auth_match = self.AUTH_HEADER_PATTERN.search(headers.get('www-authenticate', ''))
if auth_match:
return auth_match.group(1), base64.b64decode(auth_match.group(2))
def http_error_401(self, req, fp, code, msg, headers):
# If we've already attempted the auth and we've reached this again then there was a failure.
if self._context:
return
parsed = generic_urlparse(urlparse(req.get_full_url()))
auth_header = self.get_auth_value(headers)
if not auth_header:
return
auth_protocol, in_token = auth_header
username = None
if self.username:
username = gssapi.Name(self.username, name_type=gssapi.NameType.user)
if username and self.password:
if not hasattr(gssapi.raw, 'acquire_cred_with_password'):
raise NotImplementedError("Platform GSSAPI library does not support "
"gss_acquire_cred_with_password, cannot acquire GSSAPI credential with "
"explicit username and password.")
b_password = to_bytes(self.password, errors='surrogate_or_strict')
cred = gssapi.raw.acquire_cred_with_password(username, b_password, usage='initiate').creds
else:
cred = gssapi.Credentials(name=username, usage='initiate')
# Get the peer certificate for the channel binding token if possible (HTTPS). A bug on macOS causes the
# authentication to fail when the CBT is present. Just skip that platform.
cbt = None
cert = getpeercert(fp, True)
if cert and platform.system() != 'Darwin':
cert_hash = get_channel_binding_cert_hash(cert)
if cert_hash:
cbt = gssapi.raw.ChannelBindings(application_data=b"tls-server-end-point:" + cert_hash)
# TODO: We could add another option that is set to include the port in the SPN if desired in the future.
target = gssapi.Name("HTTP@%s" % parsed['hostname'], gssapi.NameType.hostbased_service)
self._context = gssapi.SecurityContext(usage="initiate", name=target, creds=cred, channel_bindings=cbt)
resp = None
while not self._context.complete:
out_token = self._context.step(in_token)
if not out_token:
break
auth_header = '%s %s' % (auth_protocol, to_native(base64.b64encode(out_token)))
req.add_unredirected_header('Authorization', auth_header)
resp = self.parent.open(req)
# The response could contain a token that the client uses to validate the server
auth_header = self.get_auth_value(resp.headers)
if not auth_header:
break
in_token = auth_header[1]
return resp
except ImportError:
GSSAPI_IMP_ERR = traceback.format_exc()
HTTPGSSAPIAuthHandler = None # type: types.ModuleType | None # type: ignore[no-redef]
if not HAS_MATCH_HOSTNAME:
# The following block of code is under the terms and conditions of the
# Python Software Foundation License
# The match_hostname() function from Python 3.4, essential when using SSL.
try:
# Divergence: Python-3.7+'s _ssl has this exception type but older Pythons do not
from _ssl import SSLCertVerificationError
CertificateError = SSLCertVerificationError # type: ignore[misc]
except ImportError:
class CertificateError(ValueError): # type: ignore[no-redef]
pass
def _dnsname_match(dn, hostname):
"""Matching according to RFC 6125, section 6.4.3
- Hostnames are compared lower case.
- For IDNA, both dn and hostname must be encoded as IDN A-label (ACE).
- Partial wildcards like 'www*.example.org', multiple wildcards, sole
wildcard or wildcards in labels other than the left-most label are not
supported and a CertificateError is raised.
- A wildcard must match at least one character.
"""
if not dn:
return False
wildcards = dn.count('*')
# speed up common case w/o wildcards
if not wildcards:
return dn.lower() == hostname.lower()
if wildcards > 1:
# Divergence .format() to percent formatting for Python < 2.6
raise CertificateError(
"too many wildcards in certificate DNS name: %s" % repr(dn))
dn_leftmost, sep, dn_remainder = dn.partition('.')
if '*' in dn_remainder:
# Only match wildcard in leftmost segment.
# Divergence .format() to percent formatting for Python < 2.6
raise CertificateError(
"wildcard can only be present in the leftmost label: "
"%s." % repr(dn))
if not sep:
# no right side
# Divergence .format() to percent formatting for Python < 2.6
raise CertificateError(
"sole wildcard without additional labels are not support: "
"%s." % repr(dn))
if dn_leftmost != '*':
# no partial wildcard matching
# Divergence .format() to percent formatting for Python < 2.6
raise CertificateError(
"partial wildcards in leftmost label are not supported: "
"%s." % repr(dn))
hostname_leftmost, sep, hostname_remainder = hostname.partition('.')
if not hostname_leftmost or not sep:
# wildcard must match at least one char
return False
return dn_remainder.lower() == hostname_remainder.lower()
def _inet_paton(ipname):
"""Try to convert an IP address to packed binary form
Supports IPv4 addresses on all platforms and IPv6 on platforms with IPv6
support.
"""
# inet_aton() also accepts strings like '1'
# Divergence: We make sure we have native string type for all python versions
try:
b_ipname = to_bytes(ipname, errors='strict')
except UnicodeError:
raise ValueError("%s must be an all-ascii string." % repr(ipname))
# Set ipname in native string format
if sys.version_info < (3,):
n_ipname = b_ipname
else:
n_ipname = ipname
if n_ipname.count('.') == 3:
try:
return socket.inet_aton(n_ipname)
# Divergence: OSError on late python3. socket.error earlier.
# Null bytes generate ValueError on python3(we want to raise
# ValueError anyway), TypeError # earlier
except (OSError, socket.error, TypeError):
pass
try:
return socket.inet_pton(socket.AF_INET6, n_ipname)
# Divergence: OSError on late python3. socket.error earlier.
# Null bytes generate ValueError on python3(we want to raise
# ValueError anyway), TypeError # earlier
except (OSError, socket.error, TypeError):
# Divergence .format() to percent formatting for Python < 2.6
raise ValueError("%s is neither an IPv4 nor an IP6 "
"address." % repr(ipname))
except AttributeError:
# AF_INET6 not available
pass
# Divergence .format() to percent formatting for Python < 2.6
raise ValueError("%s is not an IPv4 address." % repr(ipname))
def _ipaddress_match(ipname, host_ip):
"""Exact matching of IP addresses.
RFC 6125 explicitly doesn't define an algorithm for this
(section 1.7.2 - "Out of Scope").
"""
# OpenSSL may add a trailing newline to a subjectAltName's IP address
ip = _inet_paton(ipname.rstrip())
return ip == host_ip
def match_hostname(cert, hostname): # type: ignore[misc]
"""Verify that *cert* (in decoded format as returned by
SSLSocket.getpeercert()) matches the *hostname*. RFC 2818 and RFC 6125
rules are followed.
The function matches IP addresses rather than dNSNames if hostname is a
valid ipaddress string. IPv4 addresses are supported on all platforms.
IPv6 addresses are supported on platforms with IPv6 support (AF_INET6
and inet_pton).
CertificateError is raised on failure. On success, the function
returns nothing.
"""
if not cert:
raise ValueError("empty or no certificate, match_hostname needs a "
"SSL socket or SSL context with either "
"CERT_OPTIONAL or CERT_REQUIRED")
try:
# Divergence: Deal with hostname as bytes
host_ip = _inet_paton(to_text(hostname, errors='strict'))
except UnicodeError:
# Divergence: Deal with hostname as byte strings.
# IP addresses should be all ascii, so we consider it not
# an IP address if this fails
host_ip = None
except ValueError:
# Not an IP address (common case)
host_ip = None
dnsnames = []
san = cert.get('subjectAltName', ())
for key, value in san:
if key == 'DNS':
if host_ip is None and _dnsname_match(value, hostname):
return
dnsnames.append(value)
elif key == 'IP Address':
if host_ip is not None and _ipaddress_match(value, host_ip):
return
dnsnames.append(value)
if not dnsnames:
# The subject is only checked when there is no dNSName entry
# in subjectAltName
for sub in cert.get('subject', ()):
for key, value in sub:
# XXX according to RFC 2818, the most specific Common Name
# must be used.
if key == 'commonName':
if _dnsname_match(value, hostname):
return
dnsnames.append(value)
if len(dnsnames) > 1:
raise CertificateError("hostname %r doesn't match either of %s" % (hostname, ', '.join(map(repr, dnsnames))))
elif len(dnsnames) == 1:
raise CertificateError("hostname %r doesn't match %r" % (hostname, dnsnames[0]))
else:
raise CertificateError("no appropriate commonName or subjectAltName fields were found")
# End of Python Software Foundation Licensed code
HAS_MATCH_HOSTNAME = True
# This is a dummy cacert provided for macOS since you need at least 1
# ca cert, regardless of validity, for Python on macOS to use the
# keychain functionality in OpenSSL for validating SSL certificates.
# See: http://mercurial.selenic.com/wiki/CACertificates#Mac_OS_X_10.6_and_higher
b_DUMMY_CA_CERT = b"""-----BEGIN CERTIFICATE-----
MIICvDCCAiWgAwIBAgIJAO8E12S7/qEpMA0GCSqGSIb3DQEBBQUAMEkxCzAJBgNV
BAYTAlVTMRcwFQYDVQQIEw5Ob3J0aCBDYXJvbGluYTEPMA0GA1UEBxMGRHVyaGFt
MRAwDgYDVQQKEwdBbnNpYmxlMB4XDTE0MDMxODIyMDAyMloXDTI0MDMxNTIyMDAy
MlowSTELMAkGA1UEBhMCVVMxFzAVBgNVBAgTDk5vcnRoIENhcm9saW5hMQ8wDQYD
VQQHEwZEdXJoYW0xEDAOBgNVBAoTB0Fuc2libGUwgZ8wDQYJKoZIhvcNAQEBBQAD
gY0AMIGJAoGBANtvpPq3IlNlRbCHhZAcP6WCzhc5RbsDqyh1zrkmLi0GwcQ3z/r9
gaWfQBYhHpobK2Tiq11TfraHeNB3/VfNImjZcGpN8Fl3MWwu7LfVkJy3gNNnxkA1
4Go0/LmIvRFHhbzgfuo9NFgjPmmab9eqXJceqZIlz2C8xA7EeG7ku0+vAgMBAAGj
gaswgagwHQYDVR0OBBYEFPnN1nPRqNDXGlCqCvdZchRNi/FaMHkGA1UdIwRyMHCA
FPnN1nPRqNDXGlCqCvdZchRNi/FaoU2kSzBJMQswCQYDVQQGEwJVUzEXMBUGA1UE
CBMOTm9ydGggQ2Fyb2xpbmExDzANBgNVBAcTBkR1cmhhbTEQMA4GA1UEChMHQW5z
aWJsZYIJAO8E12S7/qEpMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEA
MUB80IR6knq9K/tY+hvPsZer6eFMzO3JGkRFBh2kn6JdMDnhYGX7AXVHGflrwNQH
qFy+aenWXsC0ZvrikFxbQnX8GVtDADtVznxOi7XzFw7JOxdsVrpXgSN0eh0aMzvV
zKPZsZ2miVGclicJHzm5q080b1p/sZtuKIEZk6vZqEg=
-----END CERTIFICATE-----
"""
b_PEM_CERT_RE = re.compile(
br'^-----BEGIN CERTIFICATE-----\n.+?-----END CERTIFICATE-----$',
flags=re.M | re.S
)
#
# Exceptions
#
class ConnectionError(Exception):
"""Failed to connect to the server"""
pass
class ProxyError(ConnectionError):
"""Failure to connect because of a proxy"""
pass
class SSLValidationError(ConnectionError):
"""Failure to connect due to SSL validation failing"""
pass
class NoSSLError(SSLValidationError):
"""Needed to connect to an HTTPS url but no ssl library available to verify the certificate"""
pass
class MissingModuleError(Exception):
"""Failed to import 3rd party module required by the caller"""
def __init__(self, message, import_traceback, module=None):
super(MissingModuleError, self).__init__(message)
self.import_traceback = import_traceback
self.module = module
# Some environments (Google Compute Engine's CoreOS deploys) do not compile
# against openssl and thus do not have any HTTPS support.
CustomHTTPSConnection = None
CustomHTTPSHandler = None
HTTPSClientAuthHandler = None
UnixHTTPSConnection = None
if hasattr(httplib, 'HTTPSConnection') and hasattr(urllib_request, 'HTTPSHandler'):
class CustomHTTPSConnection(httplib.HTTPSConnection): # type: ignore[no-redef]
def __init__(self, *args, **kwargs):
httplib.HTTPSConnection.__init__(self, *args, **kwargs)
self.context = None
if HAS_SSLCONTEXT:
self.context = self._context
elif HAS_URLLIB3_PYOPENSSLCONTEXT:
self.context = self._context = PyOpenSSLContext(PROTOCOL)
if self.context and self.cert_file:
self.context.load_cert_chain(self.cert_file, self.key_file)
def connect(self):
"Connect to a host on a given (SSL) port."
if hasattr(self, 'source_address'):
sock = socket.create_connection((self.host, self.port), self.timeout, self.source_address)
else:
sock = socket.create_connection((self.host, self.port), self.timeout)
server_hostname = self.host
# Note: self._tunnel_host is not available on py < 2.6 but this code
# isn't used on py < 2.6 (lack of create_connection)
if self._tunnel_host:
self.sock = sock
self._tunnel()
server_hostname = self._tunnel_host
if HAS_SSLCONTEXT or HAS_URLLIB3_PYOPENSSLCONTEXT:
self.sock = self.context.wrap_socket(sock, server_hostname=server_hostname)
elif HAS_URLLIB3_SSL_WRAP_SOCKET:
self.sock = ssl_wrap_socket(sock, keyfile=self.key_file, cert_reqs=ssl.CERT_NONE, # pylint: disable=used-before-assignment
certfile=self.cert_file, ssl_version=PROTOCOL, server_hostname=server_hostname)
else:
self.sock = ssl.wrap_socket(sock, keyfile=self.key_file, certfile=self.cert_file, ssl_version=PROTOCOL)
class CustomHTTPSHandler(urllib_request.HTTPSHandler): # type: ignore[no-redef]
def https_open(self, req):
kwargs = {}
if HAS_SSLCONTEXT:
kwargs['context'] = self._context
return self.do_open(
functools.partial(
CustomHTTPSConnection,
**kwargs
),
req
)
https_request = AbstractHTTPHandler.do_request_
class HTTPSClientAuthHandler(urllib_request.HTTPSHandler): # type: ignore[no-redef]
'''Handles client authentication via cert/key
This is a fairly lightweight extension on HTTPSHandler, and can be used
in place of HTTPSHandler
'''
def __init__(self, client_cert=None, client_key=None, unix_socket=None, **kwargs):
urllib_request.HTTPSHandler.__init__(self, **kwargs)
self.client_cert = client_cert
self.client_key = client_key
self._unix_socket = unix_socket
def https_open(self, req):
return self.do_open(self._build_https_connection, req)
def _build_https_connection(self, host, **kwargs):
kwargs.update({
'cert_file': self.client_cert,
'key_file': self.client_key,
})
try:
kwargs['context'] = self._context
except AttributeError:
pass
if self._unix_socket:
return UnixHTTPSConnection(self._unix_socket)(host, **kwargs)
if not HAS_SSLCONTEXT:
return CustomHTTPSConnection(host, **kwargs)
return httplib.HTTPSConnection(host, **kwargs)
@contextmanager
def unix_socket_patch_httpconnection_connect():
'''Monkey patch ``httplib.HTTPConnection.connect`` to be ``UnixHTTPConnection.connect``
so that when calling ``super(UnixHTTPSConnection, self).connect()`` we get the
correct behavior of creating self.sock for the unix socket
'''
_connect = httplib.HTTPConnection.connect
httplib.HTTPConnection.connect = UnixHTTPConnection.connect
yield
httplib.HTTPConnection.connect = _connect
class UnixHTTPSConnection(httplib.HTTPSConnection): # type: ignore[no-redef]
def __init__(self, unix_socket):
self._unix_socket = unix_socket
def connect(self):
# This method exists simply to ensure we monkeypatch
# httplib.HTTPConnection.connect to call UnixHTTPConnection.connect
with unix_socket_patch_httpconnection_connect():
# Disable pylint check for the super() call. It complains about UnixHTTPSConnection
# being a NoneType because of the initial definition above, but it won't actually
# be a NoneType when this code runs
# pylint: disable=bad-super-call
super(UnixHTTPSConnection, self).connect()
def __call__(self, *args, **kwargs):
httplib.HTTPSConnection.__init__(self, *args, **kwargs)
return self
class UnixHTTPConnection(httplib.HTTPConnection):
'''Handles http requests to a unix socket file'''
def __init__(self, unix_socket):
self._unix_socket = unix_socket
def connect(self):
self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
self.sock.connect(self._unix_socket)
except OSError as e:
raise OSError('Invalid Socket File (%s): %s' % (self._unix_socket, e))
if self.timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
self.sock.settimeout(self.timeout)
def __call__(self, *args, **kwargs):
httplib.HTTPConnection.__init__(self, *args, **kwargs)
return self
class UnixHTTPHandler(urllib_request.HTTPHandler):
'''Handler for Unix urls'''
def __init__(self, unix_socket, **kwargs):
urllib_request.HTTPHandler.__init__(self, **kwargs)
self._unix_socket = unix_socket
def http_open(self, req):
return self.do_open(UnixHTTPConnection(self._unix_socket), req)
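# Illustrative usage (a sketch; the socket path is a placeholder): requests to a
# unix socket can be routed by installing this handler:
#   opener = urllib_request.build_opener(UnixHTTPHandler('/var/run/docker.sock'))
#   opener.open('http://localhost/version')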
class ParseResultDottedDict(dict):
'''
A dict that acts similarly to the ParseResult named tuple from urllib
'''
def __init__(self, *args, **kwargs):
super(ParseResultDottedDict, self).__init__(*args, **kwargs)
self.__dict__ = self
def as_list(self):
'''
Generate a list from this dict, that looks like the ParseResult named tuple
'''
return [self.get(k, None) for k in ('scheme', 'netloc', 'path', 'params', 'query', 'fragment')]
def generic_urlparse(parts):
'''
Returns a dictionary of url parts as parsed by urlparse,
but accounts for the fact that older versions of that
library do not support named attributes (i.e. .netloc)
'''
generic_parts = ParseResultDottedDict()
if hasattr(parts, 'netloc'):
# urlparse is newer, just read the fields straight
# from the parts object
generic_parts['scheme'] = parts.scheme
generic_parts['netloc'] = parts.netloc
generic_parts['path'] = parts.path
generic_parts['params'] = parts.params
generic_parts['query'] = parts.query
generic_parts['fragment'] = parts.fragment
generic_parts['username'] = parts.username
generic_parts['password'] = parts.password
hostname = parts.hostname
if hostname and hostname[0] == '[' and '[' in parts.netloc and ']' in parts.netloc:
# Py2.6 doesn't parse IPv6 addresses correctly
hostname = parts.netloc.split(']')[0][1:].lower()
generic_parts['hostname'] = hostname
try:
port = parts.port
except ValueError:
# Py2.6 doesn't parse IPv6 addresses correctly
netloc = parts.netloc.split('@')[-1].split(']')[-1]
if ':' in netloc:
port = netloc.split(':')[1]
if port:
port = int(port)
else:
port = None
generic_parts['port'] = port
else:
# we have to use indexes, and then parse out
# the other parts not supported by indexing
generic_parts['scheme'] = parts[0]
generic_parts['netloc'] = parts[1]
generic_parts['path'] = parts[2]
generic_parts['params'] = parts[3]
generic_parts['query'] = parts[4]
generic_parts['fragment'] = parts[5]
# get the username, password, etc.
try:
netloc_re = re.compile(r'^((?:\w)+(?::(?:\w)+)?@)?([A-Za-z0-9.-]+)(:\d+)?$')
match = netloc_re.match(parts[1])
auth = match.group(1)
hostname = match.group(2)
port = match.group(3)
if port:
# the capture group for the port will include the ':',
# so remove it and convert the port to an integer
port = int(port[1:])
if auth:
# the capture group above includes the @, so remove it
# and then split it up based on the first ':' found
auth = auth[:-1]
username, password = auth.split(':', 1)
else:
username = password = None
generic_parts['username'] = username
generic_parts['password'] = password
generic_parts['hostname'] = hostname
generic_parts['port'] = port
except Exception:
generic_parts['username'] = None
generic_parts['password'] = None
generic_parts['hostname'] = parts[1]
generic_parts['port'] = None
return generic_parts
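# Illustrative result (a sketch):
#   parts = generic_urlparse(urlparse('https://user:[email protected]:8443/path'))
#   parts.hostname == 'host.example.com'; parts.port == 8443; parts.username == 'user'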
def extract_pem_certs(b_data):
for match in b_PEM_CERT_RE.finditer(b_data):
yield match.group(0)
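# Illustrative usage (a sketch; the bundle path is a placeholder): splitting a CA
# bundle into individual PEM certificates:
#   with open('/etc/ssl/certs/ca-bundle.pem', 'rb') as f:
#       certs = list(extract_pem_certs(f.read()))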
def get_response_filename(response):
url = response.geturl()
path = urlparse(url)[2]
filename = os.path.basename(path.rstrip('/')) or None
if filename:
filename = unquote(filename)
return response.headers.get_param('filename', header='content-disposition') or filename
def parse_content_type(response):
if PY2:
get_type = response.headers.gettype
get_param = response.headers.getparam
else:
get_type = response.headers.get_content_type
get_param = response.headers.get_param
content_type = (get_type() or 'application/octet-stream').split(',')[0]
main_type, sub_type = content_type.split('/')
charset = (get_param('charset') or 'utf-8').split(',')[0]
return content_type, main_type, sub_type, charset
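# Illustrative result (a sketch) for a response carrying
# 'Content-Type: text/html; charset=utf-8':
#   parse_content_type(response) == ('text/html', 'text', 'html', 'utf-8')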
class GzipDecodedReader(GzipFile):
"""A file-like object to decode a response encoded with the gzip
method, as described in RFC 1952.
Largely copied from ``xmlrpclib``/``xmlrpc.client``
"""
def __init__(self, fp):
if not HAS_GZIP:
raise MissingModuleError(self.missing_gzip_error(), import_traceback=GZIP_IMP_ERR)
if PY3:
self._io = fp
else:
# Py2 ``HTTPResponse``/``addinfourl`` doesn't support all of the file object
# functionality GzipFile requires
self._io = io.BytesIO()
for block in iter(functools.partial(fp.read, 65536), b''):
self._io.write(block)
self._io.seek(0)
fp.close()
gzip.GzipFile.__init__(self, mode='rb', fileobj=self._io) # pylint: disable=non-parent-init-called
def close(self):
try:
gzip.GzipFile.close(self)
finally:
self._io.close()
@staticmethod
def missing_gzip_error():
return missing_required_lib(
'gzip',
reason='to decompress gzip encoded responses. '
'Set "decompress" to False, to prevent attempting auto decompression'
)
class RequestWithMethod(urllib_request.Request):
'''
Workaround for using DELETE/PUT/etc with urllib2
Originally contained in library/net_infrastructure/dnsmadeeasy
'''
def __init__(self, url, method, data=None, headers=None, origin_req_host=None, unverifiable=True):
if headers is None:
headers = {}
self._method = method.upper()
urllib_request.Request.__init__(self, url, data, headers, origin_req_host, unverifiable)
def get_method(self):
if self._method:
return self._method
else:
return urllib_request.Request.get_method(self)
def RedirectHandlerFactory(follow_redirects=None, validate_certs=True, ca_path=None, ciphers=None):
"""This is a class factory that closes over the value of
``follow_redirects`` so that the RedirectHandler class has access to
that value without having to use globals, and potentially cause problems
where ``open_url`` or ``fetch_url`` are used multiple times in a module.
"""
class RedirectHandler(urllib_request.HTTPRedirectHandler):
"""This is an implementation of a RedirectHandler to match the
functionality provided by httplib2. It will utilize the value of
``follow_redirects`` that is passed into ``RedirectHandlerFactory``
to determine how redirects should be handled in urllib2.
"""
def redirect_request(self, req, fp, code, msg, hdrs, newurl):
if not any((HAS_SSLCONTEXT, HAS_URLLIB3_PYOPENSSLCONTEXT)):
handler = maybe_add_ssl_handler(newurl, validate_certs, ca_path=ca_path, ciphers=ciphers)
if handler:
urllib_request._opener.add_handler(handler)
# Preserve urllib2 compatibility
if follow_redirects == 'urllib2':
return urllib_request.HTTPRedirectHandler.redirect_request(self, req, fp, code, msg, hdrs, newurl)
# Handle disabled redirects
elif follow_redirects in ['no', 'none', False]:
raise urllib_error.HTTPError(newurl, code, msg, hdrs, fp)
method = req.get_method()
# Handle non-redirect HTTP status or invalid follow_redirects
if follow_redirects in ['all', 'yes', True]:
if code < 300 or code >= 400:
raise urllib_error.HTTPError(req.get_full_url(), code, msg, hdrs, fp)
elif follow_redirects == 'safe':
if code < 300 or code >= 400 or method not in ('GET', 'HEAD'):
raise urllib_error.HTTPError(req.get_full_url(), code, msg, hdrs, fp)
else:
raise urllib_error.HTTPError(req.get_full_url(), code, msg, hdrs, fp)
try:
# Python 2-3.3
data = req.get_data()
origin_req_host = req.get_origin_req_host()
except AttributeError:
# Python 3.4+
data = req.data
origin_req_host = req.origin_req_host
# Be lenient with URIs containing a space
newurl = newurl.replace(' ', '%20')
# Support redirect with payload and original headers
if code in (307, 308):
# Preserve payload and headers
headers = req.headers
else:
# Do not preserve payload and filter headers
data = None
headers = dict((k, v) for k, v in req.headers.items()
if k.lower() not in ("content-length", "content-type", "transfer-encoding"))
# http://tools.ietf.org/html/rfc7231#section-6.4.4
if code == 303 and method != 'HEAD':
method = 'GET'
# Do what the browsers do, despite standards...
# First, turn 302s into GETs.
if code == 302 and method != 'HEAD':
method = 'GET'
# Second, if a POST is responded to with a 301, turn it into a GET.
if code == 301 and method == 'POST':
method = 'GET'
return RequestWithMethod(newurl,
method=method,
headers=headers,
data=data,
origin_req_host=origin_req_host,
unverifiable=True,
)
return RedirectHandler
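# Illustrative usage (a sketch): the factory returns a handler *class*, which
# urllib instantiates itself when building an opener:
#   handler = RedirectHandlerFactory(follow_redirects='safe', validate_certs=True)
#   opener = urllib_request.build_opener(handler)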
def build_ssl_validation_error(hostname, port, paths, exc=None):
'''Intelligently build out the SSLValidationError based on what support
you have installed
'''
msg = [
('Failed to validate the SSL certificate for %s:%s.'
' Make sure your managed systems have a valid CA'
' certificate installed.')
]
if not HAS_SSLCONTEXT:
msg.append('If the website serving the url uses SNI you need'
' python >= 2.7.9 on your managed machine')
msg.append(' (the python executable used (%s) is version: %s)' %
(sys.executable, ''.join(sys.version.splitlines())))
if not HAS_URLLIB3_PYOPENSSLCONTEXT and not HAS_URLLIB3_SSL_WRAP_SOCKET:
msg.append('or you can install the `urllib3`, `pyOpenSSL`,'
' `ndg-httpsclient`, and `pyasn1` python modules')
msg.append('to perform SNI verification in python >= 2.6.')
msg.append('You can use validate_certs=False if you do'
' not need to confirm the server\'s identity but this is'
' unsafe and not recommended.'
' Paths checked for this platform: %s.')
if exc:
msg.append('The exception msg was: %s.' % to_native(exc))
raise SSLValidationError(' '.join(msg) % (hostname, port, ", ".join(paths)))
def atexit_remove_file(filename):
if os.path.exists(filename):
try:
os.unlink(filename)
except Exception:
# just ignore if we cannot delete, things should be ok
pass
def make_context(cafile=None, cadata=None, ciphers=None, validate_certs=True):
if ciphers is None:
ciphers = []
if not is_sequence(ciphers):
raise TypeError('Ciphers must be a list. Got %s.' % ciphers.__class__.__name__)
if HAS_SSLCONTEXT:
context = create_default_context(cafile=cafile)
elif HAS_URLLIB3_PYOPENSSLCONTEXT:
context = PyOpenSSLContext(PROTOCOL)
else:
raise NotImplementedError('Host libraries are too old to support creating an sslcontext')
if not validate_certs:
if ssl.OP_NO_SSLv2:
context.options |= ssl.OP_NO_SSLv2
context.options |= ssl.OP_NO_SSLv3
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE
if validate_certs and any((cafile, cadata)):
context.load_verify_locations(cafile=cafile, cadata=cadata)
if ciphers:
context.set_ciphers(':'.join(map(to_native, ciphers)))
return context
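# A minimal usage sketch for the helper above (the CA path and cipher name are
# illustrative assumptions, not values this module ships):
#   ctx = make_context(cafile='/etc/ssl/certs/ca-bundle.crt',
#                      ciphers=['ECDHE-RSA-AES128-SHA256'])
# Passing validate_certs=False instead returns a context with hostname
# checking and peer verification disabled.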
def get_ca_certs(cafile=None):
# tries to find a valid CA cert in one of the
# standard locations for the current distribution
cadata = bytearray()
paths_checked = []
if cafile:
paths_checked = [cafile]
with open(to_bytes(cafile, errors='surrogate_or_strict'), 'rb') as f:
if HAS_SSLCONTEXT:
for b_pem in extract_pem_certs(f.read()):
cadata.extend(
ssl.PEM_cert_to_DER_cert(
to_native(b_pem, errors='surrogate_or_strict')
)
)
return cafile, cadata, paths_checked
if not HAS_SSLCONTEXT:
paths_checked.append('/etc/ssl/certs')
system = to_text(platform.system(), errors='surrogate_or_strict')
# build a list of paths to check for .crt/.pem files
# based on the platform type
if system == u'Linux':
paths_checked.append('/etc/pki/ca-trust/extracted/pem')
paths_checked.append('/etc/pki/tls/certs')
paths_checked.append('/usr/share/ca-certificates/cacert.org')
elif system == u'FreeBSD':
paths_checked.append('/usr/local/share/certs')
elif system == u'OpenBSD':
paths_checked.append('/etc/ssl')
elif system == u'NetBSD':
paths_checked.append('/etc/openssl/certs')
elif system == u'SunOS':
paths_checked.append('/opt/local/etc/openssl/certs')
elif system == u'AIX':
paths_checked.append('/var/ssl/certs')
paths_checked.append('/opt/freeware/etc/ssl/certs')
# fall back to a user-deployed cert in a standard
# location if the OS platform one is not available
paths_checked.append('/etc/ansible')
tmp_path = None
if not HAS_SSLCONTEXT:
tmp_fd, tmp_path = tempfile.mkstemp()
atexit.register(atexit_remove_file, tmp_path)
# Write the dummy ca cert if we are running on macOS
if system == u'Darwin':
if HAS_SSLCONTEXT:
cadata.extend(
ssl.PEM_cert_to_DER_cert(
to_native(b_DUMMY_CA_CERT, errors='surrogate_or_strict')
)
)
else:
os.write(tmp_fd, b_DUMMY_CA_CERT)
# Default Homebrew path for OpenSSL certs
paths_checked.append('/usr/local/etc/openssl')
# for all of the paths, find any .crt or .pem files
# and compile them into single temp file for use
# in the ssl check to speed up the test
for path in paths_checked:
if not os.path.isdir(path):
continue
dir_contents = os.listdir(path)
for f in dir_contents:
full_path = os.path.join(path, f)
if os.path.isfile(full_path) and os.path.splitext(f)[1] in ('.crt', '.pem'):
try:
if full_path not in LOADED_VERIFY_LOCATIONS:
with open(full_path, 'rb') as cert_file:
b_cert = cert_file.read()
if HAS_SSLCONTEXT:
try:
for b_pem in extract_pem_certs(b_cert):
cadata.extend(
ssl.PEM_cert_to_DER_cert(
to_native(b_pem, errors='surrogate_or_strict')
)
)
except Exception:
continue
else:
os.write(tmp_fd, b_cert)
os.write(tmp_fd, b'\n')
except (OSError, IOError):
pass
if HAS_SSLCONTEXT:
default_verify_paths = ssl.get_default_verify_paths()
paths_checked[:0] = [default_verify_paths.capath]
else:
os.close(tmp_fd)
return (tmp_path, cadata, paths_checked)
class SSLValidationHandler(urllib_request.BaseHandler):
'''
A custom handler class for SSL validation.
Based on:
http://stackoverflow.com/questions/1087227/validate-ssl-certificates-with-python
http://techknack.net/python-urllib2-handlers/
'''
CONNECT_COMMAND = "CONNECT %s:%s HTTP/1.0\r\n"
def __init__(self, hostname, port, ca_path=None, ciphers=None, validate_certs=True):
self.hostname = hostname
self.port = port
self.ca_path = ca_path
self.ciphers = ciphers
self.validate_certs = validate_certs
def get_ca_certs(self):
return get_ca_certs(self.ca_path)
def validate_proxy_response(self, response, valid_codes=None):
'''
make sure we get back a valid code from the proxy
'''
valid_codes = [200] if valid_codes is None else valid_codes
try:
(http_version, resp_code, msg) = re.match(br'(HTTP/\d\.\d) (\d\d\d) (.*)', response).groups()
if int(resp_code) not in valid_codes:
raise Exception
except Exception:
raise ProxyError('Connection to proxy failed')
def detect_no_proxy(self, url):
'''
Detect if the 'no_proxy' environment variable is set and honor those locations.
'''
env_no_proxy = os.environ.get('no_proxy')
if env_no_proxy:
env_no_proxy = env_no_proxy.split(',')
netloc = urlparse(url).netloc
for host in env_no_proxy:
if netloc.endswith(host) or netloc.split(':')[0].endswith(host):
# Our requested URL matches something in no_proxy, so don't
# use the proxy for this
return False
return True
def make_context(self, cafile, cadata, ciphers=None, validate_certs=True):
cafile = self.ca_path or cafile
if self.ca_path:
cadata = None
else:
cadata = cadata or None
return make_context(cafile=cafile, cadata=cadata, ciphers=ciphers, validate_certs=validate_certs)
def http_request(self, req):
tmp_ca_cert_path, cadata, paths_checked = self.get_ca_certs()
# Detect if 'no_proxy' environment variable is set and if our URL is included
use_proxy = self.detect_no_proxy(req.get_full_url())
https_proxy = os.environ.get('https_proxy')
context = None
try:
context = self.make_context(tmp_ca_cert_path, cadata, ciphers=self.ciphers, validate_certs=self.validate_certs)
except NotImplementedError:
# We'll make do with no context below
pass
try:
if use_proxy and https_proxy:
proxy_parts = generic_urlparse(urlparse(https_proxy))
port = proxy_parts.get('port') or 443
proxy_hostname = proxy_parts.get('hostname', None)
if proxy_hostname is None or proxy_parts.get('scheme') == '':
raise ProxyError("Failed to parse https_proxy environment variable."
" Please make sure you export https proxy as 'https_proxy=<SCHEME>://<IP_ADDRESS>:<PORT>'")
s = socket.create_connection((proxy_hostname, port))
if proxy_parts.get('scheme') == 'http':
s.sendall(to_bytes(self.CONNECT_COMMAND % (self.hostname, self.port), errors='surrogate_or_strict'))
if proxy_parts.get('username'):
credentials = "%s:%s" % (proxy_parts.get('username', ''), proxy_parts.get('password', ''))
s.sendall(b'Proxy-Authorization: Basic %s\r\n' % base64.b64encode(to_bytes(credentials, errors='surrogate_or_strict')).strip())
s.sendall(b'\r\n')
connect_result = b""
while connect_result.find(b"\r\n\r\n") <= 0:
connect_result += s.recv(4096)
# 128 kilobytes of headers should be enough for everyone.
if len(connect_result) > 131072:
raise ProxyError('Proxy sent too verbose headers. Only 128KiB allowed.')
self.validate_proxy_response(connect_result)
if context:
ssl_s = context.wrap_socket(s, server_hostname=self.hostname)
elif HAS_URLLIB3_SSL_WRAP_SOCKET:
ssl_s = ssl_wrap_socket(s, ca_certs=tmp_ca_cert_path, cert_reqs=ssl.CERT_REQUIRED, ssl_version=PROTOCOL, server_hostname=self.hostname)
else:
ssl_s = ssl.wrap_socket(s, ca_certs=tmp_ca_cert_path, cert_reqs=ssl.CERT_REQUIRED, ssl_version=PROTOCOL)
match_hostname(ssl_s.getpeercert(), self.hostname)
else:
raise ProxyError('Unsupported proxy scheme: %s. Currently ansible only supports HTTP proxies.' % proxy_parts.get('scheme'))
else:
s = socket.create_connection((self.hostname, self.port))
if context:
ssl_s = context.wrap_socket(s, server_hostname=self.hostname)
elif HAS_URLLIB3_SSL_WRAP_SOCKET:
ssl_s = ssl_wrap_socket(s, ca_certs=tmp_ca_cert_path, cert_reqs=ssl.CERT_REQUIRED, ssl_version=PROTOCOL, server_hostname=self.hostname)
else:
ssl_s = ssl.wrap_socket(s, ca_certs=tmp_ca_cert_path, cert_reqs=ssl.CERT_REQUIRED, ssl_version=PROTOCOL)
match_hostname(ssl_s.getpeercert(), self.hostname)
# close the ssl connection
# ssl_s.unwrap()
s.close()
except (ssl.SSLError, CertificateError) as e:
build_ssl_validation_error(self.hostname, self.port, paths_checked, e)
except socket.error as e:
raise ConnectionError('Failed to connect to %s at port %s: %s' % (self.hostname, self.port, to_native(e)))
return req
https_request = http_request
def maybe_add_ssl_handler(url, validate_certs, ca_path=None, ciphers=None):
parsed = generic_urlparse(urlparse(url))
if parsed.scheme == 'https' and validate_certs:
if not HAS_SSL:
raise NoSSLError('SSL validation is not available in your version of python. You can use validate_certs=False,'
' however this is unsafe and not recommended')
# create the SSL validation handler
return SSLValidationHandler(parsed.hostname, parsed.port or 443, ca_path=ca_path, ciphers=ciphers, validate_certs=validate_certs)
def getpeercert(response, binary_form=False):
""" Attempt to get the peer certificate of the response from urlopen. """
# The response from urllib2.open() is different across Python 2 and 3
if PY3:
socket = response.fp.raw._sock
else:
socket = response.fp._sock.fp._sock
try:
return socket.getpeercert(binary_form)
except AttributeError:
pass # Not HTTPS
def get_channel_binding_cert_hash(certificate_der):
""" Gets the channel binding app data for a TLS connection using the peer cert. """
if not HAS_CRYPTOGRAPHY:
return
# Logic documented in RFC 5929 section 4 https://tools.ietf.org/html/rfc5929#section-4
cert = x509.load_der_x509_certificate(certificate_der, default_backend())
hash_algorithm = None
try:
hash_algorithm = cert.signature_hash_algorithm
except UnsupportedAlgorithm:
pass
# If the signature hash algorithm is unknown/unsupported or md5/sha1 we must use SHA256.
if not hash_algorithm or hash_algorithm.name in ['md5', 'sha1']:
hash_algorithm = hashes.SHA256()
digest = hashes.Hash(hash_algorithm, default_backend())
digest.update(certificate_der)
return digest.finalize()
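# Illustrative sketch of composing the two helpers above (``response`` is an
# assumed result of ``open_url``/``urlopen``): fetch the peer certificate in
# DER form, then derive the RFC 5929 channel binding token from it.
#   cert_der = getpeercert(response, binary_form=True)
#   if cert_der:
#       cbt = get_channel_binding_cert_hash(cert_der)  # None without cryptography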
def rfc2822_date_string(timetuple, zone='-0000'):
"""Accepts a timetuple and optional zone which defaults to ``-0000``
and returns a date string as specified by RFC 2822, e.g.:
Fri, 09 Nov 2001 01:08:47 -0000
Copied from email.utils.formatdate and modified for separate use
"""
return '%s, %02d %s %04d %02d:%02d:%02d %s' % (
['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'][timetuple[6]],
timetuple[2],
['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'][timetuple[1] - 1],
timetuple[0], timetuple[3], timetuple[4], timetuple[5],
zone)
class Request:
def __init__(self, headers=None, use_proxy=True, force=False, timeout=10, validate_certs=True,
url_username=None, url_password=None, http_agent=None, force_basic_auth=False,
follow_redirects='urllib2', client_cert=None, client_key=None, cookies=None, unix_socket=None,
ca_path=None, unredirected_headers=None, decompress=True, ciphers=None, use_netrc=True):
"""This class works somewhat similarly to the ``Session`` class of from requests
by defining a cookiejar that can be used across requests as well as cascaded defaults that
can apply to repeated requests
For documentation of params, see ``Request.open``
>>> from ansible.module_utils.urls import Request
>>> r = Request()
>>> r.open('GET', 'http://httpbin.org/cookies/set?k1=v1').read()
'{\n "cookies": {\n "k1": "v1"\n }\n}\n'
>>> r = Request(url_username='user', url_password='passwd')
>>> r.open('GET', 'http://httpbin.org/basic-auth/user/passwd').read()
'{\n "authenticated": true, \n "user": "user"\n}\n'
>>> r = Request(headers=dict(foo='bar'))
>>> r.open('GET', 'http://httpbin.org/get', headers=dict(baz='qux')).read()
"""
self.headers = headers or {}
if not isinstance(self.headers, dict):
raise ValueError("headers must be a dict: %r" % self.headers)
self.use_proxy = use_proxy
self.force = force
self.timeout = timeout
self.validate_certs = validate_certs
self.url_username = url_username
self.url_password = url_password
self.http_agent = http_agent
self.force_basic_auth = force_basic_auth
self.follow_redirects = follow_redirects
self.client_cert = client_cert
self.client_key = client_key
self.unix_socket = unix_socket
self.ca_path = ca_path
self.unredirected_headers = unredirected_headers
self.decompress = decompress
self.ciphers = ciphers
self.use_netrc = use_netrc
if isinstance(cookies, cookiejar.CookieJar):
self.cookies = cookies
else:
self.cookies = cookiejar.CookieJar()
def _fallback(self, value, fallback):
if value is None:
return fallback
return value
def open(self, method, url, data=None, headers=None, use_proxy=None,
force=None, last_mod_time=None, timeout=None, validate_certs=None,
url_username=None, url_password=None, http_agent=None,
force_basic_auth=None, follow_redirects=None,
client_cert=None, client_key=None, cookies=None, use_gssapi=False,
unix_socket=None, ca_path=None, unredirected_headers=None, decompress=None,
ciphers=None, use_netrc=None):
"""
Sends a request via HTTP(S) or FTP using urllib2 (Python2) or urllib (Python3)
Does not require the module environment
Returns :class:`HTTPResponse` object.
:arg method: method for the request
:arg url: URL to request
:kwarg data: (optional) bytes, or file-like object to send
in the body of the request
:kwarg headers: (optional) Dictionary of HTTP Headers to send with the
request
:kwarg use_proxy: (optional) Boolean of whether or not to use proxy
:kwarg force: (optional) Boolean of whether or not to set `cache-control: no-cache` header
:kwarg last_mod_time: (optional) Datetime object to use when setting If-Modified-Since header
:kwarg timeout: (optional) How long to wait for the server to send
data before giving up, as a float
:kwarg validate_certs: (optional) Boolean that controls whether we verify
the server's TLS certificate
:kwarg url_username: (optional) String of the user to use when authenticating
:kwarg url_password: (optional) String of the password to use when authenticating
:kwarg http_agent: (optional) String of the User-Agent to use in the request
:kwarg force_basic_auth: (optional) Boolean determining if auth header should be sent in the initial request
:kwarg follow_redirects: (optional) String of urllib2, all/yes, safe, none to determine how redirects are
followed, see RedirectHandlerFactory for more information
:kwarg client_cert: (optional) PEM formatted certificate chain file to be used for SSL client authentication.
This file can also include the key as well, and if the key is included, client_key is not required
:kwarg client_key: (optional) PEM formatted file that contains your private key to be used for SSL client
authentication. If client_cert contains both the certificate and key, this option is not required
:kwarg cookies: (optional) CookieJar object to send with the
request
:kwarg use_gssapi: (optional) Use GSSAPI handler of requests.
:kwarg unix_socket: (optional) String of file system path to unix socket file to use when establishing
connection to the provided url
:kwarg ca_path: (optional) String of file system path to CA cert bundle to use
:kwarg unredirected_headers: (optional) A list of headers to not attach on a redirected request
:kwarg decompress: (optional) Whether to attempt to decompress gzip content-encoded responses
:kwarg ciphers: (optional) List of ciphers to use
:kwarg use_netrc: (optional) Boolean determining whether to use credentials from ~/.netrc file
:returns: HTTPResponse. Added in Ansible 2.9
"""
method = method.upper()
if headers is None:
headers = {}
elif not isinstance(headers, dict):
raise ValueError("headers must be a dict")
headers = dict(self.headers, **headers)
use_proxy = self._fallback(use_proxy, self.use_proxy)
force = self._fallback(force, self.force)
timeout = self._fallback(timeout, self.timeout)
validate_certs = self._fallback(validate_certs, self.validate_certs)
url_username = self._fallback(url_username, self.url_username)
url_password = self._fallback(url_password, self.url_password)
http_agent = self._fallback(http_agent, self.http_agent)
force_basic_auth = self._fallback(force_basic_auth, self.force_basic_auth)
follow_redirects = self._fallback(follow_redirects, self.follow_redirects)
client_cert = self._fallback(client_cert, self.client_cert)
client_key = self._fallback(client_key, self.client_key)
cookies = self._fallback(cookies, self.cookies)
unix_socket = self._fallback(unix_socket, self.unix_socket)
ca_path = self._fallback(ca_path, self.ca_path)
unredirected_headers = self._fallback(unredirected_headers, self.unredirected_headers)
decompress = self._fallback(decompress, self.decompress)
ciphers = self._fallback(ciphers, self.ciphers)
use_netrc = self._fallback(use_netrc, self.use_netrc)
handlers = []
if unix_socket:
handlers.append(UnixHTTPHandler(unix_socket))
parsed = generic_urlparse(urlparse(url))
if parsed.scheme != 'ftp':
username = url_username
password = url_password
if username:
netloc = parsed.netloc
elif '@' in parsed.netloc:
credentials, netloc = parsed.netloc.split('@', 1)
if ':' in credentials:
username, password = credentials.split(':', 1)
else:
username = credentials
password = ''
parsed_list = parsed.as_list()
parsed_list[1] = netloc
# reconstruct url without credentials
url = urlunparse(parsed_list)
if use_gssapi:
if HTTPGSSAPIAuthHandler: # type: ignore[truthy-function]
handlers.append(HTTPGSSAPIAuthHandler(username, password))
else:
imp_err_msg = missing_required_lib('gssapi', reason='for use_gssapi=True',
url='https://pypi.org/project/gssapi/')
raise MissingModuleError(imp_err_msg, import_traceback=GSSAPI_IMP_ERR)
elif username and not force_basic_auth:
passman = urllib_request.HTTPPasswordMgrWithDefaultRealm()
# this creates a password manager
passman.add_password(None, netloc, username, password)
# because we have put None at the start it will always
# use this username/password combination for urls
# for which `netloc` is a super-url
authhandler = urllib_request.HTTPBasicAuthHandler(passman)
digest_authhandler = urllib_request.HTTPDigestAuthHandler(passman)
# create the AuthHandler
handlers.append(authhandler)
handlers.append(digest_authhandler)
elif username and force_basic_auth:
headers["Authorization"] = basic_auth_header(username, password)
elif use_netrc:
try:
rc = netrc.netrc(os.environ.get('NETRC'))
login = rc.authenticators(parsed.hostname)
except IOError:
login = None
if login:
username, _, password = login
if username and password:
headers["Authorization"] = basic_auth_header(username, password)
if not use_proxy:
proxyhandler = urllib_request.ProxyHandler({})
handlers.append(proxyhandler)
if not any((HAS_SSLCONTEXT, HAS_URLLIB3_PYOPENSSLCONTEXT)):
ssl_handler = maybe_add_ssl_handler(url, validate_certs, ca_path=ca_path, ciphers=ciphers)
if ssl_handler:
handlers.append(ssl_handler)
else:
tmp_ca_path, cadata, paths_checked = get_ca_certs(ca_path)
context = make_context(
cafile=tmp_ca_path,
cadata=cadata,
ciphers=ciphers,
validate_certs=validate_certs,
)
handlers.append(HTTPSClientAuthHandler(client_cert=client_cert,
client_key=client_key,
unix_socket=unix_socket,
context=context))
handlers.append(RedirectHandlerFactory(follow_redirects, validate_certs, ca_path=ca_path, ciphers=ciphers))
# add some nicer cookie handling
if cookies is not None:
handlers.append(urllib_request.HTTPCookieProcessor(cookies))
opener = urllib_request.build_opener(*handlers)
urllib_request.install_opener(opener)
data = to_bytes(data, nonstring='passthru')
request = RequestWithMethod(url, method, data)
# add the custom agent header, to help prevent issues
# with sites that block the default urllib agent string
if http_agent:
request.add_header('User-agent', http_agent)
# Cache control
# Either we directly force a cache refresh
if force:
request.add_header('cache-control', 'no-cache')
# or we do it if the original is more recent than our copy
elif last_mod_time:
tstamp = rfc2822_date_string(last_mod_time.timetuple(), 'GMT')
request.add_header('If-Modified-Since', tstamp)
# user defined headers now, which may override things we've set above
unredirected_headers = [h.lower() for h in (unredirected_headers or [])]
for header in headers:
if header.lower() in unredirected_headers:
request.add_unredirected_header(header, headers[header])
else:
request.add_header(header, headers[header])
r = urllib_request.urlopen(request, None, timeout)
if decompress and r.headers.get('content-encoding', '').lower() == 'gzip':
fp = GzipDecodedReader(r.fp)
if PY3:
r.fp = fp
# Content-Length does not match gzip decoded length
# Prevent ``r.read`` from stopping at Content-Length
r.length = None
else:
# Py2 maps ``r.read`` to ``fp.read``, create new ``addinfourl``
# object to compensate
msg = r.msg
r = urllib_request.addinfourl(
fp,
r.info(),
r.geturl(),
r.getcode()
)
r.msg = msg
return r
def get(self, url, **kwargs):
r"""Sends a GET request. Returns :class:`HTTPResponse` object.
:arg url: URL to request
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('GET', url, **kwargs)
def options(self, url, **kwargs):
r"""Sends a OPTIONS request. Returns :class:`HTTPResponse` object.
:arg url: URL to request
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('OPTIONS', url, **kwargs)
def head(self, url, **kwargs):
r"""Sends a HEAD request. Returns :class:`HTTPResponse` object.
:arg url: URL to request
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('HEAD', url, **kwargs)
def post(self, url, data=None, **kwargs):
r"""Sends a POST request. Returns :class:`HTTPResponse` object.
:arg url: URL to request.
:kwarg data: (optional) bytes, or file-like object to send in the body of the request.
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('POST', url, data=data, **kwargs)
def put(self, url, data=None, **kwargs):
r"""Sends a PUT request. Returns :class:`HTTPResponse` object.
:arg url: URL to request.
:kwarg data: (optional) bytes, or file-like object to send in the body of the request.
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('PUT', url, data=data, **kwargs)
def patch(self, url, data=None, **kwargs):
r"""Sends a PATCH request. Returns :class:`HTTPResponse` object.
:arg url: URL to request.
:kwarg data: (optional) bytes, or file-like object to send in the body of the request.
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('PATCH', url, data=data, **kwargs)
def delete(self, url, **kwargs):
r"""Sends a DELETE request. Returns :class:`HTTPResponse` object.
:arg url: URL to request
:kwarg \*\*kwargs: Optional arguments that ``open`` takes.
:returns: HTTPResponse
"""
return self.open('DELETE', url, **kwargs)
def open_url(url, data=None, headers=None, method=None, use_proxy=True,
force=False, last_mod_time=None, timeout=10, validate_certs=True,
url_username=None, url_password=None, http_agent=None,
force_basic_auth=False, follow_redirects='urllib2',
client_cert=None, client_key=None, cookies=None,
use_gssapi=False, unix_socket=None, ca_path=None,
unredirected_headers=None, decompress=True, ciphers=None, use_netrc=True):
'''
Sends a request via HTTP(S) or FTP using urllib2 (Python2) or urllib (Python3)
Does not require the module environment
'''
method = method or ('POST' if data else 'GET')
return Request().open(method, url, data=data, headers=headers, use_proxy=use_proxy,
force=force, last_mod_time=last_mod_time, timeout=timeout, validate_certs=validate_certs,
url_username=url_username, url_password=url_password, http_agent=http_agent,
force_basic_auth=force_basic_auth, follow_redirects=follow_redirects,
client_cert=client_cert, client_key=client_key, cookies=cookies,
use_gssapi=use_gssapi, unix_socket=unix_socket, ca_path=ca_path,
unredirected_headers=unredirected_headers, decompress=decompress, ciphers=ciphers, use_netrc=use_netrc)
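# Hedged usage sketch for the wrapper above (the URL is a placeholder): a GET
# with a short timeout; supplying ``data`` switches the default method to POST.
#   resp = open_url('https://example.com/api', timeout=5)
#   body = resp.read()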
def prepare_multipart(fields):
"""Takes a mapping, and prepares a multipart/form-data body
:arg fields: Mapping
:returns: tuple of (content_type, body) where ``content_type`` is
the ``multipart/form-data`` ``Content-Type`` header including
``boundary`` and ``body`` is the prepared bytestring body
Payload content from a file will be base64 encoded and will include
the appropriate ``Content-Transfer-Encoding`` and ``Content-Type``
headers.
Example:
{
"file1": {
"filename": "/bin/true",
"mime_type": "application/octet-stream"
},
"file2": {
"content": "text based file content",
"filename": "fake.txt",
"mime_type": "text/plain",
},
"text_form_field": "value"
}
"""
if not isinstance(fields, Mapping):
raise TypeError(
'Mapping is required, cannot be type %s' % fields.__class__.__name__
)
m = email.mime.multipart.MIMEMultipart('form-data')
for field, value in sorted(fields.items()):
if isinstance(value, string_types):
main_type = 'text'
sub_type = 'plain'
content = value
filename = None
elif isinstance(value, Mapping):
filename = value.get('filename')
content = value.get('content')
if not any((filename, content)):
raise ValueError('at least one of filename or content must be provided')
mime = value.get('mime_type')
if not mime:
try:
mime = mimetypes.guess_type(filename or '', strict=False)[0] or 'application/octet-stream'
except Exception:
mime = 'application/octet-stream'
main_type, sep, sub_type = mime.partition('/')
else:
raise TypeError(
'value must be a string, or mapping, cannot be type %s' % value.__class__.__name__
)
if not content and filename:
with open(to_bytes(filename, errors='surrogate_or_strict'), 'rb') as f:
part = email.mime.application.MIMEApplication(f.read())
del part['Content-Type']
part.add_header('Content-Type', '%s/%s' % (main_type, sub_type))
else:
part = email.mime.nonmultipart.MIMENonMultipart(main_type, sub_type)
part.set_payload(to_bytes(content))
part.add_header('Content-Disposition', 'form-data')
del part['MIME-Version']
part.set_param(
'name',
field,
header='Content-Disposition'
)
if filename:
part.set_param(
'filename',
to_native(os.path.basename(filename)),
header='Content-Disposition'
)
m.attach(part)
if PY3:
# Ensure headers are not split over multiple lines
# The HTTP policy also uses CRLF by default
b_data = m.as_bytes(policy=email.policy.HTTP)
else:
# Py2
# We cannot just call ``as_string`` since it provides no way
# to specify ``maxheaderlen``
fp = cStringIO() # cStringIO seems to be required here
# Ensure headers are not split over multiple lines
g = email.generator.Generator(fp, maxheaderlen=0)
g.flatten(m)
# ``fix_eols`` switches from ``\n`` to ``\r\n``
b_data = email.utils.fix_eols(fp.getvalue())
del m
headers, sep, b_content = b_data.partition(b'\r\n\r\n')
del b_data
if PY3:
parser = email.parser.BytesHeaderParser().parsebytes
else:
# Py2
parser = email.parser.HeaderParser().parsestr
return (
parser(headers)['content-type'], # Message converts to native strings
b_content
)
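# Illustrative sketch of driving prepare_multipart (field names and values are
# assumptions): the returned content type carries the boundary and is meant to
# be sent as the Content-Type header alongside the body bytes.
#   content_type, body = prepare_multipart({
#       'note': 'a plain text form field',
#       'upload': {'content': 'text based file content', 'filename': 'fake.txt'},
#   })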
#
# Module-related functions
#
def basic_auth_header(username, password):
"""Takes a username and password and returns a byte string suitable for
using as value of an Authorization header to do basic auth.
"""
if password is None:
password = ''
return b"Basic %s" % base64.b64encode(to_bytes("%s:%s" % (username, password), errors='surrogate_or_strict'))
def url_argument_spec():
'''
Creates an argument spec that can be used with any module
that will be requesting content via urllib/urllib2
'''
return dict(
url=dict(type='str'),
force=dict(type='bool', default=False),
http_agent=dict(type='str', default='ansible-httpget'),
use_proxy=dict(type='bool', default=True),
validate_certs=dict(type='bool', default=True),
url_username=dict(type='str'),
url_password=dict(type='str', no_log=True),
force_basic_auth=dict(type='bool', default=False),
client_cert=dict(type='path'),
client_key=dict(type='path'),
use_gssapi=dict(type='bool', default=False),
)
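# Illustrative sketch of extending the shared spec in a module (the extra
# option and module name are assumptions):
#   argument_spec = url_argument_spec()
#   argument_spec.update(dest=dict(type='path'))
#   module = AnsibleModule(argument_spec=argument_spec)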
def fetch_url(module, url, data=None, headers=None, method=None,
use_proxy=None, force=False, last_mod_time=None, timeout=10,
use_gssapi=False, unix_socket=None, ca_path=None, cookies=None, unredirected_headers=None,
decompress=True, ciphers=None, use_netrc=True):
"""Sends a request via HTTP(S) or FTP (needs the module as parameter)
:arg module: The AnsibleModule (used to get username, password, etc.).
:arg url: The url to use.
:kwarg data: The data to be sent (in case of POST/PUT).
:kwarg headers: A dict with the request headers.
:kwarg method: "POST", "PUT", etc.
:kwarg use_proxy: (optional) whether or not to use proxy (Default: True)
:kwarg boolean force: If True: Do not get a cached copy (Default: False)
:kwarg last_mod_time: Default: None
:kwarg int timeout: Default: 10
:kwarg boolean use_gssapi: Default: False
:kwarg unix_socket: (optional) String of file system path to unix socket file to use when establishing
connection to the provided url
:kwarg ca_path: (optional) String of file system path to CA cert bundle to use
:kwarg cookies: (optional) CookieJar object to send with the request
:kwarg unredirected_headers: (optional) A list of headers to not attach on a redirected request
:kwarg decompress: (optional) Whether to attempt to decompress gzip content-encoded responses
:kwarg ciphers: (optional) List of ciphers to use
:kwarg boolean use_netrc: (optional) If False: Ignores login and password in ~/.netrc file (Default: True)
:returns: A tuple of (**response**, **info**). Use ``response.read()`` to read the data.
The **info** contains the 'status' and other metadata. When an HTTPError (status >= 400)
occurs, ``info['body']`` contains the error response data::
Example::
data={...}
resp, info = fetch_url(module,
"http://example.com",
data=module.jsonify(data),
headers={'Content-type': 'application/json'},
method="POST")
status_code = info["status"]
body = resp.read()
if status_code >= 400 :
body = info['body']
"""
if not HAS_URLPARSE:
module.fail_json(msg='urlparse is not installed')
if not HAS_GZIP:
module.fail_json(msg=GzipDecodedReader.missing_gzip_error())
# ensure we use proper tempdir
old_tempdir = tempfile.tempdir
tempfile.tempdir = module.tmpdir
# Get validate_certs from the module params
validate_certs = module.params.get('validate_certs', True)
if use_proxy is None:
use_proxy = module.params.get('use_proxy', True)
username = module.params.get('url_username', '')
password = module.params.get('url_password', '')
http_agent = module.params.get('http_agent', 'ansible-httpget')
force_basic_auth = module.params.get('force_basic_auth', '')
follow_redirects = module.params.get('follow_redirects', 'urllib2')
client_cert = module.params.get('client_cert')
client_key = module.params.get('client_key')
use_gssapi = module.params.get('use_gssapi', use_gssapi)
if not isinstance(cookies, cookiejar.CookieJar):
cookies = cookiejar.LWPCookieJar()
r = None
info = dict(url=url, status=-1)
try:
r = open_url(url, data=data, headers=headers, method=method,
use_proxy=use_proxy, force=force, last_mod_time=last_mod_time, timeout=timeout,
validate_certs=validate_certs, url_username=username,
url_password=password, http_agent=http_agent, force_basic_auth=force_basic_auth,
follow_redirects=follow_redirects, client_cert=client_cert,
client_key=client_key, cookies=cookies, use_gssapi=use_gssapi,
unix_socket=unix_socket, ca_path=ca_path, unredirected_headers=unredirected_headers,
decompress=decompress, ciphers=ciphers, use_netrc=use_netrc)
# Lowercase keys, to conform to py2 behavior, so that py3 and py2 are predictable
info.update(dict((k.lower(), v) for k, v in r.info().items()))
# Don't be lossy, append header values for duplicate headers
# In Py2 there is nothing that needs to be done; py2 does this for us
if PY3:
temp_headers = {}
for name, value in r.headers.items():
# The same as above, lower case keys to match py2 behavior, and create more consistent results
name = name.lower()
if name in temp_headers:
temp_headers[name] = ', '.join((temp_headers[name], value))
else:
temp_headers[name] = value
info.update(temp_headers)
# parse the cookies into a nice dictionary
cookie_list = []
cookie_dict = dict()
# Python sorts cookies in order of most specific (i.e. longest) path first. See ``CookieJar._cookie_attrs``
# Cookies with the same path are reversed from response order.
# This code makes no assumptions about that, and accepts the order given by python
for cookie in cookies:
cookie_dict[cookie.name] = cookie.value
cookie_list.append((cookie.name, cookie.value))
info['cookies_string'] = '; '.join('%s=%s' % c for c in cookie_list)
info['cookies'] = cookie_dict
# finally update the result with a message about the fetch
info.update(dict(msg="OK (%s bytes)" % r.headers.get('Content-Length', 'unknown'), url=r.geturl(), status=r.code))
except NoSSLError as e:
distribution = get_distribution()
if distribution is not None and distribution.lower() == 'redhat':
module.fail_json(msg='%s. You can also install python-ssl from EPEL' % to_native(e), **info)
else:
module.fail_json(msg='%s' % to_native(e), **info)
except (ConnectionError, ValueError) as e:
module.fail_json(msg=to_native(e), **info)
except MissingModuleError as e:
module.fail_json(msg=to_text(e), exception=e.import_traceback)
except urllib_error.HTTPError as e:
r = e
try:
if e.fp is None:
# Certain HTTPError objects may not have the ability to call ``.read()`` on Python 3
# This is not handled gracefully in Python 3, and instead an exception is raised from
# tempfile, due to ``urllib.response.addinfourl`` not being initialized
raise AttributeError
body = e.read()
except AttributeError:
body = ''
else:
e.close()
# Try to add exception info to the output but don't fail if we can't
try:
# Lowercase keys, to conform to py2 behavior, so that py3 and py2 are predictable
info.update(dict((k.lower(), v) for k, v in e.info().items()))
except Exception:
pass
info.update({'msg': to_native(e), 'body': body, 'status': e.code})
except urllib_error.URLError as e:
code = int(getattr(e, 'code', -1))
info.update(dict(msg="Request failed: %s" % to_native(e), status=code))
except socket.error as e:
info.update(dict(msg="Connection failure: %s" % to_native(e), status=-1))
except httplib.BadStatusLine as e:
info.update(dict(msg="Connection failure: connection was closed before a valid response was received: %s" % to_native(e.line), status=-1))
except Exception as e:
info.update(dict(msg="An unknown error occurred: %s" % to_native(e), status=-1),
exception=traceback.format_exc())
finally:
tempfile.tempdir = old_tempdir
return r, info
def _suffixes(name):
"""A list of the final component's suffixes, if any."""
if name.endswith('.'):
return []
name = name.lstrip('.')
return ['.' + s for s in name.split('.')[1:]]
def _split_multiext(name, min=3, max=4, count=2):
"""Split a multi-part extension from a file name.
Returns '([name minus extension], extension)'.
Define the valid extension length (including the '.') with 'min' and 'max',
'count' sets the number of extensions, counting from the end, to evaluate.
Evaluation stops on the first file extension that is outside the min and max range.
If no valid extensions are found, the original ``name`` is returned
and ``extension`` is empty.
:arg name: File name or path.
:kwarg min: Minimum length of a valid file extension.
:kwarg max: Maximum length of a valid file extension.
:kwarg count: Number of suffixes from the end to evaluate.
"""
extension = ''
for i, sfx in enumerate(reversed(_suffixes(name))):
if i >= count:
break
if min <= len(sfx) <= max:
extension = '%s%s' % (sfx, extension)
name = name[:-len(sfx)]  # remove the matched suffix from the end
else:
# Stop on the first invalid extension
break
return name, extension
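# Illustrative results for the splitter above:
#   >>> _split_multiext('archive.tar.gz')
#   ('archive', '.tar.gz')
#   >>> _split_multiext('noext')
#   ('noext', '')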
def fetch_file(module, url, data=None, headers=None, method=None,
use_proxy=True, force=False, last_mod_time=None, timeout=10,
unredirected_headers=None, decompress=True, ciphers=None):
'''Download and save a file via HTTP(S) or FTP (needs the module as parameter).
This is basically a wrapper around fetch_url().
:arg module: The AnsibleModule (used to get username, password, etc.).
:arg url: The url to use.
:kwarg data: The data to be sent (in case of POST/PUT).
:kwarg headers: A dict with the request headers.
:kwarg method: "POST", "PUT", etc.
:kwarg boolean use_proxy: Default: True
:kwarg boolean force: If True: Do not get a cached copy (Default: False)
:kwarg last_mod_time: Default: None
:kwarg int timeout: Default: 10
:kwarg unredirected_headers: (optional) A list of headers to not attach on a redirected request
:kwarg decompress: (optional) Whether to attempt to decompress gzip content-encoded responses
:kwarg ciphers: (optional) List of ciphers to use
:returns: A string, the path to the downloaded file.
'''
# download file
bufsize = 65536
parts = urlparse(url)
file_prefix, file_ext = _split_multiext(os.path.basename(parts.path), count=2)
fetch_temp_file = tempfile.NamedTemporaryFile(dir=module.tmpdir, prefix=file_prefix, suffix=file_ext, delete=False)
module.add_cleanup_file(fetch_temp_file.name)
try:
rsp, info = fetch_url(module, url, data, headers, method, use_proxy, force, last_mod_time, timeout,
unredirected_headers=unredirected_headers, decompress=decompress, ciphers=ciphers)
if not rsp:
module.fail_json(msg="Failure downloading %s, %s" % (url, info['msg']))
data = rsp.read(bufsize)
while data:
fetch_temp_file.write(data)
data = rsp.read(bufsize)
fetch_temp_file.close()
except Exception as e:
module.fail_json(msg="Failure downloading %s, %s" % (url, to_native(e)))
return fetch_temp_file.name
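# Minimal usage sketch (``module`` and the URL are assumptions): download into
# the module's tmpdir and receive the temporary file's path back.
#   path = fetch_file(module, 'https://example.com/archive.tar.gz')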
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,490 |
Fix use of deprecated parameters in `module_utils/urls.py`
|
### Summary
In Python 3.12 the deprecated `key_file`, `cert_file` and `check_hostname` parameters have been [removed](https://docs.python.org/3.12/library/http.client.html#http.client.HTTPSConnection).
There is code which still attempts to set these, such as:
https://github.com/ansible/ansible/blob/0371ea08d6de55635ffcbf94da5ddec0cd809495/lib/ansible/module_utils/urls.py#L604-L608
Which results in an error under Python 3.12:
```
> return httplib.HTTPSConnection(host, **kwargs)
E TypeError: HTTPSConnection.__init__() got an unexpected keyword argument 'cert_file'
```
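For reference, a minimal sketch of the context-based pattern that replaces those keyword arguments (the host and file paths below are placeholders, not taken from the affected code):
```python
import http.client
import ssl

# TLS client settings move onto an SSLContext instead of the removed
# key_file/cert_file/check_hostname keyword arguments.
context = ssl.create_default_context()
context.load_cert_chain(certfile='/tmp/client.pem', keyfile='/tmp/client.key')
# The equivalents of disabling hostname checking / verification, if needed:
# context.check_hostname = False
# context.verify_mode = ssl.CERT_NONE
conn = http.client.HTTPSConnection('example.com', context=context)
```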
### Issue Type
Feature Idea
### Component Name
module_utils/urls.py
|
https://github.com/ansible/ansible/issues/80490
|
https://github.com/ansible/ansible/pull/80751
|
b16041f1a91bb74b7adbf2ad1f1af25603151cb3
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
| 2023-04-12T02:28:32Z |
python
| 2023-05-17T22:17:25Z |
test/units/module_utils/urls/test_Request.py
|
# -*- coding: utf-8 -*-
# (c) 2018 Matt Martz <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import datetime
import os
from ansible.module_utils.urls import (Request, open_url, urllib_request, HAS_SSLCONTEXT, cookiejar, RequestWithMethod,
UnixHTTPHandler, UnixHTTPSConnection, httplib)
from ansible.module_utils.urls import SSLValidationHandler, HTTPSClientAuthHandler, RedirectHandlerFactory
import pytest
from units.compat.mock import call
if HAS_SSLCONTEXT:
import ssl
@pytest.fixture
def urlopen_mock(mocker):
return mocker.patch('ansible.module_utils.urls.urllib_request.urlopen')
@pytest.fixture
def install_opener_mock(mocker):
return mocker.patch('ansible.module_utils.urls.urllib_request.install_opener')
def test_Request_fallback(urlopen_mock, install_opener_mock, mocker):
here = os.path.dirname(__file__)
pem = os.path.join(here, 'fixtures/client.pem')
cookies = cookiejar.CookieJar()
request = Request(
headers={'foo': 'bar'},
use_proxy=False,
force=True,
timeout=100,
validate_certs=False,
url_username='user',
url_password='passwd',
http_agent='ansible-tests',
force_basic_auth=True,
follow_redirects='all',
client_cert='/tmp/client.pem',
client_key='/tmp/client.key',
cookies=cookies,
unix_socket='/foo/bar/baz.sock',
ca_path=pem,
ciphers=['ECDHE-RSA-AES128-SHA256'],
use_netrc=True,
)
fallback_mock = mocker.spy(request, '_fallback')
r = request.open('GET', 'https://ansible.com')
calls = [
call(None, False), # use_proxy
call(None, True), # force
call(None, 100), # timeout
call(None, False), # validate_certs
call(None, 'user'), # url_username
call(None, 'passwd'), # url_password
call(None, 'ansible-tests'), # http_agent
call(None, True), # force_basic_auth
call(None, 'all'), # follow_redirects
call(None, '/tmp/client.pem'), # client_cert
call(None, '/tmp/client.key'), # client_key
call(None, cookies), # cookies
call(None, '/foo/bar/baz.sock'), # unix_socket
call(None, pem), # ca_path
call(None, None), # unredirected_headers
call(None, True), # decompress
call(None, ['ECDHE-RSA-AES128-SHA256']), # ciphers
call(None, True), # use_netrc
]
fallback_mock.assert_has_calls(calls)
assert fallback_mock.call_count == 18 # All but headers use fallback
args = urlopen_mock.call_args[0]
assert args[1] is None # data, this is handled in the Request not urlopen
assert args[2] == 100 # timeout
req = args[0]
assert req.headers == {
'Authorization': b'Basic dXNlcjpwYXNzd2Q=',
'Cache-control': 'no-cache',
'Foo': 'bar',
'User-agent': 'ansible-tests'
}
assert req.data is None
assert req.get_method() == 'GET'
def test_Request_open(urlopen_mock, install_opener_mock):
r = Request().open('GET', 'https://ansible.com/')
args = urlopen_mock.call_args[0]
assert args[1] is None # data, this is handled in the Request not urlopen
assert args[2] == 10 # timeout
req = args[0]
assert req.headers == {}
assert req.data is None
assert req.get_method() == 'GET'
opener = install_opener_mock.call_args[0][0]
handlers = opener.handlers
if not HAS_SSLCONTEXT:
expected_handlers = (
SSLValidationHandler,
RedirectHandlerFactory(), # factory, get handler
)
else:
expected_handlers = (
RedirectHandlerFactory(), # factory, get handler
)
found_handlers = []
for handler in handlers:
if isinstance(handler, SSLValidationHandler) or handler.__class__.__name__ == 'RedirectHandler':
found_handlers.append(handler)
assert len(found_handlers) == len(expected_handlers)
def test_Request_open_http(urlopen_mock, install_opener_mock):
r = Request().open('GET', 'http://ansible.com/')
args = urlopen_mock.call_args[0]
opener = install_opener_mock.call_args[0][0]
handlers = opener.handlers
found_handlers = []
for handler in handlers:
if isinstance(handler, SSLValidationHandler):
found_handlers.append(handler)
assert len(found_handlers) == 0
def test_Request_open_unix_socket(urlopen_mock, install_opener_mock):
r = Request().open('GET', 'http://ansible.com/', unix_socket='/foo/bar/baz.sock')
args = urlopen_mock.call_args[0]
opener = install_opener_mock.call_args[0][0]
handlers = opener.handlers
found_handlers = []
for handler in handlers:
if isinstance(handler, UnixHTTPHandler):
found_handlers.append(handler)
assert len(found_handlers) == 1
def test_Request_open_https_unix_socket(urlopen_mock, install_opener_mock):
r = Request().open('GET', 'https://ansible.com/', unix_socket='/foo/bar/baz.sock')
args = urlopen_mock.call_args[0]
opener = install_opener_mock.call_args[0][0]
handlers = opener.handlers
found_handlers = []
for handler in handlers:
if isinstance(handler, HTTPSClientAuthHandler):
found_handlers.append(handler)
assert len(found_handlers) == 1
inst = found_handlers[0]._build_https_connection('foo')
assert isinstance(inst, UnixHTTPSConnection)
def test_Request_open_ftp(urlopen_mock, install_opener_mock, mocker):
mocker.patch('ansible.module_utils.urls.ParseResultDottedDict.as_list', side_effect=AssertionError)
# Using the ftp scheme should prevent the AssertionError side effect from firing
r = Request().open('GET', 'ftp://[email protected]/')
def test_Request_open_headers(urlopen_mock, install_opener_mock):
r = Request().open('GET', 'http://ansible.com/', headers={'Foo': 'bar'})
args = urlopen_mock.call_args[0]
req = args[0]
assert req.headers == {'Foo': 'bar'}
def test_Request_open_username(urlopen_mock, install_opener_mock):
r = Request().open('GET', 'http://ansible.com/', url_username='user')
opener = install_opener_mock.call_args[0][0]
handlers = opener.handlers
expected_handlers = (
urllib_request.HTTPBasicAuthHandler,
urllib_request.HTTPDigestAuthHandler,
)
found_handlers = []
for handler in handlers:
if isinstance(handler, expected_handlers):
found_handlers.append(handler)
assert len(found_handlers) == 2
assert found_handlers[0].passwd.passwd[None] == {(('ansible.com', '/'),): ('user', None)}
def test_Request_open_username_in_url(urlopen_mock, install_opener_mock):
r = Request().open('GET', 'http://[email protected]/')
opener = install_opener_mock.call_args[0][0]
handlers = opener.handlers
expected_handlers = (
urllib_request.HTTPBasicAuthHandler,
urllib_request.HTTPDigestAuthHandler,
)
found_handlers = []
for handler in handlers:
if isinstance(handler, expected_handlers):
found_handlers.append(handler)
assert found_handlers[0].passwd.passwd[None] == {(('ansible.com', '/'),): ('user2', '')}
def test_Request_open_username_force_basic(urlopen_mock, install_opener_mock):
r = Request().open('GET', 'http://ansible.com/', url_username='user', url_password='passwd', force_basic_auth=True)
opener = install_opener_mock.call_args[0][0]
handlers = opener.handlers
expected_handlers = (
urllib_request.HTTPBasicAuthHandler,
urllib_request.HTTPDigestAuthHandler,
)
found_handlers = []
for handler in handlers:
if isinstance(handler, expected_handlers):
found_handlers.append(handler)
assert len(found_handlers) == 0
args = urlopen_mock.call_args[0]
req = args[0]
assert req.headers.get('Authorization') == b'Basic dXNlcjpwYXNzd2Q='
def test_Request_open_auth_in_netloc(urlopen_mock, install_opener_mock):
r = Request().open('GET', 'http://user:[email protected]/')
args = urlopen_mock.call_args[0]
req = args[0]
assert req.get_full_url() == 'http://ansible.com/'
opener = install_opener_mock.call_args[0][0]
handlers = opener.handlers
expected_handlers = (
urllib_request.HTTPBasicAuthHandler,
urllib_request.HTTPDigestAuthHandler,
)
found_handlers = []
for handler in handlers:
if isinstance(handler, expected_handlers):
found_handlers.append(handler)
assert len(found_handlers) == 2
def test_Request_open_netrc(urlopen_mock, install_opener_mock, monkeypatch):
here = os.path.dirname(__file__)
monkeypatch.setenv('NETRC', os.path.join(here, 'fixtures/netrc'))
r = Request().open('GET', 'http://ansible.com/')
args = urlopen_mock.call_args[0]
req = args[0]
assert req.headers.get('Authorization') == b'Basic dXNlcjpwYXNzd2Q='
r = Request().open('GET', 'http://foo.ansible.com/')
args = urlopen_mock.call_args[0]
req = args[0]
assert 'Authorization' not in req.headers
monkeypatch.setenv('NETRC', os.path.join(here, 'fixtures/netrc.nonexistant'))
r = Request().open('GET', 'http://ansible.com/')
args = urlopen_mock.call_args[0]
req = args[0]
assert 'Authorization' not in req.headers
def test_Request_open_no_proxy(urlopen_mock, install_opener_mock, mocker):
build_opener_mock = mocker.patch('ansible.module_utils.urls.urllib_request.build_opener')
r = Request().open('GET', 'http://ansible.com/', use_proxy=False)
handlers = build_opener_mock.call_args[0]
found_handlers = []
for handler in handlers:
if isinstance(handler, urllib_request.ProxyHandler):
found_handlers.append(handler)
assert len(found_handlers) == 1
@pytest.mark.skipif(not HAS_SSLCONTEXT, reason="requires SSLContext")
def test_Request_open_no_validate_certs(urlopen_mock, install_opener_mock):
r = Request().open('GET', 'https://ansible.com/', validate_certs=False)
opener = install_opener_mock.call_args[0][0]
handlers = opener.handlers
ssl_handler = None
for handler in handlers:
if isinstance(handler, HTTPSClientAuthHandler):
ssl_handler = handler
break
assert ssl_handler is not None
inst = ssl_handler._build_https_connection('foo')
assert isinstance(inst, httplib.HTTPSConnection)
context = ssl_handler._context
# Differs by Python version
# assert context.protocol == ssl.PROTOCOL_SSLv23
if ssl.OP_NO_SSLv2:
assert context.options & ssl.OP_NO_SSLv2
assert context.options & ssl.OP_NO_SSLv3
assert context.verify_mode == ssl.CERT_NONE
assert context.check_hostname is False
def test_Request_open_client_cert(urlopen_mock, install_opener_mock):
here = os.path.dirname(__file__)
client_cert = os.path.join(here, 'fixtures/client.pem')
client_key = os.path.join(here, 'fixtures/client.key')
r = Request().open('GET', 'https://ansible.com/', client_cert=client_cert, client_key=client_key)
opener = install_opener_mock.call_args[0][0]
handlers = opener.handlers
ssl_handler = None
for handler in handlers:
if isinstance(handler, HTTPSClientAuthHandler):
ssl_handler = handler
break
assert ssl_handler is not None
assert ssl_handler.client_cert == client_cert
assert ssl_handler.client_key == client_key
https_connection = ssl_handler._build_https_connection('ansible.com')
assert https_connection.key_file == client_key
assert https_connection.cert_file == client_cert
def test_Request_open_cookies(urlopen_mock, install_opener_mock):
r = Request().open('GET', 'https://ansible.com/', cookies=cookiejar.CookieJar())
opener = install_opener_mock.call_args[0][0]
handlers = opener.handlers
cookies_handler = None
for handler in handlers:
if isinstance(handler, urllib_request.HTTPCookieProcessor):
cookies_handler = handler
break
assert cookies_handler is not None
def test_Request_open_invalid_method(urlopen_mock, install_opener_mock):
r = Request().open('UNKNOWN', 'https://ansible.com/')
args = urlopen_mock.call_args[0]
req = args[0]
assert req.data is None
assert req.get_method() == 'UNKNOWN'
# assert r.status == 504
def test_Request_open_custom_method(urlopen_mock, install_opener_mock):
r = Request().open('DELETE', 'https://ansible.com/')
args = urlopen_mock.call_args[0]
req = args[0]
assert isinstance(req, RequestWithMethod)
def test_Request_open_user_agent(urlopen_mock, install_opener_mock):
r = Request().open('GET', 'https://ansible.com/', http_agent='ansible-tests')
args = urlopen_mock.call_args[0]
req = args[0]
assert req.headers.get('User-agent') == 'ansible-tests'
def test_Request_open_force(urlopen_mock, install_opener_mock):
r = Request().open('GET', 'https://ansible.com/', force=True, last_mod_time=datetime.datetime.now())
args = urlopen_mock.call_args[0]
req = args[0]
assert req.headers.get('Cache-control') == 'no-cache'
assert 'If-modified-since' not in req.headers
def test_Request_open_last_mod(urlopen_mock, install_opener_mock):
now = datetime.datetime.now()
r = Request().open('GET', 'https://ansible.com/', last_mod_time=now)
args = urlopen_mock.call_args[0]
req = args[0]
assert req.headers.get('If-modified-since') == now.strftime('%a, %d %b %Y %H:%M:%S GMT')
def test_Request_open_headers_not_dict(urlopen_mock, install_opener_mock):
with pytest.raises(ValueError):
Request().open('GET', 'https://ansible.com/', headers=['bob'])
def test_Request_init_headers_not_dict(urlopen_mock, install_opener_mock):
with pytest.raises(ValueError):
Request(headers=['bob'])
@pytest.mark.parametrize('method,kwargs', [
('get', {}),
('options', {}),
('head', {}),
('post', {'data': None}),
('put', {'data': None}),
('patch', {'data': None}),
('delete', {}),
])
def test_methods(method, kwargs, mocker):
expected = method.upper()
open_mock = mocker.patch('ansible.module_utils.urls.Request.open')
request = Request()
getattr(request, method)('https://ansible.com')
open_mock.assert_called_once_with(expected, 'https://ansible.com', **kwargs)
def test_open_url(urlopen_mock, install_opener_mock, mocker):
req_mock = mocker.patch('ansible.module_utils.urls.Request.open')
open_url('https://ansible.com/')
req_mock.assert_called_once_with('GET', 'https://ansible.com/', data=None, headers=None, use_proxy=True,
force=False, last_mod_time=None, timeout=10, validate_certs=True,
url_username=None, url_password=None, http_agent=None,
force_basic_auth=False, follow_redirects='urllib2',
client_cert=None, client_key=None, cookies=None, use_gssapi=False,
unix_socket=None, ca_path=None, unredirected_headers=None, decompress=True,
ciphers=None, use_netrc=True)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,422 |
Remove Windows 2012 R2 from ansible-test
|
### Summary
Windows Server 2012 R2 reaches [end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10th. This is a remote VM removal. Removal can be done before end of support, after a 2-week notification period.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80422
|
https://github.com/ansible/ansible/pull/80778
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
|
0a36cd910e4cdb2a3a0a40488596b69789ffdbe2
| 2023-04-05T21:27:58Z |
python
| 2023-05-18T18:02:58Z |
.azure-pipelines/azure-pipelines.yml
|
trigger:
batch: true
branches:
include:
- devel
- stable-*
pr:
autoCancel: true
branches:
include:
- devel
- stable-*
schedules:
- cron: 0 7 * * *
displayName: Nightly
always: true
branches:
include:
- devel
- stable-*
variables:
- name: checkoutPath
value: ansible
- name: coverageBranches
value: devel
- name: entryPoint
value: .azure-pipelines/commands/entry-point.sh
- name: fetchDepth
value: 500
- name: defaultContainer
value: quay.io/ansible/azure-pipelines-test-container:3.0.0
pool: Standard
stages:
- stage: Sanity
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Test {0}
testFormat: sanity/{0}
targets:
- test: 1
- test: 2
- test: 3
- test: 4
- test: 5
- stage: Units
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: units/{0}
targets:
- test: 2.7
- test: 3.5
- test: 3.6
- test: 3.7
- test: 3.8
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Windows
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Server {0}
testFormat: windows/{0}/1
targets:
- test: 2012
- test: 2012-R2
- test: 2016
- test: 2019
- test: 2022
- stage: Remote
dependsOn: []
jobs:
- template: templates/matrix.yml # context/target
parameters:
targets:
- name: macOS 13.2
test: macos/13.2
- name: RHEL 7.9
test: rhel/7.9
- name: RHEL 8.7 py36
test: rhel/[email protected]
- name: RHEL 8.7 py39
test: rhel/[email protected]
- name: RHEL 9.2
test: rhel/9.2
- name: FreeBSD 12.4
test: freebsd/12.4
- name: FreeBSD 13.1
test: freebsd/13.1
- name: FreeBSD 13.2
test: freebsd/13.2
groups:
- 1
- 2
- template: templates/matrix.yml # context/controller
parameters:
targets:
- name: macOS 13.2
test: macos/13.2
- name: RHEL 8.7
test: rhel/8.7
- name: RHEL 9.2
test: rhel/9.2
- name: FreeBSD 13.1
test: freebsd/13.1
- name: FreeBSD 13.2
test: freebsd/13.2
groups:
- 3
- 4
- 5
- template: templates/matrix.yml # context/controller (ansible-test container management)
parameters:
targets:
- name: Alpine 3.17
test: alpine/3.17
- name: Fedora 37
test: fedora/37
- name: RHEL 8.7
test: rhel/8.7
- name: RHEL 9.2
test: rhel/9.2
- name: Ubuntu 20.04
test: ubuntu/20.04
- name: Ubuntu 22.04
test: ubuntu/22.04
groups:
- 6
- stage: Docker
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: linux/{0}
targets:
- name: Alpine 3
test: alpine3
- name: CentOS 7
test: centos7
- name: Fedora 37
test: fedora37
- name: openSUSE 15
test: opensuse15
- name: Ubuntu 20.04
test: ubuntu2004
- name: Ubuntu 22.04
test: ubuntu2204
groups:
- 1
- 2
- template: templates/matrix.yml
parameters:
testFormat: linux/{0}
targets:
- name: Alpine 3
test: alpine3
- name: Fedora 37
test: fedora37
- name: Ubuntu 22.04
test: ubuntu2204
groups:
- 3
- 4
- 5
- stage: Galaxy
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: galaxy/{0}/1
targets:
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Generic
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: generic/{0}/1
targets:
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Incidental_Windows
displayName: Incidental Windows
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Server {0}
testFormat: i/windows/{0}
targets:
- test: 2012
- test: 2012-R2
- test: 2016
- test: 2019
- test: 2022
- stage: Incidental
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: i/{0}/1
targets:
- name: IOS Python
test: ios/csr1000v/
- name: VyOS Python
test: vyos/1.1.8/
- stage: Summary
condition: succeededOrFailed()
dependsOn:
- Sanity
- Units
- Windows
- Remote
- Docker
- Galaxy
- Generic
- Incidental_Windows
- Incidental
jobs:
- template: templates/coverage.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,422 |
Remove Windows 2012 R2 from ansible-test
|
### Summary
Windows Server 2012 R2 reaches [end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10th. This is a remote VM removal. Removal can be done before end of support, after a 2-week notification period.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80422
|
https://github.com/ansible/ansible/pull/80778
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
|
0a36cd910e4cdb2a3a0a40488596b69789ffdbe2
| 2023-04-05T21:27:58Z |
python
| 2023-05-18T18:02:58Z |
.azure-pipelines/commands/incidental/windows.sh
|
#!/usr/bin/env bash
set -o pipefail -eux
declare -a args
IFS='/:' read -ra args <<< "$1"
version="${args[1]}"
target="shippable/windows/incidental/"
stage="${S:-prod}"
provider="${P:-default}"
# python version to run full tests on while other versions run minimal tests
python_default="$(PYTHONPATH="${PWD}/test/lib" python -c 'from ansible_test._internal import constants; print(constants.CONTROLLER_MIN_PYTHON_VERSION)')"
# version to test when only testing a single version
single_version=2012-R2
# shellcheck disable=SC2086
ansible-test windows-integration --list-targets -v ${CHANGED:+"$CHANGED"} ${UNSTABLE:+"$UNSTABLE"} > /tmp/explain.txt 2>&1 || { cat /tmp/explain.txt && false; }
{ grep ' windows-integration: .* (targeted)$' /tmp/explain.txt || true; } > /tmp/windows.txt
if [ -s /tmp/windows.txt ] || [ "${CHANGED:+$CHANGED}" == "" ]; then
echo "Detected changes requiring integration tests specific to Windows:"
cat /tmp/windows.txt
echo "Running Windows integration tests for multiple versions concurrently."
platforms=(
--windows "${version}"
)
else
echo "No changes requiring integration tests specific to Windows were detected."
echo "Running Windows integration tests for a single version only: ${single_version}"
if [ "${version}" != "${single_version}" ]; then
echo "Skipping this job since it is for: ${version}"
exit 0
fi
platforms=(
--windows "${version}"
)
fi
# shellcheck disable=SC2086
ansible-test windows-integration --color -v --retry-on-error "${target}" ${COVERAGE:+"$COVERAGE"} ${CHANGED:+"$CHANGED"} ${UNSTABLE:+"$UNSTABLE"} \
"${platforms[@]}" \
--docker default --python "${python_default}" \
--remote-terminate always --remote-stage "${stage}" --remote-provider "${provider}"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,422 |
Remove Windows 2012 R2 from ansible-test
|
### Summary
Windows Server 2012 R2 reaches [end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10th. This is a remote VM removal. Removal can be done before end of support, after a 2-week notification period.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80422
|
https://github.com/ansible/ansible/pull/80778
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
|
0a36cd910e4cdb2a3a0a40488596b69789ffdbe2
| 2023-04-05T21:27:58Z |
python
| 2023-05-18T18:02:58Z |
.azure-pipelines/commands/windows.sh
|
#!/usr/bin/env bash
set -o pipefail -eux
declare -a args
IFS='/:' read -ra args <<< "$1"
version="${args[1]}"
group="${args[2]}"
target="shippable/windows/group${group}/"
stage="${S:-prod}"
provider="${P:-default}"
# python versions to test in order
IFS=' ' read -r -a python_versions <<< \
"$(PYTHONPATH="${PWD}/test/lib" python -c 'from ansible_test._internal import constants; print(" ".join(constants.CONTROLLER_PYTHON_VERSIONS))')"
# python version to run full tests on while other versions run minimal tests
python_default="$(PYTHONPATH="${PWD}/test/lib" python -c 'from ansible_test._internal import constants; print(constants.CONTROLLER_MIN_PYTHON_VERSION)')"
# version to test when only testing a single version
single_version=2012-R2
# shellcheck disable=SC2086
ansible-test windows-integration --list-targets -v ${CHANGED:+"$CHANGED"} ${UNSTABLE:+"$UNSTABLE"} > /tmp/explain.txt 2>&1 || { cat /tmp/explain.txt && false; }
{ grep ' windows-integration: .* (targeted)$' /tmp/explain.txt || true; } > /tmp/windows.txt
if [ -s /tmp/windows.txt ] || [ "${CHANGED:+$CHANGED}" == "" ]; then
echo "Detected changes requiring integration tests specific to Windows:"
cat /tmp/windows.txt
echo "Running Windows integration tests for multiple versions concurrently."
platforms=(
--windows "${version}"
)
else
echo "No changes requiring integration tests specific to Windows were detected."
echo "Running Windows integration tests for a single version only: ${single_version}"
if [ "${version}" != "${single_version}" ]; then
echo "Skipping this job since it is for: ${version}"
exit 0
fi
platforms=(
--windows "${version}"
)
fi
for version in "${python_versions[@]}"; do
changed_all_target="all"
changed_all_mode="default"
if [ "${version}" == "${python_default}" ]; then
# smoketest tests
if [ "${CHANGED}" ]; then
# with change detection enabled run tests for anything changed
# use the smoketest tests for any change that triggers all tests
ci="${target}"
changed_all_target="shippable/windows/smoketest/"
if [ "${target}" == "shippable/windows/group1/" ]; then
# only run smoketest tests for group1
changed_all_mode="include"
else
# smoketest tests already covered by group1
changed_all_mode="exclude"
fi
else
# without change detection enabled run entire test group
ci="${target}"
fi
else
# only run minimal tests for group1
if [ "${target}" != "shippable/windows/group1/" ]; then continue; fi
# minimal tests for other python versions
ci="shippable/windows/minimal/"
fi
# terminate remote instances on the final python version tested
if [ "${version}" = "${python_versions[-1]}" ]; then
terminate="always"
else
terminate="never"
fi
# shellcheck disable=SC2086
ansible-test windows-integration --color -v --retry-on-error "${ci}" ${COVERAGE:+"$COVERAGE"} ${CHANGED:+"$CHANGED"} ${UNSTABLE:+"$UNSTABLE"} \
"${platforms[@]}" --changed-all-target "${changed_all_target}" --changed-all-mode "${changed_all_mode}" \
--docker default --python "${version}" \
--remote-terminate "${terminate}" --remote-stage "${stage}" --remote-provider "${provider}"
done
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,422 |
Remove Windows 2012 R2 from ansible-test
|
### Summary
Windows Server 2012 R2 reaches [end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10th. This is a remote VM removal. Removal can be done before end of support, after a 2-week notification period.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80422
|
https://github.com/ansible/ansible/pull/80778
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
|
0a36cd910e4cdb2a3a0a40488596b69789ffdbe2
| 2023-04-05T21:27:58Z |
python
| 2023-05-18T18:02:58Z |
changelogs/fragments/server2012-deprecation.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,422 |
Remove Windows 2012 R2 from ansible-test
|
### Summary
Windows Server 2012 R2 reaches [end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10th. This is a remote VM removal. Removal can be done before end of support, after a 2-week notification period.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80422
|
https://github.com/ansible/ansible/pull/80778
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
|
0a36cd910e4cdb2a3a0a40488596b69789ffdbe2
| 2023-04-05T21:27:58Z |
python
| 2023-05-18T18:02:58Z |
docs/docsite/rst/dev_guide/developing_modules_general_windows.rst
|
.. _developing_modules_general_windows:
**************************************
Windows module development walkthrough
**************************************
In this section, we will walk through developing, testing, and debugging an
Ansible Windows module.
Because Windows modules are written in Powershell and need to be run on a
Windows host, this guide differs from the usual development walkthrough guide.
What's covered in this section:
.. contents::
:local:
Windows environment setup
=========================
Unlike Python module development which can be run on the host that runs
Ansible, Windows modules need to be written and tested for Windows hosts.
While evaluation editions of Windows can be downloaded from
Microsoft, these images are usually not ready to be used by Ansible without
further modification. The easiest way to set up a Windows host so that it is
ready to be used by Ansible is to set up a virtual machine using Vagrant.
Vagrant can be used to download existing OS images called *boxes* that are then
deployed to a hypervisor like VirtualBox. These boxes can either be created and
stored offline or they can be downloaded from a central repository called
Vagrant Cloud.
This guide will use the Vagrant boxes created by the `packer-windoze <https://github.com/jborean93/packer-windoze>`_
repository which have also been uploaded to `Vagrant Cloud <https://app.vagrantup.com/boxes/search?utf8=%E2%9C%93&sort=downloads&provider=&q=jborean93>`_.
To find out more info on how these images are created, please go to the GitHub
repo and look at the ``README`` file.
Before you can get started, the following programs must be installed (please consult the Vagrant and
VirtualBox documentation for installation instructions):
- Vagrant
- VirtualBox
Create a Windows server in a VM
===============================
To create a single Windows Server 2016 instance, run the following:
.. code-block:: shell
vagrant init jborean93/WindowsServer2016
vagrant up
This will download the Vagrant box from Vagrant Cloud and add it to the local
boxes on your host and then start up that instance in VirtualBox. When starting
for the first time, the Windows VM will run through the sysprep process and
then create an HTTP and HTTPS WinRM listener automatically. Vagrant will finish
its process once the listeners are online, after which the VM can be used by Ansible.
Create an Ansible inventory
===========================
The following Ansible inventory file can be used to connect to the newly
created Windows VM:
.. code-block:: ini
[windows]
WindowsServer ansible_host=127.0.0.1
[windows:vars]
ansible_user=vagrant
ansible_password=vagrant
ansible_port=55986
ansible_connection=winrm
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore
.. note:: The port ``55986`` is automatically forwarded by Vagrant to the
Windows host that was created. If this conflicts with an existing local
port, Vagrant will automatically use another one at random and display
it in the output.
The OS that is created is based on the image set. The following
images can be used:
- `jborean93/WindowsServer2012 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2012>`_
- `jborean93/WindowsServer2012R2 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2012R2>`_
- `jborean93/WindowsServer2016 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2016>`_
- `jborean93/WindowsServer2019 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2019>`_
- `jborean93/WindowsServer2022 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2022>`_
When the host is online, it is accessible over RDP on ``127.0.0.1:3389``, but the
port may differ depending on whether there was a conflict. To get rid of the host, run
``vagrant destroy --force`` and Vagrant will automatically remove the VM and
any other files associated with that VM.
While this is useful when testing modules on a single Windows instance, these
hosts won't work with domain-based modules without modification. The Vagrantfile
at `ansible-windows <https://github.com/jborean93/ansible-windows/tree/master/vagrant>`_
can be used to create a test domain environment to be used in Ansible. This
repo contains three files which are used by both Ansible and Vagrant to create
multiple Windows hosts in a domain environment. These files are:
- ``Vagrantfile``: The Vagrant file that reads the inventory setup of ``inventory.yml`` and provisions the hosts that are required
- ``inventory.yml``: Contains the hosts that are required and other connection information such as IP addresses and forwarded ports
- ``main.yml``: Ansible playbook called by Vagrant to provision the domain controller and join the child hosts to the domain
By default, these files will create the following environment:
- A single domain controller running on Windows Server 2016
- Five child hosts for each major Windows Server version joined to that domain
- A domain with the DNS name ``domain.local``
- A local administrator account on each host with the username ``vagrant`` and password ``vagrant``
- A domain admin account ``[email protected]`` with the password ``VagrantPass1``
The domain name and accounts can be modified by changing the variables
``domain_*`` in the ``inventory.yml`` file if it is required. The inventory
file can also be modified to provision more or less servers by changing the
hosts that are defined under the ``domain_children`` key. The host variable
``ansible_host`` is the private IP that will be assigned to the VirtualBox host-only
network adapter, while ``vagrant_box`` is the box that will be used to
create the VM.
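For illustration, a single entry under ``domain_children`` might look roughly like the
following simplified sketch; the key and variable names follow the description above,
but consult the repository's ``inventory.yml`` for the authoritative layout:
.. code-block:: yaml
domain_children:
  SERVER2022:
    ansible_host: 192.168.56.24
    vagrant_box: jborean93/WindowsServer2022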
Provisioning the environment
============================
To provision the environment as is, run the following:
.. code-block:: shell
git clone https://github.com/jborean93/ansible-windows.git
cd vagrant
vagrant up
.. note:: Vagrant provisions each host sequentially so this can take some time
to complete. If any errors occur during the Ansible phase of setting up the
domain, run ``vagrant provision`` to rerun just that step.
Unlike setting up a single Windows instance with Vagrant, these hosts can also
be accessed using the IP address directly as well as through the forwarded
ports. It is easier to access them over the host-only network adapter as the
normal protocol ports are used; for example, RDP is still over ``3389``. In cases where
the host cannot be resolved using the host-only network IP, the following
protocols can be accessed over ``127.0.0.1`` using these forwarded ports:
- ``RDP``: 295xx
- ``SSH``: 296xx
- ``WinRM HTTP``: 297xx
- ``WinRM HTTPS``: 298xx
- ``SMB``: 299xx
Replace ``xx`` with the entry number in the inventory file, where the domain
controller starts at ``00`` and each subsequent entry increments from there. For example, in
the default ``inventory.yml`` file, WinRM over HTTPS for ``SERVER2012R2`` is
forwarded over port ``29804`` as it's the fourth entry in ``domain_children``.
Windows new module development
==============================
When creating a new module there are a few things to keep in mind:
- Module code is in Powershell (.ps1) files while the documentation is contained in Python (.py) files of the same name
- Avoid using ``Write-Host/Debug/Verbose/Error`` in the module and add what needs to be returned to the ``$module.Result`` variable
- To fail a module, call ``$module.FailJson("failure message here")``; an Exception or ErrorRecord can be set as the second argument for a more descriptive error message
- You can pass in the exception or ErrorRecord as a second argument to ``FailJson("failure", $_)`` to get a more detailed output
- Most new modules require check mode and integration tests before they are merged into the main Ansible codebase
- Avoid using try/catch statements over a large code block, rather use them for individual calls so the error message can be more descriptive
- Try and catch specific exceptions when using try/catch statements
- Avoid using PSCustomObjects unless necessary
- Look for common functions in ``./lib/ansible/module_utils/powershell/`` and use the code there instead of duplicating work. These can be imported by adding the line ``#Requires -Module *`` where * is the filename to import, and will be automatically included with the module code sent to the Windows target when run through Ansible
- As well as PowerShell module utils, C# module utils are stored in ``./lib/ansible/module_utils/csharp/`` and are automatically imported in a module execution if the line ``#AnsibleRequires -CSharpUtil *`` is present
- C# and PowerShell module utils achieve the same goal but C# allows a developer to implement low level tasks, such as calling the Win32 API, and can be faster in some cases
- Ensure the code runs under Powershell v3 and higher on Windows Server 2012 and higher; if higher minimum Powershell or OS versions are required, ensure the documentation reflects this clearly
- Ansible runs modules under strictmode version 2.0. Be sure to test with that enabled by putting ``Set-StrictMode -Version 2.0`` at the top of your dev script
- Favor native Powershell cmdlets over executable calls if possible
- Use the full cmdlet name instead of aliases, for example ``Remove-Item`` over ``rm``
- Use named parameters with cmdlets, for example ``Remove-Item -Path C:\temp`` over ``Remove-Item C:\temp``
A very basic Powershell module `win_environment <https://github.com/ansible-collections/ansible.windows/blob/main/plugins/modules/win_environment.ps1>`_ incorporates best practices for Powershell modules. It demonstrates how to implement check-mode and diff-support, and also shows a warning to the user when a specific condition is met.
A slightly more advanced module is `win_uri <https://github.com/ansible-collections/ansible.windows/blob/main/plugins/modules/win_uri.ps1>`_ which additionally shows how to use different parameter types (bool, str, int, list, dict, path) and a selection of choices for parameters, how to fail a module and how to handle exceptions.
As part of the new ``AnsibleModule`` wrapper, the input parameters are defined and validated based on an argument
spec. The following options can be set at the root level of the argument spec:
- ``mutually_exclusive``: A list of lists, where the inner list contains module options that cannot be set together
- ``no_log``: Stops the module from emitting any logs to the Windows Event log
- ``options``: A dictionary where the key is the module option and the value is the spec for that option
- ``required_by``: A dictionary where the option(s) specified by the value must be set if the option specified by the key is also set
- ``required_if``: A list of lists where the inner list contains 3 or 4 elements:
* The first element is the module option to check the value against
* The second element is the value of the option specified by the first element, if matched then the required if check is run
* The third element is a list of required module options when the above is matched
* An optional fourth element is a boolean that states whether all module options in the third elements are required (default: ``$false``) or only one (``$true``)
- ``required_one_of``: A list of lists, where the inner list contains module options where at least one must be set
- ``required_together``: A list of lists, where the inner list contains module options that must be set together
- ``supports_check_mode``: Whether the module supports check mode, by default this is ``$false``
The actual input options for a module are set within the ``options`` value as a dictionary. The keys of this dictionary
are the module option names while the values are the spec of that module option. Each spec can have the following
options set:
- ``aliases``: A list of aliases for the module option
- ``choices``: A list of valid values for the module option, if ``type=list`` then each list value is validated against the choices and not the list itself
- ``default``: The default value for the module option if not set
- ``deprecated_aliases``: A list of hashtables that define aliases that are deprecated and the versions they will be removed in. Each entry must contain the keys ``name`` and ``collection_name`` with either ``version`` or ``date``
- ``elements``: When ``type=list``, this sets the type of each list value, the values are the same as ``type``
- ``no_log``: Will sanitise the input value before being returned in the ``module_invocation`` return value
- ``removed_in_version``: States when a deprecated module option is to be removed, a warning is displayed to the end user if set
- ``removed_at_date``: States the date (YYYY-MM-DD) when a deprecated module option will be removed, a warning is displayed to the end user if set
- ``removed_from_collection``: States from which collection the deprecated module option will be removed; must be specified if one of ``removed_in_version`` and ``removed_at_date`` is specified
- ``required``: Will fail when the module option is not set
- ``type``: The type of the module option, if not set then it defaults to ``str``. The valid types are:
* ``bool``: A boolean value
* ``dict``: A dictionary value, if the input is a JSON or key=value string then it is converted to dictionary
* ``float``: A float or `Single <https://docs.microsoft.com/en-us/dotnet/api/system.single?view=netframework-4.7.2>`_ value
* ``int``: An Int32 value
* ``json``: A string where the value is converted to a JSON string if the input is a dictionary
* ``list``: A list of values, ``elements=<type>`` can convert the individual list value types if set. If ``elements=dict`` and ``options`` is defined, the values will be validated against the argument spec. When the input is a string then the string is split by ``,`` and any whitespace is trimmed
* ``path``: A string where values like ``%TEMP%`` are expanded based on environment values. If the input value starts with ``\\?\`` then no expansion is run
* ``raw``: No conversions occur on the value passed in by Ansible
* ``sid``: Will convert Windows security identifier values or Windows account names to a `SecurityIdentifier <https://docs.microsoft.com/en-us/dotnet/api/system.security.principal.securityidentifier?view=netframework-4.7.2>`_ value
* ``str``: The value is converted to a string
When ``type=dict``, or ``type=list`` and ``elements=dict``, the following keys can also be set for that module option:
- ``apply_defaults``: The value is based on the ``options`` spec defaults for that key if ``True`` and null if ``False``. Only valid when the module option is not defined by the user and ``type=dict``.
- ``mutually_exclusive``: Same as the root level ``mutually_exclusive`` but validated against the values in the sub dict
- ``options``: Same as the root level ``options`` but contains the valid options for the sub option
- ``required_if``: Same as the root level ``required_if`` but validated against the values in the sub dict
- ``required_by``: Same as the root level ``required_by`` but validated against the values in the sub dict
- ``required_together``: Same as the root level ``required_together`` but validated against the values in the sub dict
- ``required_one_of``: Same as the root level ``required_one_of`` but validated against the values in the sub dict
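To tie these pieces together, the following is a minimal sketch of a complete argument
spec that combines root level settings, typed options, and a ``dict`` sub-spec, and then
creates the module from it. All option names here (``name``, ``state``, ``timeout``,
``credential``) are hypothetical:
.. code-block:: powershell
#!powershell
#AnsibleRequires -CSharpUtil Ansible.Basic
$spec = @{
    options = @{
        name = @{ type = 'str'; required = $true }
        state = @{ type = 'str'; choices = 'absent', 'present'; default = 'present' }
        timeout = @{ type = 'int'; default = 30 }
        credential = @{
            type = 'dict'
            options = @{
                username = @{ type = 'str' }
                password = @{ type = 'str'; no_log = $true }
            }
            required_together = @(
                ,@('username', 'password')
            )
        }
    }
    supports_check_mode = $true
}
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)
# Validated values are then available under $module.Params
$module.Result.name = $module.Params.name
$module.ExitJson()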
A module type can also be a delegate function that converts the value to whatever is required by the module option. For
example the following snippet shows how to create a custom type that creates a ``UInt64`` value:
.. code-block:: powershell
$spec = @{
    uint64_type = @{ type = [Func[[Object], [UInt64]]]{ [System.UInt64]::Parse($args[0]) } }
}
$uint64_type = $module.Params.uint64_type
When in doubt, look at some of the other core modules and see how things have been
implemented there.
Sometimes there are multiple ways that Windows offers to complete a task; this
is the order to favor when writing modules:
- Native Powershell cmdlets like ``Remove-Item -Path C:\temp -Recurse``
- .NET classes like ``[System.IO.Path]::GetRandomFileName()``
- WMI objects through the ``New-CimInstance`` cmdlet
- COM objects through ``New-Object -ComObject`` cmdlet
- Calls to native executables like ``Secedit.exe``
PowerShell modules support a small subset of the ``#Requires`` options built
into PowerShell as well as some Ansible-specific requirements specified by
``#AnsibleRequires``. These statements can be placed at any point in the script,
but are most commonly near the top. They are used to make it easier to state the
requirements of the module without writing any of the checks. Each ``requires``
statement must be on its own line, but there can be multiple requires statements
in one script.
These are the checks that can be used within Ansible modules:
- ``#Requires -Module Ansible.ModuleUtils.<module_util>``: Added in Ansible 2.4, specifies a module_util to load in for the module execution.
- ``#Requires -Version x.y``: Added in Ansible 2.5, specifies the version of PowerShell that is required by the module. The module will fail if this requirement is not met.
- ``#AnsibleRequires -PowerShell <module_util>``: Added in Ansible 2.8, like ``#Requires -Module``, this specifies a module_util to load in for module execution.
- ``#AnsibleRequires -CSharpUtil <module_util>``: Added in Ansible 2.8, specifies a C# module_util to load in for the module execution.
- ``#AnsibleRequires -OSVersion x.y``: Added in Ansible 2.5, specifies the OS build version that is required by the module and will fail if this requirement is not met. The actual OS version is derived from ``[Environment]::OSVersion.Version``.
- ``#AnsibleRequires -Become``: Added in Ansible 2.5, forces the exec runner to run the module with ``become``, which is primarily used to bypass WinRM restrictions. If ``ansible_become_user`` is not specified then the ``SYSTEM`` account is used instead.
The ``#AnsibleRequires -PowerShell`` and ``#AnsibleRequires -CSharpUtil``
support further features such as:
- Importing a util contained in a collection (added in Ansible 2.9)
- Importing a util by relative names (added in Ansible 2.10)
- Specifying the util is optional by adding ``-Optional`` to the import
declaration (added in Ansible 2.12).
See the below examples for more details:
.. code-block:: powershell
# Imports the PowerShell Ansible.ModuleUtils.Legacy provided by Ansible itself
#AnsibleRequires -PowerShell Ansible.ModuleUtils.Legacy
# Imports the PowerShell my_util in the my_namespace.my_name collection
#AnsibleRequires -PowerShell ansible_collections.my_namespace.my_name.plugins.module_utils.my_util
# Imports the PowerShell my_util that exists in the same collection as the current module
#AnsibleRequires -PowerShell ..module_utils.my_util
# Imports the PowerShell Ansible.ModuleUtils.Optional provided by Ansible if it exists.
# If it does not exist then it will do nothing.
#AnsibleRequires -PowerShell Ansible.ModuleUtils.Optional -Optional
# Imports the C# Ansible.Process provided by Ansible itself
#AnsibleRequires -CSharpUtil Ansible.Process
# Imports the C# my_util in the my_namespace.my_name collection
#AnsibleRequires -CSharpUtil ansible_collections.my_namespace.my_name.plugins.module_utils.my_util
# Imports the C# my_util that exists in the same collection as the current module
#AnsibleRequires -CSharpUtil ..module_utils.my_util
# Imports the C# Ansible.Optional provided by Ansible if it exists.
# If it does not exist then it will do nothing.
#AnsibleRequires -CSharpUtil Ansible.Optional -Optional
For optional require statements, it is up to the module code to then verify
whether the util has been imported before trying to use it. This can be done by
checking if a function or type provided by the util exists or not.
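A minimal sketch of such a guard, assuming a hypothetical function ``Get-OptionalInfo``
exported by the optional PowerShell util and a hypothetical type
``Ansible.Optional.OptionalUtil`` provided by the optional C# util, might look like:
.. code-block:: powershell
#AnsibleRequires -PowerShell Ansible.ModuleUtils.Optional -Optional
#AnsibleRequires -CSharpUtil Ansible.Optional -Optional
# Guard the PowerShell util by checking whether its function was imported
if (Get-Command -Name Get-OptionalInfo -ErrorAction SilentlyContinue) {
    $module.Result.info = Get-OptionalInfo
}
# Guard the C# util by checking whether its type was loaded
if ('Ansible.Optional.OptionalUtil' -as [type]) {
    $module.Result.optional_loaded = $true
}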
While both ``#Requires -Module`` and ``#AnsibleRequires -PowerShell`` can be
used to load a PowerShell module it is recommended to use ``#AnsibleRequires``.
This is because ``#AnsibleRequires`` supports collection module utils, imports
by relative util names, and optional util imports.
C# module utils can reference other C# utils by adding the line
``using Ansible.<module_util>;`` to the top of the script with all the other
using statements.
Windows module utilities
========================
Like Python modules, PowerShell modules also provide a number of module
utilities that provide helper functions within PowerShell. These module_utils
can be imported by adding the following line to a PowerShell module:
.. code-block:: powershell
#Requires -Module Ansible.ModuleUtils.Legacy
This will import the module_util at ``./lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1``
and enable calling all of its functions. As of Ansible 2.8, Windows module
utils can also be written in C# and stored at ``lib/ansible/module_utils/csharp``.
These module_utils can be imported by adding the following line to a PowerShell
module:
.. code-block:: powershell
#AnsibleRequires -CSharpUtil Ansible.Basic
This will import the module_util at ``./lib/ansible/module_utils/csharp/Ansible.Basic.cs``
and automatically load the types in the executing process. C# module utils can
reference each other and be loaded together by adding the following line to the
using statements at the top of the util:
.. code-block:: csharp
using Ansible.Become;
There are special comments that can be set in a C# file for controlling the
compilation parameters. The following comments can be added to the script:
- ``//AssemblyReference -Name <assembly dll> [-CLR [Core|Framework]]``: The assembly DLL to reference during compilation, the optional ``-CLR`` flag can also be used to state whether to reference when running under .NET Core, Framework, or both (if omitted)
- ``//NoWarn -Name <error id> [-CLR [Core|Framework]]``: A compiler warning ID to ignore when compiling the code, the optional ``-CLR`` works the same as above. A list of warnings can be found at `Compiler errors <https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/compiler-messages/index>`_
As well as this, the following pre-processor symbols are defined:
- ``CORECLR``: This symbol is present when PowerShell is running through .NET Core
- ``WINDOWS``: This symbol is present when PowerShell is running on Windows
- ``UNIX``: This symbol is present when PowerShell is running on Unix
A combination of these flags help to make a module util interoperable on both
.NET Framework and .NET Core, here is an example of them in action:
.. code-block:: csharp
#if CORECLR
using Newtonsoft.Json;
#else
using System.Web.Script.Serialization;
#endif
//AssemblyReference -Name Newtonsoft.Json.dll -CLR Core
//AssemblyReference -Name System.Web.Extensions.dll -CLR Framework
// Ignore error CS1702 for all .NET types
//NoWarn -Name CS1702
// Ignore error CS1956 only for .NET Framework
//NoWarn -Name CS1956 -CLR Framework
The following is a list of module_utils that are packaged with Ansible and a general description of what
they do:
- ArgvParser: Utility used to convert a list of arguments to an escaped string compliant with the Windows argument parsing rules.
- CamelConversion: Utility used to convert camelCase strings/lists/dicts to snake_case.
- CommandUtil: Utility used to execute a Windows process and return the stdout/stderr and rc as separate objects.
- FileUtil: Utility that expands on the ``Get-ChildItem`` and ``Test-Path`` to work with special files like ``C:\pagefile.sys``.
- Legacy: General definitions and helper utilities for Ansible modules.
- LinkUtil: Utility to create, remove, and get information about symbolic links, junction points and hard links.
- SID: Utilities used to convert a user or group to a Windows SID and vice versa.
For more details on any specific module utility and their requirements, please see the `Ansible
module utilities source code <https://github.com/ansible/ansible/tree/devel/lib/ansible/module_utils/powershell>`_.
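As a brief illustration of one of these utils, a module could use ``CommandUtil`` to run
a native process and return the stdout/stderr and rc separately, as its description above
says. This is only a sketch that assumes the ``Run-Command`` function exported by that
util; check the util's source for the exact parameter and return-key names:
.. code-block:: powershell
#Requires -Module Ansible.ModuleUtils.CommandUtil
# Run a native executable and capture stdout, stderr, and the return code
# (return keys assumed from the util's description; verify against its source)
$command_result = Run-Command -command "whoami.exe /all"
$module.Result.stdout = $command_result.stdout
$module.Result.stderr = $command_result.stderr
$module.Result.rc = $command_result.rc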
PowerShell module utilities can be stored outside of the standard Ansible
distribution for use with custom modules. Custom module_utils are placed in a
folder called ``module_utils`` located in the root folder of the playbook or role
directory.
C# module utilities can also be stored outside of the standard Ansible distribution for use with custom modules. Like
PowerShell utils, these are stored in a folder called ``module_utils`` and the filename must end in the extension
``.cs``, start with ``Ansible.`` and be named after the namespace defined in the util.
The below example is a role structure that contains two PowerShell custom module_utils called
``Ansible.ModuleUtils.ModuleUtil1``, ``Ansible.ModuleUtils.ModuleUtil2``, and a C# util containing the namespace
``Ansible.CustomUtil``:
.. code-block:: console
meta/
    main.yml
defaults/
    main.yml
module_utils/
    Ansible.ModuleUtils.ModuleUtil1.psm1
    Ansible.ModuleUtils.ModuleUtil2.psm1
    Ansible.CustomUtil.cs
tasks/
    main.yml
Each PowerShell module_util must contain at least one function that has been exported with ``Export-ModuleMember``
at the end of the file. For example:
.. code-block:: powershell
Export-ModuleMember -Function Invoke-CustomUtil, Get-CustomInfo
Exposing shared module options
++++++++++++++++++++++++++++++
PowerShell module utils can easily expose common module options that a module can use when building its argument spec.
This allows common features to be stored and maintained in one location and have those features used by multiple
modules with minimal effort. Any new features or bugfixes added to one of these utils are then automatically used by
the various modules that call that util.
An example of this would be to have a module util that handles authentication and communication against an API. This
util can be used by multiple modules to expose a common set of module options like the API endpoint, username,
password, timeout, cert validation, and so on without having to add those options to each module spec.
The standard convention for a module util that has a shared argument spec is to have:
- A ``Get-<namespace.name.util name>Spec`` function that outputs the common spec for a module
* It is highly recommended to make this function name unique to the module to avoid any conflicts with other utils that can be loaded
* The format of the output spec is a Hashtable in the same format as the ``$spec`` used for normal modules
- A function that takes in an ``AnsibleModule`` object under a ``-Module`` parameter, which it can use to get the shared options
Because these options can be shared across various modules, it is highly recommended to keep the module option names and
aliases in the shared spec as specific as they can be. For example, do not have a util option called ``password``;
rather, prefix it with a unique name like ``acme_password``.
.. warning::
Failure to have a unique option name or alias can prevent the util from being used by modules that also use those names or
aliases for their own options.
The following is an example module util called ``ServiceAuth.psm1`` in a collection that implements a common way for
modules to authenticate with a service.
.. code-block:: powershell
Function Invoke-MyServiceResource {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory=$true)]
        [ValidateScript({ $_.GetType().FullName -eq 'Ansible.Basic.AnsibleModule' })]
        $Module,
        [Parameter(Mandatory=$true)]
        [String]
        $ResourceId,
        [String]
        $State = 'present'
    )
    # Process the common module options known to the util
    $params = @{
        ServerUri = $Module.Params.my_service_url
    }
    if ($Module.Params.my_service_username) {
        $params.Credential = Get-MyServiceCredential
    }
    if ($State -eq 'absent') {
        Remove-MyService @params -ResourceId $ResourceId
    } else {
        New-MyService @params -ResourceId $ResourceId
    }
}
Function Get-MyNamespaceMyCollectionServiceAuthSpec {
    # Output the util spec
    @{
        options = @{
            my_service_url = @{ type = 'str'; required = $true }
            my_service_username = @{ type = 'str' }
            my_service_password = @{ type = 'str'; no_log = $true }
        }
        required_together = @(
            ,@('my_service_username', 'my_service_password')
        )
    }
}
$exportMembers = @{
    Function = 'Get-MyNamespaceMyCollectionServiceAuthSpec', 'Invoke-MyServiceResource'
}
Export-ModuleMember @exportMembers
For a module to take advantage of this common argument spec, it can be set out like:
.. code-block:: powershell
#!powershell
# Include the module util ServiceAuth.psm1 from the my_namespace.my_collection collection
#AnsibleRequires -PowerShell ansible_collections.my_namespace.my_collection.plugins.module_utils.ServiceAuth
# Create the module spec like normal
$spec = @{
    options = @{
        resource_id = @{ type = 'str'; required = $true }
        state = @{ type = 'str'; choices = 'absent', 'present' }
    }
}
# Create the module from the module spec but also include the util spec to merge into our own.
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec, @(Get-MyNamespaceMyCollectionServiceAuthSpec))
# Call the ServiceAuth module util and pass in the module object so it can access the module options.
Invoke-MyServiceResource -Module $module -ResourceId $module.Params.resource_id -State $module.Params.state
$module.ExitJson()
.. note::
Options defined in the module spec will always have precedence over a util spec. Any list values under the same key
in a util spec will be appended to the module spec for that same key. Dictionary values will add any keys that are
missing from the module spec and merge any values that are lists or dictionaries. This is similar to how the doc
fragment plugins work when extending module documentation.
To document these shared util options for a module, create a doc fragment plugin that documents the options implemented
by the module util and extend the module docs for every module that implements the util to include that fragment in
its docs.
Windows playbook module testing
===============================
You can test a module with an Ansible playbook. For example:
- Create a playbook in any directory ``touch testmodule.yml``.
- Create an inventory file in the same directory ``touch hosts``.
- Populate the inventory file with the variables required to connect to a Windows host(s).
- Add the following to the new playbook file:
.. code-block:: yaml
---
- name: test out windows module
  hosts: windows
  tasks:
    - name: test out module
      win_module:
        name: test name
- Run the playbook ``ansible-playbook -i hosts testmodule.yml``
This can be useful for seeing how Ansible runs with
the new module end to end. Other possible ways to test the module are
shown below.
Windows debugging
=================
Debugging a module currently can only be done on a Windows host. This can be
useful when developing a new module or implementing bug fixes. These
are some steps that need to be followed to set this up:
- Copy the module script to the Windows server
- Copy the folders ``./lib/ansible/module_utils/powershell`` and ``./lib/ansible/module_utils/csharp`` to the same directory as the script above
- Add an extra ``#`` to the start of any ``#Requires -Module`` lines in the module code to comment them out; the setup code below imports these module utils manually
- Add the following to the start of the module script that was copied to the server:
.. code-block:: powershell
# Set $ErrorActionPreference to what's set during Ansible execution
$ErrorActionPreference = "Stop"
# Set the first argument as the path to a JSON file that contains the module args
$args = @("$($pwd.Path)\args.json")
# Or instead of an args file, set $complex_args to the pre-processed module args
$complex_args = @{
    _ansible_check_mode = $false
    _ansible_diff = $false
    path = "C:\temp"
    state = "present"
}
# Import any C# utils referenced with '#AnsibleRequires -CSharpUtil' or 'using Ansible.<namespace>;'
# The $_csharp_utils entries should be the contents of the C# util files and not the paths
Import-Module -Name "$($pwd.Path)\powershell\Ansible.ModuleUtils.AddType.psm1"
$_csharp_utils = @(
    [System.IO.File]::ReadAllText("$($pwd.Path)\csharp\Ansible.Basic.cs")
)
Add-CSharpType -References $_csharp_utils -IncludeDebugInfo
# Import any PowerShell modules referenced with '#Requires -Module'
Import-Module -Name "$($pwd.Path)\powershell\Ansible.ModuleUtils.Legacy.psm1"
# End of the setup code and start of the module code
#!powershell
You can add more args to ``$complex_args`` as required by the module or define the module options through a JSON file
with the structure:
.. code-block:: json
{
    "ANSIBLE_MODULE_ARGS": {
        "_ansible_check_mode": false,
        "_ansible_diff": false,
        "path": "C:\\temp",
        "state": "present"
    }
}
There are multiple IDEs that can be used to debug a Powershell script, two of
the most popular ones are
- `Powershell ISE`_
- `Visual Studio Code`_
.. _Powershell ISE: https://docs.microsoft.com/en-us/powershell/scripting/core-powershell/ise/how-to-debug-scripts-in-windows-powershell-ise
.. _Visual Studio Code: https://blogs.technet.microsoft.com/heyscriptingguy/2017/02/06/debugging-powershell-script-in-visual-studio-code-part-1/
To be able to view the arguments as passed by Ansible to the module follow
these steps.
- Prefix the Ansible command with :envvar:`ANSIBLE_KEEP_REMOTE_FILES=1<ANSIBLE_KEEP_REMOTE_FILES>` to specify that Ansible should keep the exec files on the server.
- Log onto the Windows server using the same user account that Ansible used to execute the module.
- Navigate to ``%TEMP%\..``. It should contain a folder starting with ``ansible-tmp-``.
- Inside this folder, open the PowerShell script for the module.
- In this script is a raw JSON script under ``$json_raw`` which contains the module arguments under ``module_args``. These args can be assigned manually to the ``$complex_args`` variable that is defined on your debug script or put in the ``args.json`` file.
Windows unit testing
====================
Currently there is no mechanism to run unit tests for Powershell modules under Ansible CI.
Windows integration testing
===========================
Integration tests for Ansible modules are typically written as Ansible roles. These test
roles are located in ``./test/integration/targets``. You must first set up your testing
environment, and configure a test inventory for Ansible to connect to.
In this example we will set up a test inventory to connect to two hosts and run the integration
tests for win_stat:
- Run the command ``source ./hacking/env-setup`` to prepare the environment.
- Create a copy of ``./test/integration/inventory.winrm.template`` and name it ``inventory.winrm``.
- Fill in entries under ``[windows]`` and set the required variables that are needed to connect to the host.
- :ref:`Install the required Python modules <windows_winrm>` to support WinRM and a configured authentication method.
- To execute the integration tests, run ``ansible-test windows-integration win_stat``; you can replace ``win_stat`` with the role you want to test.
This will execute all the tests currently defined for that role. You can set
the verbosity level using the ``-v`` argument just as you would with
ansible-playbook.
When developing tests for a new module, it is recommended to test a scenario once in
check mode and twice not in check mode. This ensures that check mode
does not make any changes but reports a change, as well as that the second run is
idempotent and does not report changes. For example:
.. code-block:: yaml
- name: remove a file (check mode)
  win_file:
    path: C:\temp
    state: absent
  register: remove_file_check
  check_mode: true
- name: get result of remove a file (check mode)
  win_command: powershell.exe "if (Test-Path -Path 'C:\temp') { 'true' } else { 'false' }"
  register: remove_file_actual_check
- name: assert remove a file (check mode)
  assert:
    that:
      - remove_file_check is changed
      - remove_file_actual_check.stdout == 'true\r\n'
- name: remove a file
  win_file:
    path: C:\temp
    state: absent
  register: remove_file
- name: get result of remove a file
  win_command: powershell.exe "if (Test-Path -Path 'C:\temp') { 'true' } else { 'false' }"
  register: remove_file_actual
- name: assert remove a file
  assert:
    that:
      - remove_file is changed
      - remove_file_actual.stdout == 'false\r\n'
- name: remove a file (idempotent)
  win_file:
    path: C:\temp
    state: absent
  register: remove_file_again
- name: assert remove a file (idempotent)
  assert:
    that:
      - not remove_file_again is changed
Windows communication and development support
=============================================
Join the ``#ansible-devel`` or ``#ansible-windows`` chat channels (using Matrix at ansible.im or using IRC at `irc.libera.chat <https://libera.chat/>`_) for discussions about Ansible development for Windows.
For questions and discussions pertaining to using the Ansible product,
use the ``#ansible`` channel.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,422 |
Remove Windows 2012 R2 from ansible-test
|
### Summary
Windows Server 2012 R2 reaches [end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10th. This is a remote VM removal. Removal can be done before end of support, after a 2-week notification period.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80422
|
https://github.com/ansible/ansible/pull/80778
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
|
0a36cd910e4cdb2a3a0a40488596b69789ffdbe2
| 2023-04-05T21:27:58Z |
python
| 2023-05-18T18:02:58Z |
docs/docsite/rst/dev_guide/testing/sanity/integration-aliases.rst
|
integration-aliases
===================
Integration tests are executed by ``ansible-test`` and reside in directories under ``test/integration/targets/``.
Each test MUST have an ``aliases`` file to control test execution.
Aliases are explained in the following sections. Each alias must be on a separate line in an ``aliases`` file.
Groups
------
Tests must be configured to run in exactly one group. This is done by adding the appropriate group to the ``aliases`` file.
The following are examples of some of the available groups:
- ``shippable/posix/group1``
- ``shippable/windows/group2``
- ``shippable/azure/group3``
- ``shippable/aws/group1``
- ``shippable/cloud/group1``
Groups are used to balance tests across multiple CI jobs to minimize test run time.
They also improve efficiency by keeping tests with similar requirements running together.
When selecting a group for a new test, use the same group as existing tests similar to the one being added.
If more than one group is available, select one randomly.
Setup
-----
Aliases can be used to execute setup targets before running tests:
- ``setup/once/TARGET`` - Run the target ``TARGET`` before the first target that requires it.
- ``setup/always/TARGET`` - Run the target ``TARGET`` before each target that requires it.
Requirements
------------
Aliases can be used to express some test requirements:
- ``needs/privileged`` - Requires ``--docker-privileged`` when running tests with ``--docker``.
- ``needs/root`` - Requires running tests as ``root`` or with ``--docker``.
- ``needs/ssh`` - Requires SSH connections to localhost (or the test container with ``--docker``) without a password.
- ``needs/httptester`` - Requires use of the http-test-container to run tests.
Dependencies
------------
Some test dependencies are automatically discovered:
- Ansible role dependencies defined in ``meta/main.yml`` files.
- Setup targets defined with ``setup/*`` aliases.
- Symbolic links from one target to a file in another target.
Aliases can be used to declare dependencies that are not handled automatically:
- ``needs/target/TARGET`` - Requires use of the test target ``TARGET``.
- ``needs/file/PATH`` - Requires use of the file ``PATH`` relative to the git root.
Skipping
--------
Aliases can be used to skip platforms using one of the following:
- ``skip/freebsd`` - Skip tests on FreeBSD.
- ``skip/macos`` - Skip tests on macOS.
- ``skip/rhel`` - Skip tests on RHEL.
- ``skip/docker`` - Skip tests when running in a Docker container.
Platform versions, as specified using the ``--remote`` option with ``/`` removed, can also be skipped:
- ``skip/freebsd11.1`` - Skip tests on FreeBSD 11.1.
- ``skip/rhel7.6`` - Skip tests on RHEL 7.6.
Windows versions, as specified using the ``--windows`` option can also be skipped:
- ``skip/windows/2012`` - Skip tests on Windows Server 2012.
- ``skip/windows/2012-R2`` - Skip tests on Windows Server 2012 R2.
Aliases can be used to skip Python major versions using one of the following:
- ``skip/python2`` - Skip tests on Python 2.x.
- ``skip/python3`` - Skip tests on Python 3.x.
For more fine-grained skipping, use conditionals in integration test playbooks, such as:
.. code-block:: yaml
when: ansible_distribution in ('Ubuntu',)
Miscellaneous
-------------
There are several other aliases available as well:
- ``destructive`` - Requires ``--allow-destructive`` to run without ``--docker`` or ``--remote``.
- ``hidden`` - Target is ignored. Usable as a dependency. Automatic for ``setup_`` and ``prepare_`` prefixed targets.
- ``retry/never`` - Target is excluded from retries enabled by the ``--retry-on-error`` option.
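Putting several of the sections above together, a hypothetical ``aliases`` file for a
Windows test target might look like the following sketch; the ``setup_my_service``
target name is made up for illustration:
.. code-block:: none
shippable/windows/group2
setup/once/setup_my_service
needs/target/setup_remote_tmp_dir
skip/windows/2012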
Unstable
--------
Tests which fail sometimes should be marked with the ``unstable`` alias until the instability has been fixed.
These tests will continue to run for pull requests which modify the test or the module under test.
This avoids unnecessary test failures for other pull requests, as well as tests on merge runs and nightly CI jobs.
There are two ways to run unstable tests manually:
- Use the ``--allow-unstable`` option for ``ansible-test``
- Prefix the test name with ``unstable/`` when passing it to ``ansible-test``.
Tests will be marked as unstable by a member of the Ansible Core Team.
GitHub issues_ will be created to track each unstable test.
Disabled
--------
Tests which always fail should be marked with the ``disabled`` alias until they can be fixed.
Disabled tests are automatically skipped.
There are two ways to run disabled tests manually:
- Use the ``--allow-disabled`` option for ``ansible-test``
- Prefix the test name with ``disabled/`` when passing it to ``ansible-test``.
Tests will be marked as disabled by a member of the Ansible Core Team.
GitHub issues_ will be created to track each disabled test.
Unsupported
-----------
Tests which cannot be run in CI should be marked with the ``unsupported`` alias.
Most tests can be supported through the use of simulators and/or cloud plugins.
However, if that is not possible then marking a test as unsupported will prevent it from running in CI.
There are two ways to run unsupported tests manually:
* Use the ``--allow-unsupported`` option for ``ansible-test``
* Prefix the test name with ``unsupported/`` when passing it to ``ansible-test``.
Tests will be marked as unsupported by the contributor of the test.
Cloud
-----
Tests for cloud services and other modules that require access to external APIs usually require special support for testing in CI.
These require an additional alias to indicate the required test plugin.
Some of the available aliases are:
- ``cloud/aws``
- ``cloud/azure``
- ``cloud/cs``
- ``cloud/digitalocean``
- ``cloud/openshift``
- ``cloud/vcenter``
Untested
--------
Every module and plugin should have integration tests, even if the tests cannot be run in CI.
Issues
------
Tests that are marked as unstable_ or disabled_ will have an issue created to track the status of the test.
Each issue will be assigned to one of the following projects:
- `AWS <https://github.com/ansible/ansible/projects/21>`_
- `Azure <https://github.com/ansible/ansible/projects/22>`_
- `Windows <https://github.com/ansible/ansible/projects/23>`_
- `General <https://github.com/ansible/ansible/projects/25>`_
Questions
---------
For questions about integration tests reach out to @mattclay or @gundalow on GitHub or the ``#ansible-devel`` chat channel (using Matrix at ansible.im or using IRC at `irc.libera.chat <https://libera.chat/>`_).
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,422 |
Remove Windows 2012 R2 from ansible-test
|
### Summary
Windows Server 2012 R2 reaches [end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10th. This is a remote VM removal. Removal can be done before end of support, after a 2-week notification period.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80422
|
https://github.com/ansible/ansible/pull/80778
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
|
0a36cd910e4cdb2a3a0a40488596b69789ffdbe2
| 2023-04-05T21:27:58Z |
python
| 2023-05-18T18:02:58Z |
docs/docsite/rst/os_guide/windows_faq.rst
|
.. _windows_faq:
Windows Frequently Asked Questions
==================================
Here are some commonly asked questions in regards to Ansible and Windows and
their answers.
.. note:: This document covers questions about managing Microsoft Windows servers with Ansible.
For questions about Ansible Core, please see the
:ref:`general FAQ page <ansible_faq>`.
Does Ansible work with Windows XP or Server 2003?
``````````````````````````````````````````````````
Ansible does not work with Windows XP or Server 2003 hosts. Ansible does work with these Windows operating system versions:
* Windows Server 2008 :sup:`1`
* Windows Server 2008 R2 :sup:`1`
* Windows Server 2012
* Windows Server 2012 R2
* Windows Server 2016
* Windows Server 2019
* Windows 7 :sup:`1`
* Windows 8.1
* Windows 10
1 - See the :ref:`Server 2008 FAQ <windows_faq_server2008>` entry for more details.
Ansible also has minimum PowerShell version requirements - please see
:ref:`windows_setup` for the latest information.
.. _windows_faq_server2008:
Are Server 2008, 2008 R2 and Windows 7 supported?
`````````````````````````````````````````````````
Microsoft ended Extended Support for these versions of Windows on January 14th, 2020, and Ansible deprecated official support in the 2.10 release. No new feature development will occur targeting these operating systems, and automated testing has ceased. However, existing modules and features will likely continue to work, and simple pull requests to resolve issues with these Windows versions may be accepted.
Can I manage Windows Nano Server with Ansible?
``````````````````````````````````````````````
Ansible does not currently work with Windows Nano Server, since it does
not have access to the full .NET Framework that is used by the majority of the
modules and internal components.
.. _windows_faq_ansible:
Can Ansible run on Windows?
```````````````````````````
No, Ansible can only manage Windows hosts. Ansible cannot run on a Windows host
natively, though it can run under the Windows Subsystem for Linux (WSL).
.. note:: The Windows Subsystem for Linux is not supported by Ansible and
should not be used for production systems.
To install Ansible on WSL, the following commands
can be run in the bash terminal:
.. code-block:: shell
sudo apt-get update
sudo apt-get install python3-pip git libffi-dev libssl-dev -y
pip install --user ansible pywinrm
To run Ansible from source instead of a release on the WSL, simply uninstall the pip-installed
version and then clone the git repo.
.. code-block:: shell
pip uninstall ansible -y
git clone https://github.com/ansible/ansible.git
source ansible/hacking/env-setup
# To enable Ansible on login, run the following
echo ". ~/ansible/hacking/env-setup -q' >> ~/.bashrc
If you encounter timeout errors when running Ansible on the WSL, this may be due to an issue
with ``sleep`` not returning correctly. The following workaround may resolve the issue:
.. code-block:: shell
mv /usr/bin/sleep /usr/bin/sleep.orig
ln -s /bin/true /usr/bin/sleep
Another option is to use WSL 2 if running Windows 10 build 2004 or later.
.. code-block:: shell
wsl --set-default-version 2
Can I use SSH keys to authenticate to Windows hosts?
````````````````````````````````````````````````````
You cannot use SSH keys with the WinRM or PSRP connection plugins.
These connection plugins use X509 certificates for authentication instead
of the SSH key pairs that SSH uses.
The way X509 certificates are generated and mapped to a user is different
from the SSH implementation; consult the :ref:`windows_winrm` documentation for
more information.
Ansible 2.8 has added an experimental option to use the SSH connection plugin,
which uses SSH keys for authentication, for Windows servers. See :ref:`this question <windows_faq_ssh>`
for more information.
.. _windows_faq_winrm:
Why can I run a command locally that does not work under Ansible?
`````````````````````````````````````````````````````````````````
Ansible executes commands through WinRM. These processes are different from
running a command locally in these ways:
* Unless using an authentication option like CredSSP or Kerberos with
credential delegation, the WinRM process does not have the ability to
delegate the user's credentials to a network resource, causing ``Access is
Denied`` errors.
* All processes run under WinRM are in a non-interactive session. Applications
that require an interactive session will not work.
* When running through WinRM, Windows restricts access to internal Windows
APIs like the Windows Update API and DPAPI, which some installers and
programs rely on.
Some ways to bypass these restrictions are to:
* Use ``become``, which runs a command as it would when run locally. This will
bypass most WinRM restrictions, as Windows is unaware the process is running
under WinRM when ``become`` is used. See the :ref:`become` documentation for more
information.
* Use a scheduled task, which can be created with ``win_scheduled_task``. Like
``become``, it will bypass all WinRM restrictions, but it can only be used to run
commands, not modules.
* Use ``win_psexec`` to run a command on the host. PSExec does not use WinRM
and so will bypass any of the restrictions.
* To access network resources without any of these workarounds, you can use
CredSSP or Kerberos with credential delegation enabled.
See :ref:`become` more info on how to use become. The limitations section at
:ref:`windows_winrm` has more details around WinRM limitations.
This program won't install on Windows with Ansible
``````````````````````````````````````````````````
See :ref:`this question <windows_faq_winrm>` for more information about WinRM limitations.
What Windows modules are available?
```````````````````````````````````
Most of the Ansible modules in Ansible Core are written for a combination of
Linux/Unix machines and arbitrary web services. These modules are written in
Python and most of them do not work on Windows.
Because of this, there are dedicated Windows modules that are written in
PowerShell and are meant to be run on Windows hosts. A list of these modules
can be found :ref:`here <windows_modules>`.
In addition, the following Ansible Core modules/action-plugins work with Windows:
* add_host
* assert
* async_status
* debug
* fail
* fetch
* group_by
* include
* include_role
* include_vars
* meta
* pause
* raw
* script
* set_fact
* set_stats
* setup
* slurp
* template (also: win_template)
* wait_for_connection
Ansible Windows modules exist in the :ref:`plugins_in_ansible.windows`, :ref:`plugins_in_community.windows`, and :ref:`plugins_in_chocolatey.chocolatey` collections.
Can I run Python modules on Windows hosts?
``````````````````````````````````````````
No, the WinRM connection protocol is set to use PowerShell modules, so Python
modules will not work. A way to bypass this issue is to use
``delegate_to: localhost`` to run a Python module on the Ansible controller.
This is useful if during a playbook, an external service needs to be contacted
and there is no equivalent Windows module available.
.. _windows_faq_ssh:
Can I connect to Windows hosts over SSH?
````````````````````````````````````````
Ansible 2.8 has added an experimental option to use the SSH connection plugin
to manage Windows hosts. To connect to Windows hosts over SSH, you must install and configure the `Win32-OpenSSH <https://github.com/PowerShell/Win32-OpenSSH>`_
fork that is in development with Microsoft on
the Windows host(s). While most of the basics should work with SSH,
``Win32-OpenSSH`` is rapidly changing, with new features added and bugs
fixed in every release. It is highly recommended that you `install <https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH>`_ the latest release
of ``Win32-OpenSSH`` from the GitHub Releases page when using it with Ansible
on Windows hosts.
To use SSH as the connection to a Windows host, set the following variables in
the inventory:
.. code-block:: shell
ansible_connection=ssh
# Set either cmd or powershell not both
ansible_shell_type=cmd
# ansible_shell_type=powershell
The value for ``ansible_shell_type`` should either be ``cmd`` or ``powershell``.
Use ``cmd`` if the ``DefaultShell`` has not been configured on the SSH service
and ``powershell`` if that has been set as the ``DefaultShell``.
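If you are unsure whether ``DefaultShell`` has been set, you can query the registry value on the Windows host. This is a sketch; the property is simply absent when the default ``cmd.exe`` shell is in use:

.. code-block:: powershell

    # Returns the configured default shell, or nothing if cmd.exe (the default) is in use
    Get-ItemProperty -Path HKLM:\SOFTWARE\OpenSSH -Name DefaultShell -ErrorAction SilentlyContinue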
Why is connecting to a Windows host through SSH failing?
````````````````````````````````````````````````````````
Unless you are using ``Win32-OpenSSH`` as described above, you must connect to
Windows hosts using :ref:`windows_winrm`. If your Ansible output indicates that
SSH was used, either you did not set the connection vars properly or the host is not inheriting them correctly.
Make sure ``ansible_connection: winrm`` is set in the inventory for the Windows
host(s).
Why are my credentials being rejected?
``````````````````````````````````````
This can be due to a myriad of reasons unrelated to incorrect credentials.
See HTTP 401/Credentials Rejected at :ref:`windows_setup` for a more detailed
guide of what this could mean.
Why am I getting an error SSL CERTIFICATE_VERIFY_FAILED?
````````````````````````````````````````````````````````
When the Ansible controller is running on Python 2.7.9+ or an older version of Python that
has backported SSLContext (like Python 2.7.5 on RHEL 7), the controller will attempt to
validate the certificate WinRM is using for an HTTPS connection. If the
certificate cannot be validated (such as in the case of a self-signed certificate), it will
fail the verification process.
To ignore certificate validation, add
``ansible_winrm_server_cert_validation: ignore`` to inventory for the Windows
host.
.. seealso::
:ref:`windows`
The Windows documentation index
:ref:`about_playbooks`
An introduction to playbooks
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,422 |
Remove Windows 2012 R2 from ansible-test
|
### Summary
Windows Server 2012 R2 reaches [end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10th. This is a remote VM removal. Removal can be done before end of support, after a 2-week notification period.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80422
|
https://github.com/ansible/ansible/pull/80778
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
|
0a36cd910e4cdb2a3a0a40488596b69789ffdbe2
| 2023-04-05T21:27:58Z |
python
| 2023-05-18T18:02:58Z |
docs/docsite/rst/os_guide/windows_setup.rst
|
.. _windows_setup:
Setting up a Windows Host
=========================
This document discusses the setup that is required before Ansible can communicate with a Microsoft Windows host.
.. contents::
:local:
Host Requirements
`````````````````
For Ansible to communicate to a Windows host and use Windows modules, the
Windows host must meet these base requirements for connectivity:
* With Ansible you can generally manage Windows versions under the current and extended support from Microsoft. You can also manage desktop OSs including Windows 8.1 and 10, and server OSs including Windows Server 2012, 2012 R2, 2016, 2019, and 2022.
* You need to install PowerShell 3.0 or newer and at least .NET 4.0 on the Windows host.
* You need to create and activate a WinRM listener. For more details, see `WinRM Setup <https://docs.ansible.com/ansible/latest//user_guide/windows_setup.html#winrm-listener>`_.
.. Note:: Some Ansible modules have additional requirements, such as a newer OS or PowerShell version. Consult the module documentation page to determine whether a host meets those requirements.
Upgrading PowerShell and .NET Framework
---------------------------------------
Ansible requires PowerShell version 3.0 and .NET Framework 4.0 or newer to function on older operating systems like Server 2008 and Windows 7. The base image does not meet this
requirement. You can use the `Upgrade-PowerShell.ps1 <https://github.com/jborean93/ansible-windows/blob/master/scripts/Upgrade-PowerShell.ps1>`_ script to update these.
This is an example of how to run this script from PowerShell:
.. code-block:: powershell
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$url = "https://raw.githubusercontent.com/jborean93/ansible-windows/master/scripts/Upgrade-PowerShell.ps1"
$file = "$env:temp\Upgrade-PowerShell.ps1"
$username = "Administrator"
$password = "Password"
(New-Object -TypeName System.Net.WebClient).DownloadFile($url, $file)
Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Force
&$file -Version 5.1 -Username $username -Password $password -Verbose
In the script, the ``Version`` value can be 3.0, 4.0, or 5.1.
Once completed, you need to run the following PowerShell commands:
1. As an optional but good security practice, you can set the execution policy back to the default.
.. code-block:: powershell
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force
Use the ``RemoteSigned`` value for Windows servers, or ``Restricted`` for Windows clients.
2. Remove the auto logon.
.. code-block:: powershell
$reg_winlogon_path = "HKLM:\Software\Microsoft\Windows NT\CurrentVersion\Winlogon"
Set-ItemProperty -Path $reg_winlogon_path -Name AutoAdminLogon -Value 0
Remove-ItemProperty -Path $reg_winlogon_path -Name DefaultUserName -ErrorAction SilentlyContinue
Remove-ItemProperty -Path $reg_winlogon_path -Name DefaultPassword -ErrorAction SilentlyContinue
The script determines what programs you need to install (such as .NET Framework 4.5.2) and what PowerShell version needs to be present. If a reboot is needed and the ``username`` and ``password`` parameters are set, the script will automatically reboot the machine and then logon. If the ``username`` and ``password`` parameters are not set, the script will prompt the user to manually reboot and logon when required. When the user is next logged in, the script will continue where it left off and the process continues until no more
actions are required.
.. Note:: If you run the script on Server 2008, then you need to install SP2. For Server 2008 R2 or Windows 7 you need SP1.
On Windows Server 2008, you can install only PowerShell 3.0; attempting to install a newer version will cause the script to fail.
The ``username`` and ``password`` parameters are stored in plain text in the registry. Run the cleanup commands after the script finishes to ensure no credentials are stored on the host.
WinRM Memory Hotfix
-------------------
On PowerShell v3.0, there is a bug that limits the amount of memory available to the WinRM service. Use the `Install-WMF3Hotfix.ps1 <https://github.com/jborean93/ansible-windows/blob/master/scripts/Install-WMF3Hotfix.ps1>`_ script to install a hotfix on affected hosts as part of the system bootstrapping or imaging process. Without this hotfix, Ansible fails to execute certain commands on the Windows host.
To install the hotfix:
.. code-block:: powershell
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$url = "https://raw.githubusercontent.com/jborean93/ansible-windows/master/scripts/Install-WMF3Hotfix.ps1"
$file = "$env:temp\Install-WMF3Hotfix.ps1"
(New-Object -TypeName System.Net.WebClient).DownloadFile($url, $file)
powershell.exe -ExecutionPolicy ByPass -File $file -Verbose
For more details, refer to the `"Out of memory" error on a computer that has a customized MaxMemoryPerShellMB quota set and has WMF 3.0 installed <https://support.microsoft.com/en-us/help/2842230/out-of-memory-error-on-a-computer-that-has-a-customized-maxmemorypersh>`_ article.
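Independently of the hotfix, the per-shell memory quota can be inspected and raised through the ``WSMan:`` drive. A quick sketch, where the ``1024`` value is only an example:

.. code-block:: powershell

    # Check the current per-shell memory quota (in MB)
    Get-Item -Path WSMan:\localhost\Shell\MaxMemoryPerShellMB

    # Raise the quota, for example to 1 GB
    Set-Item -Path WSMan:\localhost\Shell\MaxMemoryPerShellMB -Value 1024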
WinRM Setup
```````````
You need to configure the WinRM service so that Ansible can connect to it. There are two main components of the WinRM service that govern how Ansible can interface with the Windows host: the ``listener`` and the ``service`` configuration settings.
WinRM Listener
--------------
The WinRM service listens for requests on one or more ports. Each of these ports must have a listener created and configured.
To view the current listeners that are running on the WinRM service:
.. code-block:: powershell
winrm enumerate winrm/config/Listener
This will output something like:
.. code-block:: powershell
Listener
Address = *
Transport = HTTP
Port = 5985
Hostname
Enabled = true
URLPrefix = wsman
CertificateThumbprint
ListeningOn = 10.0.2.15, 127.0.0.1, 192.168.56.155, ::1, fe80::5efe:10.0.2.15%6, fe80::5efe:192.168.56.155%8, fe80::
ffff:ffff:fffe%2, fe80::203d:7d97:c2ed:ec78%3, fe80::e8ea:d765:2c69:7756%7
Listener
Address = *
Transport = HTTPS
Port = 5986
Hostname = SERVER2016
Enabled = true
URLPrefix = wsman
CertificateThumbprint = E6CDAA82EEAF2ECE8546E05DB7F3E01AA47D76CE
ListeningOn = 10.0.2.15, 127.0.0.1, 192.168.56.155, ::1, fe80::5efe:10.0.2.15%6, fe80::5efe:192.168.56.155%8, fe80::
ffff:ffff:fffe%2, fe80::203d:7d97:c2ed:ec78%3, fe80::e8ea:d765:2c69:7756%7
In the example above there are two listeners activated. One is listening on port 5985 over HTTP and the other is listening on port 5986 over HTTPS. Some of the key options that are useful to understand are:
* ``Transport``: Whether the listener is run over HTTP or HTTPS. We recommend you use a listener over HTTPS because the data is encrypted without any further changes required.
* ``Port``: The port the listener runs on. By default it is ``5985`` for HTTP and ``5986`` for HTTPS. This port can be changed to whatever is required and corresponds to the host var ``ansible_port``.
* ``URLPrefix``: The URL prefix to listen on. By default it is ``wsman``. If you change this option, you need to set the host var ``ansible_winrm_path`` to the same value.
* ``CertificateThumbprint``: If you use an HTTPS listener, this is the thumbprint of the certificate in the Windows Certificate Store that is used in the connection. To get the details of the certificate itself, run this command with the relevant certificate thumbprint in PowerShell:
.. code-block:: powershell
$thumbprint = "E6CDAA82EEAF2ECE8546E05DB7F3E01AA47D76CE"
Get-ChildItem -Path cert:\LocalMachine\My -Recurse | Where-Object { $_.Thumbprint -eq $thumbprint } | Select-Object *
Setup WinRM Listener
++++++++++++++++++++
There are three ways to set up a WinRM listener:
* Using ``winrm quickconfig`` for HTTP or ``winrm quickconfig -transport:https`` for HTTPS. This is the easiest option to use when running outside of a domain environment and a simple listener is required. Unlike the other options, this process also has the added benefit of opening up the firewall for the ports required and starts the WinRM service.
* Using Group Policy Objects (GPO). This is the best way to create a listener when the host is a member of a domain because the configuration is done automatically without any user input. For more information on group policy objects, see the `Group Policy Objects documentation <https://msdn.microsoft.com/en-us/library/aa374162(v=vs.85).aspx>`_.
* Using PowerShell to create a listener with a specific configuration. This can be done by running the following PowerShell commands:
.. code-block:: powershell
$selector_set = @{
Address = "*"
Transport = "HTTPS"
}
$value_set = @{
CertificateThumbprint = "E6CDAA82EEAF2ECE8546E05DB7F3E01AA47D76CE"
}
New-WSManInstance -ResourceURI "winrm/config/Listener" -SelectorSet $selector_set -ValueSet $value_set
To see the other options with this PowerShell command, refer to the
`New-WSManInstance <https://docs.microsoft.com/en-us/powershell/module/microsoft.wsman.management/new-wsmaninstance?view=powershell-5.1>`_ documentation.
.. Note:: When creating an HTTPS listener, you must create and store a certificate in the ``LocalMachine\My`` certificate store.
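As a rough end-to-end sketch, you can generate a self-signed certificate and feed its thumbprint into the listener creation shown above. The DNS name below is a placeholder; use your host's actual name:

.. code-block:: powershell

    # Generate a self-signed certificate in the LocalMachine\My store (DNS name is an example)
    $cert = New-SelfSignedCertificate -DnsName "server.domain.local" -CertStoreLocation Cert:\LocalMachine\My

    # Create the HTTPS listener using the new certificate's thumbprint
    $selector_set = @{
        Address = "*"
        Transport = "HTTPS"
    }
    $value_set = @{
        CertificateThumbprint = $cert.Thumbprint
    }
    New-WSManInstance -ResourceURI "winrm/config/Listener" -SelectorSet $selector_set -ValueSet $value_set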
Delete WinRM Listener
+++++++++++++++++++++
* To remove all WinRM listeners:
.. code-block:: powershell
Remove-Item -Path WSMan:\localhost\Listener\* -Recurse -Force
* To remove only those listeners that run over HTTPS:
.. code-block:: powershell
Get-ChildItem -Path WSMan:\localhost\Listener | Where-Object { $_.Keys -contains "Transport=HTTPS" } | Remove-Item -Recurse -Force
.. Note:: The ``Keys`` object is an array of strings, so it can contain different values. By default, it contains a key for ``Transport=`` and ``Address=`` which correspond to the values from the ``winrm enumerate winrm/config/Listeners`` command.
WinRM Service Options
---------------------
You can control the behavior of the WinRM service component, including authentication options and memory settings.
To get an output of the current service configuration options, run the following command:
.. code-block:: powershell
winrm get winrm/config/Service
winrm get winrm/config/Winrs
This will output something like:
.. code-block:: powershell
Service
RootSDDL = O:NSG:BAD:P(A;;GA;;;BA)(A;;GR;;;IU)S:P(AU;FA;GA;;;WD)(AU;SA;GXGW;;;WD)
MaxConcurrentOperations = 4294967295
MaxConcurrentOperationsPerUser = 1500
EnumerationTimeoutms = 240000
MaxConnections = 300
MaxPacketRetrievalTimeSeconds = 120
AllowUnencrypted = false
Auth
Basic = true
Kerberos = true
Negotiate = true
Certificate = true
CredSSP = true
CbtHardeningLevel = Relaxed
DefaultPorts
HTTP = 5985
HTTPS = 5986
IPv4Filter = *
IPv6Filter = *
EnableCompatibilityHttpListener = false
EnableCompatibilityHttpsListener = false
CertificateThumbprint
AllowRemoteAccess = true
Winrs
AllowRemoteShellAccess = true
IdleTimeout = 7200000
MaxConcurrentUsers = 2147483647
MaxShellRunTime = 2147483647
MaxProcessesPerShell = 2147483647
MaxMemoryPerShellMB = 2147483647
MaxShellsPerUser = 2147483647
You do not need to change the majority of these options. However, some of the important ones to know about are:
* ``Service\AllowUnencrypted`` - specifies whether WinRM will allow HTTP traffic without message encryption. Message level encryption is only possible when the ``ansible_winrm_transport`` variable is ``ntlm``, ``kerberos`` or ``credssp``. By default, this is ``false`` and you should only set it to ``true`` when debugging WinRM messages.
* ``Service\Auth\*`` - defines what authentication options you can use with the WinRM service. By default, ``Negotiate (NTLM)`` and ``Kerberos`` are enabled.
* ``Service\Auth\CbtHardeningLevel`` - specifies whether channel binding tokens are not verified (None), verified but not required (Relaxed), or verified and required (Strict). CBT is only used when connecting with NT LAN Manager (NTLM) or Kerberos over HTTPS.
* ``Service\CertificateThumbprint`` - thumbprint of the certificate for encrypting the TLS channel used with CredSSP authentication. By default, this is empty. A self-signed certificate is generated when the WinRM service starts and is used in the TLS process.
* ``Winrs\MaxShellRunTime`` - maximum time, in milliseconds, that a remote command is allowed to execute.
* ``Winrs\MaxMemoryPerShellMB`` - maximum amount of memory allocated per shell, including its child processes.
To modify a setting under the ``Service`` key in PowerShell, you need to provide a path to the option after ``winrm/config/Service``:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Service\{path} -Value {some_value}
For example, to change ``Service\Auth\CbtHardeningLevel``:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Service\Auth\CbtHardeningLevel -Value Strict
To modify a setting under the ``Winrs`` key in PowerShell, you need to provide a path to the option after ``winrm/config/Winrs``:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Shell\{path} -Value {some_value}
For example, to change ``Winrs\MaxShellRunTime``:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Shell\MaxShellRunTime -Value 2147483647
.. Note:: If you run the command in a domain environment, some of these options are set by
GPO and cannot be changed on the host itself. When a key has been configured with GPO, it contains the text ``[Source="GPO"]`` next to the value.
Common WinRM Issues
-------------------
WinRM has a wide range of configuration options, which makes its configuration complex. As a result, errors that Ansible displays could in fact be problems with the host setup instead.
To identify a host issue, run the following command from another Windows host to connect to the target Windows host.
* To test HTTP:
.. code-block:: powershell
winrs -r:http://server:5985/wsman -u:Username -p:Password ipconfig
* To test HTTPS:
.. code-block:: powershell
winrs -r:https://server:5986/wsman -u:Username -p:Password -ssl ipconfig
The command will fail if the certificate is not verifiable.
* To test HTTPS ignoring certificate verification:
.. code-block:: powershell
$username = "Username"
$password = ConvertTo-SecureString -String "Password" -AsPlainText -Force
$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password
$session_option = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
Invoke-Command -ComputerName server -UseSSL -ScriptBlock { ipconfig } -Credential $cred -SessionOption $session_option
If any of the above commands fail, the issue is probably related to the WinRM setup.
HTTP 401/Credentials Rejected
+++++++++++++++++++++++++++++
An HTTP 401 error indicates the authentication process failed during the initial
connection. You can check the following to troubleshoot:
* The credentials are correct and set properly in your inventory with the ``ansible_user`` and ``ansible_password`` variables.
* The user is a member of the local Administrators group, or has been explicitly granted access. You can perform a connection test with the ``winrs`` command to rule this out.
* The authentication option set by the ``ansible_winrm_transport`` variable is enabled under ``Service\Auth\*``.
* If running over HTTP and not HTTPS, use ``ntlm``, ``kerberos`` or ``credssp`` with the ``ansible_winrm_message_encryption: auto`` custom inventory variable to enable message encryption. If you use another authentication option, or if it is not possible to upgrade the installed ``pywinrm`` package, you can set ``Service\AllowUnencrypted`` to ``true``. This is recommended only for troubleshooting.
* The downstream packages ``pywinrm``, ``requests-ntlm``, ``requests-kerberos``, and/or ``requests-credssp`` are up to date using ``pip``.
* For Kerberos authentication, ensure that ``Service\Auth\CbtHardeningLevel`` is not set to ``Strict``.
* For Basic or Certificate authentication, make sure that the user is a local account. Domain accounts do not work with Basic and Certificate authentication.
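To quickly confirm which authentication options are enabled on the host, you can list the ``Service\Auth`` settings directly, for example:

.. code-block:: powershell

    # Lists the Basic, Kerberos, Negotiate, Certificate, CredSSP and CbtHardeningLevel values
    Get-ChildItem -Path WSMan:\localhost\Service\Auth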
HTTP 500 Error
++++++++++++++
An HTTP 500 error indicates a problem with the WinRM service. You can check the following to troubleshoot:
* The number of currently open shells has not exceeded ``WinRsMaxShellsPerUser`` and none of the other ``Winrs`` quotas have been exceeded.
Timeout Errors
+++++++++++++++
Sometimes Ansible is unable to reach the host. These instances usually indicate a problem with the network connection. You can check the following to troubleshoot:
* The firewall is not set to block the configured WinRM listener ports.
* A WinRM listener is enabled on the port and path set by the host vars.
* The ``winrm`` service is running on the Windows host and is configured to start automatically.
Connection Refused Errors
+++++++++++++++++++++++++
When you communicate with the WinRM service on the host, you can encounter some problems. Check the following to help troubleshoot:
* The WinRM service is up and running on the host. Use the ``(Get-Service -Name winrm).Status`` command to get the status of the service.
* The host firewall is allowing traffic over the WinRM port. By default this is ``5985`` for HTTP and ``5986`` for HTTPS.
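If either check fails, both can usually be fixed on the host with a couple of commands. This is a sketch assuming the default HTTP port:

.. code-block:: powershell

    # Make sure the WinRM service starts automatically and is running
    Set-Service -Name winrm -StartupType Automatic
    Start-Service -Name winrm

    # Open the default HTTP listener port in the host firewall
    New-NetFirewallRule -DisplayName "WinRM HTTP" -Direction Inbound -Protocol TCP -LocalPort 5985 -Action Allow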
Sometimes an installer may restart the WinRM or HTTP service and cause this error. The best way to deal with this is to use the ``win_psexec`` module from another Windows host.
Failure to Load Builtin Modules
+++++++++++++++++++++++++++++++
Sometimes PowerShell fails with an error message similar to:
.. code-block:: powershell
The 'Out-String' command was found in the module 'Microsoft.PowerShell.Utility', but the module could not be loaded.
In that case, there could be a problem when trying to access all the paths specified by the ``PSModulePath`` environment variable.
A common cause of this issue is that ``PSModulePath`` contains a Universal Naming Convention (UNC) path to a file share, which the Ansible process cannot access because of the double-hop/credential delegation issue. To work around this problem, either:
* Remove the UNC path from ``PSModulePath``.
or
* Use an authentication option that supports credential delegation like ``credssp`` or ``kerberos``. You need to have the credential delegation enabled.
See `KB4076842 <https://support.microsoft.com/en-us/help/4076842>`_ for more information on this problem.
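A sketch of the first option, removing machine-level UNC entries (paths starting with ``\\``) from ``PSModulePath``:

.. code-block:: powershell

    # Filter out UNC entries from the machine-level PSModulePath and write it back
    $machine_path = [Environment]::GetEnvironmentVariable("PSModulePath", "Machine")
    $new_path = ($machine_path -split ';' | Where-Object { $_ -notmatch '^\\\\' }) -join ';'
    [Environment]::SetEnvironmentVariable("PSModulePath", $new_path, "Machine")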
Windows SSH Setup
`````````````````
Ansible 2.8 has added an experimental SSH connection for Windows managed nodes.
.. warning::
Use this feature at your own risk! Using SSH with Windows is experimental. This implementation may make
backwards incompatible changes in future releases. The server-side components can be unreliable depending on your installed version.
Installing OpenSSH using Windows Settings
-----------------------------------------
You can use OpenSSH to connect Windows 10 clients to Windows Server 2019. OpenSSH Client is available to install on Windows 10 build 1809 and later. OpenSSH Server is available to install on Windows Server 2019 and later.
For more information, refer to `Get started with OpenSSH for Windows <https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse>`_.
Installing Win32-OpenSSH
------------------------
To install the `Win32-OpenSSH <https://github.com/PowerShell/Win32-OpenSSH>`_ service for use with
Ansible, select one of these installation options:
* Manually install ``Win32-OpenSSH``, following the `install instructions <https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH>`_ from Microsoft.
* Use Chocolatey:
.. code-block:: powershell
choco install --package-parameters=/SSHServerFeature openssh
* Use the ``win_chocolatey`` Ansible module:
.. code-block:: yaml
- name: install the Win32-OpenSSH service
win_chocolatey:
name: openssh
package_params: /SSHServerFeature
state: present
* Install an Ansible Galaxy role for example `jborean93.win_openssh <https://galaxy.ansible.com/jborean93/win_openssh>`_:
.. code-block:: powershell
ansible-galaxy install jborean93.win_openssh
* Use the role in your playbook:
.. code-block:: yaml
- name: install Win32-OpenSSH service
hosts: windows
gather_facts: false
roles:
- role: jborean93.win_openssh
opt_openssh_setup_service: True
.. note:: ``Win32-OpenSSH`` is still a beta product and is constantly being updated to include new features and bugfixes. If you use SSH as a connection option for Windows, we highly recommend you install the latest version.
Configuring the Win32-OpenSSH shell
-----------------------------------
By default ``Win32-OpenSSH`` uses ``cmd.exe`` as a shell.
* To configure a different shell, use an Ansible playbook with a task to define the registry setting:
.. code-block:: yaml
- name: set the default shell to PowerShell
win_regedit:
path: HKLM:\SOFTWARE\OpenSSH
name: DefaultShell
data: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
type: string
state: present
* To revert the settings back to the default shell:
.. code-block:: yaml
- name: set the default shell to cmd
win_regedit:
path: HKLM:\SOFTWARE\OpenSSH
name: DefaultShell
state: absent
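If you are configuring the host interactively rather than through Ansible, roughly the same change can be made directly in PowerShell, for example:

.. code-block:: powershell

    # Set PowerShell as the default shell for incoming SSH sessions
    New-ItemProperty -Path "HKLM:\SOFTWARE\OpenSSH" -Name DefaultShell `
        -Value "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -PropertyType String -Force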
Win32-OpenSSH Authentication
----------------------------
Win32-OpenSSH authentication with Windows is similar to SSH authentication on Unix/Linux hosts. You can use a plaintext password or SSH public key authentication.
For the key-based authentication:
* Add your public keys to an ``authorized_key`` file in the ``.ssh`` folder of the user's profile directory.
* Configure the SSH service using the ``sshd_config`` file.
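A minimal sketch of adding a public key over an existing PowerShell session follows; the key string is a placeholder. Note that for members of the local Administrators group, ``Win32-OpenSSH`` reads ``administrators_authorized_keys`` under ``%PROGRAMDATA%\ssh`` by default instead:

.. code-block:: powershell

    # Create the user's .ssh folder and append a public key (placeholder value)
    $ssh_dir = Join-Path -Path $env:USERPROFILE -ChildPath ".ssh"
    New-Item -Path $ssh_dir -ItemType Directory -Force | Out-Null
    Add-Content -Path (Join-Path -Path $ssh_dir -ChildPath "authorized_keys") -Value "ssh-ed25519 AAAA... user@host"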
When using SSH key authentication with Ansible, the remote session will not have access to user credentials and will fail when attempting to access a network resource. This is also known as the double-hop or credential delegation issue. To work around this problem:
* Use plaintext password authentication by setting the ``ansible_password`` variable.
* Use the ``become`` directive on the task with the credentials of the user that needs access to the remote resource.
Configuring Ansible for SSH on Windows
--------------------------------------
To configure Ansible to use SSH for Windows hosts, you must set two connection variables:
* set ``ansible_connection`` to ``ssh``
* set ``ansible_shell_type`` to ``cmd`` or ``powershell``
The ``ansible_shell_type`` variable should reflect the ``DefaultShell`` configured on the Windows host. Set ``ansible_shell_type`` to ``cmd`` for the default shell. Alternatively, set ``ansible_shell_type`` to ``powershell`` if you changed ``DefaultShell`` to PowerShell.
Known issues with SSH on Windows
--------------------------------
Using SSH with Windows is experimental. The currently known issues are:
* Win32-OpenSSH versions older than ``v7.9.0.0p1-Beta`` do not work when ``powershell`` is the shell type.
* While Secure Copy Protocol (SCP) should work, SSH File Transfer Protocol (SFTP) is the recommended mechanism to use when copying or fetching a file.
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
:ref:`List of Windows Modules <windows_modules>`
Windows specific module list, all implemented in PowerShell
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,422 |
Remove Windows 2012 R2 from ansible-test
|
### Summary
Windows Server 2012 R2 reaches [end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10th. This is a remote VM removal. Removal can be done before end of support, after a 2-week notification period.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80422
|
https://github.com/ansible/ansible/pull/80778
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
|
0a36cd910e4cdb2a3a0a40488596b69789ffdbe2
| 2023-04-05T21:27:58Z |
python
| 2023-05-18T18:02:58Z |
test/lib/ansible_test/_data/completion/windows.txt
|
windows/2012 provider=aws arch=x86_64
windows/2012-R2 provider=aws arch=x86_64
windows/2016 provider=aws arch=x86_64
windows/2019 provider=aws arch=x86_64
windows/2022 provider=aws arch=x86_64
windows provider=aws arch=x86_64
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,421 |
Remove Windows 2012 from ansible-test
|
### Summary
Windows Server 2012 reaches [end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10th. This is a remote VM removal. Removal can be done before end of support, after a 2-week notification period.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80421
|
https://github.com/ansible/ansible/pull/80778
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
|
0a36cd910e4cdb2a3a0a40488596b69789ffdbe2
| 2023-04-05T21:27:57Z |
python
| 2023-05-18T18:02:58Z |
.azure-pipelines/azure-pipelines.yml
|
trigger:
batch: true
branches:
include:
- devel
- stable-*
pr:
autoCancel: true
branches:
include:
- devel
- stable-*
schedules:
- cron: 0 7 * * *
displayName: Nightly
always: true
branches:
include:
- devel
- stable-*
variables:
- name: checkoutPath
value: ansible
- name: coverageBranches
value: devel
- name: entryPoint
value: .azure-pipelines/commands/entry-point.sh
- name: fetchDepth
value: 500
- name: defaultContainer
value: quay.io/ansible/azure-pipelines-test-container:3.0.0
pool: Standard
stages:
- stage: Sanity
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Test {0}
testFormat: sanity/{0}
targets:
- test: 1
- test: 2
- test: 3
- test: 4
- test: 5
- stage: Units
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: units/{0}
targets:
- test: 2.7
- test: 3.5
- test: 3.6
- test: 3.7
- test: 3.8
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Windows
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Server {0}
testFormat: windows/{0}/1
targets:
- test: 2012
- test: 2012-R2
- test: 2016
- test: 2019
- test: 2022
- stage: Remote
dependsOn: []
jobs:
- template: templates/matrix.yml # context/target
parameters:
targets:
- name: macOS 13.2
test: macos/13.2
- name: RHEL 7.9
test: rhel/7.9
- name: RHEL 8.7 py36
test: rhel/[email protected]
- name: RHEL 8.7 py39
test: rhel/[email protected]
- name: RHEL 9.2
test: rhel/9.2
- name: FreeBSD 12.4
test: freebsd/12.4
- name: FreeBSD 13.1
test: freebsd/13.1
- name: FreeBSD 13.2
test: freebsd/13.2
groups:
- 1
- 2
- template: templates/matrix.yml # context/controller
parameters:
targets:
- name: macOS 13.2
test: macos/13.2
- name: RHEL 8.7
test: rhel/8.7
- name: RHEL 9.2
test: rhel/9.2
- name: FreeBSD 13.1
test: freebsd/13.1
- name: FreeBSD 13.2
test: freebsd/13.2
groups:
- 3
- 4
- 5
- template: templates/matrix.yml # context/controller (ansible-test container management)
parameters:
targets:
- name: Alpine 3.17
test: alpine/3.17
- name: Fedora 37
test: fedora/37
- name: RHEL 8.7
test: rhel/8.7
- name: RHEL 9.2
test: rhel/9.2
- name: Ubuntu 20.04
test: ubuntu/20.04
- name: Ubuntu 22.04
test: ubuntu/22.04
groups:
- 6
- stage: Docker
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: linux/{0}
targets:
- name: Alpine 3
test: alpine3
- name: CentOS 7
test: centos7
- name: Fedora 37
test: fedora37
- name: openSUSE 15
test: opensuse15
- name: Ubuntu 20.04
test: ubuntu2004
- name: Ubuntu 22.04
test: ubuntu2204
groups:
- 1
- 2
- template: templates/matrix.yml
parameters:
testFormat: linux/{0}
targets:
- name: Alpine 3
test: alpine3
- name: Fedora 37
test: fedora37
- name: Ubuntu 22.04
test: ubuntu2204
groups:
- 3
- 4
- 5
- stage: Galaxy
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: galaxy/{0}/1
targets:
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Generic
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: generic/{0}/1
targets:
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Incidental_Windows
displayName: Incidental Windows
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Server {0}
testFormat: i/windows/{0}
targets:
- test: 2012
- test: 2012-R2
- test: 2016
- test: 2019
- test: 2022
- stage: Incidental
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: i/{0}/1
targets:
- name: IOS Python
test: ios/csr1000v/
- name: VyOS Python
test: vyos/1.1.8/
- stage: Summary
condition: succeededOrFailed()
dependsOn:
- Sanity
- Units
- Windows
- Remote
- Docker
- Galaxy
- Generic
- Incidental_Windows
- Incidental
jobs:
- template: templates/coverage.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,421 |
Remove Windows 2012 from ansible-test
|
### Summary
Windows Server 2012 reaches [end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10th. This is a remote VM removal. Removal can be done before end of support, after a 2-week notification period.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80421
|
https://github.com/ansible/ansible/pull/80778
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
|
0a36cd910e4cdb2a3a0a40488596b69789ffdbe2
| 2023-04-05T21:27:57Z |
python
| 2023-05-18T18:02:58Z |
.azure-pipelines/commands/incidental/windows.sh
|
#!/usr/bin/env bash
set -o pipefail -eux
declare -a args
IFS='/:' read -ra args <<< "$1"
version="${args[1]}"
target="shippable/windows/incidental/"
stage="${S:-prod}"
provider="${P:-default}"
# python version to run full tests on while other versions run minimal tests
python_default="$(PYTHONPATH="${PWD}/test/lib" python -c 'from ansible_test._internal import constants; print(constants.CONTROLLER_MIN_PYTHON_VERSION)')"
# version to test when only testing a single version
single_version=2012-R2
# shellcheck disable=SC2086
ansible-test windows-integration --list-targets -v ${CHANGED:+"$CHANGED"} ${UNSTABLE:+"$UNSTABLE"} > /tmp/explain.txt 2>&1 || { cat /tmp/explain.txt && false; }
{ grep ' windows-integration: .* (targeted)$' /tmp/explain.txt || true; } > /tmp/windows.txt
if [ -s /tmp/windows.txt ] || [ "${CHANGED:+$CHANGED}" == "" ]; then
echo "Detected changes requiring integration tests specific to Windows:"
cat /tmp/windows.txt
echo "Running Windows integration tests for multiple versions concurrently."
platforms=(
--windows "${version}"
)
else
echo "No changes requiring integration tests specific to Windows were detected."
echo "Running Windows integration tests for a single version only: ${single_version}"
if [ "${version}" != "${single_version}" ]; then
echo "Skipping this job since it is for: ${version}"
exit 0
fi
platforms=(
--windows "${version}"
)
fi
# shellcheck disable=SC2086
ansible-test windows-integration --color -v --retry-on-error "${target}" ${COVERAGE:+"$COVERAGE"} ${CHANGED:+"$CHANGED"} ${UNSTABLE:+"$UNSTABLE"} \
"${platforms[@]}" \
--docker default --python "${python_default}" \
--remote-terminate always --remote-stage "${stage}" --remote-provider "${provider}"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,421 |
Remove Windows 2012 from ansible-test
|
### Summary
Windows Server 2012 reaches [end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10th. This is a remote VM removal. Removal can be done before end of support, after a 2-week notification period.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80421
|
https://github.com/ansible/ansible/pull/80778
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
|
0a36cd910e4cdb2a3a0a40488596b69789ffdbe2
| 2023-04-05T21:27:57Z |
python
| 2023-05-18T18:02:58Z |
.azure-pipelines/commands/windows.sh
|
#!/usr/bin/env bash
set -o pipefail -eux
declare -a args
IFS='/:' read -ra args <<< "$1"
version="${args[1]}"
group="${args[2]}"
target="shippable/windows/group${group}/"
stage="${S:-prod}"
provider="${P:-default}"
# python versions to test in order
IFS=' ' read -r -a python_versions <<< \
"$(PYTHONPATH="${PWD}/test/lib" python -c 'from ansible_test._internal import constants; print(" ".join(constants.CONTROLLER_PYTHON_VERSIONS))')"
# python version to run full tests on while other versions run minimal tests
python_default="$(PYTHONPATH="${PWD}/test/lib" python -c 'from ansible_test._internal import constants; print(constants.CONTROLLER_MIN_PYTHON_VERSION)')"
# version to test when only testing a single version
single_version=2012-R2
# shellcheck disable=SC2086
ansible-test windows-integration --list-targets -v ${CHANGED:+"$CHANGED"} ${UNSTABLE:+"$UNSTABLE"} > /tmp/explain.txt 2>&1 || { cat /tmp/explain.txt && false; }
{ grep ' windows-integration: .* (targeted)$' /tmp/explain.txt || true; } > /tmp/windows.txt
if [ -s /tmp/windows.txt ] || [ "${CHANGED:+$CHANGED}" == "" ]; then
echo "Detected changes requiring integration tests specific to Windows:"
cat /tmp/windows.txt
echo "Running Windows integration tests for multiple versions concurrently."
platforms=(
--windows "${version}"
)
else
echo "No changes requiring integration tests specific to Windows were detected."
echo "Running Windows integration tests for a single version only: ${single_version}"
if [ "${version}" != "${single_version}" ]; then
echo "Skipping this job since it is for: ${version}"
exit 0
fi
platforms=(
--windows "${version}"
)
fi
for version in "${python_versions[@]}"; do
changed_all_target="all"
changed_all_mode="default"
if [ "${version}" == "${python_default}" ]; then
# smoketest tests
if [ "${CHANGED}" ]; then
# with change detection enabled run tests for anything changed
# use the smoketest tests for any change that triggers all tests
ci="${target}"
changed_all_target="shippable/windows/smoketest/"
if [ "${target}" == "shippable/windows/group1/" ]; then
# only run smoketest tests for group1
changed_all_mode="include"
else
# smoketest tests already covered by group1
changed_all_mode="exclude"
fi
else
# without change detection enabled run entire test group
ci="${target}"
fi
else
# only run minimal tests for group1
if [ "${target}" != "shippable/windows/group1/" ]; then continue; fi
# minimal tests for other python versions
ci="shippable/windows/minimal/"
fi
# terminate remote instances on the final python version tested
if [ "${version}" = "${python_versions[-1]}" ]; then
terminate="always"
else
terminate="never"
fi
# shellcheck disable=SC2086
ansible-test windows-integration --color -v --retry-on-error "${ci}" ${COVERAGE:+"$COVERAGE"} ${CHANGED:+"$CHANGED"} ${UNSTABLE:+"$UNSTABLE"} \
"${platforms[@]}" --changed-all-target "${changed_all_target}" --changed-all-mode "${changed_all_mode}" \
--docker default --python "${version}" \
--remote-terminate "${terminate}" --remote-stage "${stage}" --remote-provider "${provider}"
done
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,421 |
Remove Windows 2012 from ansible-test
|
### Summary
Windows Server 2012 reaches [end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10th. This is a remote VM removal. Removal can be done before end of support, after a 2-week notification period.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80421
|
https://github.com/ansible/ansible/pull/80778
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
|
0a36cd910e4cdb2a3a0a40488596b69789ffdbe2
| 2023-04-05T21:27:57Z |
python
| 2023-05-18T18:02:58Z |
changelogs/fragments/server2012-deprecation.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,421 |
Remove Windows 2012 from ansible-test
|
### Summary
Windows Server 2012 reaches [end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10th. This is a remote VM removal. Removal can be done before end of support, after a 2-week notification period.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80421
|
https://github.com/ansible/ansible/pull/80778
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
|
0a36cd910e4cdb2a3a0a40488596b69789ffdbe2
| 2023-04-05T21:27:57Z |
python
| 2023-05-18T18:02:58Z |
docs/docsite/rst/dev_guide/developing_modules_general_windows.rst
|
.. _developing_modules_general_windows:
**************************************
Windows module development walkthrough
**************************************
In this section, we will walk through developing, testing, and debugging an
Ansible Windows module.
Because Windows modules are written in Powershell and need to be run on a
Windows host, this guide differs from the usual development walkthrough guide.
What's covered in this section:
.. contents::
:local:
Windows environment setup
=========================
Unlike Python module development which can be run on the host that runs
Ansible, Windows modules need to be written and tested for Windows hosts.
While evaluation editions of Windows can be downloaded from
Microsoft, these images are usually not ready to be used by Ansible without
further modification. The easiest way to set up a Windows host so that it is
ready to be used by Ansible is to set up a virtual machine using Vagrant.
Vagrant can be used to download existing OS images called *boxes* that are then
deployed to a hypervisor like VirtualBox. These boxes can either be created and
stored offline or they can be downloaded from a central repository called
Vagrant Cloud.
This guide will use the Vagrant boxes created by the `packer-windoze <https://github.com/jborean93/packer-windoze>`_
repository which have also been uploaded to `Vagrant Cloud <https://app.vagrantup.com/boxes/search?utf8=%E2%9C%93&sort=downloads&provider=&q=jborean93>`_.
For more information on how these images are created, please go to the GitHub repo and look at the ``README`` file.
Before you can get started, the following programs must be installed (please consult the Vagrant and
VirtualBox documentation for installation instructions):
- Vagrant
- VirtualBox
Create a Windows server in a VM
===============================
To create a single Windows Server 2016 instance, run the following:
.. code-block:: shell
vagrant init jborean93/WindowsServer2016
vagrant up
This will download the Vagrant box from Vagrant Cloud and add it to the local
boxes on your host and then start up that instance in VirtualBox. When starting
for the first time, the Windows VM will run through the sysprep process and
then create an HTTP and HTTPS WinRM listener automatically. Vagrant will finish
its process once the listeners are online, after which the VM can be used by Ansible.
Create an Ansible inventory
===========================
The following Ansible inventory file can be used to connect to the newly
created Windows VM:
.. code-block:: ini
[windows]
WindowsServer ansible_host=127.0.0.1
[windows:vars]
ansible_user=vagrant
ansible_password=vagrant
ansible_port=55986
ansible_connection=winrm
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore
.. note:: The port ``55986`` is automatically forwarded by Vagrant to the
Windows host that was created. If this conflicts with an existing local port, Vagrant will automatically use another one at random and display it in the output.
The OS that is created is based on the image set. The following
images can be used:
- `jborean93/WindowsServer2012 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2012>`_
- `jborean93/WindowsServer2012R2 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2012R2>`_
- `jborean93/WindowsServer2016 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2016>`_
- `jborean93/WindowsServer2019 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2019>`_
- `jborean93/WindowsServer2022 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2022>`_
When the host is online, it can be accessed over RDP on ``127.0.0.1:3389``, but the port may differ if there was a conflict. To get rid of the host, run
``vagrant destroy --force`` and Vagrant will automatically remove the VM and
any other files associated with that VM.
While this is useful when testing modules on a single Windows instance, these hosts won't work with domain-based modules without modification. The Vagrantfile
at `ansible-windows <https://github.com/jborean93/ansible-windows/tree/master/vagrant>`_
can be used to create a test domain environment to be used in Ansible. This
repo contains three files which are used by both Ansible and Vagrant to create
multiple Windows hosts in a domain environment. These files are:
- ``Vagrantfile``: The Vagrant file that reads the inventory setup of ``inventory.yml`` and provisions the hosts that are required
- ``inventory.yml``: Contains the hosts that are required and other connection information such as IP addresses and forwarded ports
- ``main.yml``: Ansible playbook called by Vagrant to provision the domain controller and join the child hosts to the domain
By default, these files will create the following environment:
- A single domain controller running on Windows Server 2016
- Five child hosts for each major Windows Server version joined to that domain
- A domain with the DNS name ``domain.local``
- A local administrator account on each host with the username ``vagrant`` and password ``vagrant``
- A domain admin account ``[email protected]`` with the password ``VagrantPass1``
The domain name and accounts can be modified by changing the variables
``domain_*`` in the ``inventory.yml`` file if it is required. The inventory
file can also be modified to provision more or less servers by changing the
hosts that are defined under the ``domain_children`` key. The host variable
``ansible_host`` is the private IP that will be assigned to the VirtualBox host
only network adapter while ``vagrant_box`` is the box that will be used to
create the VM.
Provisioning the environment
============================
To provision the environment as is, run the following:
.. code-block:: shell
git clone https://github.com/jborean93/ansible-windows.git
cd vagrant
vagrant up
.. note:: Vagrant provisions each host sequentially so this can take some time
to complete. If any errors occur during the Ansible phase of setting up the
domain, run ``vagrant provision`` to rerun just that step.
Unlike setting up a single Windows instance with Vagrant, these hosts can also
be accessed using the IP address directly as well as through the forwarded
ports. It is easier to access it over the host only network adapter as the
normal protocol ports are used, for example RDP is still over ``3389``. In cases where
the host cannot be resolved using the host only network IP, the following
protocols can be accessed over ``127.0.0.1`` using these forwarded ports:
- ``RDP``: 295xx
- ``SSH``: 296xx
- ``WinRM HTTP``: 297xx
- ``WinRM HTTPS``: 298xx
- ``SMB``: 299xx
Replace ``xx`` with the entry number in the inventory file where the domain
controller started with ``00`` and is incremented from there. For example, in
the default ``inventory.yml`` file, WinRM over HTTPS for ``SERVER2012R2`` is
forwarded over port ``29804`` as it's the fourth entry in ``domain_children``.
Windows new module development
==============================
When creating a new module there are a few things to keep in mind:
- Module code is in Powershell (.ps1) files while the documentation is contained in Python (.py) files of the same name
- Avoid using ``Write-Host/Debug/Verbose/Error`` in the module and add what needs to be returned to the ``$module.Result`` variable
- To fail a module, call ``$module.FailJson("failure message here")``, an Exception or ErrorRecord can be set to the second argument for a more descriptive error message
- You can pass in the exception or ErrorRecord as a second argument to ``FailJson("failure", $_)`` to get a more detailed output
- Most new modules require check mode and integration tests before they are merged into the main Ansible codebase
- Avoid using try/catch statements over a large code block, rather use them for individual calls so the error message can be more descriptive
- Try and catch specific exceptions when using try/catch statements
- Avoid using PSCustomObjects unless necessary
- Look for common functions in ``./lib/ansible/module_utils/powershell/`` and use the code there instead of duplicating work. These can be imported by adding the line ``#Requires -Module *`` where * is the filename to import, and will be automatically included with the module code sent to the Windows target when run through Ansible
- As well as PowerShell module utils, C# module utils are stored in ``./lib/ansible/module_utils/csharp/`` and are automatically imported in a module execution if the line ``#AnsibleRequires -CSharpUtil *`` is present
- C# and PowerShell module utils achieve the same goal but C# allows a developer to implement low level tasks, such as calling the Win32 API, and can be faster in some cases
- Ensure the code runs under Powershell v3 and higher on Windows Server 2012 and higher; if higher minimum Powershell or OS versions are required, ensure the documentation reflects this clearly
- Ansible runs modules under strictmode version 2.0. Be sure to test with that enabled by putting ``Set-StrictMode -Version 2.0`` at the top of your dev script
- Favor native Powershell cmdlets over executable calls if possible
- Use the full cmdlet name instead of aliases, for example ``Remove-Item`` over ``rm``
- Use named parameters with cmdlets, for example ``Remove-Item -Path C:\temp`` over ``Remove-Item C:\temp``
A very basic Powershell module `win_environment <https://github.com/ansible-collections/ansible.windows/blob/main/plugins/modules/win_environment.ps1>`_ incorporates best practices for Powershell modules. It demonstrates how to implement check-mode and diff-support, and also shows a warning to the user when a specific condition is met.
A slightly more advanced module is `win_uri <https://github.com/ansible-collections/ansible.windows/blob/main/plugins/modules/win_uri.ps1>`_ which additionally shows how to use different parameter types (bool, str, int, list, dict, path) and a selection of choices for parameters, how to fail a module and how to handle exceptions.
As part of the new ``AnsibleModule`` wrapper, the input parameters are defined and validated based on an argument
spec. The following options can be set at the root level of the argument spec:
- ``mutually_exclusive``: A list of lists, where the inner list contains module options that cannot be set together
- ``no_log``: Stops the module from emitting any logs to the Windows Event log
- ``options``: A dictionary where the key is the module option and the value is the spec for that option
- ``required_by``: A dictionary where the option(s) specified by the value must be set if the option specified by the key is also set
- ``required_if``: A list of lists where the inner list contains 3 or 4 elements;
* The first element is the module option to check the value against
* The second element is the value of the option specified by the first element, if matched then the required if check is run
* The third element is a list of required module options when the above is matched
* An optional fourth element is a boolean that states whether all module options in the third elements are required (default: ``$false``) or only one (``$true``)
- ``required_one_of``: A list of lists, where the inner list contains module options where at least one must be set
- ``required_together``: A list of lists, where the inner list contains module options that must be set together
- ``supports_check_mode``: Whether the module supports check mode, by default this is ``$false``
The actual input options for a module are set within the ``options`` value as a dictionary. The keys of this dictionary
are the module option names while the values are the spec of that module option. Each spec can have the following
options set:
- ``aliases``: A list of aliases for the module option
- ``choices``: A list of valid values for the module option, if ``type=list`` then each list value is validated against the choices and not the list itself
- ``default``: The default value for the module option if not set
- ``deprecated_aliases``: A list of hashtables that define aliases that are deprecated and the versions they will be removed in. Each entry must contain the keys ``name`` and ``collection_name`` with either ``version`` or ``date``
- ``elements``: When ``type=list``, this sets the type of each list value, the values are the same as ``type``
- ``no_log``: Will sanitise the input value before being returned in the ``module_invocation`` return value
- ``removed_in_version``: States when a deprecated module option is to be removed, a warning is displayed to the end user if set
- ``removed_at_date``: States the date (YYYY-MM-DD) when a deprecated module option will be removed, a warning is displayed to the end user if set
- ``removed_from_collection``: States from which collection the deprecated module option will be removed; must be specified if one of ``removed_in_version`` and ``removed_at_date`` is specified
- ``required``: Will fail when the module option is not set
- ``type``: The type of the module option, if not set then it defaults to ``str``. The valid types are;
* ``bool``: A boolean value
* ``dict``: A dictionary value, if the input is a JSON or key=value string then it is converted to dictionary
* ``float``: A float or `Single <https://docs.microsoft.com/en-us/dotnet/api/system.single?view=netframework-4.7.2>`_ value
* ``int``: An Int32 value
* ``json``: A string where the value is converted to a JSON string if the input is a dictionary
* ``list``: A list of values, ``elements=<type>`` can convert the individual list value types if set. If ``elements=dict`` and ``options`` is defined, the values will be validated against the argument spec. When the input is a string then the string is split by ``,`` and any whitespace is trimmed
* ``path``: A string where values like ``%TEMP%`` are expanded based on environment values. If the input value starts with ``\\?\`` then no expansion is run
* ``raw``: No conversions occur on the value passed in by Ansible
* ``sid``: Will convert Windows security identifier values or Windows account names to a `SecurityIdentifier <https://docs.microsoft.com/en-us/dotnet/api/system.security.principal.securityidentifier?view=netframework-4.7.2>`_ value
* ``str``: The value is converted to a string
When ``type=dict``, or ``type=list`` and ``elements=dict``, the following keys can also be set for that module option:
- ``apply_defaults``: The value is based on the ``options`` spec defaults for that key if ``True`` and null if ``False``. Only valid when the module option is not defined by the user and ``type=dict``.
- ``mutually_exclusive``: Same as the root level ``mutually_exclusive`` but validated against the values in the sub dict
- ``options``: Same as the root level ``options`` but contains the valid options for the sub option
- ``required_if``: Same as the root level ``required_if`` but validated against the values in the sub dict
- ``required_by``: Same as the root level ``required_by`` but validated against the values in the sub dict
- ``required_together``: Same as the root level ``required_together`` but validated against the values in the sub dict
- ``required_one_of``: Same as the root level ``required_one_of`` but validated against the values in the sub dict
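Putting the pieces together, a minimal module skeleton using this argument spec format might look like the following; the ``name`` and ``state`` options are hypothetical examples:

.. code-block:: powershell

    #!powershell

    #AnsibleRequires -CSharpUtil Ansible.Basic

    $spec = @{
        options = @{
            name = @{ type = "str"; required = $true }
            state = @{ type = "str"; choices = "absent", "present"; default = "present" }
        }
        supports_check_mode = $true
    }
    $module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)

    # Module logic goes here; record any changes in the result
    $module.Result.changed = $false
    $module.ExitJson()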
A module type can also be a delegate function that converts the value to whatever is required by the module option. For
example the following snippet shows how to create a custom type that creates a ``UInt64`` value:
.. code-block:: powershell
$spec = @{
uint64_type = @{ type = [Func[[Object], [UInt64]]]{ [System.UInt64]::Parse($args[0]) } }
}
$uint64_type = $module.Params.uint64_type
When in doubt, look at some of the other core modules and see how things have been
implemented there.
Sometimes there are multiple ways that Windows offers to complete a task; this
is the order to favor when writing modules:
- Native Powershell cmdlets like ``Remove-Item -Path C:\temp -Recurse``
- .NET classes like ``[System.IO.Path]::GetRandomFileName()``
- WMI objects through the ``New-CimInstance`` cmdlet
- COM objects through ``New-Object -ComObject`` cmdlet
- Calls to native executables like ``Secedit.exe``
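As a rough illustration of this ordering, the same kind of work can often be done at more than one of these layers:

.. code-block:: powershell

    # Preferred: a native cmdlet
    Remove-Item -Path C:\temp -Recurse

    # Next: a .NET class when no cmdlet covers the need
    $randomName = [System.IO.Path]::GetRandomFileName()

    # Next: WMI through the CIM cmdlets
    Get-CimInstance -ClassName Win32_OperatingSystem

    # Next: a COM object
    $shell = New-Object -ComObject Shell.Application

    # Last resort: a native executable
    whoami.exe /all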
PowerShell modules support a small subset of the ``#Requires`` options built
into PowerShell as well as some Ansible-specific requirements specified by
``#AnsibleRequires``. These statements can be placed at any point in the script,
but are most commonly near the top. They are used to make it easier to state the
requirements of the module without writing any of the checks. Each ``requires``
statement must be on its own line, but there can be multiple requires statements
in one script.
These are the checks that can be used within Ansible modules:
- ``#Requires -Module Ansible.ModuleUtils.<module_util>``: Added in Ansible 2.4, specifies a module_util to load in for the module execution.
- ``#Requires -Version x.y``: Added in Ansible 2.5, specifies the version of PowerShell that is required by the module. The module will fail if this requirement is not met.
- ``#AnsibleRequires -PowerShell <module_util>``: Added in Ansible 2.8, like ``#Requires -Module``, this specifies a module_util to load in for module execution.
- ``#AnsibleRequires -CSharpUtil <module_util>``: Added in Ansible 2.8, specifies a C# module_util to load in for the module execution.
- ``#AnsibleRequires -OSVersion x.y``: Added in Ansible 2.5, specifies the OS build version that is required by the module and will fail if this requirement is not met. The actual OS version is derived from ``[Environment]::OSVersion.Version``.
- ``#AnsibleRequires -Become``: Added in Ansible 2.5, forces the exec runner to run the module with ``become``, which is primarily used to bypass WinRM restrictions. If ``ansible_become_user`` is not specified then the ``SYSTEM`` account is used instead.
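For example, a module combining several of these checks might start with the following (the specific version values shown are illustrative):

.. code-block:: powershell

    #!powershell

    # Load the default module util and require PowerShell 5.0 on Server 2012 or later
    #Requires -Module Ansible.ModuleUtils.Legacy
    #Requires -Version 5.0
    #AnsibleRequires -OSVersion 6.2
    #AnsibleRequires -Become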
The ``#AnsibleRequires -PowerShell`` and ``#AnsibleRequires -CSharpUtil``
support further features such as:
- Importing a util contained in a collection (added in Ansible 2.9)
- Importing a util by relative names (added in Ansible 2.10)
- Specifying that the util is optional by adding ``-Optional`` to the import
  declaration (added in Ansible 2.12).
See the below examples for more details:
.. code-block:: powershell
# Imports the PowerShell Ansible.ModuleUtils.Legacy provided by Ansible itself
#AnsibleRequires -PowerShell Ansible.ModuleUtils.Legacy
# Imports the PowerShell my_util in the my_namespace.my_name collection
#AnsibleRequires -PowerShell ansible_collections.my_namespace.my_name.plugins.module_utils.my_util
# Imports the PowerShell my_util that exists in the same collection as the current module
#AnsibleRequires -PowerShell ..module_utils.my_util
# Imports the PowerShell Ansible.ModuleUtils.Optional provided by Ansible if it exists.
# If it does not exist then it will do nothing.
#AnsibleRequires -PowerShell Ansible.ModuleUtils.Optional -Optional
# Imports the C# Ansible.Process provided by Ansible itself
#AnsibleRequires -CSharpUtil Ansible.Process
# Imports the C# my_util in the my_namespace.my_name collection
#AnsibleRequires -CSharpUtil ansible_collections.my_namespace.my_name.plugins.module_utils.my_util
# Imports the C# my_util that exists in the same collection as the current module
#AnsibleRequires -CSharpUtil ..module_utils.my_util
# Imports the C# Ansible.Optional provided by Ansible if it exists.
# If it does not exist then it will do nothing.
#AnsibleRequires -CSharpUtil Ansible.Optional -Optional
For optional require statements, it is up to the module code to then verify
whether the util has been imported before trying to use it. This can be done by
checking if a function or type provided by the util exists or not.
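For example, a minimal sketch where ``Get-SomeOptionalFunction`` is a hypothetical function that the optional util would provide:

.. code-block:: powershell

    #AnsibleRequires -PowerShell Ansible.ModuleUtils.Optional -Optional

    if (Get-Command -Name Get-SomeOptionalFunction -ErrorAction SilentlyContinue) {
        # The optional util was found and imported, so its functions can be used
        $value = Get-SomeOptionalFunction
    }
    else {
        # Fall back to behaviour that does not rely on the optional util
        $value = $null
    }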
While both ``#Requires -Module`` and ``#AnsibleRequires -PowerShell`` can be
used to load a PowerShell module it is recommended to use ``#AnsibleRequires``.
This is because ``#AnsibleRequires`` supports collection module utils, imports
by relative util names, and optional util imports.
C# module utils can reference other C# utils by adding the line
``using Ansible.<module_util>;`` to the top of the script with all the other
using statements.
Windows module utilities
========================
Like Python modules, PowerShell modules also provide a number of module
utilities that provide helper functions within PowerShell. These module_utils
can be imported by adding the following line to a PowerShell module:
.. code-block:: powershell
#Requires -Module Ansible.ModuleUtils.Legacy
This will import the module_util at ``./lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1``
and enable calling all of its functions. As of Ansible 2.8, Windows module
utils can also be written in C# and stored at ``lib/ansible/module_utils/csharp``.
These module_utils can be imported by adding the following line to a PowerShell
module:
.. code-block:: powershell
#AnsibleRequires -CSharpUtil Ansible.Basic
This will import the module_util at ``./lib/ansible/module_utils/csharp/Ansible.Basic.cs``
and automatically load the types in the executing process. C# module utils can
reference each other and be loaded together by adding the following line to the
using statements at the top of the util:
.. code-block:: csharp
using Ansible.Become;
There are special comments that can be set in a C# file for controlling the
compilation parameters. The following comments can be added to the script;
- ``//AssemblyReference -Name <assembly dll> [-CLR [Core|Framework]]``: The assembly DLL to reference during compilation, the optional ``-CLR`` flag can also be used to state whether to reference when running under .NET Core, Framework, or both (if omitted)
- ``//NoWarn -Name <error id> [-CLR [Core|Framework]]``: A compiler warning ID to ignore when compiling the code, the optional ``-CLR`` works the same as above. A list of warnings can be found at `Compiler errors <https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/compiler-messages/index>`_
As well as this, the following pre-processor symbols are defined;
- ``CORECLR``: This symbol is present when PowerShell is running through .NET Core
- ``WINDOWS``: This symbol is present when PowerShell is running on Windows
- ``UNIX``: This symbol is present when PowerShell is running on Unix
A combination of these flags help to make a module util interoperable on both
.NET Framework and .NET Core, here is an example of them in action:
.. code-block:: csharp
#if CORECLR
using Newtonsoft.Json;
#else
using System.Web.Script.Serialization;
#endif
//AssemblyReference -Name Newtonsoft.Json.dll -CLR Core
//AssemblyReference -Name System.Web.Extensions.dll -CLR Framework
// Ignore error CS1702 for all .NET types
//NoWarn -Name CS1702
// Ignore error CS1956 only for .NET Framework
//NoWarn -Name CS1956 -CLR Framework
The following is a list of module_utils that are packaged with Ansible and a general description of what
they do:
- ArgvParser: Utility used to convert a list of arguments to an escaped string compliant with the Windows argument parsing rules.
- CamelConversion: Utility used to convert camelCase strings/lists/dicts to snake_case.
- CommandUtil: Utility used to execute a Windows process and return the stdout/stderr and rc as separate objects.
- FileUtil: Utility that expands on the ``Get-ChildItem`` and ``Test-Path`` to work with special files like ``C:\pagefile.sys``.
- Legacy: General definitions and helper utilities for Ansible modules.
- LinkUtil: Utility to create, remove, and get information about symbolic links, junction points and hard links.
- SID: Utilities used to convert a user or group to a Windows SID and vice versa.
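As a hedged sketch of how a couple of these utils are typically combined (check the source linked below for the exact function signatures):

.. code-block:: powershell

    #!powershell

    #Requires -Module Ansible.ModuleUtils.ArgvParser
    #Requires -Module Ansible.ModuleUtils.CommandUtil

    # Escape the argument list according to the Windows parsing rules
    $command = Argv-ToString -arguments @('whoami.exe', '/all')

    # Run the process; the result contains the rc, stdout, and stderr
    $result = Run-Command -command $command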
For more details on any specific module utility and their requirements, please see the `Ansible
module utilities source code <https://github.com/ansible/ansible/tree/devel/lib/ansible/module_utils/powershell>`_.
PowerShell module utilities can be stored outside of the standard Ansible
distribution for use with custom modules. Custom module_utils are placed in a
folder called ``module_utils`` located in the root folder of the playbook or role
directory.
C# module utilities can also be stored outside of the standard Ansible distribution for use with custom modules. Like
PowerShell utils, these are stored in a folder called ``module_utils`` and the filename must end in the extension
``.cs``, start with ``Ansible.`` and be named after the namespace defined in the util.
The below example is a role structure that contains two PowerShell custom module_utils called
``Ansible.ModuleUtils.ModuleUtil1``, ``Ansible.ModuleUtils.ModuleUtil2``, and a C# util containing the namespace
``Ansible.CustomUtil``:
.. code-block:: console
meta/
main.yml
defaults/
main.yml
module_utils/
Ansible.ModuleUtils.ModuleUtil1.psm1
Ansible.ModuleUtils.ModuleUtil2.psm1
Ansible.CustomUtil.cs
tasks/
main.yml
Each PowerShell module_util must contain at least one function that has been exported with ``Export-ModuleMember``
at the end of the file. For example
.. code-block:: powershell
Export-ModuleMember -Function Invoke-CustomUtil, Get-CustomInfo
Exposing shared module options
++++++++++++++++++++++++++++++
PowerShell module utils can easily expose common module options that a module can use when building its argument spec.
This allows common features to be stored and maintained in one location and have those features used by multiple
modules with minimal effort. Any new features or bugfixes added to one of these utils are then automatically used by
the various modules that call that util.
An example of this would be to have a module util that handles authentication and communication against an API. This
util can be used by multiple modules to expose a common set of module options like the API endpoint, username,
password, timeout, cert validation, and so on without having to add those options to each module spec.
The standard convention for a module util that has a shared argument spec is to have:
- A ``Get-<namespace.name.util name>Spec`` function that outputs the common spec for a module
* It is highly recommended to make this function name unique to the module to avoid any conflicts with other utils that can be loaded
* The format of the output spec is a Hashtable in the same format as the ``$spec`` used for normal modules
- A function that takes in an ``AnsibleModule`` object called under the ``-Module`` parameter which it can use to get the shared options
Because these options can be shared across various modules it is highly recommended to keep the module option names and
aliases in the shared spec as specific as they can be. For example, do not have a util option called ``password``;
rather, prefix it with a unique name like ``acme_password``.
.. warning::
Failure to have a unique option name or alias can prevent the util from being used by modules that also use those names
or aliases for their own options.
The following is an example module util called ``ServiceAuth.psm1`` in a collection that implements a common way for
modules to authenticate with a service.
.. code-block:: powershell
Function Invoke-MyServiceResource {
[CmdletBinding()]
param (
[Parameter(Mandatory=$true)]
[ValidateScript({ $_.GetType().FullName -eq 'Ansible.Basic.AnsibleModule' })]
$Module,
[Parameter(Mandatory=$true)]
[String]
$ResourceId,
[String]
$State = 'present'
)
# Process the common module options known to the util
$params = @{
ServerUri = $Module.Params.my_service_url
}
if ($Module.Params.my_service_username) {
$params.Credential = Get-MyServiceCredential
}
if ($State -eq 'absent') {
Remove-MyService @params -ResourceId $ResourceId
} else {
New-MyService @params -ResourceId $ResourceId
}
}
Function Get-MyNamespaceMyCollectionServiceAuthSpec {
# Output the util spec
@{
options = @{
my_service_url = @{ type = 'str'; required = $true }
my_service_username = @{ type = 'str' }
my_service_password = @{ type = 'str'; no_log = $true }
}
required_together = @(
,@('my_service_username', 'my_service_password')
)
}
}
$exportMembers = @{
Function = 'Get-MyNamespaceMyCollectionServiceAuthSpec', 'Invoke-MyServiceResource'
}
Export-ModuleMember @exportMembers
For a module to take advantage of this common argument spec, it can be set out like this:
.. code-block:: powershell
#!powershell
# Include the module util ServiceAuth.psm1 from the my_namespace.my_collection collection
#AnsibleRequires -PowerShell ansible_collections.my_namespace.my_collection.plugins.module_utils.ServiceAuth
# Create the module spec like normal
$spec = @{
options = @{
resource_id = @{ type = 'str'; required = $true }
state = @{ type = 'str'; choices = 'absent', 'present' }
}
}
# Create the module from the module spec but also include the util spec to merge into our own.
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec, @(Get-MyNamespaceMyCollectionServiceAuthSpec))
# Call the ServiceAuth module util and pass in the module object so it can access the module options.
Invoke-MyServiceResource -Module $module -ResourceId $module.Params.resource_id -State $module.Params.state
$module.ExitJson()
.. note::
Options defined in the module spec will always have precedence over a util spec. Any list values under the same key
in a util spec will be appended to the module spec for that same key. Dictionary values will add any keys that are
missing from the module spec and merge any values that are lists or dictionaries. This is similar to how the doc
fragment plugins work when extending module documentation.
To document these shared util options for a module, create a doc fragment plugin that documents the options implemented
by the module util and extend the module docs for every module that implements the util to include that fragment in
its docs.
Windows playbook module testing
===============================
You can test a module with an Ansible playbook. For example:
- Create a playbook in any directory ``touch testmodule.yml``.
- Create an inventory file in the same directory ``touch hosts``.
- Populate the inventory file with the variables required to connect to a Windows host(s).
- Add the following to the new playbook file:
.. code-block:: yaml
---
- name: test out windows module
hosts: windows
tasks:
- name: test out module
win_module:
name: test name
- Run the playbook ``ansible-playbook -i hosts testmodule.yml``
This can be useful for seeing how Ansible runs with
the new module end to end. Other possible ways to test the module are
shown below.
Windows debugging
=================
Debugging a module currently can only be done on a Windows host. This can be
useful when developing a new module or implementing bug fixes. These
are some steps that need to be followed to set this up:
- Copy the module script to the Windows server
- Copy the folders ``./lib/ansible/module_utils/powershell`` and ``./lib/ansible/module_utils/csharp`` to the same directory as the script above
- Add an extra ``#`` to the start of any ``#Requires -Module`` lines in the module code; this is only required for lines starting with ``#Requires -Module``, not other requires statements
- Add the following to the start of the module script that was copied to the server:
.. code-block:: powershell
# Set $ErrorActionPreference to what's set during Ansible execution
$ErrorActionPreference = "Stop"
# Set the first argument as the path to a JSON file that contains the module args
$args = @("$($pwd.Path)\args.json")
# Or instead of an args file, set $complex_args to the pre-processed module args
$complex_args = @{
_ansible_check_mode = $false
_ansible_diff = $false
path = "C:\temp"
state = "present"
}
# Import any C# utils referenced with '#AnsibleRequires -CSharpUtil' or 'using Ansible.<namespace>;'
# The $_csharp_utils entries should be the contents of the C# util files and not the path
Import-Module -Name "$($pwd.Path)\powershell\Ansible.ModuleUtils.AddType.psm1"
$_csharp_utils = @(
[System.IO.File]::ReadAllText("$($pwd.Path)\csharp\Ansible.Basic.cs")
)
Add-CSharpType -References $_csharp_utils -IncludeDebugInfo
# Import any PowerShell modules referenced with '#Requires -Module'
Import-Module -Name "$($pwd.Path)\powershell\Ansible.ModuleUtils.Legacy.psm1"
# End of the setup code and start of the module code
#!powershell
You can add more args to ``$complex_args`` as required by the module or define the module options through a JSON file
with the structure:
.. code-block:: json
{
"ANSIBLE_MODULE_ARGS": {
"_ansible_check_mode": false,
"_ansible_diff": false,
"path": "C:\\temp",
"state": "present"
}
}
There are multiple IDEs that can be used to debug a Powershell script, two of
the most popular ones are
- `Powershell ISE`_
- `Visual Studio Code`_
.. _Powershell ISE: https://docs.microsoft.com/en-us/powershell/scripting/core-powershell/ise/how-to-debug-scripts-in-windows-powershell-ise
.. _Visual Studio Code: https://blogs.technet.microsoft.com/heyscriptingguy/2017/02/06/debugging-powershell-script-in-visual-studio-code-part-1/
To be able to view the arguments as passed by Ansible to the module follow
these steps.
- Prefix the Ansible command with :envvar:`ANSIBLE_KEEP_REMOTE_FILES=1<ANSIBLE_KEEP_REMOTE_FILES>` to specify that Ansible should keep the exec files on the server.
- Log onto the Windows server using the same user account that Ansible used to execute the module.
- Navigate to ``%TEMP%\..``. It should contain a folder starting with ``ansible-tmp-``.
- Inside this folder, open the PowerShell script for the module.
- In this script is a raw JSON script under ``$json_raw`` which contains the module arguments under ``module_args``. These args can be assigned manually to the ``$complex_args`` variable that is defined on your debug script or put in the ``args.json`` file.
Windows unit testing
====================
Currently there is no mechanism to run unit tests for PowerShell modules under Ansible CI.
Windows integration testing
===========================
Integration tests for Ansible modules are typically written as Ansible roles. These test
roles are located in ``./test/integration/targets``. You must first set up your testing
environment, and configure a test inventory for Ansible to connect to.
In this example we will set up a test inventory to connect to two hosts and run the integration
tests for win_stat:
- Run the command ``source ./hacking/env-setup`` to prepare the environment.
- Create a copy of ``./test/integration/inventory.winrm.template`` and name it ``inventory.winrm``.
- Fill in entries under ``[windows]`` and set the required variables that are needed to connect to the host.
- :ref:`Install the required Python modules <windows_winrm>` to support WinRM and a configured authentication method.
- To execute the integration tests, run ``ansible-test windows-integration win_stat``; you can replace ``win_stat`` with the role you want to test.
This will execute all the tests currently defined for that role. You can set
the verbosity level using the ``-v`` argument just as you would with
ansible-playbook.
When developing tests for a new module, it is recommended to test a scenario once in
check mode and twice not in check mode. This ensures that check mode
does not make any changes but reports a change, as well as that the second run is
idempotent and does not report changes. For example:
.. code-block:: yaml
- name: remove a file (check mode)
win_file:
path: C:\temp
state: absent
register: remove_file_check
check_mode: true
- name: get result of remove a file (check mode)
win_command: powershell.exe "if (Test-Path -Path 'C:\temp') { 'true' } else { 'false' }"
register: remove_file_actual_check
- name: assert remove a file (check mode)
assert:
that:
- remove_file_check is changed
- remove_file_actual_check.stdout == 'true\r\n'
- name: remove a file
win_file:
path: C:\temp
state: absent
register: remove_file
- name: get result of remove a file
win_command: powershell.exe "if (Test-Path -Path 'C:\temp') { 'true' } else { 'false' }"
register: remove_file_actual
- name: assert remove a file
assert:
that:
- remove_file is changed
- remove_file_actual.stdout == 'false\r\n'
- name: remove a file (idempotent)
win_file:
path: C:\temp
state: absent
register: remove_file_again
- name: assert remove a file (idempotent)
assert:
that:
- not remove_file_again is changed
Windows communication and development support
=============================================
Join the ``#ansible-devel`` or ``#ansible-windows`` chat channels (using Matrix at ansible.im or using IRC at `irc.libera.chat <https://libera.chat/>`_) for discussions about Ansible development for Windows.
For questions and discussions pertaining to using the Ansible product,
use the ``#ansible`` channel.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,421 |
Remove Windows 2012 from ansible-test
|
### Summary
Windows Server 2012 reaches [end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10th. This is a remote VM removal. Removal can be done before end of support, after a 2-week notification period.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80421
|
https://github.com/ansible/ansible/pull/80778
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
|
0a36cd910e4cdb2a3a0a40488596b69789ffdbe2
| 2023-04-05T21:27:57Z |
python
| 2023-05-18T18:02:58Z |
docs/docsite/rst/dev_guide/testing/sanity/integration-aliases.rst
|
integration-aliases
===================
Integration tests are executed by ``ansible-test`` and reside in directories under ``test/integration/targets/``.
Each test MUST have an ``aliases`` file to control test execution.
Aliases are explained in the following sections. Each alias must be on a separate line in an ``aliases`` file.
Groups
------
Tests must be configured to run in exactly one group. This is done by adding the appropriate group to the ``aliases`` file.
The following are examples of some of the available groups:
- ``shippable/posix/group1``
- ``shippable/windows/group2``
- ``shippable/azure/group3``
- ``shippable/aws/group1``
- ``shippable/cloud/group1``
Groups are used to balance tests across multiple CI jobs to minimize test run time.
They also improve efficiency by keeping tests with similar requirements running together.
When selecting a group for a new test, use the same group as existing tests similar to the one being added.
If more than one group is available, select one randomly.
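For example, the contents of a typical ``aliases`` file might look like the following; the non-group aliases shown here are described in later sections:

.. code-block:: text

    shippable/posix/group1
    destructive
    needs/root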
Setup
-----
Aliases can be used to execute setup targets before running tests:
- ``setup/once/TARGET`` - Run the target ``TARGET`` before the first target that requires it.
- ``setup/always/TARGET`` - Run the target ``TARGET`` before each target that requires it.
Requirements
------------
Aliases can be used to express some test requirements:
- ``needs/privileged`` - Requires ``--docker-privileged`` when running tests with ``--docker``.
- ``needs/root`` - Requires running tests as ``root`` or with ``--docker``.
- ``needs/ssh`` - Requires SSH connections to localhost (or the test container with ``--docker``) without a password.
- ``needs/httptester`` - Requires use of the http-test-container to run tests.
Dependencies
------------
Some test dependencies are automatically discovered:
- Ansible role dependencies defined in ``meta/main.yml`` files.
- Setup targets defined with ``setup/*`` aliases.
- Symbolic links from one target to a file in another target.
Aliases can be used to declare dependencies that are not handled automatically:
- ``needs/target/TARGET`` - Requires use of the test target ``TARGET``.
- ``needs/file/PATH`` - Requires use of the file ``PATH`` relative to the git root.
Skipping
--------
Aliases can be used to skip platforms using one of the following:
- ``skip/freebsd`` - Skip tests on FreeBSD.
- ``skip/macos`` - Skip tests on macOS.
- ``skip/rhel`` - Skip tests on RHEL.
- ``skip/docker`` - Skip tests when running in a Docker container.
Platform versions, as specified using the ``--remote`` option with ``/`` removed, can also be skipped:
- ``skip/freebsd11.1`` - Skip tests on FreeBSD 11.1.
- ``skip/rhel7.6`` - Skip tests on RHEL 7.6.
Windows versions, as specified using the ``--windows`` option, can also be skipped:
- ``skip/windows/2012`` - Skip tests on Windows Server 2012.
- ``skip/windows/2012-R2`` - Skip tests on Windows Server 2012 R2.
Aliases can be used to skip Python major versions using one of the following:
- ``skip/python2`` - Skip tests on Python 2.x.
- ``skip/python3`` - Skip tests on Python 3.x.
For more fine-grained skipping, use conditionals in integration test playbooks, such as:
.. code-block:: yaml
when: ansible_distribution in ['Ubuntu']
Miscellaneous
-------------
There are several other aliases available as well:
- ``destructive`` - Requires ``--allow-destructive`` to run without ``--docker`` or ``--remote``.
- ``hidden`` - Target is ignored. Usable as a dependency. Automatic for ``setup_`` and ``prepare_`` prefixed targets.
- ``retry/never`` - Target is excluded from retries enabled by the ``--retry-on-error`` option.
Unstable
--------
Tests which fail sometimes should be marked with the ``unstable`` alias until the instability has been fixed.
These tests will continue to run for pull requests which modify the test or the module under test.
This avoids unnecessary test failures for other pull requests, as well as tests on merge runs and nightly CI jobs.
There are two ways to run unstable tests manually:
- Use the ``--allow-unstable`` option for ``ansible-test``
- Prefix the test name with ``unstable/`` when passing it to ``ansible-test``.
Tests will be marked as unstable by a member of the Ansible Core Team.
GitHub issues_ will be created to track each unstable test.
Disabled
--------
Tests which always fail should be marked with the ``disabled`` alias until they can be fixed.
Disabled tests are automatically skipped.
There are two ways to run disabled tests manually:
- Use the ``--allow-disabled`` option for ``ansible-test``
- Prefix the test name with ``disabled/`` when passing it to ``ansible-test``.
Tests will be marked as disabled by a member of the Ansible Core Team.
GitHub issues_ will be created to track each disabled test.
Unsupported
-----------
Tests which cannot be run in CI should be marked with the ``unsupported`` alias.
Most tests can be supported through the use of simulators and/or cloud plugins.
However, if that is not possible then marking a test as unsupported will prevent it from running in CI.
There are two ways to run unsupported tests manually:
* Use the ``--allow-unsupported`` option for ``ansible-test``
* Prefix the test name with ``unsupported/`` when passing it to ``ansible-test``.
Tests will be marked as unsupported by the contributor of the test.
Cloud
-----
Tests for cloud services and other modules that require access to external APIs usually require special support for testing in CI.
These require an additional alias to indicate the required test plugin.
Some of the available aliases are:
- ``cloud/aws``
- ``cloud/azure``
- ``cloud/cs``
- ``cloud/digitalocean``
- ``cloud/openshift``
- ``cloud/vcenter``
Untested
--------
Every module and plugin should have integration tests, even if the tests cannot be run in CI.
Issues
------
Tests that are marked as unstable_ or disabled_ will have an issue created to track the status of the test.
Each issue will be assigned to one of the following projects:
- `AWS <https://github.com/ansible/ansible/projects/21>`_
- `Azure <https://github.com/ansible/ansible/projects/22>`_
- `Windows <https://github.com/ansible/ansible/projects/23>`_
- `General <https://github.com/ansible/ansible/projects/25>`_
Questions
---------
For questions about integration tests reach out to @mattclay or @gundalow on GitHub or the ``#ansible-devel`` chat channel (using Matrix at ansible.im or using IRC at `irc.libera.chat <https://libera.chat/>`_).
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,421 |
Remove Windows 2012 from ansible-test
|
### Summary
Windows Server 2012 reaches [end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10th. This is a remote VM removal. Removal can be done before end of support, after a 2-week notification period.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80421
|
https://github.com/ansible/ansible/pull/80778
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
|
0a36cd910e4cdb2a3a0a40488596b69789ffdbe2
| 2023-04-05T21:27:57Z |
python
| 2023-05-18T18:02:58Z |
docs/docsite/rst/os_guide/windows_faq.rst
|
.. _windows_faq:
Windows Frequently Asked Questions
==================================
Here are some commonly asked questions in regards to Ansible and Windows and
their answers.
.. note:: This document covers questions about managing Microsoft Windows servers with Ansible.
For questions about Ansible Core, please see the
:ref:`general FAQ page <ansible_faq>`.
Does Ansible work with Windows XP or Server 2003?
``````````````````````````````````````````````````
Ansible does not work with Windows XP or Server 2003 hosts. Ansible does work with these Windows operating system versions:
* Windows Server 2008 :sup:`1`
* Windows Server 2008 R2 :sup:`1`
* Windows Server 2012
* Windows Server 2012 R2
* Windows Server 2016
* Windows Server 2019
* Windows 7 :sup:`1`
* Windows 8.1
* Windows 10
1 - See the :ref:`Server 2008 FAQ <windows_faq_server2008>` entry for more details.
Ansible also has minimum PowerShell version requirements - please see
:ref:`windows_setup` for the latest information.
.. _windows_faq_server2008:
Are Server 2008, 2008 R2 and Windows 7 supported?
`````````````````````````````````````````````````
Microsoft ended Extended Support for these versions of Windows on January 14th, 2020, and Ansible deprecated official support in the 2.10 release. No new feature development will occur targeting these operating systems, and automated testing has ceased. However, existing modules and features will likely continue to work, and simple pull requests to resolve issues with these Windows versions may be accepted.
Can I manage Windows Nano Server with Ansible?
``````````````````````````````````````````````
Ansible does not currently work with Windows Nano Server, since it does
not have access to the full .NET Framework that is used by the majority of the
modules and internal components.
.. _windows_faq_ansible:
Can Ansible run on Windows?
```````````````````````````
No, Ansible can only manage Windows hosts. Ansible cannot run on a Windows host
natively, though it can run under the Windows Subsystem for Linux (WSL).
.. note:: The Windows Subsystem for Linux is not supported by Ansible and
should not be used for production systems.
To install Ansible on WSL, the following commands
can be run in the bash terminal:
.. code-block:: shell
sudo apt-get update
sudo apt-get install python3-pip git libffi-dev libssl-dev -y
pip install --user ansible pywinrm
To run Ansible from source instead of a release on the WSL, simply uninstall the pip
installed version and then clone the git repo.
.. code-block:: shell
pip uninstall ansible -y
git clone https://github.com/ansible/ansible.git
source ansible/hacking/env-setup
# To enable Ansible on login, run the following
echo ". ~/ansible/hacking/env-setup -q' >> ~/.bashrc
If you encounter timeout errors when running Ansible on the WSL, this may be due to an issue
with ``sleep`` not returning correctly. The following workaround may resolve the issue:
.. code-block:: shell
mv /usr/bin/sleep /usr/bin/sleep.orig
ln -s /bin/true /usr/bin/sleep
Another option is to use WSL 2 if running Windows 10 version 2004 or later.
.. code-block:: shell
wsl --set-default-version 2
Can I use SSH keys to authenticate to Windows hosts?
````````````````````````````````````````````````````
You cannot use SSH keys with the WinRM or PSRP connection plugins.
These connection plugins use X509 certificates for authentication instead
of the SSH key pairs that SSH uses.
The way X509 certificates are generated and mapped to a user is different
from the SSH implementation; consult the :ref:`windows_winrm` documentation for
more information.
Ansible 2.8 has added an experimental option to use the SSH connection plugin,
which uses SSH keys for authentication, for Windows servers. See :ref:`this question <windows_faq_ssh>`
for more information.
.. _windows_faq_winrm:
Why can I run a command locally that does not work under Ansible?
`````````````````````````````````````````````````````````````````
Ansible executes commands through WinRM. These processes are different from
running a command locally in these ways:
* Unless using an authentication option like CredSSP or Kerberos with
credential delegation, the WinRM process does not have the ability to
delegate the user's credentials to a network resource, causing ``Access is
Denied`` errors.
* All processes run under WinRM are in a non-interactive session. Applications
that require an interactive session will not work.
* When running through WinRM, Windows restricts access to internal Windows
APIs like the Windows Update API and DPAPI, which some installers and
programs rely on.
Some ways to bypass these restrictions are to:
* Use ``become``, which runs a command as it would when run locally. This will
bypass most WinRM restrictions, as Windows is unaware the process is running
under WinRM when ``become`` is used. See the :ref:`become` documentation for more
information.
* Use a scheduled task, which can be created with ``win_scheduled_task``. Like
``become``, it will bypass all WinRM restrictions, but it can only be used to run
commands, not modules.
* Use ``win_psexec`` to run a command on the host. PSExec does not use WinRM
and so will bypass any of the restrictions.
* To access network resources without any of these workarounds, you can use
CredSSP or Kerberos with credential delegation enabled.
See :ref:`become` more info on how to use become. The limitations section at
:ref:`windows_winrm` has more details around WinRM limitations.
This program won't install on Windows with Ansible
``````````````````````````````````````````````````
See :ref:`this question <windows_faq_winrm>` for more information about WinRM limitations.
What Windows modules are available?
```````````````````````````````````
Most of the Ansible modules in Ansible Core are written for a combination of
Linux/Unix machines and arbitrary web services. These modules are written in
Python and most of them do not work on Windows.
Because of this, there are dedicated Windows modules that are written in
PowerShell and are meant to be run on Windows hosts. A list of these modules
can be found :ref:`here <windows_modules>`.
In addition, the following Ansible Core modules/action-plugins work with Windows:
* add_host
* assert
* async_status
* debug
* fail
* fetch
* group_by
* include
* include_role
* include_vars
* meta
* pause
* raw
* script
* set_fact
* set_stats
* setup
* slurp
* template (also: win_template)
* wait_for_connection
Ansible Windows modules exist in the :ref:`plugins_in_ansible.windows`, :ref:`plugins_in_community.windows`, and :ref:`plugins_in_chocolatey.chocolatey` collections.
Can I run Python modules on Windows hosts?
``````````````````````````````````````````
No, the WinRM connection protocol is set to use PowerShell modules, so Python
modules will not work. A way to bypass this issue is to use
``delegate_to: localhost`` to run a Python module on the Ansible controller.
This is useful if during a playbook, an external service needs to be contacted
and there is no equivalent Windows module available.
.. _windows_faq_ssh:
Can I connect to Windows hosts over SSH?
````````````````````````````````````````
Ansible 2.8 has added an experimental option to use the SSH connection plugin
to manage Windows hosts. To connect to Windows hosts over SSH, you must install and configure the `Win32-OpenSSH <https://github.com/PowerShell/Win32-OpenSSH>`_
fork that is in development with Microsoft on
the Windows host(s). While most of the basics should work with SSH,
``Win32-OpenSSH`` is rapidly changing, with new features added and bugs
fixed in every release. It is highly recommended that you `install <https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH>`_ the latest release
of ``Win32-OpenSSH`` from the GitHub Releases page when using it with Ansible
on Windows hosts.
To use SSH as the connection to a Windows host, set the following variables in
the inventory:
.. code-block:: shell
ansible_connection=ssh
# Set either cmd or powershell not both
ansible_shell_type=cmd
# ansible_shell_type=powershell
The value for ``ansible_shell_type`` should either be ``cmd`` or ``powershell``.
Use ``cmd`` if the ``DefaultShell`` has not been configured on the SSH service
and ``powershell`` if that has been set as the ``DefaultShell``.
Why is connecting to a Windows host through SSH failing?
````````````````````````````````````````````````````````
Unless you are using ``Win32-OpenSSH`` as described above, you must connect to
Windows hosts using :ref:`windows_winrm`. If your Ansible output indicates that
SSH was used, either you did not set the connection vars properly or the host is not inheriting them correctly.
Make sure ``ansible_connection: winrm`` is set in the inventory for the Windows
host(s).
Why are my credentials being rejected?
``````````````````````````````````````
This can be due to a myriad of reasons unrelated to incorrect credentials.
See HTTP 401/Credentials Rejected at :ref:`windows_setup` for a more detailed
guide of what this could mean.
Why am I getting an error SSL CERTIFICATE_VERIFY_FAILED?
````````````````````````````````````````````````````````
When the Ansible controller is running on Python 2.7.9+ or an older version of Python that
has backported SSLContext (like Python 2.7.5 on RHEL 7), the controller will attempt to
validate the certificate WinRM is using for an HTTPS connection. If the
certificate cannot be validated (such as in the case of a self signed cert), it will
fail the verification process.
To ignore certificate validation, add
``ansible_winrm_server_cert_validation: ignore`` to inventory for the Windows
host.
.. seealso::
:ref:`windows`
The Windows documentation index
:ref:`about_playbooks`
An introduction to playbooks
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,421 |
Remove Windows 2012 from ansible-test
|
### Summary
Windows Server 2012 reaches [end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10th. This is a remote VM removal. Removal can be done before end of support, after a 2-week notification period.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80421
|
https://github.com/ansible/ansible/pull/80778
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
|
0a36cd910e4cdb2a3a0a40488596b69789ffdbe2
| 2023-04-05T21:27:57Z |
python
| 2023-05-18T18:02:58Z |
docs/docsite/rst/os_guide/windows_setup.rst
|
.. _windows_setup:
Setting up a Windows Host
=========================
This document discusses the setup that is required before Ansible can communicate with a Microsoft Windows host.
.. contents::
:local:
Host Requirements
`````````````````
For Ansible to communicate to a Windows host and use Windows modules, the
Windows host must meet these base requirements for connectivity:
* With Ansible you can generally manage Windows versions under current and extended support from Microsoft. You can also manage desktop OSs including Windows 8.1 and 10, and server OSs including Windows Server 2012, 2012 R2, 2016, 2019, and 2022.
* You need to install PowerShell 3.0 or newer and at least .NET 4.0 on the Windows host.
* You need to create and activate a WinRM listener. For more details, see `WinRM Setup <https://docs.ansible.com/ansible/latest//user_guide/windows_setup.html#winrm-listener>`_.
.. Note:: Some Ansible modules have additional requirements, such as a newer OS or PowerShell version. Consult the module documentation page to determine whether a host meets those requirements.
Upgrading PowerShell and .NET Framework
---------------------------------------
Ansible requires PowerShell version 3.0 and .NET Framework 4.0 or newer to function on older operating systems like Server 2008 and Windows 7. The base image of those operating systems does not meet this
requirement. You can use the `Upgrade-PowerShell.ps1 <https://github.com/jborean93/ansible-windows/blob/master/scripts/Upgrade-PowerShell.ps1>`_ script to update these.
This is an example of how to run this script from PowerShell:
.. code-block:: powershell
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$url = "https://raw.githubusercontent.com/jborean93/ansible-windows/master/scripts/Upgrade-PowerShell.ps1"
$file = "$env:temp\Upgrade-PowerShell.ps1"
$username = "Administrator"
$password = "Password"
(New-Object -TypeName System.Net.WebClient).DownloadFile($url, $file)
Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Force
&$file -Version 5.1 -Username $username -Password $password -Verbose
In the script, the ``Version`` parameter value can be 3.0, 4.0, or 5.1.
Once completed, you need to run the following PowerShell commands:
1. As an optional but good security practice, you can set the execution policy back to the default.
.. code-block:: powershell
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force
Use the ``RemoteSigned`` value for Windows servers, or ``Restricted`` for Windows clients.
2. Remove the auto logon.
.. code-block:: powershell
$reg_winlogon_path = "HKLM:\Software\Microsoft\Windows NT\CurrentVersion\Winlogon"
Set-ItemProperty -Path $reg_winlogon_path -Name AutoAdminLogon -Value 0
Remove-ItemProperty -Path $reg_winlogon_path -Name DefaultUserName -ErrorAction SilentlyContinue
Remove-ItemProperty -Path $reg_winlogon_path -Name DefaultPassword -ErrorAction SilentlyContinue
The script determines what programs you need to install (such as .NET Framework 4.5.2) and what PowerShell version needs to be present. If a reboot is needed and the ``username`` and ``password`` parameters are set, the script will automatically reboot the machine and then logon. If the ``username`` and ``password`` parameters are not set, the script will prompt the user to manually reboot and logon when required. When the user is next logged in, the script will continue where it left off and the process continues until no more
actions are required.
.. Note:: If you run the script on Server 2008, then you need to install SP2. For Server 2008 R2 or Windows 7 you need SP1.
On Windows Server 2008, you can install only PowerShell 3.0. Specifying a newer version will result in the script failing.
The ``username`` and ``password`` parameters are stored in plain text in the registry. Run the cleanup commands after the script finishes to ensure no credentials are stored on the host.
WinRM Memory Hotfix
-------------------
On PowerShell v3.0, there is a bug that limits the amount of memory available to the WinRM service. Use the `Install-WMF3Hotfix.ps1 <https://github.com/jborean93/ansible-windows/blob/master/scripts/Install-WMF3Hotfix.ps1>`_ script to install a hotfix on affected hosts as part of the system bootstrapping or imaging process. Without this hotfix, Ansible fails to execute certain commands on the Windows host.
To install the hotfix:
.. code-block:: powershell
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$url = "https://raw.githubusercontent.com/jborean93/ansible-windows/master/scripts/Install-WMF3Hotfix.ps1"
$file = "$env:temp\Install-WMF3Hotfix.ps1"
(New-Object -TypeName System.Net.WebClient).DownloadFile($url, $file)
powershell.exe -ExecutionPolicy ByPass -File $file -Verbose
For more details, refer to the `"Out of memory" error on a computer that has a customized MaxMemoryPerShellMB quota set and has WMF 3.0 installed <https://support.microsoft.com/en-us/help/2842230/out-of-memory-error-on-a-computer-that-has-a-customized-maxmemorypersh>`_ article.
WinRM Setup
```````````
You need to configure the WinRM service so that Ansible can connect to it. There are two main components of the WinRM service that govern how Ansible can interface with the Windows host: the ``listener`` and the ``service`` configuration settings.
WinRM Listener
--------------
The WinRM service listens for requests on one or more ports. Each of these ports must have a listener created and configured.
To view the current listeners that are running on the WinRM service:
.. code-block:: powershell
winrm enumerate winrm/config/Listener
This will output something like:
.. code-block:: powershell
Listener
Address = *
Transport = HTTP
Port = 5985
Hostname
Enabled = true
URLPrefix = wsman
CertificateThumbprint
ListeningOn = 10.0.2.15, 127.0.0.1, 192.168.56.155, ::1, fe80::5efe:10.0.2.15%6, fe80::5efe:192.168.56.155%8, fe80::
ffff:ffff:fffe%2, fe80::203d:7d97:c2ed:ec78%3, fe80::e8ea:d765:2c69:7756%7
Listener
Address = *
Transport = HTTPS
Port = 5986
Hostname = SERVER2016
Enabled = true
URLPrefix = wsman
CertificateThumbprint = E6CDAA82EEAF2ECE8546E05DB7F3E01AA47D76CE
ListeningOn = 10.0.2.15, 127.0.0.1, 192.168.56.155, ::1, fe80::5efe:10.0.2.15%6, fe80::5efe:192.168.56.155%8, fe80::
ffff:ffff:fffe%2, fe80::203d:7d97:c2ed:ec78%3, fe80::e8ea:d765:2c69:7756%7
In the example above there are two listeners activated. One is listening on port 5985 over HTTP and the other is listening on port 5986 over HTTPS. Some of the key options that are useful to understand are:
* ``Transport``: Whether the listener is run over HTTP or HTTPS. We recommend you use a listener over HTTPS because the data is encrypted without any further changes required.
* ``Port``: The port the listener runs on. By default it is ``5985`` for HTTP and ``5986`` for HTTPS. This port can be changed to whatever is required and corresponds to the host var ``ansible_port``.
* ``URLPrefix``: The URL prefix to listen on. By default it is ``wsman``. If you change this option, you need to set the host var ``ansible_winrm_path`` to the same value.
* ``CertificateThumbprint``: If you use an HTTPS listener, this is the thumbprint of the certificate in the Windows Certificate Store that is used in the connection. To get the details of the certificate itself, run this command with the relevant certificate thumbprint in PowerShell:
.. code-block:: powershell
$thumbprint = "E6CDAA82EEAF2ECE8546E05DB7F3E01AA47D76CE"
Get-ChildItem -Path cert:\LocalMachine\My -Recurse | Where-Object { $_.Thumbprint -eq $thumbprint } | Select-Object *
Setup WinRM Listener
++++++++++++++++++++
There are three ways to set up a WinRM listener:
* Using ``winrm quickconfig`` for HTTP or ``winrm quickconfig -transport:https`` for HTTPS. This is the easiest option to use when running outside of a domain environment and a simple listener is required. Unlike the other options, this process also has the added benefit of opening up the firewall for the ports required and starts the WinRM service.
* Using Group Policy Objects (GPO). This is the best way to create a listener when the host is a member of a domain because the configuration is done automatically without any user input. For more information on group policy objects, see the `Group Policy Objects documentation <https://msdn.microsoft.com/en-us/library/aa374162(v=vs.85).aspx>`_.
* Using PowerShell to create a listener with a specific configuration. This can be done by running the following PowerShell commands:
.. code-block:: powershell
$selector_set = @{
Address = "*"
Transport = "HTTPS"
}
$value_set = @{
CertificateThumbprint = "E6CDAA82EEAF2ECE8546E05DB7F3E01AA47D76CE"
}
New-WSManInstance -ResourceURI "winrm/config/Listener" -SelectorSet $selector_set -ValueSet $value_set
To see the other options with this PowerShell command, refer to the
`New-WSManInstance <https://docs.microsoft.com/en-us/powershell/module/microsoft.wsman.management/new-wsmaninstance?view=powershell-5.1>`_ documentation.
.. Note:: When creating an HTTPS listener, you must create and store a certificate in the ``LocalMachine\My`` certificate store.
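As a minimal sketch for a test environment (production hosts should use a certificate issued by a trusted CA, and the DNS name below is a placeholder), a self-signed certificate can be generated and its thumbprint retrieved with:

.. code-block:: powershell

    # Generate a self-signed certificate in the LocalMachine\My store
    $cert = New-SelfSignedCertificate -DnsName "server.domain.local" -CertStoreLocation Cert:\LocalMachine\My

    # Use this value as the CertificateThumbprint when creating the listener
    $cert.Thumbprint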
Delete WinRM Listener
+++++++++++++++++++++
* To remove all WinRM listeners:
.. code-block:: powershell
Remove-Item -Path WSMan:\localhost\Listener\* -Recurse -Force
* To remove only those listeners that run over HTTPS:
.. code-block:: powershell
Get-ChildItem -Path WSMan:\localhost\Listener | Where-Object { $_.Keys -contains "Transport=HTTPS" } | Remove-Item -Recurse -Force
.. Note:: The ``Keys`` object is an array of strings, so it can contain different values. By default, it contains a key for ``Transport=`` and ``Address=``, which correspond to the values from the ``winrm enumerate winrm/config/Listener`` command.
WinRM Service Options
---------------------
You can control the behavior of the WinRM service component, including authentication options and memory settings.
To get an output of the current service configuration options, run the following command:
.. code-block:: powershell
winrm get winrm/config/Service
winrm get winrm/config/Winrs
This will output something like:
.. code-block:: powershell
Service
RootSDDL = O:NSG:BAD:P(A;;GA;;;BA)(A;;GR;;;IU)S:P(AU;FA;GA;;;WD)(AU;SA;GXGW;;;WD)
MaxConcurrentOperations = 4294967295
MaxConcurrentOperationsPerUser = 1500
EnumerationTimeoutms = 240000
MaxConnections = 300
MaxPacketRetrievalTimeSeconds = 120
AllowUnencrypted = false
Auth
Basic = true
Kerberos = true
Negotiate = true
Certificate = true
CredSSP = true
CbtHardeningLevel = Relaxed
DefaultPorts
HTTP = 5985
HTTPS = 5986
IPv4Filter = *
IPv6Filter = *
EnableCompatibilityHttpListener = false
EnableCompatibilityHttpsListener = false
CertificateThumbprint
AllowRemoteAccess = true
Winrs
AllowRemoteShellAccess = true
IdleTimeout = 7200000
MaxConcurrentUsers = 2147483647
MaxShellRunTime = 2147483647
MaxProcessesPerShell = 2147483647
MaxMemoryPerShellMB = 2147483647
MaxShellsPerUser = 2147483647
You do not need to change the majority of these options. However, some of the important ones to know about are:
* ``Service\AllowUnencrypted`` - specifies whether WinRM will allow HTTP traffic without message encryption. Message level encryption is only possible when the ``ansible_winrm_transport`` variable is ``ntlm``, ``kerberos`` or ``credssp``. By default, this is ``false`` and you should only set it to ``true`` when debugging WinRM messages.
* ``Service\Auth\*`` - defines what authentication options you can use with the WinRM service. By default, ``Negotiate (NTLM)`` and ``Kerberos`` are enabled.
* ``Service\Auth\CbtHardeningLevel`` - specifies whether channel binding tokens are not verified (None), verified but not required (Relaxed), or verified and required (Strict). CBT is only used when connecting with NT LAN Manager (NTLM) or Kerberos over HTTPS.
* ``Service\CertificateThumbprint`` - thumbprint of the certificate for encrypting the TLS channel used with CredSSP authentication. By default, this is empty. A self-signed certificate is generated when the WinRM service starts and is used in the TLS process.
* ``Winrs\MaxShellRunTime`` - maximum time, in milliseconds, that a remote command is allowed to execute.
* ``Winrs\MaxMemoryPerShellMB`` - maximum amount of memory allocated per shell, including its child processes.
To modify a setting under the ``Service`` key in PowerShell, you need to provide a path to the option after ``winrm/config/Service``:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Service\{path} -Value {some_value}
For example, to change ``Service\Auth\CbtHardeningLevel``:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Service\Auth\CbtHardeningLevel -Value Strict
To modify a setting under the ``Winrs`` key in PowerShell, you need to provide a path to the option after ``winrm/config/Winrs``:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Shell\{path} -Value {some_value}
For example, to change ``Winrs\MaxShellRunTime``:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Shell\MaxShellRunTime -Value 2147483647
.. Note:: If you run the command in a domain environment, some of these options are set by
GPO and cannot be changed on the host itself. When you configured a key with GPO, it contains the text ``[Source="GPO"]`` next to the value.
Common WinRM Issues
-------------------
WinRM has a wide range of configuration options, which makes its configuration complex. As a result, errors that Ansible displays could in fact be problems with the host setup instead.
To identify a host issue, run the following command from another Windows host to connect to the target Windows host.
* To test HTTP:
.. code-block:: powershell
winrs -r:http://server:5985/wsman -u:Username -p:Password ipconfig
* To test HTTPS:
.. code-block:: powershell
winrs -r:https://server:5986/wsman -u:Username -p:Password -ssl ipconfig
The command will fail if the certificate is not verifiable.
* To test HTTPS ignoring certificate verification:
.. code-block:: powershell
$username = "Username"
$password = ConvertTo-SecureString -String "Password" -AsPlainText -Force
$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password
$session_option = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
Invoke-Command -ComputerName server -UseSSL -ScriptBlock { ipconfig } -Credential $cred -SessionOption $session_option
If any of the above commands fail, the issue is probably related to the WinRM setup.
HTTP 401/Credentials Rejected
+++++++++++++++++++++++++++++
An HTTP 401 error indicates the authentication process failed during the initial
connection. You can check the following to troubleshoot:
* The credentials are correct and set properly in your inventory with the ``ansible_user`` and ``ansible_password`` variables.
* The user is a member of the local Administrators group, or has been explicitly granted access. You can perform a connection test with the ``winrs`` command to rule this out.
* The authentication option set by the ``ansible_winrm_transport`` variable is enabled under ``Service\Auth\*``.
* If running over HTTP and not HTTPS, use ``ntlm``, ``kerberos`` or ``credssp`` with the ``ansible_winrm_message_encryption: auto`` custom inventory variable to enable message encryption. If you use another authentication option, or if it is not possible to upgrade the installed ``pywinrm`` package, you can set ``Service\AllowUnencrypted`` to ``true``. This is recommended only for troubleshooting.
* The downstream packages ``pywinrm``, ``requests-ntlm``, ``requests-kerberos``, and/or ``requests-credssp`` are up to date using ``pip``.
* For Kerberos authentication, ensure that ``Service\Auth\CbtHardeningLevel`` is not set to ``Strict``.
* For Basic or Certificate authentication, make sure that the user is a local account. Domain accounts do not work with Basic and Certificate authentication.
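For example, two quick checks that can be run on the Windows host itself; note that ``Get-LocalGroupMember`` is only available on newer Windows versions:

.. code-block:: powershell

    # Confirm the connecting user is a member of the local Administrators group
    Get-LocalGroupMember -Group "Administrators"

    # Show which authentication options the WinRM service has enabled
    winrm get winrm/config/Service/Auth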
HTTP 500 Error
++++++++++++++
An HTTP 500 error indicates a problem with the WinRM service. You can check the following to troubleshoot:
* The number of currently open shells has not exceeded ``WinRsMaxShellsPerUser`` or any of the other Winrs quotas.
Timeout Errors
+++++++++++++++
Sometimes Ansible is unable to reach the host. These instances usually indicate a problem with the network connection. You can check the following to troubleshoot:
* The firewall is not set to block the configured WinRM listener ports.
* A WinRM listener is enabled on the port and path set by the host vars.
* The ``winrm`` service is running on the Windows host and is configured for the automatic start.
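For example, a quick connectivity check from another Windows host, assuming the default HTTP port and an English display group name:

.. code-block:: powershell

    # Verify the WinRM port on the target is reachable
    Test-NetConnection -ComputerName server -Port 5985

    # On the target, verify the WinRM firewall rules are enabled
    Get-NetFirewallRule -DisplayGroup "Windows Remote Management" | Select-Object -Property DisplayName, Enabled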
Connection Refused Errors
+++++++++++++++++++++++++
When communicating with the WinRM service on the host, you can encounter some problems. Check the following to help troubleshoot:
* The WinRM service is up and running on the host. Use the ``(Get-Service -Name winrm).Status`` command to get the status of the service.
* The host firewall is allowing traffic over the WinRM port. By default this is ``5985`` for HTTP and ``5986`` for HTTPS.
Sometimes an installer may restart the WinRM or HTTP service and cause this error. The best way to deal with this is to use the ``win_psexec`` module from another Windows host.
Failure to Load Builtin Modules
+++++++++++++++++++++++++++++++
Sometimes PowerShell fails with an error message similar to:
.. code-block:: powershell
The 'Out-String' command was found in the module 'Microsoft.PowerShell.Utility', but the module could not be loaded.
In that case, there could be a problem when trying to access all the paths specified by the ``PSModulePath`` environment variable.
A common cause of this issue is that ``PSModulePath`` contains a Universal Naming Convention (UNC) path to a file share. Because of the double-hop/credential delegation issue, the Ansible process cannot access these folders. To work around this problem, either:
* Remove the UNC path from ``PSModulePath``.
or
* Use an authentication option that supports credential delegation, such as ``credssp`` or ``kerberos`` with credential delegation enabled.
See `KB4076842 <https://support.microsoft.com/en-us/help/4076842>`_ for more information on this problem.
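For example, a sketch of removing a UNC path with the ``ansible.windows.win_path`` module might look like this (the share path is a placeholder):
.. code-block:: yaml
   - name: remove a UNC path from the machine-wide PSModulePath
     ansible.windows.win_path:
       name: PSModulePath
       elements: '\\fileshare\modules'
       state: absent
       scope: machine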
Windows SSH Setup
`````````````````
Ansible 2.8 added an experimental SSH connection for Windows managed nodes.
.. warning::
Use this feature at your own risk! Using SSH with Windows is experimental. This implementation may make
backwards incompatible changes in future releases. The server-side components can be unreliable depending on your installed version.
Installing OpenSSH using Windows Settings
-----------------------------------------
You can use OpenSSH to connect Windows 10 clients to Windows Server 2019. OpenSSH Client is available to install on Windows 10 build 1809 and later. OpenSSH Server is available to install on Windows Server 2019 and later.
For more information, refer to `Get started with OpenSSH for Windows <https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse>`_.
Installing Win32-OpenSSH
------------------------
To install the `Win32-OpenSSH <https://github.com/PowerShell/Win32-OpenSSH>`_ service for use with
Ansible, select one of these installation options:
* Manually install ``Win32-OpenSSH``, following the `install instructions <https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH>`_ from Microsoft.
* Use Chocolatey:
.. code-block:: powershell
choco install --package-parameters=/SSHServerFeature openssh
* Use the ``win_chocolatey`` Ansible module:
.. code-block:: yaml
- name: install the Win32-OpenSSH service
win_chocolatey:
name: openssh
package_params: /SSHServerFeature
state: present
* Install an Ansible Galaxy role, for example `jborean93.win_openssh <https://galaxy.ansible.com/jborean93/win_openssh>`_:
.. code-block:: powershell
ansible-galaxy install jborean93.win_openssh
* Use the role in your playbook:
.. code-block:: yaml
- name: install Win32-OpenSSH service
hosts: windows
gather_facts: false
roles:
- role: jborean93.win_openssh
opt_openssh_setup_service: True
.. note:: ``Win32-OpenSSH`` is still a beta product and is constantly being updated to include new features and bugfixes. If you use SSH as a connection option for Windows, we highly recommend you install the latest version.
Configuring the Win32-OpenSSH shell
-----------------------------------
By default ``Win32-OpenSSH`` uses ``cmd.exe`` as a shell.
* To configure a different shell, use an Ansible playbook with a task to define the registry setting:
.. code-block:: yaml
- name: set the default shell to PowerShell
win_regedit:
path: HKLM:\SOFTWARE\OpenSSH
name: DefaultShell
data: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
type: string
state: present
* To revert the settings back to the default shell:
.. code-block:: yaml
- name: set the default shell to cmd
win_regedit:
path: HKLM:\SOFTWARE\OpenSSH
name: DefaultShell
state: absent
Win32-OpenSSH Authentication
----------------------------
Win32-OpenSSH authentication with Windows is similar to SSH authentication on Unix/Linux hosts. You can use a plaintext password or SSH public key authentication.
For the key-based authentication:
* Add your public keys to an ``authorized_keys`` file in the ``.ssh`` folder of the user's profile directory.
* Configure the SSH service using the ``sshd_config`` file.
When using SSH key authentication with Ansible, the remote session will not have access to user credentials and will fail when attempting to access a network resource. This is also known as the double-hop or credential delegation issue. To work around this problem:
* Use plaintext password authentication by setting the ``ansible_password`` variable.
* Use the ``become`` directive on the task with the credentials of the user that needs access to the remote resource.
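For example, a sketch that uses ``become`` with the ``runas`` method to reach a network share might look like this (the share path is a placeholder):
.. code-block:: yaml
   - name: copy a file from a network share despite the double-hop limitation
     ansible.windows.win_copy:
       src: \\fileserver\share\app.zip
       dest: C:\temp\app.zip
       remote_src: true
     become: true
     become_method: runas
     vars:
       ansible_become_user: "{{ ansible_user }}"
       ansible_become_password: "{{ ansible_password }}"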
Configuring Ansible for SSH on Windows
--------------------------------------
To configure Ansible to use SSH for Windows hosts, you must set two connection variables:
* set ``ansible_connection`` to ``ssh``
* set ``ansible_shell_type`` to ``cmd`` or ``powershell``
The ``ansible_shell_type`` variable should reflect the ``DefaultShell`` configured on the Windows host. Set ``ansible_shell_type`` to ``cmd`` for the default shell. Alternatively, set ``ansible_shell_type`` to ``powershell`` if you changed ``DefaultShell`` to PowerShell.
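For example, a minimal YAML inventory sketch for a host whose ``DefaultShell`` is PowerShell might look like this (the hostname is a placeholder):
.. code-block:: yaml
   windows:
     hosts:
       win-host.example.com:
     vars:
       ansible_connection: ssh
       ansible_shell_type: powershell
       ansible_user: Administrator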
Known issues with SSH on Windows
--------------------------------
Using SSH with Windows is experimental. The currently known issues are:
* Win32-OpenSSH versions older than ``v7.9.0.0p1-Beta`` do not work when ``powershell`` is the shell type.
* While Secure Copy Protocol (SCP) should work, SSH File Transfer Protocol (SFTP) is the recommended mechanism to use when copying or fetching a file.
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
:ref:`List of Windows Modules <windows_modules>`
Windows specific module list, all implemented in PowerShell
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,421 |
Remove Windows 2012 from ansible-test
|
### Summary
Windows Server 2012 reaches [end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10th. This is a remote VM removal. Removal can be done before end of support, after a 2-week notification period.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80421
|
https://github.com/ansible/ansible/pull/80778
|
0df794e5a4fe4597ee65b0d492fbf0d0989d5ca0
|
0a36cd910e4cdb2a3a0a40488596b69789ffdbe2
| 2023-04-05T21:27:57Z |
python
| 2023-05-18T18:02:58Z |
test/lib/ansible_test/_data/completion/windows.txt
|
windows/2012 provider=aws arch=x86_64
windows/2012-R2 provider=aws arch=x86_64
windows/2016 provider=aws arch=x86_64
windows/2019 provider=aws arch=x86_64
windows/2022 provider=aws arch=x86_64
windows provider=aws arch=x86_64
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,709 |
JSON content type detection for uri, no longer working for application/ld+json
|
### Summary
It looks like PR #79719 broke support for the automatic JSON conversion for the `application/ld+json` and `vnd.api+json` (JSON:API spec) response content types.
The earlier check, `any(candidate in sub_type for candidate in JSON_CANDIDATES)`, would return `true` for sub_type `ld+json`. However, after the change in this commit, `sub_type.lower() in JSON_CANDIDATES` returns `false` for sub_type `ld+json`.
As a result, the response is no longer automatically loaded into a key called `json` in the dictionary results, which led to an error in our ansible playbook: `'dict object' has no attribute 'json'`
Since `application/ld+json` is a widely recognized content type, I would assume this should keep working.
### Issue Type
Bug Report
### Component Name
uri
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.5]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.10/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = False
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Windows / WSL2 / Ubuntu 20.04.5 LTS
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: "Perform task"
ansible.builtin.uri:
url: "http://some_url_with_json_api_spec_formatted_response"
method: "POST"
body: "{{ lookup('template', './templates/some_template') }}"
body_format: json
status_code: 201
headers:
Content-Type: "application/vnd.api+json"
register: json_api_spec_output
- name: "Record id"
set_fact:
response_id: "{{ json_api_spec_output.json.data.id }}"
```
### Expected Results
The id is saved into the response_id variable.
### Actual Results
```console
The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'json'.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80709
|
https://github.com/ansible/ansible/pull/80745
|
47539a19ea9bcc573424c01336acf8b247d10d10
|
0c7361d9acf7c8966a09f67de2a8679ef86fd856
| 2023-05-03T13:07:37Z |
python
| 2023-05-23T14:38:05Z |
changelogs/fragments/update-maybe-json-uri.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,709 |
JSON content type detection for uri, no longer working for application/ld+json
|
### Summary
It looks like PR #79719 broke support for the automatic JSON conversion for the `application/ld+json` and `vnd.api+json` (JSON:API spec) response content types.
The earlier check, `any(candidate in sub_type for candidate in JSON_CANDIDATES)`, would return `true` for sub_type `ld+json`. However, after the change in this commit, `sub_type.lower() in JSON_CANDIDATES` returns `false` for sub_type `ld+json`.
As a result, the response is no longer automatically loaded into a key called `json` in the dictionary results, which led to an error in our ansible playbook: `'dict object' has no attribute 'json'`
Since `application/ld+json` is a widely recognized content type, I would assume this should keep working.
### Issue Type
Bug Report
### Component Name
uri
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.5]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.10/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = False
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Windows / WSL2 / Ubuntu 20.04.5 LTS
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: "Perform task"
ansible.builtin.uri:
url: "http://some_url_with_json_api_spec_formatted_response"
method: "POST"
body: "{{ lookup('template', './templates/some_template') }}"
body_format: json
status_code: 201
headers:
Content-Type: "application/vnd.api+json"
register: json_api_spec_output
- name: "Record id"
set_fact:
response_id: "{{ json_api_spec_output.json.data.id }}"
```
### Expected Results
The id is saved into the response_id variable.
### Actual Results
```console
The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'json'.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80709
|
https://github.com/ansible/ansible/pull/80745
|
47539a19ea9bcc573424c01336acf8b247d10d10
|
0c7361d9acf7c8966a09f67de2a8679ef86fd856
| 2023-05-03T13:07:37Z |
python
| 2023-05-23T14:38:05Z |
lib/ansible/modules/uri.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2013, Romeo Theriault <romeot () hawaii.edu>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: uri
short_description: Interacts with webservices
description:
- Interacts with HTTP and HTTPS web services and supports Digest, Basic and WSSE
HTTP authentication mechanisms.
- For Windows targets, use the M(ansible.windows.win_uri) module instead.
version_added: "1.1"
options:
ciphers:
description:
- SSL/TLS Ciphers to use for the request.
- 'When a list is provided, all ciphers are joined in order with C(:)'
- See the L(OpenSSL Cipher List Format,https://www.openssl.org/docs/manmaster/man1/openssl-ciphers.html#CIPHER-LIST-FORMAT)
for more details.
      - The available ciphers are dependent on the Python and OpenSSL/LibreSSL versions.
type: list
elements: str
version_added: '2.14'
decompress:
description:
- Whether to attempt to decompress gzip content-encoded responses
type: bool
default: true
version_added: '2.14'
url:
description:
- HTTP or HTTPS URL in the form (http|https)://host.domain[:port]/path
type: str
required: true
dest:
description:
- A path of where to download the file to (if desired). If I(dest) is a
directory, the basename of the file on the remote server will be used.
type: path
url_username:
description:
- A username for the module to use for Digest, Basic or WSSE authentication.
type: str
aliases: [ user ]
url_password:
description:
- A password for the module to use for Digest, Basic or WSSE authentication.
type: str
aliases: [ password ]
body:
description:
- The body of the http request/response to the web service. If C(body_format) is set
to 'json' it will take an already formatted JSON string or convert a data structure
into JSON.
- If C(body_format) is set to 'form-urlencoded' it will convert a dictionary
or list of tuples into an 'application/x-www-form-urlencoded' string. (Added in v2.7)
- If C(body_format) is set to 'form-multipart' it will convert a dictionary
into 'multipart/form-multipart' body. (Added in v2.10)
type: raw
body_format:
description:
- The serialization format of the body. When set to C(json), C(form-multipart), or C(form-urlencoded), encodes
the body argument, if needed, and automatically sets the Content-Type header accordingly.
- As of v2.3 it is possible to override the C(Content-Type) header, when
set to C(json) or C(form-urlencoded) via the I(headers) option.
- The 'Content-Type' header cannot be overridden when using C(form-multipart)
- C(form-urlencoded) was added in v2.7.
- C(form-multipart) was added in v2.10.
type: str
choices: [ form-urlencoded, json, raw, form-multipart ]
default: raw
version_added: "2.0"
method:
description:
- The HTTP method of the request or response.
- In more recent versions we do not restrict the method at the module level anymore
but it still must be a valid method accepted by the service handling the request.
type: str
default: GET
return_content:
description:
      - Whether or not to return the body of the response as a "content" key in
        the dictionary result, no matter if it succeeded or failed.
- Independently of this option, if the reported Content-type is "application/json", then the JSON is
always loaded into a key called C(json) in the dictionary results.
type: bool
default: no
force_basic_auth:
description:
- Force the sending of the Basic authentication header upon initial request.
- When this setting is C(false), this module will first try an unauthenticated request, and when the server replies
with an C(HTTP 401) error, it will submit the Basic authentication header.
- When this setting is C(true), this module will immediately send a Basic authentication header on the first
request.
- "Use this setting in any of the following scenarios:"
- You know the webservice endpoint always requires HTTP Basic authentication, and you want to speed up your
requests by eliminating the first roundtrip.
- The web service does not properly send an HTTP 401 error to your client, so Ansible's HTTP library will not
properly respond with HTTP credentials, and logins will fail.
- The webservice bans or rate-limits clients that cause any HTTP 401 errors.
type: bool
default: no
follow_redirects:
description:
- Whether or not the URI module should follow redirects. C(all) will follow all redirects.
C(safe) will follow only "safe" redirects, where "safe" means that the client is only
doing a GET or HEAD on the URI to which it is being redirected. C(none) will not follow
any redirects. Note that C(true) and C(false) choices are accepted for backwards compatibility,
where C(true) is the equivalent of C(all) and C(false) is the equivalent of C(safe). C(true) and C(false)
are deprecated and will be removed in some future version of Ansible.
type: str
choices: ['all', 'no', 'none', 'safe', 'urllib2', 'yes']
default: safe
creates:
description:
- A filename, when it already exists, this step will not be run.
type: path
removes:
description:
- A filename, when it does not exist, this step will not be run.
type: path
status_code:
description:
- A list of valid, numeric, HTTP status codes that signifies success of the request.
type: list
elements: int
default: [ 200 ]
timeout:
description:
- The socket level timeout in seconds
type: int
default: 30
headers:
description:
- Add custom HTTP headers to a request in the format of a YAML hash. As
of C(2.3) supplying C(Content-Type) here will override the header
generated by supplying C(json) or C(form-urlencoded) for I(body_format).
type: dict
default: {}
version_added: '2.1'
validate_certs:
description:
- If C(false), SSL certificates will not be validated.
      - This should only be set to C(false) on personally controlled sites using self-signed certificates.
- Prior to 1.9.2 the code defaulted to C(false).
type: bool
default: true
version_added: '1.9.2'
client_cert:
description:
- PEM formatted certificate chain file to be used for SSL client authentication.
- This file can also include the key as well, and if the key is included, I(client_key) is not required
type: path
version_added: '2.4'
client_key:
description:
- PEM formatted file that contains your private key to be used for SSL client authentication.
- If I(client_cert) contains both the certificate and key, this option is not required.
type: path
version_added: '2.4'
ca_path:
description:
- PEM formatted file that contains a CA certificate to be used for validation
type: path
version_added: '2.11'
src:
description:
- Path to file to be submitted to the remote server.
- Cannot be used with I(body).
- Should be used with I(force_basic_auth) to ensure success when the remote end sends a 401.
type: path
version_added: '2.7'
remote_src:
description:
- If C(false), the module will search for the C(src) on the controller node.
- If C(true), the module will search for the C(src) on the managed (remote) node.
type: bool
default: no
version_added: '2.7'
force:
description:
- If C(true) do not get a cached copy.
type: bool
default: no
use_proxy:
description:
- If C(false), it will not use a proxy, even if one is defined in an environment variable on the target hosts.
type: bool
default: true
unix_socket:
description:
- Path to Unix domain socket to use for connection
type: path
version_added: '2.8'
http_agent:
description:
- Header to identify as, generally appears in web server logs.
type: str
default: ansible-httpget
unredirected_headers:
description:
- A list of header names that will not be sent on subsequent redirected requests. This list is case
insensitive. By default all headers will be redirected. In some cases it may be beneficial to list
headers such as C(Authorization) here to avoid potential credential exposure.
default: []
type: list
elements: str
version_added: '2.12'
use_gssapi:
description:
- Use GSSAPI to perform the authentication, typically this is for Kerberos or Kerberos through Negotiate
authentication.
- Requires the Python library L(gssapi,https://github.com/pythongssapi/python-gssapi) to be installed.
- Credentials for GSSAPI can be specified with I(url_username)/I(url_password) or with the GSSAPI env var
        C(KRB5CCNAME) that specifies a custom Kerberos credential cache.
- NTLM authentication is C(not) supported even if the GSSAPI mech for NTLM has been installed.
type: bool
default: no
version_added: '2.11'
use_netrc:
description:
      - Determines whether to use credentials from the ``~/.netrc`` file.
- By default .netrc is used with Basic authentication headers
- When set to False, .netrc credentials are ignored
type: bool
default: true
version_added: '2.14'
extends_documentation_fragment:
- action_common_attributes
- files
attributes:
check_mode:
support: none
diff_mode:
support: none
platform:
platforms: posix
notes:
- The dependency on httplib2 was removed in Ansible 2.1.
- The module returns all the HTTP headers in lower-case.
- For Windows targets, use the M(ansible.windows.win_uri) module instead.
seealso:
- module: ansible.builtin.get_url
- module: ansible.windows.win_uri
author:
- Romeo Theriault (@romeotheriault)
'''
EXAMPLES = r'''
- name: Check that you can connect (GET) to a page and it returns a status 200
ansible.builtin.uri:
url: http://www.example.com
- name: Check that a page returns successfully but fail if the word AWESOME is not in the page contents
ansible.builtin.uri:
url: http://www.example.com
return_content: true
register: this
failed_when: this is failed or "'AWESOME' not in this.content"
- name: Create a JIRA issue
ansible.builtin.uri:
url: https://your.jira.example.com/rest/api/2/issue/
user: your_username
password: your_pass
method: POST
body: "{{ lookup('ansible.builtin.file','issue.json') }}"
force_basic_auth: true
status_code: 201
body_format: json
- name: Login to a form based webpage, then use the returned cookie to access the app in later tasks
ansible.builtin.uri:
url: https://your.form.based.auth.example.com/index.php
method: POST
body_format: form-urlencoded
body:
name: your_username
password: your_password
enter: Sign in
status_code: 302
register: login
- name: Login to a form based webpage using a list of tuples
ansible.builtin.uri:
url: https://your.form.based.auth.example.com/index.php
method: POST
body_format: form-urlencoded
body:
- [ name, your_username ]
- [ password, your_password ]
- [ enter, Sign in ]
status_code: 302
register: login
- name: Upload a file via multipart/form-multipart
ansible.builtin.uri:
url: https://httpbin.org/post
method: POST
body_format: form-multipart
body:
file1:
filename: /bin/true
mime_type: application/octet-stream
file2:
content: text based file content
filename: fake.txt
mime_type: text/plain
text_form_field: value
- name: Connect to website using a previously stored cookie
ansible.builtin.uri:
url: https://your.form.based.auth.example.com/dashboard.php
method: GET
return_content: true
headers:
Cookie: "{{ login.cookies_string }}"
- name: Queue build of a project in Jenkins
ansible.builtin.uri:
url: http://{{ jenkins.host }}/job/{{ jenkins.job }}/build?token={{ jenkins.token }}
user: "{{ jenkins.user }}"
password: "{{ jenkins.password }}"
method: GET
force_basic_auth: true
status_code: 201
- name: POST from contents of local file
ansible.builtin.uri:
url: https://httpbin.org/post
method: POST
src: file.json
- name: POST from contents of remote file
ansible.builtin.uri:
url: https://httpbin.org/post
method: POST
src: /path/to/my/file.json
remote_src: true
- name: Create workspaces in Log analytics Azure
ansible.builtin.uri:
url: https://www.mms.microsoft.com/Embedded/Api/ConfigDataSources/LogManagementData/Save
method: POST
body_format: json
status_code: [200, 202]
return_content: true
headers:
Content-Type: application/json
x-ms-client-workspace-path: /subscriptions/{{ sub_id }}/resourcegroups/{{ res_group }}/providers/microsoft.operationalinsights/workspaces/{{ w_spaces }}
x-ms-client-platform: ibiza
x-ms-client-auth-token: "{{ token_az }}"
body:
- name: Pause play until a URL is reachable from this host
ansible.builtin.uri:
url: "http://192.0.2.1/some/test"
follow_redirects: none
method: GET
register: _result
until: _result.status == 200
retries: 720 # 720 * 5 seconds = 1hour (60*60/5)
delay: 5 # Every 5 seconds
- name: Provide SSL/TLS ciphers as a list
uri:
url: https://example.org
ciphers:
- '@SECLEVEL=2'
- ECDH+AESGCM
- ECDH+CHACHA20
- ECDH+AES
- DHE+AES
- '!aNULL'
- '!eNULL'
- '!aDSS'
- '!SHA1'
- '!AESCCM'
- name: Provide SSL/TLS ciphers as an OpenSSL formatted cipher list
uri:
url: https://example.org
ciphers: '@SECLEVEL=2:ECDH+AESGCM:ECDH+CHACHA20:ECDH+AES:DHE+AES:!aNULL:!eNULL:!aDSS:!SHA1:!AESCCM'
'''
RETURN = r'''
# The return information includes all the HTTP headers in lower-case.
content:
description: The response body content.
returned: status not in status_code or return_content is true
type: str
sample: "{}"
cookies:
description: The cookie values placed in cookie jar.
returned: on success
type: dict
sample: {"SESSIONID": "[SESSIONID]"}
version_added: "2.4"
cookies_string:
description: The value for future request Cookie headers.
returned: on success
type: str
sample: "SESSIONID=[SESSIONID]"
version_added: "2.6"
elapsed:
description: The number of seconds that elapsed while performing the download.
returned: on success
type: int
sample: 23
msg:
description: The HTTP message from the request.
returned: always
type: str
sample: OK (unknown bytes)
path:
description: destination file/path
returned: dest is defined
type: str
sample: /path/to/file.txt
redirected:
description: Whether the request was redirected.
returned: on success
type: bool
sample: false
status:
description: The HTTP status code from the request.
returned: always
type: int
sample: 200
url:
description: The actual URL used for the request.
returned: always
type: str
sample: https://www.ansible.com/
'''
import datetime
import json
import os
import re
import shutil
import sys
import tempfile
from ansible.module_utils.basic import AnsibleModule, sanitize_keys
from ansible.module_utils.six import PY2, PY3, binary_type, iteritems, string_types
from ansible.module_utils.six.moves.urllib.parse import urlencode, urlsplit
from ansible.module_utils.common.text.converters import to_native, to_text
from ansible.module_utils.six.moves.collections_abc import Mapping, Sequence
from ansible.module_utils.urls import fetch_url, get_response_filename, parse_content_type, prepare_multipart, url_argument_spec
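# Content-Type sub-types that trigger automatic JSON decoding of the response.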
JSON_CANDIDATES = {'json', 'javascript'}
# List of response key names we do not want sanitize_keys() to change.
NO_MODIFY_KEYS = frozenset(
('msg', 'exception', 'warnings', 'deprecations', 'failed', 'skipped',
'changed', 'rc', 'stdout', 'stderr', 'elapsed', 'path', 'location',
'content_type')
)
def format_message(err, resp):
msg = resp.pop('msg')
return err + (' %s' % msg if msg else '')
def write_file(module, dest, content, resp):
"""
Create temp file and write content to dest file only if content changed
"""
tmpsrc = None
try:
fd, tmpsrc = tempfile.mkstemp(dir=module.tmpdir)
with os.fdopen(fd, 'wb') as f:
if isinstance(content, binary_type):
f.write(content)
else:
shutil.copyfileobj(content, f)
except Exception as e:
if tmpsrc and os.path.exists(tmpsrc):
os.remove(tmpsrc)
msg = format_message("Failed to create temporary content file: %s" % to_native(e), resp)
module.fail_json(msg=msg, **resp)
checksum_src = module.sha1(tmpsrc)
checksum_dest = module.sha1(dest)
if checksum_src != checksum_dest:
try:
module.atomic_move(tmpsrc, dest)
except Exception as e:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
msg = format_message("failed to copy %s to %s: %s" % (tmpsrc, dest, to_native(e)), resp)
module.fail_json(msg=msg, **resp)
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
def absolute_location(url, location):
"""Attempts to create an absolute URL based on initial URL, and
next URL, specifically in the case of a ``Location`` header.
"""
if '://' in location:
return location
elif location.startswith('/'):
parts = urlsplit(url)
base = url.replace(parts[2], '')
return '%s%s' % (base, location)
elif not location.startswith('/'):
base = os.path.dirname(url)
return '%s/%s' % (base, location)
else:
return location
def kv_list(data):
''' Convert data into a list of key-value tuples '''
if data is None:
return None
if isinstance(data, Sequence):
return list(data)
if isinstance(data, Mapping):
return list(data.items())
raise TypeError('cannot form-urlencode body, expect list or dict')
def form_urlencoded(body):
''' Convert data into a form-urlencoded string '''
if isinstance(body, string_types):
return body
if isinstance(body, (Mapping, Sequence)):
result = []
# Turn a list of lists into a list of tuples that urlencode accepts
for key, values in kv_list(body):
if isinstance(values, string_types) or not isinstance(values, (Mapping, Sequence)):
values = [values]
for value in values:
if value is not None:
result.append((to_text(key), to_text(value)))
return urlencode(result, doseq=True)
return body
def uri(module, url, dest, body, body_format, method, headers, socket_timeout, ca_path, unredirected_headers, decompress,
ciphers, use_netrc):
# is dest is set and is a directory, let's check if we get redirected and
# set the filename from that url
src = module.params['src']
if src:
try:
headers.update({
'Content-Length': os.stat(src).st_size
})
data = open(src, 'rb')
except OSError:
module.fail_json(msg='Unable to open source file %s' % src, elapsed=0)
else:
data = body
kwargs = {}
if dest is not None and os.path.isfile(dest):
# if destination file already exist, only download if file newer
kwargs['last_mod_time'] = datetime.datetime.utcfromtimestamp(os.path.getmtime(dest))
resp, info = fetch_url(module, url, data=data, headers=headers,
method=method, timeout=socket_timeout, unix_socket=module.params['unix_socket'],
ca_path=ca_path, unredirected_headers=unredirected_headers,
use_proxy=module.params['use_proxy'], decompress=decompress,
ciphers=ciphers, use_netrc=use_netrc, **kwargs)
if src:
# Try to close the open file handle
try:
data.close()
except Exception:
pass
return resp, info
def main():
argument_spec = url_argument_spec()
argument_spec.update(
dest=dict(type='path'),
url_username=dict(type='str', aliases=['user']),
url_password=dict(type='str', aliases=['password'], no_log=True),
body=dict(type='raw'),
body_format=dict(type='str', default='raw', choices=['form-urlencoded', 'json', 'raw', 'form-multipart']),
src=dict(type='path'),
method=dict(type='str', default='GET'),
return_content=dict(type='bool', default=False),
follow_redirects=dict(type='str', default='safe', choices=['all', 'no', 'none', 'safe', 'urllib2', 'yes']),
creates=dict(type='path'),
removes=dict(type='path'),
status_code=dict(type='list', elements='int', default=[200]),
timeout=dict(type='int', default=30),
headers=dict(type='dict', default={}),
unix_socket=dict(type='path'),
remote_src=dict(type='bool', default=False),
ca_path=dict(type='path', default=None),
unredirected_headers=dict(type='list', elements='str', default=[]),
decompress=dict(type='bool', default=True),
ciphers=dict(type='list', elements='str'),
use_netrc=dict(type='bool', default=True),
)
module = AnsibleModule(
argument_spec=argument_spec,
add_file_common_args=True,
mutually_exclusive=[['body', 'src']],
)
url = module.params['url']
body = module.params['body']
body_format = module.params['body_format'].lower()
method = module.params['method'].upper()
dest = module.params['dest']
return_content = module.params['return_content']
creates = module.params['creates']
removes = module.params['removes']
status_code = [int(x) for x in list(module.params['status_code'])]
socket_timeout = module.params['timeout']
ca_path = module.params['ca_path']
dict_headers = module.params['headers']
unredirected_headers = module.params['unredirected_headers']
decompress = module.params['decompress']
ciphers = module.params['ciphers']
use_netrc = module.params['use_netrc']
if not re.match('^[A-Z]+$', method):
module.fail_json(msg="Parameter 'method' needs to be a single word in uppercase, like GET or POST.")
if body_format == 'json':
        # Encode the body unless it's a string, in which case assume it is pre-formatted JSON
if not isinstance(body, string_types):
body = json.dumps(body)
if 'content-type' not in [header.lower() for header in dict_headers]:
dict_headers['Content-Type'] = 'application/json'
elif body_format == 'form-urlencoded':
if not isinstance(body, string_types):
try:
body = form_urlencoded(body)
except ValueError as e:
module.fail_json(msg='failed to parse body as form_urlencoded: %s' % to_native(e), elapsed=0)
if 'content-type' not in [header.lower() for header in dict_headers]:
dict_headers['Content-Type'] = 'application/x-www-form-urlencoded'
elif body_format == 'form-multipart':
try:
content_type, body = prepare_multipart(body)
except (TypeError, ValueError) as e:
module.fail_json(msg='failed to parse body as form-multipart: %s' % to_native(e))
dict_headers['Content-Type'] = content_type
if creates is not None:
# do not run the command if the line contains creates=filename
# and the filename already exists. This allows idempotence
# of uri executions.
if os.path.exists(creates):
module.exit_json(stdout="skipped, since '%s' exists" % creates, changed=False)
if removes is not None:
# do not run the command if the line contains removes=filename
# and the filename does not exist. This allows idempotence
# of uri executions.
if not os.path.exists(removes):
module.exit_json(stdout="skipped, since '%s' does not exist" % removes, changed=False)
# Make the request
start = datetime.datetime.utcnow()
r, info = uri(module, url, dest, body, body_format, method,
dict_headers, socket_timeout, ca_path, unredirected_headers,
decompress, ciphers, use_netrc)
elapsed = (datetime.datetime.utcnow() - start).seconds
if r and dest is not None and os.path.isdir(dest):
filename = get_response_filename(r) or 'index.html'
dest = os.path.join(dest, filename)
if r and r.fp is not None:
# r may be None for some errors
# r.fp may be None depending on the error, which means there are no headers either
content_type, main_type, sub_type, content_encoding = parse_content_type(r)
else:
content_type = 'application/octet-stream'
main_type = 'application'
sub_type = 'octet-stream'
content_encoding = 'utf-8'
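    # Exact sub-type membership check; structured-suffix sub-types such as
    # 'ld+json' or 'vnd.api+json' do not match it.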
maybe_json = content_type and sub_type.lower() in JSON_CANDIDATES
maybe_output = maybe_json or return_content or info['status'] not in status_code
if maybe_output:
try:
if PY3 and (r.fp is None or r.closed):
raise TypeError
content = r.read()
except (AttributeError, TypeError):
            # there was no content, but the content read from the error
            # may have been stored in the info as 'body'
content = info.pop('body', b'')
elif r:
content = r
else:
content = None
resp = {}
resp['redirected'] = info['url'] != url
resp.update(info)
resp['elapsed'] = elapsed
resp['status'] = int(resp['status'])
resp['changed'] = False
# Write the file out if requested
if r and dest is not None:
if resp['status'] in status_code and resp['status'] != 304:
write_file(module, dest, content, resp)
# allow file attribute changes
resp['changed'] = True
module.params['path'] = dest
file_args = module.load_file_common_arguments(module.params, path=dest)
resp['changed'] = module.set_fs_attributes_if_different(file_args, resp['changed'])
resp['path'] = dest
# Transmogrify the headers, replacing '-' with '_', since variables don't
# work with dashes.
# In python3, the headers are title cased. Lowercase them to be
# compatible with the python2 behaviour.
uresp = {}
for key, value in iteritems(resp):
ukey = key.replace("-", "_").lower()
uresp[ukey] = value
if 'location' in uresp:
uresp['location'] = absolute_location(url, uresp['location'])
# Default content_encoding to try
if isinstance(content, binary_type):
u_content = to_text(content, encoding=content_encoding)
if maybe_json:
try:
js = json.loads(u_content)
uresp['json'] = js
except Exception:
if PY2:
sys.exc_clear() # Avoid false positive traceback in fail_json() on Python 2
else:
u_content = None
if module.no_log_values:
uresp = sanitize_keys(uresp, module.no_log_values, NO_MODIFY_KEYS)
if resp['status'] not in status_code:
uresp['msg'] = 'Status code was %s and not %s: %s' % (resp['status'], status_code, uresp.get('msg', ''))
if return_content:
module.fail_json(content=u_content, **uresp)
else:
module.fail_json(**uresp)
elif return_content:
module.exit_json(content=u_content, **uresp)
else:
module.exit_json(**uresp)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,709 |
JSON content type detection for uri, no longer working for application/ld+json
|
### Summary
It looks like PR #79719 broke support for the automatic JSON conversion for the `application/ld+json` and `vnd.api+json` (JSON:API spec) response content types.
The earlier check, `any(candidate in sub_type for candidate in JSON_CANDIDATES)`, would return `true` for sub_type `ld+json`. However, after the change in this commit, `sub_type.lower() in JSON_CANDIDATES` returns `false` for sub_type `ld+json`.
As a result, the response is no longer automatically loaded into a key called `json` in the dictionary results, which led to an error in our ansible playbook: `'dict object' has no attribute 'json'`
Since `application/ld+json` is a widely recognized content type, I would assume this should keep working.
### Issue Type
Bug Report
### Component Name
uri
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.5]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.10/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = False
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Windows / WSL2 / Ubuntu 20.04.5 LTS
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: "Perform task"
ansible.builtin.uri:
url: "http://some_url_with_json_api_spec_formatted_response"
method: "POST"
body: "{{ lookup('template', './templates/some_template') }}"
body_format: json
status_code: 201
headers:
Content-Type: "application/vnd.api+json"
register: json_api_spec_output
- name: "Record id"
set_fact:
response_id: "{{ json_api_spec_output.json.data.id }}"
```
### Expected Results
The id is saved into the response_id variable.
### Actual Results
```console
The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'json'.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80709
|
https://github.com/ansible/ansible/pull/80745
|
47539a19ea9bcc573424c01336acf8b247d10d10
|
0c7361d9acf7c8966a09f67de2a8679ef86fd856
| 2023-05-03T13:07:37Z |
python
| 2023-05-23T14:38:05Z |
test/integration/targets/uri/tasks/main.yml
|
# test code for the uri module
# (c) 2014, Leonid Evdokimov <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <https://www.gnu.org/licenses/>.
- name: set role facts
set_fact:
http_port: 15260
files_dir: '{{ remote_tmp_dir|expanduser }}/files'
checkout_dir: '{{ remote_tmp_dir }}/git'
- name: create a directory to serve files from
file:
dest: "{{ files_dir }}"
state: directory
- copy:
src: "{{ item }}"
dest: "{{files_dir}}/{{ item }}"
with_sequence: start=0 end=4 format=pass%d.json
- copy:
src: "{{ item }}"
dest: "{{files_dir}}/{{ item }}"
with_sequence: start=0 end=30 format=fail%d.json
- copy:
src: "testserver.py"
dest: "{{ remote_tmp_dir }}/testserver.py"
- name: start SimpleHTTPServer
shell: cd {{ files_dir }} && {{ ansible_python.executable }} {{ remote_tmp_dir}}/testserver.py {{ http_port }}
async: 180 # this test is slower on remotes like FreeBSD, and running split slows it down further
poll: 0
- wait_for: port={{ http_port }}
- name: checksum pass_json
stat: path={{ files_dir }}/{{ item }}.json get_checksum=yes
register: pass_checksum
with_sequence: start=0 end=4 format=pass%d
- name: fetch pass_json
uri: return_content=yes url=http://localhost:{{ http_port }}/{{ item }}.json
register: fetch_pass_json
with_sequence: start=0 end=4 format=pass%d
- name: check pass_json
assert:
that:
- '"json" in item.1'
- item.0.stat.checksum == item.1.content | checksum
with_together:
- "{{pass_checksum.results}}"
- "{{fetch_pass_json.results}}"
- name: checksum fail_json
stat: path={{ files_dir }}/{{ item }}.json get_checksum=yes
register: fail_checksum
with_sequence: start=0 end=30 format=fail%d
- name: fetch fail_json
uri: return_content=yes url=http://localhost:{{ http_port }}/{{ item }}.json
register: fail
with_sequence: start=0 end=30 format=fail%d
- name: check fail_json
assert:
that:
- item.0.stat.checksum == item.1.content | checksum
- '"json" not in item.1'
with_together:
- "{{fail_checksum.results}}"
- "{{fail.results}}"
- name: test https fetch to a site with mismatched hostname and certificate
uri:
url: "https://{{ badssl_host }}/"
dest: "{{ remote_tmp_dir }}/shouldnotexist.html"
ignore_errors: True
register: result
- stat:
path: "{{ remote_tmp_dir }}/shouldnotexist.html"
register: stat_result
- name: Assert that the file was not downloaded
assert:
that:
- result.failed == true
- "'Failed to validate the SSL certificate' in result.msg or 'Hostname mismatch' in result.msg or (result.msg is match('hostname .* doesn.t match .*'))"
- stat_result.stat.exists == false
- result.status is defined
- result.status == -1
- result.url == 'https://' ~ badssl_host ~ '/'
- name: Clean up any cruft from the results directory
file:
name: "{{ remote_tmp_dir }}/kreitz.html"
state: absent
- name: test https fetch to a site with mismatched hostname and certificate and validate_certs=no
uri:
url: "https://{{ badssl_host }}/"
dest: "{{ remote_tmp_dir }}/kreitz.html"
validate_certs: no
register: result
- stat:
path: "{{ remote_tmp_dir }}/kreitz.html"
register: stat_result
- name: Assert that the file was downloaded
assert:
that:
- "stat_result.stat.exists == true"
- "result.changed == true"
- name: "get ca certificate {{ self_signed_host }}"
uri:
url: "http://{{ httpbin_host }}/ca2cert.pem"
dest: "{{ remote_tmp_dir }}/ca2cert.pem"
- name: test https fetch to a site with self signed certificate using ca_path
uri:
url: "https://{{ self_signed_host }}:444/"
dest: "{{ remote_tmp_dir }}/self-signed_using_ca_path.html"
ca_path: "{{ remote_tmp_dir }}/ca2cert.pem"
validate_certs: yes
register: result
- stat:
path: "{{ remote_tmp_dir }}/self-signed_using_ca_path.html"
register: stat_result
- name: Assert that the file was downloaded
assert:
that:
- "stat_result.stat.exists == true"
- "result.changed == true"
- name: test https fetch to a site with self signed certificate without using ca_path
uri:
url: "https://{{ self_signed_host }}:444/"
dest: "{{ remote_tmp_dir }}/self-signed-without_using_ca_path.html"
validate_certs: yes
register: result
ignore_errors: true
- stat:
path: "{{ remote_tmp_dir }}/self-signed-without_using_ca_path.html"
register: stat_result
- name: Assure that https access to a host with self-signed certificate without providing ca_path fails
assert:
that:
- "stat_result.stat.exists == false"
- result is failed
- "'certificate verify failed' in result.msg"
- name: Locate ca-bundle
stat:
path: '{{ item }}'
loop:
- /etc/ssl/certs/ca-bundle.crt
- /etc/ssl/certs/ca-certificates.crt
- /var/lib/ca-certificates/ca-bundle.pem
- /usr/local/share/certs/ca-root-nss.crt
- '{{ cafile_path.stdout_lines|default(["/_i_dont_exist_ca.pem"])|first }}'
- /etc/ssl/cert.pem
register: ca_bundle_candidates
- name: Test that ca_path can be a full bundle
uri:
url: "https://{{ httpbin_host }}/get"
ca_path: '{{ ca_bundle }}'
vars:
ca_bundle: '{{ ca_bundle_candidates.results|selectattr("stat.exists")|map(attribute="item")|first }}'
- name: test redirect without follow_redirects
uri:
url: 'https://{{ httpbin_host }}/redirect/2'
follow_redirects: 'none'
status_code: 302
register: result
- name: Assert location header
assert:
that:
- 'result.location|default("") == "https://{{ httpbin_host }}/relative-redirect/1"'
- name: Check SSL with redirect
uri:
url: 'https://{{ httpbin_host }}/redirect/2'
register: result
- name: Assert SSL with redirect
assert:
that:
- 'result.url|default("") == "https://{{ httpbin_host }}/get"'
- name: redirect to bad SSL site
uri:
url: 'http://{{ badssl_host }}'
register: result
ignore_errors: true
- name: Ensure bad SSL site redirect fails
assert:
that:
- result is failed
- 'badssl_host in result.msg'
- name: test basic auth
uri:
url: 'https://{{ httpbin_host }}/basic-auth/user/passwd'
user: user
password: passwd
- name: test basic forced auth
uri:
url: 'https://{{ httpbin_host }}/hidden-basic-auth/user/passwd'
force_basic_auth: true
user: user
password: passwd
- name: test digest auth
uri:
url: 'https://{{ httpbin_host }}/digest-auth/auth/user/passwd'
user: user
password: passwd
headers:
Cookie: "fake=fake_value"
- name: test digest auth failure
uri:
url: 'https://{{ httpbin_host }}/digest-auth/auth/user/passwd'
user: user
password: wrong
headers:
Cookie: "fake=fake_value"
register: result
failed_when: result.status != 401
- name: test unredirected_headers
uri:
url: 'https://{{ httpbin_host }}/redirect-to?status_code=301&url=/basic-auth/user/passwd'
user: user
password: passwd
force_basic_auth: true
unredirected_headers:
- authorization
ignore_errors: true
register: unredirected_headers
- name: test omitting unredirected headers
uri:
url: 'https://{{ httpbin_host }}/redirect-to?status_code=301&url=/basic-auth/user/passwd'
user: user
password: passwd
force_basic_auth: true
register: redirected_headers
- name: ensure unredirected_headers caused auth to fail
assert:
that:
- unredirected_headers is failed
- unredirected_headers.status == 401
- redirected_headers is successful
- redirected_headers.status == 200
- name: test PUT
uri:
url: 'https://{{ httpbin_host }}/put'
method: PUT
body: 'foo=bar'
- name: test OPTIONS
uri:
url: 'https://{{ httpbin_host }}/'
method: OPTIONS
register: result
- name: Assert we got an allow header
assert:
that:
- 'result.allow.split(", ")|sort == ["GET", "HEAD", "OPTIONS"]'
- name: Testing support of https_proxy (with failure expected)
environment:
https_proxy: 'https://localhost:3456'
uri:
url: 'https://{{ httpbin_host }}/get'
register: result
ignore_errors: true
- assert:
that:
- result is failed
- result.status == -1
- name: Testing use_proxy=no is honored
environment:
https_proxy: 'https://localhost:3456'
uri:
url: 'https://{{ httpbin_host }}/get'
use_proxy: no
# Ubuntu12.04 doesn't have python-urllib3, this makes handling required dependencies a pain across all variations
# We'll use this to just skip 12.04 on those tests. We should be sufficiently covered with other OSes and versions
- name: Set fact if running on Ubuntu 12.04
set_fact:
is_ubuntu_precise: "{{ ansible_distribution == 'Ubuntu' and ansible_distribution_release == 'precise' }}"
- name: Test that SNI succeeds on python versions that have SNI
uri:
url: 'https://{{ sni_host }}/'
return_content: true
when: ansible_python.has_sslcontext
register: result
- name: Assert SNI verification succeeds on new python
assert:
that:
- result is successful
- 'sni_host in result.content'
when: ansible_python.has_sslcontext
- name: Verify SNI verification fails on old python without urllib3 contrib
uri:
url: 'https://{{ sni_host }}'
ignore_errors: true
when: not ansible_python.has_sslcontext
register: result
- name: Assert SNI verification fails on old python
assert:
that:
- result is failed
when: result is not skipped
- name: check if urllib3 is installed as an OS package
package:
name: "{{ uri_os_packages[ansible_os_family].urllib3 }}"
check_mode: yes
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool and uri_os_packages[ansible_os_family].urllib3|default
register: urllib3
- name: uninstall conflicting urllib3 pip package
pip:
name: urllib3
state: absent
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool and uri_os_packages[ansible_os_family].urllib3|default and urllib3.changed
- name: install OS packages that are needed for SNI on old python
package:
name: "{{ item }}"
with_items: "{{ uri_os_packages[ansible_os_family].step1 | default([]) }}"
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: install python modules for Older Python SNI verification
pip:
name: "{{ item }}"
with_items:
- ndg-httpsclient
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: Verify SNI verification succeeds on old python with urllib3 contrib
uri:
url: 'https://{{ sni_host }}'
return_content: true
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
register: result
- name: Assert SNI verification succeeds on old python
assert:
that:
- result is successful
- 'sni_host in result.content'
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: Uninstall ndg-httpsclient
pip:
name: "{{ item }}"
state: absent
with_items:
- ndg-httpsclient
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: uninstall OS packages that are needed for SNI on old python
package:
name: "{{ item }}"
state: absent
with_items: "{{ uri_os_packages[ansible_os_family].step1 | default([]) }}"
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: install OS packages that are needed for building cryptography
package:
name: "{{ item }}"
with_items: "{{ uri_os_packages[ansible_os_family].step2 | default([]) }}"
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: create constraints path
set_fact:
remote_constraints: "{{ remote_tmp_dir }}/constraints.txt"
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: create constraints file
copy:
content: |
cryptography == 2.1.4
idna == 2.5
pyopenssl == 17.5.0
six == 1.13.0
urllib3 == 1.23
dest: "{{ remote_constraints }}"
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: install urllib3 and pyopenssl via pip
pip:
name: "{{ item }}"
extra_args: "-c {{ remote_constraints }}"
with_items:
- urllib3
- PyOpenSSL
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: Verify SNI verification succeeds on old python with pip urllib3 contrib
uri:
url: 'https://{{ sni_host }}'
return_content: true
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
register: result
- name: Assert SNI verification succeeds on old python with pip urllib3 contrib
assert:
that:
- result is successful
- 'sni_host in result.content'
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: Uninstall urllib3 and PyOpenSSL
pip:
name: "{{ item }}"
state: absent
with_items:
- urllib3
- PyOpenSSL
when: not ansible_python.has_sslcontext and not is_ubuntu_precise|bool
- name: validate the status_codes are correct
uri:
url: "https://{{ httpbin_host }}/status/202"
status_code: 202
method: POST
body: foo
- name: Validate body_format json does not override content-type in 2.3 or newer
uri:
url: "https://{{ httpbin_host }}/post"
method: POST
body:
foo: bar
body_format: json
headers:
'Content-Type': 'text/json'
return_content: true
register: result
failed_when: result.json.headers['Content-Type'] != 'text/json'
- name: Validate body_format form-urlencoded using dicts works
uri:
url: https://{{ httpbin_host }}/post
method: POST
body:
user: foo
password: bar!#@ |&82$M
submit: Sign in
body_format: form-urlencoded
return_content: yes
register: result
- name: Assert form-urlencoded dict input
assert:
that:
- result is successful
- result.json.headers['Content-Type'] == 'application/x-www-form-urlencoded'
- result.json.form.password == 'bar!#@ |&82$M'
- name: Validate body_format form-urlencoded using lists works
uri:
url: https://{{ httpbin_host }}/post
method: POST
body:
- [ user, foo ]
- [ password, bar!#@ |&82$M ]
- [ submit, Sign in ]
body_format: form-urlencoded
return_content: yes
register: result
- name: Assert form-urlencoded list input
assert:
that:
- result is successful
- result.json.headers['Content-Type'] == 'application/x-www-form-urlencoded'
- result.json.form.password == 'bar!#@ |&82$M'
- name: Validate body_format form-urlencoded of invalid input fails
uri:
url: https://{{ httpbin_host }}/post
method: POST
body:
- foo
- bar: baz
body_format: form-urlencoded
return_content: yes
register: result
ignore_errors: yes
- name: Assert invalid input fails
assert:
that:
- result is failure
- "'failed to parse body as form_urlencoded: too many values to unpack' in result.msg"
- name: multipart/form-data
uri:
url: https://{{ httpbin_host }}/post
method: POST
body_format: form-multipart
body:
file1:
filename: formdata.txt
file2:
content: text based file content
filename: fake.txt
mime_type: text/plain
text_form_field1: value1
text_form_field2:
content: value2
mime_type: text/plain
register: multipart
- name: Assert multipart/form-data
assert:
that:
- multipart.json.files.file1 == '_multipart/form-data_\n'
- multipart.json.files.file2 == 'text based file content'
- multipart.json.form.text_form_field1 == 'value1'
- multipart.json.form.text_form_field2 == 'value2'
# https://github.com/ansible/ansible/issues/74276 - verifies we don't have a traceback
- name: multipart/form-data with invalid value
uri:
url: https://{{ httpbin_host }}/post
method: POST
body_format: form-multipart
body:
integer_value: 1
register: multipart_invalid
failed_when: 'multipart_invalid.msg != "failed to parse body as form-multipart: value must be a string, or mapping, cannot be type int"'
- name: Validate invalid method
uri:
url: https://{{ httpbin_host }}/anything
method: UNKNOWN
register: result
ignore_errors: yes
- name: Assert invalid method fails
assert:
that:
- result is failure
- result.status == 405
- "'METHOD NOT ALLOWED' in result.msg"
- name: Test client cert auth, no certs
uri:
url: "https://ansible.http.tests/ssl_client_verify"
status_code: 200
return_content: true
register: result
failed_when: result.content != "ansible.http.tests:NONE"
when: has_httptester
- name: Test client cert auth, with certs
uri:
url: "https://ansible.http.tests/ssl_client_verify"
client_cert: "{{ remote_tmp_dir }}/client.pem"
client_key: "{{ remote_tmp_dir }}/client.key"
return_content: true
register: result
failed_when: result.content != "ansible.http.tests:SUCCESS"
when: has_httptester
- name: Test client cert auth, with no validation
uri:
url: "https://fail.ansible.http.tests/ssl_client_verify"
client_cert: "{{ remote_tmp_dir }}/client.pem"
client_key: "{{ remote_tmp_dir }}/client.key"
return_content: true
validate_certs: no
register: result
failed_when: result.content != "ansible.http.tests:SUCCESS"
when: has_httptester
- name: Test client cert auth, with validation and ssl mismatch
uri:
url: "https://fail.ansible.http.tests/ssl_client_verify"
client_cert: "{{ remote_tmp_dir }}/client.pem"
client_key: "{{ remote_tmp_dir }}/client.key"
return_content: true
validate_certs: yes
register: result
failed_when: result is not failed
when: has_httptester
- uri:
url: https://{{ httpbin_host }}/response-headers?Set-Cookie=Foo%3Dbar&Set-Cookie=Baz%3Dqux
register: result
- assert:
that:
- result['set_cookie'] == 'Foo=bar, Baz=qux'
# Python 3.10 and earlier sorts cookies in order of most specific (ie. longest) path first
# items with the same path are reversed from response order
- result['cookies_string'] == 'Baz=qux; Foo=bar'
when: ansible_python_version is version('3.11', '<')
- assert:
that:
- result['set_cookie'] == 'Foo=bar, Baz=qux'
# Python 3.11 no longer sorts cookies.
# See: https://github.com/python/cpython/issues/86232
- result['cookies_string'] == 'Foo=bar; Baz=qux'
when: ansible_python_version is version('3.11', '>=')
- name: Write out netrc template
template:
src: netrc.j2
dest: "{{ remote_tmp_dir }}/netrc"
- name: Test netrc with port
uri:
url: "https://{{ httpbin_host }}:443/basic-auth/user/passwd"
environment:
NETRC: "{{ remote_tmp_dir }}/netrc"
- name: Test JSON POST with src
uri:
url: "https://{{ httpbin_host}}/post"
src: pass0.json
method: POST
return_content: true
body_format: json
register: result
- name: Validate POST with src works
assert:
that:
- result.json.json[0] == 'JSON Test Pattern pass1'
- name: Copy file pass0.json to remote
copy:
src: "{{ role_path }}/files/pass0.json"
dest: "{{ remote_tmp_dir }}/pass0.json"
- name: Test JSON POST with src and remote_src=True
uri:
url: "https://{{ httpbin_host}}/post"
src: "{{ remote_tmp_dir }}/pass0.json"
remote_src: true
method: POST
return_content: true
body_format: json
register: result
- name: Validate POST with src and remote_src=True works
assert:
that:
- result.json.json[0] == 'JSON Test Pattern pass1'
- name: Make request that includes password in JSON keys
uri:
url: "https://{{ httpbin_host}}/get?key-password=value-password"
user: admin
password: password
register: sanitize_keys
- name: assert that keys were sanitized
assert:
that:
- sanitize_keys.json.args['key-********'] == 'value-********'
- name: Test gzip encoding
uri:
url: "https://{{ httpbin_host }}/gzip"
register: result
- name: Validate gzip decoding
assert:
that:
- result.json.gzipped
- name: test gzip encoding no auto decompress
uri:
url: "https://{{ httpbin_host }}/gzip"
decompress: false
register: result
- name: Assert gzip wasn't decompressed
assert:
that:
- result.json is undefined
- name: Create a testing file
copy:
content: "content"
dest: "{{ remote_tmp_dir }}/output"
- name: Download a file from non existing location
uri:
url: http://does/not/exist
dest: "{{ remote_tmp_dir }}/output"
ignore_errors: yes
- name: Save testing file's output
command: "cat {{ remote_tmp_dir }}/output"
register: file_out
- name: Test the testing file was not overwritten
assert:
that:
- "'content' in file_out.stdout"
- name: Clean up
file:
dest: "{{ remote_tmp_dir }}/output"
state: absent
- name: Test follow_redirects=none
import_tasks: redirect-none.yml
- name: Test follow_redirects=safe
import_tasks: redirect-safe.yml
- name: Test follow_redirects=urllib2
import_tasks: redirect-urllib2.yml
- name: Test follow_redirects=all
import_tasks: redirect-all.yml
- name: Check unexpected failures
import_tasks: unexpected-failures.yml
- name: Check return-content
import_tasks: return-content.yml
- name: Test use_gssapi=True
include_tasks:
file: use_gssapi.yml
apply:
environment:
KRB5_CONFIG: '{{ krb5_config }}'
KRB5CCNAME: FILE:{{ remote_tmp_dir }}/krb5.cc
when: krb5_config is defined
- name: Test ciphers
import_tasks: ciphers.yml
- name: Test use_netrc.yml
import_tasks: use_netrc.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,710 |
Unarchive module fails for non-zip archives in upload mode with "Failed to find handler"
|
### Summary
When uploading a non-zip archive (a `.tar.zst` in my case), the `unarchive` module fails to detect the correct unarchive handler for the uploaded file. It fails with the message `Failed to find handler for ...`.
As far as I understand it, the temporary file name for the uploaded file is simply `source`; it lacks any kind of extension even though the source file has a `.tar.zst` extension. Thus, the module can't detect the correct unarchiver (`zstd` in this case) and falls back to the `unzip` unarchiver, which in turn fails to extract files from a zstandard archive.
I could work around this issue by changing my role to first copy the archive via the `copy` module and then use the `unarchive` module to extract it.
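For reference, a minimal sketch of that workaround (the paths, including the `/tmp` staging location, are placeholders; `remote_src: true` is what tells `unarchive` the file is already on the target):
```yaml
- name: Stage the archive on the target with its extension intact
  ansible.builtin.copy:
    src: "/foo/bar.tar.zst"
    dest: "/tmp/bar.tar.zst"

- name: Extract it on the target
  ansible.builtin.unarchive:
    src: "/tmp/bar.tar.zst"
    dest: "/bar/baz"
    remote_src: true
    owner: root
    group: root
```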
Here is the full error message:
```
Failed to find handler for "./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source". Make sure the required command to extract the file is installed.
Command "/usr/bin/tar" could not handle archive: Unable to list files in the archive: tar (child): ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
/usr/bin/tar: Child returned status 2
/usr/bin/tar: Error is not recoverable: exiting now
Command "/usr/bin/tar" could not handle archive: Unable to list files in the archive: /usr/bin/tar: ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source: Cannot open: No such file or directory
/usr/bin/tar: Error is not recoverable: exiting now
Command "/usr/bin/unzip" could not handle archive: End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
note: ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source may be a plain executable, not an archive
unzip: cannot find zipfile directory in one of ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source or
./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source.zip, and cannot find ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source.ZIP, period.
```
### Issue Type
Bug Report
### Component Name
unarchive
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = /Users/<redacted>/configuration-ansible/ansible.cfg
configured module search path = ['/Users/<redacted>/configuration-ansible/library']
ansible python module location = /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/<redacted>/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.3 (main, Apr 7 2023, 21:05:46) [Clang 14.0.0 (clang-1400.0.29.202)] (/opt/homebrew/Cellar/ansible/7.4.0/libexec/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CACHE_PLUGIN(/Users/<redacted>/configuration-ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/Users/<redacted>/configuration-ansible/ansible.cfg) = ./.ansible/factcache
CACHE_PLUGIN_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 86400
COLOR_CHANGED(/Users/<redacted>/configuration-ansible/ansible.cfg) = yellow
COLOR_DEBUG(/Users/<redacted>/configuration-ansible/ansible.cfg) = dark gray
COLOR_DEPRECATE(/Users/<redacted>/configuration-ansible/ansible.cfg) = purple
COLOR_DIFF_ADD(/Users/<redacted>/configuration-ansible/ansible.cfg) = green
COLOR_DIFF_LINES(/Users/<redacted>/configuration-ansible/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/Users/<redacted>/configuration-ansible/ansible.cfg) = red
COLOR_ERROR(/Users/<redacted>/configuration-ansible/ansible.cfg) = red
COLOR_HIGHLIGHT(/Users/<redacted>/configuration-ansible/ansible.cfg) = white
COLOR_OK(/Users/<redacted>/configuration-ansible/ansible.cfg) = green
COLOR_SKIP(/Users/<redacted>/configuration-ansible/ansible.cfg) = cyan
COLOR_UNREACHABLE(/Users/<redacted>/configuration-ansible/ansible.cfg) = red
COLOR_VERBOSE(/Users/<redacted>/configuration-ansible/ansible.cfg) = blue
COLOR_WARN(/Users/<redacted>/configuration-ansible/ansible.cfg) = bright purple
CONFIG_FILE() = /Users/<redacted>/configuration-ansible/ansible.cfg
DEFAULT_FORKS(/Users/<redacted>/configuration-ansible/ansible.cfg) = 60
DEFAULT_GATHERING(/Users/<redacted>/configuration-ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/Users/<redacted>/configuration-ansible/ansible.cfg) = ['/Users/<redacted>/configuration-ansible/inventory.ini']
DEFAULT_LOCAL_TMP(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-55791surhrytk
DEFAULT_LOG_PATH(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/.ansible/ansible.log
DEFAULT_MANAGED_STR(/Users/<redacted>/configuration-ansible/ansible.cfg) = This file is managed by Ansible.%n
template: {file}
date: %Y-%m-%d %H:%M:%S
user: {uid}
host: {host}
DEFAULT_MODULE_NAME(/Users/<redacted>/configuration-ansible/ansible.cfg) = shell
DEFAULT_MODULE_PATH(/Users/<redacted>/configuration-ansible/ansible.cfg) = ['/Users/<redacted>/configuration-ansible/library']
DEFAULT_ROLES_PATH(/Users/<redacted>/configuration-ansible/ansible.cfg) = ['/Users/<redacted>/configuration-ansible/roles']
DEFAULT_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 10
DEFAULT_TRANSPORT(/Users/<redacted>/configuration-ansible/ansible.cfg) = ssh
DEFAULT_VAULT_PASSWORD_FILE(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/vault_password.txt
HOST_KEY_CHECKING(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
PERSISTENT_CONNECT_RETRY_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 15
PERSISTENT_CONNECT_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 30
RETRY_FILES_ENABLED(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/Users/<redacted>/configuration-ansible/ansible.cfg) = never
CACHE:
=====
jsonfile:
________
_timeout(/Users/<redacted>/configuration-ansible/ansible.cfg) = 86400
_uri(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/.ansible/factcache
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
timeout(/Users/<redacted>/configuration-ansible/ansible.cfg) = 10
SHELL:
=====
sh:
__
remote_tmp(/Users/<redacted>/configuration-ansible/ansible.cfg) = ./.ansible/tmp
```
### OS / Environment
macOS Monterey 12.6.5 (21G531)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Copy and extract file
ansible.builtin.unarchive:
copy: true
src: "/foo/bar.tar.zst"
dest: "/bar/baz"
owner: root
group: root
```
### Expected Results
I expected the `unarchive` module to pick the correct unarchiver based on the file extension, analogous to how `tar` does it, independently of whether the source archive is on the controller node or on the target node.
### Actual Results
```console
ansible [core 2.14.4]
config file = /Users/<redacted>/configuration-ansible/ansible.cfg
configured module search path = ['/Users/<redacted>/configuration-ansible/library']
ansible python module location = /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/<redacted>/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.3 (main, Apr 7 2023, 21:05:46) [Clang 14.0.0 (clang-1400.0.29.202)] (/opt/homebrew/Cellar/ansible/7.4.0/libexec/bin/python3.11)
jinja version = 3.1.2
libyaml = True
Using /Users/<redacted>/configuration-ansible/ansible.cfg as config file
host_list declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
script declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
auto declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
yaml declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
Parsed /Users/<redacted>/configuration-ansible/inventory.ini inventory source with ini plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo ./.ansible/tmp `"&& mkdir "` echo ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436 `" && echo ansible-tmp-1683129847.767861-56453-260060231192436="` echo ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436 `" ) && sleep 0'"'"''
<somehost> (0, b'ansible-tmp-1683129847.767861-56453-260060231192436=./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436\n', b'')
Using module file /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible/modules/stat.py
<somehost> PUT /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpgqtp9xr0 TO ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py
<somehost> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' '[somehost]'
<somehost> (0, b'sftp> put /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpgqtp9xr0 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py\n', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'chmod u+x ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py && sleep 0'"'"''
<somehost> (0, b'', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' -tt somehost '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=dxuukajwsxhaooufoilpztdkfumzzovq] password:" -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-dxuukajwsxhaooufoilpztdkfumzzovq ; /usr/bin/python3 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<somehost> (0, b'\r\n\r\n{"changed": false, "stat": {"exists": true, "path": "/bar/baz", "mode": "0700", "isdir": true, "ischr": false, "isblk": false, "isreg": false, "isfifo": false, "islnk": false, "issock": false, "uid": 0, "gid": 0, "size": 4096, "inode": 795747, "dev": 2050, "nlink": 2, "atime": 1683129847.686823, "mtime": 1683129847.686823, "ctime": 1683129847.686823, "wusr": true, "rusr": true, "xusr": true, "wgrp": false, "rgrp": false, "xgrp": false, "woth": false, "roth": false, "xoth": false, "isuid": false, "isgid": false, "blocks": 8, "block_size": 4096, "device_type": 0, "readable": true, "writeable": true, "executable": true, "pw_name": "root", "gr_name": "root", "mimetype": "inode/directory", "charset": "binary", "version": "514766525", "attributes": ["extents"], "attr_flags": "e"}, "invocation": {"module_args": {"path": "/bar/baz", "follow": true, "get_checksum": true, "checksum_algorithm": "sha1", "get_md5": false, "get_mime": true, "get_attributes": true}}}\r\n', b'Shared connection to somehost closed.\r\n')
<somehost> PUT /foo/bar.tar.zst TO ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source
<somehost> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' '[somehost]'
<somehost> (0, b'sftp> put /foo/bar.tar.zst ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source\n', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'chmod u+x ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source && sleep 0'"'"''
<somehost> (0, b'', b'')
Using module file /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible/modules/unarchive.py
<somehost> PUT /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpq44m6md8 TO ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py
<somehost> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' '[somehost]'
<somehost> (0, b'sftp> put /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpq44m6md8 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py\n', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'chmod u+x ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py && sleep 0'"'"''
<somehost> (0, b'', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' -tt somehost '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=xvgbopenlqbmbyczywzmbmtpyhqvltbb] password:" -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-xvgbopenlqbmbyczywzmbmtpyhqvltbb ; /usr/bin/python3 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<somehost> (1, b'\r\n\r\n{"failed": true, "msg": "Failed to find handler for \\"./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source\\". Make sure the required command to extract the file is installed.\\nCommand \\"/usr/bin/tar\\" could not handle archive: Unable to list files in the archive: tar (child): ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\\ntar (child): Error is not recoverable: exiting now\\n/usr/bin/tar: Child returned status 2\\n/usr/bin/tar: Error is not recoverable: exiting now\\n\\nCommand \\"/usr/bin/unzip\\" could not handle archive: End-of-central-directory signature not found. Either this file is not\\n a zipfile, or it constitutes one disk of a multi-part archive. In the\\n latter case the central directory and zipfile comment will be found on\\n the last disk(s) of this archive.\\nnote: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source may be a plain executable, not an archive\\nunzip: cannot find zipfile directory in one of ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source or\\n ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.zip, and cannot find ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.ZIP, period.\\n\\nCommand \\"/usr/bin/tar\\" could not handle archive: Unable to list files in the archive: /usr/bin/tar: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\\n/usr/bin/tar: Error is not recoverable: exiting now\\n", "invocation": {"module_args": {"src": "./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source", "dest": "/bar/baz", "owner": "root", "group": "root", "remote_src": false, "list_files": false, "keep_newer": false, "exclude": [], "include": [], "extra_opts": [], "validate_certs": true, "io_buffer_size": 65536, "copy": true, "decrypt": true, "unsafe_writes": false, "creates": null, "mode": null, "seuser": null, "serole": null, "selevel": null, "setype": null, "attributes": null}}}\r\n', b'Shared connection to somehost closed.\r\n')
<somehost> Failed to connect to the host via ssh: Shared connection to somehost closed.
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'rm -f -r ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ > /dev/null 2>&1 && sleep 0'"'"''
<somehost> (0, b'', b'')
somehost | FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"attributes": null,
"copy": true,
"creates": null,
"decrypt": true,
"dest": "/bar/baz",
"exclude": [],
"extra_opts": [],
"group": "root",
"include": [],
"io_buffer_size": 65536,
"keep_newer": false,
"list_files": false,
"mode": null,
"owner": "root",
"remote_src": false,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source",
"unsafe_writes": false,
"validate_certs": true
}
},
"msg": "Failed to find handler for \"./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source\". Make sure the required command to extract the file is installed.\nCommand \"/usr/bin/tar\" could not handle archive: Unable to list files in the archive: tar (child): ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\ntar (child): Error is not recoverable: exiting now\n/usr/bin/tar: Child returned status 2\n/usr/bin/tar: Error is not recoverable: exiting now\n\nCommand \"/usr/bin/unzip\" could not handle archive: End-of-central-directory signature not found. Either this file is not\n a zipfile, or it constitutes one disk of a multi-part archive. In the\n latter case the central directory and zipfile comment will be found on\n the last disk(s) of this archive.\nnote: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source may be a plain executable, not an archive\nunzip: cannot find zipfile directory in one of ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source or\n ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.zip, and cannot find ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.ZIP, period.\n\nCommand \"/usr/bin/tar\" could not handle archive: Unable to list files in the archive: /usr/bin/tar: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\n/usr/bin/tar: Error is not recoverable: exiting now\n"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80710
|
https://github.com/ansible/ansible/pull/80738
|
86e7cd57b745f13a050f0650197a400ed67fb155
|
09b4cae4fb1d3f8ddf6effd8f3841f1e4ed48114
| 2023-05-03T16:13:04Z |
python
| 2023-05-24T15:56:37Z |
changelogs/fragments/80738-abs-unarachive-src.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,710 |
Unarchive module fails for non-zip archives in upload mode with "Failed to find handler"
|
### Summary
When uploading a non-zip archive (a `.tar.zst` in my case), the `unarchive` module fails to detect the correct unarchive handler for the uploaded file. It fails with the message `Failed to find handler for ...`.
As far as I understand it, the temporary file name for the uploaded file is simply `source`; it lacks any kind of extension even though the source file has a `.tar.zst` extension. Thus, the module can't detect the correct unarchiver (`zstd` in this case) and falls back to the `unzip` unarchiver, which in turn fails to extract files from a zstandard archive.
I could work around this issue by changing my role to first copy the archive via the `copy` module and then use the `unarchive` module to extract it.
Here is the full error message:
```
Failed to find handler for "./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source". Make sure the required command to extract the file is installed.
Command "/usr/bin/tar" could not handle archive: Unable to list files in the archive: tar (child): ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
/usr/bin/tar: Child returned status 2
/usr/bin/tar: Error is not recoverable: exiting now
Command "/usr/bin/tar" could not handle archive: Unable to list files in the archive: /usr/bin/tar: ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source: Cannot open: No such file or directory
/usr/bin/tar: Error is not recoverable: exiting now
Command "/usr/bin/unzip" could not handle archive: End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
note: ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source may be a plain executable, not an archive
unzip: cannot find zipfile directory in one of ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source or
./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source.zip, and cannot find ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source.ZIP, period.
```
### Issue Type
Bug Report
### Component Name
unarchive
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = /Users/<redacted>/configuration-ansible/ansible.cfg
configured module search path = ['/Users/<redacted>/configuration-ansible/library']
ansible python module location = /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/<redacted>/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.3 (main, Apr 7 2023, 21:05:46) [Clang 14.0.0 (clang-1400.0.29.202)] (/opt/homebrew/Cellar/ansible/7.4.0/libexec/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CACHE_PLUGIN(/Users/<redacted>/configuration-ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/Users/<redacted>/configuration-ansible/ansible.cfg) = ./.ansible/factcache
CACHE_PLUGIN_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 86400
COLOR_CHANGED(/Users/<redacted>/configuration-ansible/ansible.cfg) = yellow
COLOR_DEBUG(/Users/<redacted>/configuration-ansible/ansible.cfg) = dark gray
COLOR_DEPRECATE(/Users/<redacted>/configuration-ansible/ansible.cfg) = purple
COLOR_DIFF_ADD(/Users/<redacted>/configuration-ansible/ansible.cfg) = green
COLOR_DIFF_LINES(/Users/<redacted>/configuration-ansible/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/Users/<redacted>/configuration-ansible/ansible.cfg) = red
COLOR_ERROR(/Users/<redacted>/configuration-ansible/ansible.cfg) = red
COLOR_HIGHLIGHT(/Users/<redacted>/configuration-ansible/ansible.cfg) = white
COLOR_OK(/Users/<redacted>/configuration-ansible/ansible.cfg) = green
COLOR_SKIP(/Users/<redacted>/configuration-ansible/ansible.cfg) = cyan
COLOR_UNREACHABLE(/Users/<redacted>/configuration-ansible/ansible.cfg) = red
COLOR_VERBOSE(/Users/<redacted>/configuration-ansible/ansible.cfg) = blue
COLOR_WARN(/Users/<redacted>/configuration-ansible/ansible.cfg) = bright purple
CONFIG_FILE() = /Users/<redacted>/configuration-ansible/ansible.cfg
DEFAULT_FORKS(/Users/<redacted>/configuration-ansible/ansible.cfg) = 60
DEFAULT_GATHERING(/Users/<redacted>/configuration-ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/Users/<redacted>/configuration-ansible/ansible.cfg) = ['/Users/<redacted>/configuration-ansible/inventory.ini']
DEFAULT_LOCAL_TMP(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-55791surhrytk
DEFAULT_LOG_PATH(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/.ansible/ansible.log
DEFAULT_MANAGED_STR(/Users/<redacted>/configuration-ansible/ansible.cfg) = This file is managed by Ansible.%n
template: {file}
date: %Y-%m-%d %H:%M:%S
user: {uid}
host: {host}
DEFAULT_MODULE_NAME(/Users/<redacted>/configuration-ansible/ansible.cfg) = shell
DEFAULT_MODULE_PATH(/Users/<redacted>/configuration-ansible/ansible.cfg) = ['/Users/<redacted>/configuration-ansible/library']
DEFAULT_ROLES_PATH(/Users/<redacted>/configuration-ansible/ansible.cfg) = ['/Users/<redacted>/configuration-ansible/roles']
DEFAULT_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 10
DEFAULT_TRANSPORT(/Users/<redacted>/configuration-ansible/ansible.cfg) = ssh
DEFAULT_VAULT_PASSWORD_FILE(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/vault_password.txt
HOST_KEY_CHECKING(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
PERSISTENT_CONNECT_RETRY_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 15
PERSISTENT_CONNECT_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 30
RETRY_FILES_ENABLED(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/Users/<redacted>/configuration-ansible/ansible.cfg) = never
CACHE:
=====
jsonfile:
________
_timeout(/Users/<redacted>/configuration-ansible/ansible.cfg) = 86400
_uri(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/.ansible/factcache
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
timeout(/Users/<redacted>/configuration-ansible/ansible.cfg) = 10
SHELL:
=====
sh:
__
remote_tmp(/Users/<redacted>/configuration-ansible/ansible.cfg) = ./.ansible/tmp
```
### OS / Environment
macOS Monterey 12.6.5 (21G531)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Copy and extract file
ansible.builtin.unarchive:
copy: true
src: "/foo/bar.tar.zst"
dest: "/bar/baz"
owner: root
group: root
```
### Expected Results
I expected the `unarchive` module to pick the correct unarchiver based on the file extension, analogous to how `tar` does it, independently of whether the source archive is on the controller node or on the target node.
### Actual Results
```console
ansible [core 2.14.4]
config file = /Users/<redacted>/configuration-ansible/ansible.cfg
configured module search path = ['/Users/<redacted>/configuration-ansible/library']
ansible python module location = /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/<redacted>/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.3 (main, Apr 7 2023, 21:05:46) [Clang 14.0.0 (clang-1400.0.29.202)] (/opt/homebrew/Cellar/ansible/7.4.0/libexec/bin/python3.11)
jinja version = 3.1.2
libyaml = True
Using /Users/<redacted>/configuration-ansible/ansible.cfg as config file
host_list declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
script declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
auto declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
yaml declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
Parsed /Users/<redacted>/configuration-ansible/inventory.ini inventory source with ini plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo ./.ansible/tmp `"&& mkdir "` echo ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436 `" && echo ansible-tmp-1683129847.767861-56453-260060231192436="` echo ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436 `" ) && sleep 0'"'"''
<somehost> (0, b'ansible-tmp-1683129847.767861-56453-260060231192436=./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436\n', b'')
Using module file /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible/modules/stat.py
<somehost> PUT /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpgqtp9xr0 TO ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py
<somehost> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' '[somehost]'
<somehost> (0, b'sftp> put /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpgqtp9xr0 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py\n', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'chmod u+x ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py && sleep 0'"'"''
<somehost> (0, b'', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' -tt somehost '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=dxuukajwsxhaooufoilpztdkfumzzovq] password:" -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-dxuukajwsxhaooufoilpztdkfumzzovq ; /usr/bin/python3 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<somehost> (0, b'\r\n\r\n{"changed": false, "stat": {"exists": true, "path": "/bar/baz", "mode": "0700", "isdir": true, "ischr": false, "isblk": false, "isreg": false, "isfifo": false, "islnk": false, "issock": false, "uid": 0, "gid": 0, "size": 4096, "inode": 795747, "dev": 2050, "nlink": 2, "atime": 1683129847.686823, "mtime": 1683129847.686823, "ctime": 1683129847.686823, "wusr": true, "rusr": true, "xusr": true, "wgrp": false, "rgrp": false, "xgrp": false, "woth": false, "roth": false, "xoth": false, "isuid": false, "isgid": false, "blocks": 8, "block_size": 4096, "device_type": 0, "readable": true, "writeable": true, "executable": true, "pw_name": "root", "gr_name": "root", "mimetype": "inode/directory", "charset": "binary", "version": "514766525", "attributes": ["extents"], "attr_flags": "e"}, "invocation": {"module_args": {"path": "/bar/baz", "follow": true, "get_checksum": true, "checksum_algorithm": "sha1", "get_md5": false, "get_mime": true, "get_attributes": true}}}\r\n', b'Shared connection to somehost closed.\r\n')
<somehost> PUT /foo/bar.tar.zst TO ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source
<somehost> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' '[somehost]'
<somehost> (0, b'sftp> put /foo/bar.tar.zst ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source\n', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'chmod u+x ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source && sleep 0'"'"''
<somehost> (0, b'', b'')
Using module file /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible/modules/unarchive.py
<somehost> PUT /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpq44m6md8 TO ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py
<somehost> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' '[somehost]'
<somehost> (0, b'sftp> put /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpq44m6md8 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py\n', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'chmod u+x ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py && sleep 0'"'"''
<somehost> (0, b'', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' -tt somehost '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=xvgbopenlqbmbyczywzmbmtpyhqvltbb] password:" -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-xvgbopenlqbmbyczywzmbmtpyhqvltbb ; /usr/bin/python3 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<somehost> (1, b'\r\n\r\n{"failed": true, "msg": "Failed to find handler for \\"./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source\\". Make sure the required command to extract the file is installed.\\nCommand \\"/usr/bin/tar\\" could not handle archive: Unable to list files in the archive: tar (child): ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\\ntar (child): Error is not recoverable: exiting now\\n/usr/bin/tar: Child returned status 2\\n/usr/bin/tar: Error is not recoverable: exiting now\\n\\nCommand \\"/usr/bin/unzip\\" could not handle archive: End-of-central-directory signature not found. Either this file is not\\n a zipfile, or it constitutes one disk of a multi-part archive. In the\\n latter case the central directory and zipfile comment will be found on\\n the last disk(s) of this archive.\\nnote: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source may be a plain executable, not an archive\\nunzip: cannot find zipfile directory in one of ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source or\\n ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.zip, and cannot find ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.ZIP, period.\\n\\nCommand \\"/usr/bin/tar\\" could not handle archive: Unable to list files in the archive: /usr/bin/tar: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\\n/usr/bin/tar: Error is not recoverable: exiting now\\n", "invocation": {"module_args": {"src": "./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source", "dest": "/bar/baz", "owner": "root", "group": "root", "remote_src": false, "list_files": false, "keep_newer": false, "exclude": [], "include": [], "extra_opts": [], "validate_certs": true, "io_buffer_size": 65536, "copy": true, "decrypt": true, "unsafe_writes": false, "creates": null, "mode": null, "seuser": null, "serole": null, "selevel": null, "setype": null, "attributes": null}}}\r\n', b'Shared connection to somehost closed.\r\n')
<somehost> Failed to connect to the host via ssh: Shared connection to somehost closed.
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'rm -f -r ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ > /dev/null 2>&1 && sleep 0'"'"''
<somehost> (0, b'', b'')
somehost | FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"attributes": null,
"copy": true,
"creates": null,
"decrypt": true,
"dest": "/bar/baz",
"exclude": [],
"extra_opts": [],
"group": "root",
"include": [],
"io_buffer_size": 65536,
"keep_newer": false,
"list_files": false,
"mode": null,
"owner": "root",
"remote_src": false,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source",
"unsafe_writes": false,
"validate_certs": true
}
},
"msg": "Failed to find handler for \"./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source\". Make sure the required command to extract the file is installed.\nCommand \"/usr/bin/tar\" could not handle archive: Unable to list files in the archive: tar (child): ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\ntar (child): Error is not recoverable: exiting now\n/usr/bin/tar: Child returned status 2\n/usr/bin/tar: Error is not recoverable: exiting now\n\nCommand \"/usr/bin/unzip\" could not handle archive: End-of-central-directory signature not found. Either this file is not\n a zipfile, or it constitutes one disk of a multi-part archive. In the\n latter case the central directory and zipfile comment will be found on\n the last disk(s) of this archive.\nnote: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source may be a plain executable, not an archive\nunzip: cannot find zipfile directory in one of ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source or\n ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.zip, and cannot find ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.ZIP, period.\n\nCommand \"/usr/bin/tar\" could not handle archive: Unable to list files in the archive: /usr/bin/tar: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\n/usr/bin/tar: Error is not recoverable: exiting now\n"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80710
|
https://github.com/ansible/ansible/pull/80738
|
86e7cd57b745f13a050f0650197a400ed67fb155
|
09b4cae4fb1d3f8ddf6effd8f3841f1e4ed48114
| 2023-05-03T16:13:04Z |
python
| 2023-05-24T15:56:37Z |
lib/ansible/modules/unarchive.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2013, Dylan Martin <[email protected]>
# Copyright: (c) 2015, Toshio Kuratomi <[email protected]>
# Copyright: (c) 2016, Dag Wieers <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: unarchive
version_added: '1.4'
short_description: Unpacks an archive after (optionally) copying it from the local machine
description:
- The C(unarchive) module unpacks an archive. It will not unpack a compressed file that does not contain an archive.
- By default, it will copy the source file from the local system to the target before unpacking.
- Set C(remote_src=yes) to unpack an archive which already exists on the target.
- If checksum validation is desired, use M(ansible.builtin.get_url) or M(ansible.builtin.uri) instead to fetch the file and set C(remote_src=yes).
- For Windows targets, use the M(community.windows.win_unzip) module instead.
options:
src:
description:
- If C(remote_src=no) (default), local path to archive file to copy to the target server; can be absolute or relative. If C(remote_src=yes), path on the
target server to existing archive file to unpack.
- If C(remote_src=yes) and C(src) contains C(://), the remote machine will download the file from the URL first. (version_added 2.0). This is only for
simple cases, for full download support use the M(ansible.builtin.get_url) module.
type: path
required: true
dest:
description:
- Remote absolute path where the archive should be unpacked.
- The given path must exist. Base directory is not created by this module.
type: path
required: true
copy:
description:
      - If C(true), the file is copied from the local controller to the managed (remote) node; otherwise, the plugin will look for the src archive on the managed machine.
- This option has been deprecated in favor of C(remote_src).
- This option is mutually exclusive with C(remote_src).
type: bool
default: yes
creates:
description:
- If the specified absolute path (file or directory) already exists, this step will B(not) be run.
- The specified absolute path (file or directory) must be below the base path given with C(dest:).
type: path
version_added: "1.6"
io_buffer_size:
description:
- Size of the volatile memory buffer that is used for extracting files from the archive in bytes.
type: int
default: 65536
version_added: "2.12"
list_files:
description:
- If set to True, return the list of files that are contained in the tarball.
type: bool
default: no
version_added: "2.0"
exclude:
description:
- List the directory and file entries that you would like to exclude from the unarchive action.
- Mutually exclusive with C(include).
type: list
default: []
elements: str
version_added: "2.1"
include:
description:
- List of directory and file entries that you would like to extract from the archive. If C(include)
is not empty, only files listed here will be extracted.
- Mutually exclusive with C(exclude).
type: list
default: []
elements: str
version_added: "2.11"
keep_newer:
description:
- Do not replace existing files that are newer than files from the archive.
type: bool
default: no
version_added: "2.1"
extra_opts:
description:
- Specify additional options by passing in an array.
- Each space-separated command-line option should be a new element of the array. See examples.
- Command-line options with multiple elements must use multiple lines in the array, one for each element.
type: list
elements: str
default: []
version_added: "2.1"
remote_src:
description:
- Set to C(true) to indicate the archived file is already on the remote system and not local to the Ansible controller.
- This option is mutually exclusive with C(copy).
type: bool
default: no
version_added: "2.2"
validate_certs:
description:
- This only applies if using a https URL as the source of the file.
      - This should only be set to C(false) on personally controlled sites using a self-signed certificate.
- Prior to 2.2 the code worked as if this was set to C(true).
type: bool
default: yes
version_added: "2.2"
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.flow
- action_common_attributes.files
- decrypt
- files
attributes:
action:
support: full
async:
support: none
bypass_host_loop:
support: none
check_mode:
support: partial
details: Not supported for gzipped tar files.
diff_mode:
support: partial
details: Uses gtar's C(--diff) arg to calculate if changed or not. If this C(arg) is not supported, it will always unpack the archive.
platform:
platforms: posix
safe_file_operations:
support: none
vault:
support: full
todo:
- Re-implement tar support using native tarfile module.
- Re-implement zip support using native zipfile module.
notes:
- Requires C(zipinfo) and C(gtar)/C(unzip) command on target host.
- Requires C(zstd) command on target host to expand I(.tar.zst) files.
- Can handle I(.zip) files using C(unzip) as well as I(.tar), I(.tar.gz), I(.tar.bz2), I(.tar.xz), and I(.tar.zst) files using C(gtar).
  - Does not handle I(.gz), I(.bz2), I(.xz), or I(.zst) files that do not contain a I(.tar) archive.
- Existing files/directories in the destination which are not in the archive
are not touched. This is the same behavior as a normal archive extraction.
- Existing files/directories in the destination which are not in the archive
are ignored for purposes of deciding if the archive should be unpacked or not.
seealso:
- module: community.general.archive
- module: community.general.iso_extract
- module: community.windows.win_unzip
author: Michael DeHaan
'''
EXAMPLES = r'''
- name: Extract foo.tgz into /var/lib/foo
ansible.builtin.unarchive:
src: foo.tgz
dest: /var/lib/foo
- name: Unarchive a file that is already on the remote machine
ansible.builtin.unarchive:
src: /tmp/foo.zip
dest: /usr/local/bin
remote_src: yes
- name: Unarchive a file that needs to be downloaded (added in 2.0)
ansible.builtin.unarchive:
src: https://example.com/example.zip
dest: /usr/local/bin
remote_src: yes
- name: Unarchive a file with extra options
ansible.builtin.unarchive:
src: /tmp/foo.zip
dest: /usr/local/bin
extra_opts:
- --transform
- s/^xxx/yyy/
'''
RETURN = r'''
dest:
description: Path to the destination directory.
returned: always
type: str
sample: /opt/software
files:
description: List of all the files in the archive.
returned: When I(list_files) is True
type: list
sample: '["file1", "file2"]'
gid:
description: Numerical ID of the group that owns the destination directory.
returned: always
type: int
sample: 1000
group:
description: Name of the group that owns the destination directory.
returned: always
type: str
sample: "librarians"
handler:
description: Archive software handler used to extract and decompress the archive.
returned: always
type: str
sample: "TgzArchive"
mode:
description: String that represents the octal permissions of the destination directory.
returned: always
type: str
sample: "0755"
owner:
description: Name of the user that owns the destination directory.
returned: always
type: str
sample: "paul"
size:
  description: The size of the destination directory in bytes. Does not include the size of files or subdirectories contained within.
returned: always
type: int
sample: 36
src:
description:
- The source archive's path.
- If I(src) was a remote web URL, or from the local ansible controller, this shows the temporary location where the download was stored.
returned: always
type: str
sample: "/home/paul/test.tar.gz"
state:
description: State of the destination. Effectively always "directory".
returned: always
type: str
sample: "directory"
uid:
description: Numerical ID of the user that owns the destination directory.
returned: always
type: int
sample: 1000
'''
import binascii
import codecs
import datetime
import fnmatch
import grp
import os
import platform
import pwd
import re
import stat
import time
import traceback
from functools import partial
from zipfile import ZipFile
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.urls import fetch_file
try: # python 3.3+
from shlex import quote # type: ignore[attr-defined]
except ImportError: # older python
from pipes import quote
try: # python 3.2+
from zipfile import BadZipFile # type: ignore[attr-defined]
except ImportError: # older python
from zipfile import BadZipfile as BadZipFile
# String from tar that shows the tar contents are different from the
# filesystem
OWNER_DIFF_RE = re.compile(r': Uid differs$')
GROUP_DIFF_RE = re.compile(r': Gid differs$')
MODE_DIFF_RE = re.compile(r': Mode differs$')
MOD_TIME_DIFF_RE = re.compile(r': Mod time differs$')
# NEWER_DIFF_RE = re.compile(r' is newer or same age.$')
EMPTY_FILE_RE = re.compile(r': : Warning: Cannot stat: No such file or directory$')
MISSING_FILE_RE = re.compile(r': Warning: Cannot stat: No such file or directory$')
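# Matches the nine-character permission field (three r/w/x triplets, allowing
# setuid/setgid/sticky letters) as printed in zipinfo-style listings.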
ZIP_FILE_MODE_RE = re.compile(r'([r-][w-][SsTtx-]){3}')
INVALID_OWNER_RE = re.compile(r': Invalid owner')
INVALID_GROUP_RE = re.compile(r': Invalid group')
def crc32(path, buffer_size):
''' Return a CRC32 checksum of a file '''
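    # Read the file in fixed-size chunks so large archives never need to fit
    # in memory; the final mask normalizes the result to an unsigned 32-bit
    # value regardless of Python version.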
crc = binascii.crc32(b'')
with open(path, 'rb') as f:
for b_block in iter(partial(f.read, buffer_size), b''):
crc = binascii.crc32(b_block, crc)
return crc & 0xffffffff
def shell_escape(string):
''' Quote meta-characters in the args for the unix shell '''
return re.sub(r'([^A-Za-z0-9_])', r'\\\1', string)
class UnarchiveError(Exception):
pass
class ZipArchive(object):
def __init__(self, src, b_dest, file_args, module):
self.src = src
self.b_dest = b_dest
self.file_args = file_args
self.opts = module.params['extra_opts']
self.module = module
self.io_buffer_size = module.params["io_buffer_size"]
self.excludes = module.params['exclude']
self.includes = []
self.include_files = self.module.params['include']
self.cmd_path = None
self.zipinfo_cmd_path = None
self._files_in_archive = []
self._infodict = dict()
self.zipinfoflag = ''
self.binaries = (
('unzip', 'cmd_path'),
('zipinfo', 'zipinfo_cmd_path'),
)
def _permstr_to_octal(self, modestr, umask):
''' Convert a Unix permission string (rw-r--r--) into a mode (0644) '''
revstr = modestr[::-1]
mode = 0
for j in range(0, 3):
for i in range(0, 3):
if revstr[i + 3 * j] in ['r', 'w', 'x', 's', 't']:
mode += 2 ** (i + 3 * j)
# The unzip utility does not support setting the stST bits
# if revstr[i + 3 * j] in ['s', 't', 'S', 'T' ]:
# mode += 2 ** (9 + j)
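        # Worked example (illustrative): 'rw-r--r--' with umask 0 yields
        # 0o644 (420); 'rwxrwxrwx' with umask 0o022 yields 0o755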
return (mode & ~umask)
def _legacy_file_list(self):
rc, out, err = self.module.run_command([self.cmd_path, '-v', self.src])
if rc:
self.module.debug(err)
raise UnarchiveError('Neither python zipfile nor unzip can read %s' % self.src)
for line in out.splitlines()[3:-2]:
fields = line.split(None, 7)
self._files_in_archive.append(fields[7])
self._infodict[fields[7]] = int(fields[6])
def _crc32(self, path):
if self._infodict:
return self._infodict[path]
try:
archive = ZipFile(self.src)
except BadZipFile as e:
if e.args[0].lower().startswith('bad magic number'):
# Python2.4 can't handle zipfiles with > 64K files. Try using
# /usr/bin/unzip instead
self._legacy_file_list()
else:
raise
else:
try:
for item in archive.infolist():
self._infodict[item.filename] = int(item.CRC)
except Exception:
archive.close()
raise UnarchiveError('Unable to list files in the archive')
return self._infodict[path]
@property
def files_in_archive(self):
if self._files_in_archive:
return self._files_in_archive
self._files_in_archive = []
try:
archive = ZipFile(self.src)
except BadZipFile as e:
if e.args[0].lower().startswith('bad magic number'):
# Python2.4 can't handle zipfiles with > 64K files. Try using
# /usr/bin/unzip instead
self._legacy_file_list()
else:
raise
else:
try:
for member in archive.namelist():
if self.include_files:
for include in self.include_files:
if fnmatch.fnmatch(member, include):
self._files_in_archive.append(to_native(member))
else:
exclude_flag = False
if self.excludes:
for exclude in self.excludes:
if fnmatch.fnmatch(member, exclude):
exclude_flag = True
break
if not exclude_flag:
self._files_in_archive.append(to_native(member))
except Exception as e:
archive.close()
raise UnarchiveError('Unable to list files in the archive: %s' % to_native(e))
archive.close()
return self._files_in_archive
def is_unarchived(self):
# BSD unzip doesn't support zipinfo listings with timestamp.
if self.zipinfoflag:
cmd = [self.zipinfo_cmd_path, self.zipinfoflag, '-T', '-s', self.src]
else:
cmd = [self.zipinfo_cmd_path, '-T', '-s', self.src]
if self.excludes:
cmd.extend(['-x', ] + self.excludes)
if self.include_files:
cmd.extend(self.include_files)
rc, out, err = self.module.run_command(cmd)
self.module.debug(err)
old_out = out
diff = ''
out = ''
if rc == 0:
unarchived = True
else:
unarchived = False
# Get some information related to user/group ownership
umask = os.umask(0)
os.umask(umask)
systemtype = platform.system()
# Get current user and group information
groups = os.getgroups()
run_uid = os.getuid()
run_gid = os.getgid()
try:
run_owner = pwd.getpwuid(run_uid).pw_name
except (TypeError, KeyError):
run_owner = run_uid
try:
run_group = grp.getgrgid(run_gid).gr_name
except (KeyError, ValueError, OverflowError):
run_group = run_gid
# Get future user ownership
fut_owner = fut_uid = None
if self.file_args['owner']:
try:
tpw = pwd.getpwnam(self.file_args['owner'])
except KeyError:
try:
tpw = pwd.getpwuid(int(self.file_args['owner']))
except (TypeError, KeyError, ValueError):
tpw = pwd.getpwuid(run_uid)
fut_owner = tpw.pw_name
fut_uid = tpw.pw_uid
else:
try:
fut_owner = run_owner
except Exception:
pass
fut_uid = run_uid
# Get future group ownership
fut_group = fut_gid = None
if self.file_args['group']:
try:
tgr = grp.getgrnam(self.file_args['group'])
except (ValueError, KeyError):
try:
# no need to check isdigit() explicitly here, if we fail to
# parse, the ValueError will be caught.
tgr = grp.getgrgid(int(self.file_args['group']))
except (KeyError, ValueError, OverflowError):
tgr = grp.getgrgid(run_gid)
fut_group = tgr.gr_name
fut_gid = tgr.gr_gid
else:
try:
fut_group = run_group
except Exception:
pass
fut_gid = run_gid
for line in old_out.splitlines():
change = False
pcs = line.split(None, 7)
if len(pcs) != 8:
# Too few fields... probably a piece of the header or footer
continue
# Check first and seventh field in order to skip header/footer
if len(pcs[0]) != 7 and len(pcs[0]) != 10:
continue
if len(pcs[6]) != 15:
continue
# Possible entries:
# -rw-rws--- 1.9 unx 2802 t- defX 11-Aug-91 13:48 perms.2660
# -rw-a-- 1.0 hpf 5358 Tl i4:3 4-Dec-91 11:33 longfilename.hpfs
# -r--ahs 1.1 fat 4096 b- i4:2 14-Jul-91 12:58 EA DATA. SF
# --w------- 1.0 mac 17357 bx i8:2 4-May-92 04:02 unzip.macr
if pcs[0][0] not in 'dl-?' or not frozenset(pcs[0][1:]).issubset('rwxstah-'):
continue
ztype = pcs[0][0]
permstr = pcs[0][1:]
version = pcs[1]
ostype = pcs[2]
size = int(pcs[3])
path = to_text(pcs[7], errors='surrogate_or_strict')
# Skip excluded files
if path in self.excludes:
out += 'Path %s is excluded on request\n' % path
continue
# Itemized change requires L for symlink
if path[-1] == '/':
if ztype != 'd':
err += 'Path %s incorrectly tagged as "%s", but is a directory.\n' % (path, ztype)
ftype = 'd'
elif ztype == 'l':
ftype = 'L'
elif ztype == '-':
ftype = 'f'
elif ztype == '?':
ftype = 'f'
# Some files may be storing FAT permissions, not Unix permissions
# For FAT permissions, we will use a base permissions set of 777 if the item is a directory or has the execute bit set. Otherwise, 666.
# This permission will then be modified by the system UMask.
# BSD always applies the Umask, even to Unix permissions.
# For Unix style permissions on Linux or Mac, we want to use them directly.
# So we set the UMask for this file to zero. That permission set will then be unchanged when calling _permstr_to_octal
if len(permstr) == 6:
if path[-1] == '/':
permstr = 'rwxrwxrwx'
elif permstr == 'rwx---':
permstr = 'rwxrwxrwx'
else:
permstr = 'rw-rw-rw-'
file_umask = umask
elif 'bsd' in systemtype.lower():
file_umask = umask
else:
file_umask = 0
# Test string conformity
if len(permstr) != 9 or not ZIP_FILE_MODE_RE.match(permstr):
raise UnarchiveError('ZIP info perm format incorrect, %s' % permstr)
# DEBUG
# err += "%s%s %10d %s\n" % (ztype, permstr, size, path)
b_dest = os.path.join(self.b_dest, to_bytes(path, errors='surrogate_or_strict'))
try:
st = os.lstat(b_dest)
except Exception:
change = True
self.includes.append(path)
err += 'Path %s is missing\n' % path
diff += '>%s++++++.?? %s\n' % (ftype, path)
continue
# Compare file types
if ftype == 'd' and not stat.S_ISDIR(st.st_mode):
change = True
self.includes.append(path)
err += 'File %s already exists, but not as a directory\n' % path
diff += 'c%s++++++.?? %s\n' % (ftype, path)
continue
if ftype == 'f' and not stat.S_ISREG(st.st_mode):
change = True
unarchived = False
self.includes.append(path)
err += 'Directory %s already exists, but not as a regular file\n' % path
diff += 'c%s++++++.?? %s\n' % (ftype, path)
continue
if ftype == 'L' and not stat.S_ISLNK(st.st_mode):
change = True
self.includes.append(path)
err += 'Directory %s already exists, but not as a symlink\n' % path
diff += 'c%s++++++.?? %s\n' % (ftype, path)
continue
itemized = list('.%s.......??' % ftype)
# Note: this timestamp calculation has a rounding error
# somewhere... unzip and this timestamp can be one second off
# When that happens, we report a change and re-unzip the file
dt_object = datetime.datetime(*(time.strptime(pcs[6], '%Y%m%d.%H%M%S')[0:6]))
timestamp = time.mktime(dt_object.timetuple())
# Compare file timestamps
if stat.S_ISREG(st.st_mode):
if self.module.params['keep_newer']:
if timestamp > st.st_mtime:
change = True
self.includes.append(path)
err += 'File %s is older, replacing file\n' % path
itemized[4] = 't'
elif stat.S_ISREG(st.st_mode) and timestamp < st.st_mtime:
# Add to excluded files, ignore other changes
out += 'File %s is newer, excluding file\n' % path
self.excludes.append(path)
continue
else:
if timestamp != st.st_mtime:
change = True
self.includes.append(path)
err += 'File %s differs in mtime (%f vs %f)\n' % (path, timestamp, st.st_mtime)
itemized[4] = 't'
# Compare file sizes
if stat.S_ISREG(st.st_mode) and size != st.st_size:
change = True
err += 'File %s differs in size (%d vs %d)\n' % (path, size, st.st_size)
itemized[3] = 's'
# Compare file checksums
if stat.S_ISREG(st.st_mode):
crc = crc32(b_dest, self.io_buffer_size)
if crc != self._crc32(path):
change = True
err += 'File %s differs in CRC32 checksum (0x%08x vs 0x%08x)\n' % (path, self._crc32(path), crc)
itemized[2] = 'c'
# Compare file permissions
# Do not handle permissions of symlinks
if ftype != 'L':
# Use the new mode provided with the action, if there is one
if self.file_args['mode']:
if isinstance(self.file_args['mode'], int):
mode = self.file_args['mode']
else:
try:
mode = int(self.file_args['mode'], 8)
                        except Exception:
try:
mode = AnsibleModule._symbolic_mode_to_octal(st, self.file_args['mode'])
except ValueError as e:
self.module.fail_json(path=path, msg="%s" % to_native(e), exception=traceback.format_exc())
# Only special files require no umask-handling
elif ztype == '?':
mode = self._permstr_to_octal(permstr, 0)
else:
mode = self._permstr_to_octal(permstr, file_umask)
if mode != stat.S_IMODE(st.st_mode):
change = True
itemized[5] = 'p'
err += 'Path %s differs in permissions (%o vs %o)\n' % (path, mode, stat.S_IMODE(st.st_mode))
# Compare file user ownership
owner = uid = None
try:
owner = pwd.getpwuid(st.st_uid).pw_name
except (TypeError, KeyError):
uid = st.st_uid
# If we are not root and requested owner is not our user, fail
if run_uid != 0 and (fut_owner != run_owner or fut_uid != run_uid):
raise UnarchiveError('Cannot change ownership of %s to %s, as user %s' % (path, fut_owner, run_owner))
if owner and owner != fut_owner:
change = True
err += 'Path %s is owned by user %s, not by user %s as expected\n' % (path, owner, fut_owner)
itemized[6] = 'o'
elif uid and uid != fut_uid:
change = True
err += 'Path %s is owned by uid %s, not by uid %s as expected\n' % (path, uid, fut_uid)
itemized[6] = 'o'
# Compare file group ownership
group = gid = None
try:
group = grp.getgrgid(st.st_gid).gr_name
except (KeyError, ValueError, OverflowError):
gid = st.st_gid
if run_uid != 0 and (fut_group != run_group or fut_gid != run_gid) and fut_gid not in groups:
raise UnarchiveError('Cannot change group ownership of %s to %s, as user %s' % (path, fut_group, run_owner))
if group and group != fut_group:
change = True
err += 'Path %s is owned by group %s, not by group %s as expected\n' % (path, group, fut_group)
itemized[6] = 'g'
elif gid and gid != fut_gid:
change = True
err += 'Path %s is owned by gid %s, not by gid %s as expected\n' % (path, gid, fut_gid)
itemized[6] = 'g'
# Register changed files and finalize diff output
if change:
if path not in self.includes:
self.includes.append(path)
diff += '%s %s\n' % (''.join(itemized), path)
if self.includes:
unarchived = False
# DEBUG
# out = old_out + out
return dict(unarchived=unarchived, rc=rc, out=out, err=err, cmd=cmd, diff=diff)
def unarchive(self):
cmd = [self.cmd_path, '-o']
if self.opts:
cmd.extend(self.opts)
cmd.append(self.src)
# NOTE: Including (changed) files as arguments is problematic (limits on command line/arguments)
# if self.includes:
# NOTE: Command unzip has this strange behaviour where it expects quoted filenames to also be escaped
# cmd.extend(map(shell_escape, self.includes))
if self.excludes:
cmd.extend(['-x'] + self.excludes)
if self.include_files:
cmd.extend(self.include_files)
cmd.extend(['-d', self.b_dest])
rc, out, err = self.module.run_command(cmd)
return dict(cmd=cmd, rc=rc, out=out, err=err)
def can_handle_archive(self):
missing = []
for b in self.binaries:
try:
setattr(self, b[1], get_bin_path(b[0]))
except ValueError:
missing.append(b[0])
if missing:
return False, "Unable to find required '{missing}' binary in the path.".format(missing="' or '".join(missing))
cmd = [self.cmd_path, '-l', self.src]
rc, out, err = self.module.run_command(cmd)
if rc == 0:
return True, None
self.module.debug(err)
return False, 'Command "%s" could not handle archive: %s' % (self.cmd_path, err)
class TgzArchive(object):
def __init__(self, src, b_dest, file_args, module):
self.src = src
self.b_dest = b_dest
self.file_args = file_args
self.opts = module.params['extra_opts']
self.module = module
if self.module.check_mode:
self.module.exit_json(skipped=True, msg="remote module (%s) does not support check mode when using gtar" % self.module._name)
self.excludes = [path.rstrip('/') for path in self.module.params['exclude']]
self.include_files = self.module.params['include']
self.cmd_path = None
self.tar_type = None
self.zipflag = '-z'
self._files_in_archive = []
def _get_tar_type(self):
cmd = [self.cmd_path, '--version']
(rc, out, err) = self.module.run_command(cmd)
tar_type = None
if out.startswith('bsdtar'):
tar_type = 'bsd'
elif out.startswith('tar') and 'GNU' in out:
tar_type = 'gnu'
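        # Illustrative first lines of `tar --version` output:
        #   "bsdtar 3.5.1 - libarchive ..." -> 'bsd'
        #   "tar (GNU tar) 1.34"            -> 'gnu'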
return tar_type
@property
def files_in_archive(self):
if self._files_in_archive:
return self._files_in_archive
cmd = [self.cmd_path, '--list', '-C', self.b_dest]
if self.zipflag:
cmd.append(self.zipflag)
if self.opts:
cmd.extend(['--show-transformed-names'] + self.opts)
if self.excludes:
cmd.extend(['--exclude=' + f for f in self.excludes])
cmd.extend(['-f', self.src])
if self.include_files:
cmd.extend(self.include_files)
locale = get_best_parsable_locale(self.module)
rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LANGUAGE=locale))
if rc != 0:
self.module.debug(err)
raise UnarchiveError('Unable to list files in the archive: %s' % err)
for filename in out.splitlines():
# Compensate for locale-related problems in gtar output (octal unicode representation) #11348
# filename = filename.decode('string_escape')
filename = to_native(codecs.escape_decode(filename)[0])
# We don't allow absolute filenames. If the user wants to unarchive rooted in "/"
# they need to use "dest: '/'". This follows the defaults for gtar, pax, etc.
# Allowing absolute filenames here also causes bugs: https://github.com/ansible/ansible/issues/21397
if filename.startswith('/'):
filename = filename[1:]
exclude_flag = False
if self.excludes:
for exclude in self.excludes:
if fnmatch.fnmatch(filename, exclude):
exclude_flag = True
break
if not exclude_flag:
self._files_in_archive.append(to_native(filename))
return self._files_in_archive
def is_unarchived(self):
cmd = [self.cmd_path, '--diff', '-C', self.b_dest]
if self.zipflag:
cmd.append(self.zipflag)
if self.opts:
cmd.extend(['--show-transformed-names'] + self.opts)
if self.file_args['owner']:
cmd.append('--owner=' + quote(self.file_args['owner']))
if self.file_args['group']:
cmd.append('--group=' + quote(self.file_args['group']))
if self.module.params['keep_newer']:
cmd.append('--keep-newer-files')
if self.excludes:
cmd.extend(['--exclude=' + f for f in self.excludes])
cmd.extend(['-f', self.src])
if self.include_files:
cmd.extend(self.include_files)
locale = get_best_parsable_locale(self.module)
rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LANGUAGE=locale))
        # Check whether the differences are in something that we're
        # setting anyway, and collect what is actually different
unarchived = True
old_out = out
out = ''
run_uid = os.getuid()
# When unarchiving as a user, or when owner/group/mode is supplied --diff is insufficient
# Only way to be sure is to check request with what is on disk (as we do for zip)
# Leave this up to set_fs_attributes_if_different() instead of inducing a (false) change
for line in old_out.splitlines() + err.splitlines():
            # FIXME: Remove the bogus lines from error-output as well!
            # Ignore bogus errors on empty filenames (when using --strip-components)
if EMPTY_FILE_RE.search(line):
continue
if run_uid == 0 and not self.file_args['owner'] and OWNER_DIFF_RE.search(line):
out += line + '\n'
if run_uid == 0 and not self.file_args['group'] and GROUP_DIFF_RE.search(line):
out += line + '\n'
if not self.file_args['mode'] and MODE_DIFF_RE.search(line):
out += line + '\n'
if MOD_TIME_DIFF_RE.search(line):
out += line + '\n'
if MISSING_FILE_RE.search(line):
out += line + '\n'
if INVALID_OWNER_RE.search(line):
out += line + '\n'
if INVALID_GROUP_RE.search(line):
out += line + '\n'
if out:
unarchived = False
return dict(unarchived=unarchived, rc=rc, out=out, err=err, cmd=cmd)
def unarchive(self):
cmd = [self.cmd_path, '--extract', '-C', self.b_dest]
if self.zipflag:
cmd.append(self.zipflag)
if self.opts:
cmd.extend(['--show-transformed-names'] + self.opts)
if self.file_args['owner']:
cmd.append('--owner=' + quote(self.file_args['owner']))
if self.file_args['group']:
cmd.append('--group=' + quote(self.file_args['group']))
if self.module.params['keep_newer']:
cmd.append('--keep-newer-files')
if self.excludes:
cmd.extend(['--exclude=' + f for f in self.excludes])
cmd.extend(['-f', self.src])
if self.include_files:
cmd.extend(self.include_files)
locale = get_best_parsable_locale(self.module)
rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LANGUAGE=locale))
return dict(cmd=cmd, rc=rc, out=out, err=err)
def can_handle_archive(self):
# Prefer gtar (GNU tar) as it supports the compression options -z, -j and -J
try:
self.cmd_path = get_bin_path('gtar')
except ValueError:
# Fallback to tar
try:
self.cmd_path = get_bin_path('tar')
except ValueError:
return False, "Unable to find required 'gtar' or 'tar' binary in the path"
self.tar_type = self._get_tar_type()
if self.tar_type != 'gnu':
return False, 'Command "%s" detected as tar type %s. GNU tar required.' % (self.cmd_path, self.tar_type)
try:
if self.files_in_archive:
return True, None
except UnarchiveError as e:
return False, 'Command "%s" could not handle archive: %s' % (self.cmd_path, to_native(e))
        # If there were errors, or no files were found in the archive,
        # assume that we weren't able to properly unarchive it
return False, 'Command "%s" found no files in archive. Empty archive files are not supported.' % self.cmd_path
# Class to handle tar files that aren't compressed
class TarArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarArchive, self).__init__(src, b_dest, file_args, module)
# argument to tar
self.zipflag = ''
# Class to handle bzip2 compressed tar files
class TarBzipArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarBzipArchive, self).__init__(src, b_dest, file_args, module)
self.zipflag = '-j'
# Class to handle xz compressed tar files
class TarXzArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarXzArchive, self).__init__(src, b_dest, file_args, module)
self.zipflag = '-J'
# Class to handle zstd compressed tar files
class TarZstdArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarZstdArchive, self).__init__(src, b_dest, file_args, module)
# GNU Tar supports the --use-compress-program option to
# specify which executable to use for
# compression/decompression.
#
# Note: some flavors of BSD tar support --zstd (e.g., FreeBSD
# 12.2), but the TgzArchive class only supports GNU Tar.
self.zipflag = '--use-compress-program=zstd'
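        # Illustrative shell equivalent of the resulting extraction:
        #   tar --use-compress-program=zstd -xf archive.tar.zst -C /dest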
class ZipZArchive(ZipArchive):
def __init__(self, src, b_dest, file_args, module):
super(ZipZArchive, self).__init__(src, b_dest, file_args, module)
self.zipinfoflag = '-Z'
self.binaries = (
('unzip', 'cmd_path'),
('unzip', 'zipinfo_cmd_path'),
)
def can_handle_archive(self):
unzip_available, error_msg = super(ZipZArchive, self).can_handle_archive()
if not unzip_available:
return unzip_available, error_msg
# Ensure unzip -Z is available before we use it in is_unarchive
cmd = [self.zipinfo_cmd_path, self.zipinfoflag]
rc, out, err = self.module.run_command(cmd)
if 'zipinfo' in out.lower():
return True, None
return False, 'Command "unzip -Z" could not handle archive: %s' % err
# try handlers in order and return the one that works or bail if none work
def pick_handler(src, dest, file_args, module):
handlers = [ZipArchive, ZipZArchive, TgzArchive, TarArchive, TarBzipArchive, TarXzArchive, TarZstdArchive]
reasons = set()
for handler in handlers:
obj = handler(src, dest, file_args, module)
(can_handle, reason) = obj.can_handle_archive()
if can_handle:
return obj
reasons.add(reason)
reason_msg = '\n'.join(reasons)
module.fail_json(msg='Failed to find handler for "%s". Make sure the required command to extract the file is installed.\n%s' % (src, reason_msg))
def main():
module = AnsibleModule(
# not checking because of daisy chain to file module
argument_spec=dict(
src=dict(type='path', required=True),
dest=dict(type='path', required=True),
remote_src=dict(type='bool', default=False),
creates=dict(type='path'),
list_files=dict(type='bool', default=False),
keep_newer=dict(type='bool', default=False),
exclude=dict(type='list', elements='str', default=[]),
include=dict(type='list', elements='str', default=[]),
extra_opts=dict(type='list', elements='str', default=[]),
validate_certs=dict(type='bool', default=True),
io_buffer_size=dict(type='int', default=64 * 1024),
# Options that are for the action plugin, but ignored by the module itself.
# We have them here so that the sanity tests pass without ignores, which
# reduces the likelihood of further bugs added.
copy=dict(type='bool', default=True),
decrypt=dict(type='bool', default=True),
),
add_file_common_args=True,
        # check-mode only works for zip files; we cover that later
supports_check_mode=True,
mutually_exclusive=[('include', 'exclude')],
)
src = module.params['src']
dest = module.params['dest']
abs_dest = os.path.abspath(dest)
b_dest = to_bytes(abs_dest, errors='surrogate_or_strict')
if not os.path.isabs(dest):
module.warn("Relative destination path '{dest}' was resolved to absolute path '{abs_dest}'.".format(dest=dest, abs_dest=abs_dest))
remote_src = module.params['remote_src']
file_args = module.load_file_common_arguments(module.params)
# did tar file arrive?
if not os.path.exists(src):
if not remote_src:
module.fail_json(msg="Source '%s' failed to transfer" % src)
# If remote_src=true, and src= contains ://, try and download the file to a temp directory.
elif '://' in src:
src = fetch_file(module, src)
else:
module.fail_json(msg="Source '%s' does not exist" % src)
if not os.access(src, os.R_OK):
module.fail_json(msg="Source '%s' not readable" % src)
# skip working with 0 size archives
try:
if os.path.getsize(src) == 0:
module.fail_json(msg="Invalid archive '%s', the file is 0 bytes" % src)
except Exception as e:
module.fail_json(msg="Source '%s' not readable, %s" % (src, to_native(e)))
# is dest OK to receive tar file?
if not os.path.isdir(b_dest):
module.fail_json(msg="Destination '%s' is not a directory" % dest)
handler = pick_handler(src, b_dest, file_args, module)
res_args = dict(handler=handler.__class__.__name__, dest=dest, src=src)
# do we need to do unpack?
check_results = handler.is_unarchived()
# DEBUG
# res_args['check_results'] = check_results
if module.check_mode:
res_args['changed'] = not check_results['unarchived']
elif check_results['unarchived']:
res_args['changed'] = False
else:
# do the unpack
try:
res_args['extract_results'] = handler.unarchive()
if res_args['extract_results']['rc'] != 0:
module.fail_json(msg="failed to unpack %s to %s" % (src, dest), **res_args)
except IOError:
module.fail_json(msg="failed to unpack %s to %s" % (src, dest), **res_args)
else:
res_args['changed'] = True
# Get diff if required
if check_results.get('diff', False):
res_args['diff'] = {'prepared': check_results['diff']}
# Run only if we found differences (idempotence) or diff was missing
if res_args.get('diff', True) and not module.check_mode:
# do we need to change perms?
top_folders = []
for filename in handler.files_in_archive:
file_args['path'] = os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict'))
try:
res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'], expand=False)
except (IOError, OSError) as e:
module.fail_json(msg="Unexpected error when accessing exploded file: %s" % to_native(e), **res_args)
if '/' in filename:
top_folder_path = filename.split('/')[0]
if top_folder_path not in top_folders:
top_folders.append(top_folder_path)
# make sure top folders have the right permissions
# https://github.com/ansible/ansible/issues/35426
if top_folders:
for f in top_folders:
file_args['path'] = "%s/%s" % (dest, f)
try:
res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'], expand=False)
except (IOError, OSError) as e:
module.fail_json(msg="Unexpected error when accessing exploded file: %s" % to_native(e), **res_args)
if module.params['list_files']:
res_args['files'] = handler.files_in_archive
module.exit_json(**res_args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,710 |
Unarchive module fails for non-zip archives in upload mode with "Failed to find handler"
|
### Summary
When uploading a non-zip archive (a `.tar.zst` in my case), the `unarchive` module fails to detect the correct unarchive handler for the uploaded file. It fails with the message `Failed to find handler for ...`.
As far as I understand it, the temporary file name for the uploaded file is simply `source`; it lacks any kind of extension even though the source file has a `.tar.zst` extension. Because of that it can't detect the correct unarchiver (`zstd` in this case) and falls back to the `unzip` unarchiver, which in turn fails to extract files from a zstandard archive.
I could work around this issue by changing my role to first copy the archive via the `copy` module and then use the `unarchive` module to extract the archive.
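A minimal sketch of that two-step workaround (illustrative only; the `/tmp/bar.tar.zst` staging path is an assumption, not taken from my actual role):
```yaml
- name: Stage the archive on the target host
  ansible.builtin.copy:
    src: "/foo/bar.tar.zst"
    dest: "/tmp/bar.tar.zst"

- name: Extract the staged archive
  ansible.builtin.unarchive:
    remote_src: true
    src: "/tmp/bar.tar.zst"
    dest: "/bar/baz"
    owner: root
    group: root
```
With `remote_src: true` the module operates on the real `.tar.zst` path on the target instead of the extension-less temporary `source` file.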
Here is the full error message:
```
Failed to find handler for "./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source". Make sure the required command to extract the file is installed.
Command "/usr/bin/tar" could not handle archive: Unable to list files in the archive: tar (child): ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
/usr/bin/tar: Child returned status 2
/usr/bin/tar: Error is not recoverable: exiting now
Command "/usr/bin/tar" could not handle archive: Unable to list files in the archive: /usr/bin/tar: ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source: Cannot open: No such file or directory
/usr/bin/tar: Error is not recoverable: exiting now
Command "/usr/bin/unzip" could not handle archive: End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
note: ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source may be a plain executable, not an archive
unzip: cannot find zipfile directory in one of ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source or
./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source.zip, and cannot find ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source.ZIP, period.
```
### Issue Type
Bug Report
### Component Name
unarchive
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = /Users/<redacted>/configuration-ansible/ansible.cfg
configured module search path = ['/Users/<redacted>/configuration-ansible/library']
ansible python module location = /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/<redacted>/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.3 (main, Apr 7 2023, 21:05:46) [Clang 14.0.0 (clang-1400.0.29.202)] (/opt/homebrew/Cellar/ansible/7.4.0/libexec/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CACHE_PLUGIN(/Users/<redacted>/configuration-ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/Users/<redacted>/configuration-ansible/ansible.cfg) = ./.ansible/factcache
CACHE_PLUGIN_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 86400
COLOR_CHANGED(/Users/<redacted>/configuration-ansible/ansible.cfg) = yellow
COLOR_DEBUG(/Users/<redacted>/configuration-ansible/ansible.cfg) = dark gray
COLOR_DEPRECATE(/Users/<redacted>/configuration-ansible/ansible.cfg) = purple
COLOR_DIFF_ADD(/Users/<redacted>/configuration-ansible/ansible.cfg) = green
COLOR_DIFF_LINES(/Users/<redacted>/configuration-ansible/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/Users/<redacted>/configuration-ansible/ansible.cfg) = red
COLOR_ERROR(/Users/<redacted>/configuration-ansible/ansible.cfg) = red
COLOR_HIGHLIGHT(/Users/<redacted>/configuration-ansible/ansible.cfg) = white
COLOR_OK(/Users/<redacted>/configuration-ansible/ansible.cfg) = green
COLOR_SKIP(/Users/<redacted>/configuration-ansible/ansible.cfg) = cyan
COLOR_UNREACHABLE(/Users/<redacted>/configuration-ansible/ansible.cfg) = red
COLOR_VERBOSE(/Users/<redacted>/configuration-ansible/ansible.cfg) = blue
COLOR_WARN(/Users/<redacted>/configuration-ansible/ansible.cfg) = bright purple
CONFIG_FILE() = /Users/<redacted>/configuration-ansible/ansible.cfg
DEFAULT_FORKS(/Users/<redacted>/configuration-ansible/ansible.cfg) = 60
DEFAULT_GATHERING(/Users/<redacted>/configuration-ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/Users/<redacted>/configuration-ansible/ansible.cfg) = ['/Users/<redacted>/configuration-ansible/inventory.ini']
DEFAULT_LOCAL_TMP(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-55791surhrytk
DEFAULT_LOG_PATH(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/.ansible/ansible.log
DEFAULT_MANAGED_STR(/Users/<redacted>/configuration-ansible/ansible.cfg) = This file is managed by Ansible.%n
template: {file}
date: %Y-%m-%d %H:%M:%S
user: {uid}
host: {host}
DEFAULT_MODULE_NAME(/Users/<redacted>/configuration-ansible/ansible.cfg) = shell
DEFAULT_MODULE_PATH(/Users/<redacted>/configuration-ansible/ansible.cfg) = ['/Users/<redacted>/configuration-ansible/library']
DEFAULT_ROLES_PATH(/Users/<redacted>/configuration-ansible/ansible.cfg) = ['/Users/<redacted>/configuration-ansible/roles']
DEFAULT_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 10
DEFAULT_TRANSPORT(/Users/<redacted>/configuration-ansible/ansible.cfg) = ssh
DEFAULT_VAULT_PASSWORD_FILE(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/vault_password.txt
HOST_KEY_CHECKING(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
PERSISTENT_CONNECT_RETRY_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 15
PERSISTENT_CONNECT_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 30
RETRY_FILES_ENABLED(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/Users/<redacted>/configuration-ansible/ansible.cfg) = never
CACHE:
=====
jsonfile:
________
_timeout(/Users/<redacted>/configuration-ansible/ansible.cfg) = 86400
_uri(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/.ansible/factcache
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
timeout(/Users/<redacted>/configuration-ansible/ansible.cfg) = 10
SHELL:
=====
sh:
__
remote_tmp(/Users/<redacted>/configuration-ansible/ansible.cfg) = ./.ansible/tmp
```
### OS / Environment
macOS Monterey 12.6.5 (21G531)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Copy and extract file
ansible.builtin.unarchive:
copy: true
src: "/foo/bar.tar.zst"
dest: "/bar/baz"
owner: root
group: root
```
### Expected Results
I expected the `unarchive` module to pick the correct unarchiver based on the file extension, analogous to how `tar` does it, independently of whether the source archive is on the controller node or on the target node.
### Actual Results
```console
ansible [core 2.14.4]
config file = /Users/<redacted>/configuration-ansible/ansible.cfg
configured module search path = ['/Users/<redacted>/configuration-ansible/library']
ansible python module location = /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/<redacted>/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.3 (main, Apr 7 2023, 21:05:46) [Clang 14.0.0 (clang-1400.0.29.202)] (/opt/homebrew/Cellar/ansible/7.4.0/libexec/bin/python3.11)
jinja version = 3.1.2
libyaml = True
Using /Users/<redacted>/configuration-ansible/ansible.cfg as config file
host_list declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
script declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
auto declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
yaml declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
Parsed /Users/<redacted>/configuration-ansible/inventory.ini inventory source with ini plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo ./.ansible/tmp `"&& mkdir "` echo ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436 `" && echo ansible-tmp-1683129847.767861-56453-260060231192436="` echo ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436 `" ) && sleep 0'"'"''
<somehost> (0, b'ansible-tmp-1683129847.767861-56453-260060231192436=./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436\n', b'')
Using module file /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible/modules/stat.py
<somehost> PUT /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpgqtp9xr0 TO ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py
<somehost> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' '[somehost]'
<somehost> (0, b'sftp> put /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpgqtp9xr0 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py\n', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'chmod u+x ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py && sleep 0'"'"''
<somehost> (0, b'', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' -tt somehost '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=dxuukajwsxhaooufoilpztdkfumzzovq] password:" -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-dxuukajwsxhaooufoilpztdkfumzzovq ; /usr/bin/python3 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<somehost> (0, b'\r\n\r\n{"changed": false, "stat": {"exists": true, "path": "/bar/baz", "mode": "0700", "isdir": true, "ischr": false, "isblk": false, "isreg": false, "isfifo": false, "islnk": false, "issock": false, "uid": 0, "gid": 0, "size": 4096, "inode": 795747, "dev": 2050, "nlink": 2, "atime": 1683129847.686823, "mtime": 1683129847.686823, "ctime": 1683129847.686823, "wusr": true, "rusr": true, "xusr": true, "wgrp": false, "rgrp": false, "xgrp": false, "woth": false, "roth": false, "xoth": false, "isuid": false, "isgid": false, "blocks": 8, "block_size": 4096, "device_type": 0, "readable": true, "writeable": true, "executable": true, "pw_name": "root", "gr_name": "root", "mimetype": "inode/directory", "charset": "binary", "version": "514766525", "attributes": ["extents"], "attr_flags": "e"}, "invocation": {"module_args": {"path": "/bar/baz", "follow": true, "get_checksum": true, "checksum_algorithm": "sha1", "get_md5": false, "get_mime": true, "get_attributes": true}}}\r\n', b'Shared connection to somehost closed.\r\n')
<somehost> PUT /foo/bar.tar.zst TO ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source
<somehost> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' '[somehost]'
<somehost> (0, b'sftp> put /foo/bar.tar.zst ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source\n', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'chmod u+x ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source && sleep 0'"'"''
<somehost> (0, b'', b'')
Using module file /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible/modules/unarchive.py
<somehost> PUT /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpq44m6md8 TO ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py
<somehost> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' '[somehost]'
<somehost> (0, b'sftp> put /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpq44m6md8 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py\n', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'chmod u+x ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py && sleep 0'"'"''
<somehost> (0, b'', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' -tt somehost '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=xvgbopenlqbmbyczywzmbmtpyhqvltbb] password:" -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-xvgbopenlqbmbyczywzmbmtpyhqvltbb ; /usr/bin/python3 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<somehost> (1, b'\r\n\r\n{"failed": true, "msg": "Failed to find handler for \\"./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source\\". Make sure the required command to extract the file is installed.\\nCommand \\"/usr/bin/tar\\" could not handle archive: Unable to list files in the archive: tar (child): ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\\ntar (child): Error is not recoverable: exiting now\\n/usr/bin/tar: Child returned status 2\\n/usr/bin/tar: Error is not recoverable: exiting now\\n\\nCommand \\"/usr/bin/unzip\\" could not handle archive: End-of-central-directory signature not found. Either this file is not\\n a zipfile, or it constitutes one disk of a multi-part archive. In the\\n latter case the central directory and zipfile comment will be found on\\n the last disk(s) of this archive.\\nnote: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source may be a plain executable, not an archive\\nunzip: cannot find zipfile directory in one of ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source or\\n ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.zip, and cannot find ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.ZIP, period.\\n\\nCommand \\"/usr/bin/tar\\" could not handle archive: Unable to list files in the archive: /usr/bin/tar: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\\n/usr/bin/tar: Error is not recoverable: exiting now\\n", "invocation": {"module_args": {"src": "./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source", "dest": "/bar/baz", "owner": "root", "group": "root", "remote_src": false, "list_files": false, "keep_newer": false, "exclude": [], "include": [], "extra_opts": [], "validate_certs": true, "io_buffer_size": 65536, "copy": true, "decrypt": true, "unsafe_writes": false, "creates": null, "mode": null, "seuser": null, "serole": null, "selevel": null, "setype": null, "attributes": null}}}\r\n', b'Shared connection to somehost closed.\r\n')
<somehost> Failed to connect to the host via ssh: Shared connection to somehost closed.
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'rm -f -r ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ > /dev/null 2>&1 && sleep 0'"'"''
<somehost> (0, b'', b'')
somehost | FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"attributes": null,
"copy": true,
"creates": null,
"decrypt": true,
"dest": "/bar/baz",
"exclude": [],
"extra_opts": [],
"group": "root",
"include": [],
"io_buffer_size": 65536,
"keep_newer": false,
"list_files": false,
"mode": null,
"owner": "root",
"remote_src": false,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source",
"unsafe_writes": false,
"validate_certs": true
}
},
"msg": "Failed to find handler for \"./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source\". Make sure the required command to extract the file is installed.\nCommand \"/usr/bin/tar\" could not handle archive: Unable to list files in the archive: tar (child): ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\ntar (child): Error is not recoverable: exiting now\n/usr/bin/tar: Child returned status 2\n/usr/bin/tar: Error is not recoverable: exiting now\n\nCommand \"/usr/bin/unzip\" could not handle archive: End-of-central-directory signature not found. Either this file is not\n a zipfile, or it constitutes one disk of a multi-part archive. In the\n latter case the central directory and zipfile comment will be found on\n the last disk(s) of this archive.\nnote: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source may be a plain executable, not an archive\nunzip: cannot find zipfile directory in one of ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source or\n ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.zip, and cannot find ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.ZIP, period.\n\nCommand \"/usr/bin/tar\" could not handle archive: Unable to list files in the archive: /usr/bin/tar: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\n/usr/bin/tar: Error is not recoverable: exiting now\n"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80710
|
https://github.com/ansible/ansible/pull/80738
|
86e7cd57b745f13a050f0650197a400ed67fb155
|
09b4cae4fb1d3f8ddf6effd8f3841f1e4ed48114
| 2023-05-03T16:13:04Z |
python
| 2023-05-24T15:56:37Z |
test/integration/targets/unarchive/runme.sh
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,710 |
Unarchive module fails for non-zip archives in upload mode with "Failed to find handler"
|
### Summary
When uploading a non-zip archive (a `.tar.zst` in my case), the `unarchive` module fails to detect the correct unarchive handler for the uploaded file. It fails with the message `Failed to find handler for ...`.
As far as I understand it, the temporary file name for the uploaded file is simply `source`; it lacks any kind of extension even though the source file has a `.tar.zst` extension. Because of that it can't detect the correct unarchiver (`zstd` in this case) and falls back to the `unzip` unarchiver, which in turn fails to extract files from a zstandard archive.
I could work around this issue by changing my role to first copy the archive via the `copy` module and then use the `unarchive` module to extract the archive.
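A minimal sketch of that two-step workaround (illustrative only; the `/tmp/bar.tar.zst` staging path is an assumption, not taken from my actual role):
```yaml
- name: Stage the archive on the target host
  ansible.builtin.copy:
    src: "/foo/bar.tar.zst"
    dest: "/tmp/bar.tar.zst"

- name: Extract the staged archive
  ansible.builtin.unarchive:
    remote_src: true
    src: "/tmp/bar.tar.zst"
    dest: "/bar/baz"
    owner: root
    group: root
```
With `remote_src: true` the module operates on the real `.tar.zst` path on the target instead of the extension-less temporary `source` file.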
Here is the full error message:
```
Failed to find handler for "./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source". Make sure the required command to extract the file is installed.
Command "/usr/bin/tar" could not handle archive: Unable to list files in the archive: tar (child): ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
/usr/bin/tar: Child returned status 2
/usr/bin/tar: Error is not recoverable: exiting now
Command "/usr/bin/tar" could not handle archive: Unable to list files in the archive: /usr/bin/tar: ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source: Cannot open: No such file or directory
/usr/bin/tar: Error is not recoverable: exiting now
Command "/usr/bin/unzip" could not handle archive: End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
note: ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source may be a plain executable, not an archive
unzip: cannot find zipfile directory in one of ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source or
./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source.zip, and cannot find ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source.ZIP, period.
```
### Issue Type
Bug Report
### Component Name
unarchive
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = /Users/<redacted>/configuration-ansible/ansible.cfg
configured module search path = ['/Users/<redacted>/configuration-ansible/library']
ansible python module location = /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/<redacted>/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.3 (main, Apr 7 2023, 21:05:46) [Clang 14.0.0 (clang-1400.0.29.202)] (/opt/homebrew/Cellar/ansible/7.4.0/libexec/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CACHE_PLUGIN(/Users/<redacted>/configuration-ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/Users/<redacted>/configuration-ansible/ansible.cfg) = ./.ansible/factcache
CACHE_PLUGIN_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 86400
COLOR_CHANGED(/Users/<redacted>/configuration-ansible/ansible.cfg) = yellow
COLOR_DEBUG(/Users/<redacted>/configuration-ansible/ansible.cfg) = dark gray
COLOR_DEPRECATE(/Users/<redacted>/configuration-ansible/ansible.cfg) = purple
COLOR_DIFF_ADD(/Users/<redacted>/configuration-ansible/ansible.cfg) = green
COLOR_DIFF_LINES(/Users/<redacted>/configuration-ansible/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/Users/<redacted>/configuration-ansible/ansible.cfg) = red
COLOR_ERROR(/Users/<redacted>/configuration-ansible/ansible.cfg) = red
COLOR_HIGHLIGHT(/Users/<redacted>/configuration-ansible/ansible.cfg) = white
COLOR_OK(/Users/<redacted>/configuration-ansible/ansible.cfg) = green
COLOR_SKIP(/Users/<redacted>/configuration-ansible/ansible.cfg) = cyan
COLOR_UNREACHABLE(/Users/<redacted>/configuration-ansible/ansible.cfg) = red
COLOR_VERBOSE(/Users/<redacted>/configuration-ansible/ansible.cfg) = blue
COLOR_WARN(/Users/<redacted>/configuration-ansible/ansible.cfg) = bright purple
CONFIG_FILE() = /Users/<redacted>/configuration-ansible/ansible.cfg
DEFAULT_FORKS(/Users/<redacted>/configuration-ansible/ansible.cfg) = 60
DEFAULT_GATHERING(/Users/<redacted>/configuration-ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/Users/<redacted>/configuration-ansible/ansible.cfg) = ['/Users/<redacted>/configuration-ansible/inventory.ini']
DEFAULT_LOCAL_TMP(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-55791surhrytk
DEFAULT_LOG_PATH(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/.ansible/ansible.log
DEFAULT_MANAGED_STR(/Users/<redacted>/configuration-ansible/ansible.cfg) = This file is managed by Ansible.%n
template: {file}
date: %Y-%m-%d %H:%M:%S
user: {uid}
host: {host}
DEFAULT_MODULE_NAME(/Users/<redacted>/configuration-ansible/ansible.cfg) = shell
DEFAULT_MODULE_PATH(/Users/<redacted>/configuration-ansible/ansible.cfg) = ['/Users/<redacted>/configuration-ansible/library']
DEFAULT_ROLES_PATH(/Users/<redacted>/configuration-ansible/ansible.cfg) = ['/Users/<redacted>/configuration-ansible/roles']
DEFAULT_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 10
DEFAULT_TRANSPORT(/Users/<redacted>/configuration-ansible/ansible.cfg) = ssh
DEFAULT_VAULT_PASSWORD_FILE(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/vault_password.txt
HOST_KEY_CHECKING(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
PERSISTENT_CONNECT_RETRY_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 15
PERSISTENT_CONNECT_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 30
RETRY_FILES_ENABLED(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/Users/<redacted>/configuration-ansible/ansible.cfg) = never
CACHE:
=====
jsonfile:
________
_timeout(/Users/<redacted>/configuration-ansible/ansible.cfg) = 86400
_uri(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/.ansible/factcache
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
timeout(/Users/<redacted>/configuration-ansible/ansible.cfg) = 10
SHELL:
=====
sh:
__
remote_tmp(/Users/<redacted>/configuration-ansible/ansible.cfg) = ./.ansible/tmp
```
### OS / Environment
macOS Monterey 12.6.5 (21G531)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Copy and extract file
ansible.builtin.unarchive:
copy: true
src: "/foo/bar.tar.zst"
dest: "/bar/baz"
owner: root
group: root
```
### Expected Results
I expected the `unarchive` module to pick the correct unarchiver based on the file extension, analogous to how `tar` does it, independently of whether the source archive is on the controller node or on the target node.
### Actual Results
```console
ansible [core 2.14.4]
config file = /Users/<redacted>/configuration-ansible/ansible.cfg
configured module search path = ['/Users/<redacted>/configuration-ansible/library']
ansible python module location = /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/<redacted>/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.3 (main, Apr 7 2023, 21:05:46) [Clang 14.0.0 (clang-1400.0.29.202)] (/opt/homebrew/Cellar/ansible/7.4.0/libexec/bin/python3.11)
jinja version = 3.1.2
libyaml = True
Using /Users/<redacted>/configuration-ansible/ansible.cfg as config file
host_list declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
script declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
auto declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
yaml declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
Parsed /Users/<redacted>/configuration-ansible/inventory.ini inventory source with ini plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo ./.ansible/tmp `"&& mkdir "` echo ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436 `" && echo ansible-tmp-1683129847.767861-56453-260060231192436="` echo ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436 `" ) && sleep 0'"'"''
<somehost> (0, b'ansible-tmp-1683129847.767861-56453-260060231192436=./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436\n', b'')
Using module file /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible/modules/stat.py
<somehost> PUT /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpgqtp9xr0 TO ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py
<somehost> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' '[somehost]'
<somehost> (0, b'sftp> put /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpgqtp9xr0 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py\n', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'chmod u+x ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py && sleep 0'"'"''
<somehost> (0, b'', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' -tt somehost '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=dxuukajwsxhaooufoilpztdkfumzzovq] password:" -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-dxuukajwsxhaooufoilpztdkfumzzovq ; /usr/bin/python3 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<somehost> (0, b'\r\n\r\n{"changed": false, "stat": {"exists": true, "path": "/bar/baz", "mode": "0700", "isdir": true, "ischr": false, "isblk": false, "isreg": false, "isfifo": false, "islnk": false, "issock": false, "uid": 0, "gid": 0, "size": 4096, "inode": 795747, "dev": 2050, "nlink": 2, "atime": 1683129847.686823, "mtime": 1683129847.686823, "ctime": 1683129847.686823, "wusr": true, "rusr": true, "xusr": true, "wgrp": false, "rgrp": false, "xgrp": false, "woth": false, "roth": false, "xoth": false, "isuid": false, "isgid": false, "blocks": 8, "block_size": 4096, "device_type": 0, "readable": true, "writeable": true, "executable": true, "pw_name": "root", "gr_name": "root", "mimetype": "inode/directory", "charset": "binary", "version": "514766525", "attributes": ["extents"], "attr_flags": "e"}, "invocation": {"module_args": {"path": "/bar/baz", "follow": true, "get_checksum": true, "checksum_algorithm": "sha1", "get_md5": false, "get_mime": true, "get_attributes": true}}}\r\n', b'Shared connection to somehost closed.\r\n')
<somehost> PUT /foo/bar.tar.zst TO ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source
<somehost> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' '[somehost]'
<somehost> (0, b'sftp> put /foo/bar.tar.zst ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source\n', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'chmod u+x ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source && sleep 0'"'"''
<somehost> (0, b'', b'')
Using module file /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible/modules/unarchive.py
<somehost> PUT /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpq44m6md8 TO ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py
<somehost> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' '[somehost]'
<somehost> (0, b'sftp> put /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpq44m6md8 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py\n', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'chmod u+x ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py && sleep 0'"'"''
<somehost> (0, b'', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' -tt somehost '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=xvgbopenlqbmbyczywzmbmtpyhqvltbb] password:" -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-xvgbopenlqbmbyczywzmbmtpyhqvltbb ; /usr/bin/python3 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<somehost> (1, b'\r\n\r\n{"failed": true, "msg": "Failed to find handler for \\"./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source\\". Make sure the required command to extract the file is installed.\\nCommand \\"/usr/bin/tar\\" could not handle archive: Unable to list files in the archive: tar (child): ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\\ntar (child): Error is not recoverable: exiting now\\n/usr/bin/tar: Child returned status 2\\n/usr/bin/tar: Error is not recoverable: exiting now\\n\\nCommand \\"/usr/bin/unzip\\" could not handle archive: End-of-central-directory signature not found. Either this file is not\\n a zipfile, or it constitutes one disk of a multi-part archive. In the\\n latter case the central directory and zipfile comment will be found on\\n the last disk(s) of this archive.\\nnote: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source may be a plain executable, not an archive\\nunzip: cannot find zipfile directory in one of ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source or\\n ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.zip, and cannot find ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.ZIP, period.\\n\\nCommand \\"/usr/bin/tar\\" could not handle archive: Unable to list files in the archive: /usr/bin/tar: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\\n/usr/bin/tar: Error is not recoverable: exiting now\\n", "invocation": {"module_args": {"src": "./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source", "dest": "/bar/baz", "owner": "root", "group": "root", "remote_src": false, "list_files": false, "keep_newer": false, "exclude": [], "include": [], "extra_opts": [], "validate_certs": true, "io_buffer_size": 65536, "copy": true, "decrypt": true, "unsafe_writes": false, "creates": null, "mode": null, "seuser": null, "serole": null, "selevel": null, "setype": null, "attributes": null}}}\r\n', b'Shared connection to somehost closed.\r\n')
<somehost> Failed to connect to the host via ssh: Shared connection to somehost closed.
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'rm -f -r ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ > /dev/null 2>&1 && sleep 0'"'"''
<somehost> (0, b'', b'')
somehost | FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"attributes": null,
"copy": true,
"creates": null,
"decrypt": true,
"dest": "/bar/baz",
"exclude": [],
"extra_opts": [],
"group": "root",
"include": [],
"io_buffer_size": 65536,
"keep_newer": false,
"list_files": false,
"mode": null,
"owner": "root",
"remote_src": false,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source",
"unsafe_writes": false,
"validate_certs": true
}
},
"msg": "Failed to find handler for \"./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source\". Make sure the required command to extract the file is installed.\nCommand \"/usr/bin/tar\" could not handle archive: Unable to list files in the archive: tar (child): ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\ntar (child): Error is not recoverable: exiting now\n/usr/bin/tar: Child returned status 2\n/usr/bin/tar: Error is not recoverable: exiting now\n\nCommand \"/usr/bin/unzip\" could not handle archive: End-of-central-directory signature not found. Either this file is not\n a zipfile, or it constitutes one disk of a multi-part archive. In the\n latter case the central directory and zipfile comment will be found on\n the last disk(s) of this archive.\nnote: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source may be a plain executable, not an archive\nunzip: cannot find zipfile directory in one of ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source or\n ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.zip, and cannot find ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.ZIP, period.\n\nCommand \"/usr/bin/tar\" could not handle archive: Unable to list files in the archive: /usr/bin/tar: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\n/usr/bin/tar: Error is not recoverable: exiting now\n"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80710
|
https://github.com/ansible/ansible/pull/80738
|
86e7cd57b745f13a050f0650197a400ed67fb155
|
09b4cae4fb1d3f8ddf6effd8f3841f1e4ed48114
| 2023-05-03T16:13:04Z |
python
| 2023-05-24T15:56:37Z |
test/integration/targets/unarchive/runme.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,710 |
Unarchive module fails for non-zip archives in upload mode with "Failed to find handler"
|
### Summary
When uploading a non-zip archive (a `.tar.zst` in my case) the `unarchive` module fails to detect the correct unarchive handler for the uploaded file. It fails with the message `Failed to find handler for ...`.
As far as I understand it, the temporary file name for the uploaded file is simply `source`; it lacks any kind of extension even though the source file has a `.tar.zst` extension. The module therefore can't detect the correct unarchiver (`zstd` in this case) and falls back to the `unzip` unarchiver, which in turn fails to extract files from a zstandard archive.
I could work around this issue by changing my role to first copy the archive via the `copy` module and then use the `unarchive` module to extract the archive.
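A minimal sketch of that workaround, assuming the paths from the example below; the `/tmp/bar.tar.zst` staging path is an arbitrary choice, not taken from the original role:
```yaml
# Stage the archive with its extension intact, then extract it on the
# target so the unarchive handler detection can see the .tar.zst suffix.
- name: Copy the archive to the target
  ansible.builtin.copy:
    src: /foo/bar.tar.zst
    dest: /tmp/bar.tar.zst

- name: Extract the staged archive on the target
  ansible.builtin.unarchive:
    remote_src: true
    src: /tmp/bar.tar.zst
    dest: /bar/baz
    owner: root
    group: root
```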
Here is the full error message:
```
Failed to find handler for "./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source". Make sure the required command to extract the file is installed.
Command "/usr/bin/tar" could not handle archive: Unable to list files in the archive: tar (child): ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
/usr/bin/tar: Child returned status 2
/usr/bin/tar: Error is not recoverable: exiting now
Command "/usr/bin/tar" could not handle archive: Unable to list files in the archive: /usr/bin/tar: ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source: Cannot open: No such file or directory
/usr/bin/tar: Error is not recoverable: exiting now
Command "/usr/bin/unzip" could not handle archive: End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
note: ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source may be a plain executable, not an archive
unzip: cannot find zipfile directory in one of ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source or
./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source.zip, and cannot find ./.ansible/tmp/ansible-tmp-1683126580.057546-55071-11471984781561/source.ZIP, period.
```
### Issue Type
Bug Report
### Component Name
unarchive
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.4]
config file = /Users/<redacted>/configuration-ansible/ansible.cfg
configured module search path = ['/Users/<redacted>/configuration-ansible/library']
ansible python module location = /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/<redacted>/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.3 (main, Apr 7 2023, 21:05:46) [Clang 14.0.0 (clang-1400.0.29.202)] (/opt/homebrew/Cellar/ansible/7.4.0/libexec/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CACHE_PLUGIN(/Users/<redacted>/configuration-ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/Users/<redacted>/configuration-ansible/ansible.cfg) = ./.ansible/factcache
CACHE_PLUGIN_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 86400
COLOR_CHANGED(/Users/<redacted>/configuration-ansible/ansible.cfg) = yellow
COLOR_DEBUG(/Users/<redacted>/configuration-ansible/ansible.cfg) = dark gray
COLOR_DEPRECATE(/Users/<redacted>/configuration-ansible/ansible.cfg) = purple
COLOR_DIFF_ADD(/Users/<redacted>/configuration-ansible/ansible.cfg) = green
COLOR_DIFF_LINES(/Users/<redacted>/configuration-ansible/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/Users/<redacted>/configuration-ansible/ansible.cfg) = red
COLOR_ERROR(/Users/<redacted>/configuration-ansible/ansible.cfg) = red
COLOR_HIGHLIGHT(/Users/<redacted>/configuration-ansible/ansible.cfg) = white
COLOR_OK(/Users/<redacted>/configuration-ansible/ansible.cfg) = green
COLOR_SKIP(/Users/<redacted>/configuration-ansible/ansible.cfg) = cyan
COLOR_UNREACHABLE(/Users/<redacted>/configuration-ansible/ansible.cfg) = red
COLOR_VERBOSE(/Users/<redacted>/configuration-ansible/ansible.cfg) = blue
COLOR_WARN(/Users/<redacted>/configuration-ansible/ansible.cfg) = bright purple
CONFIG_FILE() = /Users/<redacted>/configuration-ansible/ansible.cfg
DEFAULT_FORKS(/Users/<redacted>/configuration-ansible/ansible.cfg) = 60
DEFAULT_GATHERING(/Users/<redacted>/configuration-ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/Users/<redacted>/configuration-ansible/ansible.cfg) = ['/Users/<redacted>/configuration-ansible/inventory.ini']
DEFAULT_LOCAL_TMP(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-55791surhrytk
DEFAULT_LOG_PATH(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/.ansible/ansible.log
DEFAULT_MANAGED_STR(/Users/<redacted>/configuration-ansible/ansible.cfg) = This file is managed by Ansible.%n
template: {file}
date: %Y-%m-%d %H:%M:%S
user: {uid}
host: {host}
DEFAULT_MODULE_NAME(/Users/<redacted>/configuration-ansible/ansible.cfg) = shell
DEFAULT_MODULE_PATH(/Users/<redacted>/configuration-ansible/ansible.cfg) = ['/Users/<redacted>/configuration-ansible/library']
DEFAULT_ROLES_PATH(/Users/<redacted>/configuration-ansible/ansible.cfg) = ['/Users/<redacted>/configuration-ansible/roles']
DEFAULT_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 10
DEFAULT_TRANSPORT(/Users/<redacted>/configuration-ansible/ansible.cfg) = ssh
DEFAULT_VAULT_PASSWORD_FILE(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/vault_password.txt
HOST_KEY_CHECKING(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
PERSISTENT_CONNECT_RETRY_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 15
PERSISTENT_CONNECT_TIMEOUT(/Users/<redacted>/configuration-ansible/ansible.cfg) = 30
RETRY_FILES_ENABLED(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/Users/<redacted>/configuration-ansible/ansible.cfg) = never
CACHE:
=====
jsonfile:
________
_timeout(/Users/<redacted>/configuration-ansible/ansible.cfg) = 86400
_uri(/Users/<redacted>/configuration-ansible/ansible.cfg) = /Users/<redacted>/configuration-ansible/.ansible/factcache
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
ssh:
___
host_key_checking(/Users/<redacted>/configuration-ansible/ansible.cfg) = False
timeout(/Users/<redacted>/configuration-ansible/ansible.cfg) = 10
SHELL:
=====
sh:
__
remote_tmp(/Users/<redacted>/configuration-ansible/ansible.cfg) = ./.ansible/tmp
```
### OS / Environment
macOS Monterey 12.6.5 (21G531)
### Steps to Reproduce
```yaml
- name: Copy and extract file
ansible.builtin.unarchive:
copy: true
src: "/foo/bar.tar.zst"
dest: "/bar/baz"
owner: root
group: root
```
### Expected Results
I expected the `unarchive` module to pick the correct unarchiver based on the file extension, analogous to how `tar` does it, regardless of whether the source archive is on the controller node or on the target node.
### Actual Results
```console
ansible [core 2.14.4]
config file = /Users/<redacted>/configuration-ansible/ansible.cfg
configured module search path = ['/Users/<redacted>/configuration-ansible/library']
ansible python module location = /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/<redacted>/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.3 (main, Apr 7 2023, 21:05:46) [Clang 14.0.0 (clang-1400.0.29.202)] (/opt/homebrew/Cellar/ansible/7.4.0/libexec/bin/python3.11)
jinja version = 3.1.2
libyaml = True
Using /Users/<redacted>/configuration-ansible/ansible.cfg as config file
host_list declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
script declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
auto declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
yaml declined parsing /Users/<redacted>/configuration-ansible/inventory.ini as it did not pass its verify_file() method
Parsed /Users/<redacted>/configuration-ansible/inventory.ini inventory source with ini plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo ./.ansible/tmp `"&& mkdir "` echo ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436 `" && echo ansible-tmp-1683129847.767861-56453-260060231192436="` echo ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436 `" ) && sleep 0'"'"''
<somehost> (0, b'ansible-tmp-1683129847.767861-56453-260060231192436=./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436\n', b'')
Using module file /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible/modules/stat.py
<somehost> PUT /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpgqtp9xr0 TO ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py
<somehost> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' '[somehost]'
<somehost> (0, b'sftp> put /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpgqtp9xr0 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py\n', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'chmod u+x ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py && sleep 0'"'"''
<somehost> (0, b'', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' -tt somehost '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=dxuukajwsxhaooufoilpztdkfumzzovq] password:" -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-dxuukajwsxhaooufoilpztdkfumzzovq ; /usr/bin/python3 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_stat.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<somehost> (0, b'\r\n\r\n{"changed": false, "stat": {"exists": true, "path": "/bar/baz", "mode": "0700", "isdir": true, "ischr": false, "isblk": false, "isreg": false, "isfifo": false, "islnk": false, "issock": false, "uid": 0, "gid": 0, "size": 4096, "inode": 795747, "dev": 2050, "nlink": 2, "atime": 1683129847.686823, "mtime": 1683129847.686823, "ctime": 1683129847.686823, "wusr": true, "rusr": true, "xusr": true, "wgrp": false, "rgrp": false, "xgrp": false, "woth": false, "roth": false, "xoth": false, "isuid": false, "isgid": false, "blocks": 8, "block_size": 4096, "device_type": 0, "readable": true, "writeable": true, "executable": true, "pw_name": "root", "gr_name": "root", "mimetype": "inode/directory", "charset": "binary", "version": "514766525", "attributes": ["extents"], "attr_flags": "e"}, "invocation": {"module_args": {"path": "/bar/baz", "follow": true, "get_checksum": true, "checksum_algorithm": "sha1", "get_md5": false, "get_mime": true, "get_attributes": true}}}\r\n', b'Shared connection to somehost closed.\r\n')
<somehost> PUT /foo/bar.tar.zst TO ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source
<somehost> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' '[somehost]'
<somehost> (0, b'sftp> put /foo/bar.tar.zst ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source\n', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'chmod u+x ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source && sleep 0'"'"''
<somehost> (0, b'', b'')
Using module file /opt/homebrew/Cellar/ansible/7.4.0/libexec/lib/python3.11/site-packages/ansible/modules/unarchive.py
<somehost> PUT /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpq44m6md8 TO ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py
<somehost> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' '[somehost]'
<somehost> (0, b'sftp> put /Users/<redacted>/configuration-ansible/.ansible/tmp/ansible-local-56403h5fsr_c5/tmpq44m6md8 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py\n', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'chmod u+x ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py && sleep 0'"'"''
<somehost> (0, b'', b'')
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' -tt somehost '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=xvgbopenlqbmbyczywzmbmtpyhqvltbb] password:" -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-xvgbopenlqbmbyczywzmbmtpyhqvltbb ; /usr/bin/python3 ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/AnsiballZ_unarchive.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<somehost> (1, b'\r\n\r\n{"failed": true, "msg": "Failed to find handler for \\"./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source\\". Make sure the required command to extract the file is installed.\\nCommand \\"/usr/bin/tar\\" could not handle archive: Unable to list files in the archive: tar (child): ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\\ntar (child): Error is not recoverable: exiting now\\n/usr/bin/tar: Child returned status 2\\n/usr/bin/tar: Error is not recoverable: exiting now\\n\\nCommand \\"/usr/bin/unzip\\" could not handle archive: End-of-central-directory signature not found. Either this file is not\\n a zipfile, or it constitutes one disk of a multi-part archive. In the\\n latter case the central directory and zipfile comment will be found on\\n the last disk(s) of this archive.\\nnote: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source may be a plain executable, not an archive\\nunzip: cannot find zipfile directory in one of ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source or\\n ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.zip, and cannot find ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.ZIP, period.\\n\\nCommand \\"/usr/bin/tar\\" could not handle archive: Unable to list files in the archive: /usr/bin/tar: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\\n/usr/bin/tar: Error is not recoverable: exiting now\\n", "invocation": {"module_args": {"src": "./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source", "dest": "/bar/baz", "owner": "root", "group": "root", "remote_src": false, "list_files": false, "keep_newer": false, "exclude": [], "include": [], "extra_opts": [], "validate_certs": true, "io_buffer_size": 65536, "copy": true, "decrypt": true, "unsafe_writes": false, "creates": null, "mode": null, "seuser": null, "serole": null, "selevel": null, "setype": null, "attributes": null}}}\r\n', b'Shared connection to somehost closed.\r\n')
<somehost> Failed to connect to the host via ssh: Shared connection to somehost closed.
<somehost> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<somehost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ubuntu"' -o ConnectTimeout=10 -o 'ControlPath="/Users/<redacted>/.ansible/cp/3d6108e6c3"' somehost '/bin/sh -c '"'"'rm -f -r ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/ > /dev/null 2>&1 && sleep 0'"'"''
<somehost> (0, b'', b'')
somehost | FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"attributes": null,
"copy": true,
"creates": null,
"decrypt": true,
"dest": "/bar/baz",
"exclude": [],
"extra_opts": [],
"group": "root",
"include": [],
"io_buffer_size": 65536,
"keep_newer": false,
"list_files": false,
"mode": null,
"owner": "root",
"remote_src": false,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source",
"unsafe_writes": false,
"validate_certs": true
}
},
"msg": "Failed to find handler for \"./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source\". Make sure the required command to extract the file is installed.\nCommand \"/usr/bin/tar\" could not handle archive: Unable to list files in the archive: tar (child): ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\ntar (child): Error is not recoverable: exiting now\n/usr/bin/tar: Child returned status 2\n/usr/bin/tar: Error is not recoverable: exiting now\n\nCommand \"/usr/bin/unzip\" could not handle archive: End-of-central-directory signature not found. Either this file is not\n a zipfile, or it constitutes one disk of a multi-part archive. In the\n latter case the central directory and zipfile comment will be found on\n the last disk(s) of this archive.\nnote: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source may be a plain executable, not an archive\nunzip: cannot find zipfile directory in one of ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source or\n ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.zip, and cannot find ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source.ZIP, period.\n\nCommand \"/usr/bin/tar\" could not handle archive: Unable to list files in the archive: /usr/bin/tar: ./.ansible/tmp/ansible-tmp-1683129847.767861-56453-260060231192436/source: Cannot open: No such file or directory\n/usr/bin/tar: Error is not recoverable: exiting now\n"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80710
|
https://github.com/ansible/ansible/pull/80738
|
86e7cd57b745f13a050f0650197a400ed67fb155
|
09b4cae4fb1d3f8ddf6effd8f3841f1e4ed48114
| 2023-05-03T16:13:04Z |
python
| 2023-05-24T15:56:37Z |
test/integration/targets/unarchive/test_relative_tmp_dir.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,063 |
ansible.builtin.apt_key adds keys to a deprecated location
|
### Summary
When adding non-PPA repositories to Ubuntu 22.04 (Jammy) using `ansible.builtin.apt_key`, the key is added to a deprecated location (`/etc/apt/trusted.gpg`). I've seen a few reported issues where this was fixed for `ansible.builtin.apt_repository`, but that only applies to PPAs, which you can't always use (for example, Docker does not provide an official PPA).
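For reference, a sketch of the `signed-by` approach that avoids the legacy keyring entirely, using the Docker key URL from this report; the `/etc/apt/keyrings/docker.asc` path is an assumption, and the directory must already exist on the target:
```yaml
# Fetch the signing key into a dedicated keyring file instead of
# /etc/apt/trusted.gpg, then pin the repository to that key.
- name: Fetch the Docker signing key
  ansible.builtin.get_url:
    url: https://download.docker.com/linux/ubuntu/gpg
    dest: /etc/apt/keyrings/docker.asc

- name: Add the Docker repository pinned to that key
  ansible.builtin.apt_repository:
    repo: "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
    state: present
```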
### Issue Type
Bug Report
### Component Name
apt
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.6]
config file = /Users/ahrenstein/.ansible.cfg
configured module search path = ['/Users/ahrenstein/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/5.9.0/libexec/lib/python3.10/site-packages/ansible
ansible collection location = /Users/ahrenstein/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.10.4 (main, Apr 26 2022, 19:36:29) [Clang 13.1.6 (clang-1316.0.21.2)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
DEFAULT_REMOTE_USER(/Users/ahrenstein/.ansible.cfg) = root
HOST_KEY_CHECKING(/Users/ahrenstein/.ansible.cfg) = False
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/Users/ahrenstein/.ansible.cfg) = False
remote_user(/Users/ahrenstein/.ansible.cfg) = root
ssh:
___
host_key_checking(/Users/ahrenstein/.ansible.cfg) = False
remote_user(/Users/ahrenstein/.ansible.cfg) = root
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
Ansible Host: macOS 12.4 (ARM)
Target OS: Ubuntu 22.04 (x86_64)
### Steps to Reproduce
Deploy the Docker GPG key and repo
```yaml
- name: Add the Docker repository GPG key
ansible.builtin.apt_key:
url: https://download.docker.com/linux/ubuntu/gpg
when: ansible_distribution == 'Ubuntu'
- name: (Ubuntu) Add the Docker repository
apt_repository:
repo: 'deb https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable'
state: present
- name: Refresh apt cache
apt:
update-cache: yes
```
Run `apt-get update` on the server
### Expected Results
apt cache refreshes without deprecation warnings
### Actual Results
```console
$ apt-get update
Hit:1 http://us.archive.ubuntu.com/ubuntu jammy InRelease
Hit:2 http://us.archive.ubuntu.com/ubuntu jammy-updates InRelease
Get:3 http://us.archive.ubuntu.com/ubuntu jammy-backports InRelease [99.8 kB]
Hit:4 http://us.archive.ubuntu.com/ubuntu jammy-security InRelease
Hit:5 https://download.docker.com/linux/ubuntu jammy InRelease
Fetched 99.8 kB in 0s (247 kB/s)
Reading package lists... Done
W: https://download.docker.com/linux/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78063
|
https://github.com/ansible/ansible/pull/80872
|
9f4dfff69bfc9f33a487e1c7fee2fbce64c62c9c
|
0775e991d51e2fe9c38a4d862cd32a9f704d4915
| 2022-06-15T14:50:01Z |
python
| 2023-05-25T15:37:59Z |
lib/ansible/modules/apt_key.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2012, Jayson Vantuyl <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: apt_key
author:
- Jayson Vantuyl (@jvantuyl)
version_added: "1.0"
short_description: Add or remove an apt key
description:
- Add or remove an I(apt) key, optionally downloading it.
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: debian
notes:
- The apt-key command has been deprecated and suggests to 'manage keyring files in trusted.gpg.d instead'. See the Debian wiki for details.
This module is kept for backwards compatibility for systems that still use apt-key as the main way to manage apt repository keys.
- As a sanity check, downloaded key id must match the one specified.
- "Use full fingerprint (40 characters) key ids to avoid key collisions.
To generate a full-fingerprint imported key: C(apt-key adv --list-public-keys --with-fingerprint --with-colons)."
- If you specify both the key id and the URL with C(state=present), the task can verify or add the key as needed.
- Adding a new key requires an apt cache update (e.g. using the M(ansible.builtin.apt) module's update_cache option).
requirements:
- gpg
options:
id:
description:
- The identifier of the key.
- Including this allows check mode to correctly report the changed state.
            - If specifying a subkey's id, be aware that apt-key does not understand how to remove keys via a subkey id. Specify the primary key's id instead.
- This parameter is required when C(state) is set to C(absent).
type: str
data:
description:
- The keyfile contents to add to the keyring.
type: str
file:
description:
- The path to a keyfile on the remote server to add to the keyring.
type: path
keyring:
description:
- The full path to specific keyring file in C(/etc/apt/trusted.gpg.d/).
type: path
version_added: "1.3"
url:
description:
- The URL to retrieve key from.
type: str
keyserver:
description:
- The keyserver to retrieve key from.
type: str
version_added: "1.6"
state:
description:
- Ensures that the key is present (added) or absent (revoked).
type: str
choices: [ absent, present ]
default: present
validate_certs:
description:
- If C(false), SSL certificates for the target url will not be validated. This should only be used
on personally controlled sites using self-signed certificates.
type: bool
default: 'yes'
'''
EXAMPLES = '''
- name: One way to avoid apt_key once it is removed from your distro
block:
    - name: somerepo | no apt key
ansible.builtin.get_url:
url: https://download.example.com/linux/ubuntu/gpg
dest: /etc/apt/keyrings/somerepo.asc
- name: somerepo | apt source
ansible.builtin.apt_repository:
repo: "deb [arch=amd64 signed-by=/etc/apt/keyrings/myrepo.asc] https://download.example.com/linux/ubuntu {{ ansible_distribution_release }} stable"
state: present
- name: Add an apt key by id from a keyserver
ansible.builtin.apt_key:
keyserver: keyserver.ubuntu.com
id: 36A1D7869245C8950F966E92D8576A8BA88D21E9
- name: Add an Apt signing key, uses whichever key is at the URL
ansible.builtin.apt_key:
url: https://ftp-master.debian.org/keys/archive-key-6.0.asc
state: present
- name: Add an Apt signing key, will not download if present
ansible.builtin.apt_key:
id: 9FED2BCBDCD29CDF762678CBAED4B06F473041FA
url: https://ftp-master.debian.org/keys/archive-key-6.0.asc
state: present
- name: Remove an Apt specific signing key, leading 0x is valid
ansible.builtin.apt_key:
id: 0x9FED2BCBDCD29CDF762678CBAED4B06F473041FA
state: absent
# Use armored file since utf-8 string is expected. Must be of "PGP PUBLIC KEY BLOCK" type.
- name: Add a key from a file on the Ansible server
ansible.builtin.apt_key:
data: "{{ lookup('ansible.builtin.file', 'apt.asc') }}"
state: present
- name: Add an Apt signing key to a specific keyring file
ansible.builtin.apt_key:
id: 9FED2BCBDCD29CDF762678CBAED4B06F473041FA
url: https://ftp-master.debian.org/keys/archive-key-6.0.asc
keyring: /etc/apt/trusted.gpg.d/debian.gpg
- name: Add Apt signing key on remote server to keyring
ansible.builtin.apt_key:
id: 9FED2BCBDCD29CDF762678CBAED4B06F473041FA
file: /tmp/apt.gpg
state: present
'''
RETURN = '''
after:
description: List of apt key ids or fingerprints after any modification
returned: on change
type: list
sample: ["D8576A8BA88D21E9", "3B4FE6ACC0B21F32", "D94AA3F0EFE21092", "871920D1991BC93C"]
before:
    description: List of apt key ids or fingerprints before any modifications
returned: always
type: list
sample: ["3B4FE6ACC0B21F32", "D94AA3F0EFE21092", "871920D1991BC93C"]
fp:
description: Fingerprint of the key to import
returned: always
type: str
sample: "D8576A8BA88D21E9"
id:
description: key id from source
returned: always
type: str
sample: "36A1D7869245C8950F966E92D8576A8BA88D21E9"
key_id:
    description: calculated key id, it should be the same as 'id', but can be different
returned: always
type: str
sample: "36A1D7869245C8950F966E92D8576A8BA88D21E9"
short_id:
description: calculated short key id
returned: always
type: str
sample: "A88D21E9"
'''
import os
# FIXME: standardize into module_common
from traceback import format_exc
from ansible.module_utils.common.text.converters import to_native
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.urls import fetch_url
apt_key_bin = None
gpg_bin = None
locale = None
def lang_env(module):
if not hasattr(lang_env, 'result'):
locale = get_best_parsable_locale(module)
lang_env.result = dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale)
return lang_env.result
def find_needed_binaries(module):
global apt_key_bin
global gpg_bin
apt_key_bin = module.get_bin_path('apt-key', required=True)
gpg_bin = module.get_bin_path('gpg', required=True)
def add_http_proxy(cmd):
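    # Proxy environment variables are checked in precedence order; the first
    # non-empty value found is appended as a keyserver option.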
for envvar in ('HTTPS_PROXY', 'https_proxy', 'HTTP_PROXY', 'http_proxy'):
proxy = os.environ.get(envvar)
if proxy:
break
if proxy:
cmd += ' --keyserver-options http-proxy=%s' % proxy
return cmd
def parse_key_id(key_id):
"""validate the key_id and break it into segments
:arg key_id: The key_id as supplied by the user. A valid key_id will be
8, 16, or more hexadecimal chars with an optional leading ``0x``.
:returns: The portion of key_id suitable for apt-key del, the portion
suitable for comparisons with --list-public-keys, and the portion that
can be used with --recv-key. If key_id is long enough, these will be
the last 8 characters of key_id, the last 16 characters, and all of
key_id. If key_id is not long enough, some of the values will be the
same.
* apt-key del <= 1.10 has a bug with key_id != 8 chars
* apt-key adv --list-public-keys prints 16 chars
* apt-key adv --recv-key can take more chars
"""
# Make sure the key_id is valid hexadecimal
int(to_native(key_id), 16)
key_id = key_id.upper()
if key_id.startswith('0X'):
key_id = key_id[2:]
key_id_len = len(key_id)
if (key_id_len != 8 and key_id_len != 16) and key_id_len <= 16:
raise ValueError('key_id must be 8, 16, or 16+ hexadecimal characters in length')
short_key_id = key_id[-8:]
fingerprint = key_id
if key_id_len > 16:
fingerprint = key_id[-16:]
return short_key_id, fingerprint, key_id
def parse_output_for_keys(output, short_format=False):
found = []
lines = to_native(output).split('\n')
for line in lines:
if (line.startswith("pub") or line.startswith("sub")) and "expired" not in line:
try:
# apt key format
tokens = line.split()
code = tokens[1]
(len_type, real_code) = code.split("/")
except (IndexError, ValueError):
# gpg format
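                # e.g. 'pub:-:4096:1:AED4B06F473041FA:...' -- the fifth
                # colon-separated field (index 4) holds the key id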
try:
tokens = line.split(':')
real_code = tokens[4]
except (IndexError, ValueError):
# invalid line, skip
continue
found.append(real_code)
if found and short_format:
found = shorten_key_ids(found)
return found
def all_keys(module, keyring, short_format):
if keyring is not None:
cmd = "%s --keyring %s adv --list-public-keys --keyid-format=long" % (apt_key_bin, keyring)
else:
cmd = "%s adv --list-public-keys --keyid-format=long" % apt_key_bin
(rc, out, err) = module.run_command(cmd)
if rc != 0:
module.fail_json(msg="Unable to list public keys", cmd=cmd, rc=rc, stdout=out, stderr=err)
return parse_output_for_keys(out, short_format)
def shorten_key_ids(key_id_list):
"""
Takes a list of key ids, and converts them to the 'short' format,
by reducing them to their last 8 characters.
"""
short = []
for key in key_id_list:
short.append(key[-8:])
return short
def download_key(module, url):
try:
# note: validate_certs and other args are pulled from module directly
rsp, info = fetch_url(module, url, use_proxy=True)
if info['status'] != 200:
module.fail_json(msg="Failed to download key at %s: %s" % (url, info['msg']))
return rsp.read()
except Exception:
module.fail_json(msg="error getting key id from url: %s" % url, traceback=format_exc())
def get_key_id_from_file(module, filename, data=None):
native_data = to_native(data)
is_armored = native_data.find("-----BEGIN PGP PUBLIC KEY BLOCK-----") >= 0
key = None
cmd = [gpg_bin, '--with-colons', filename]
(rc, out, err) = module.run_command(cmd, environ_update=lang_env(module), data=(native_data if is_armored else data), binary_data=not is_armored)
if rc != 0:
module.fail_json(msg="Unable to extract key from '%s'" % ('inline data' if data is not None else filename), stdout=out, stderr=err)
keys = parse_output_for_keys(out)
# assume we only want first key?
if keys:
key = keys[0]
return key
def get_key_id_from_data(module, data):
return get_key_id_from_file(module, '-', data)
def import_key(module, keyring, keyserver, key_id):
if keyring:
cmd = "%s --keyring %s adv --no-tty --keyserver %s" % (apt_key_bin, keyring, keyserver)
else:
cmd = "%s adv --no-tty --keyserver %s" % (apt_key_bin, keyserver)
# check for proxy
cmd = add_http_proxy(cmd)
# add recv argument as last one
cmd = "%s --recv %s" % (cmd, key_id)
for retry in range(5):
(rc, out, err) = module.run_command(cmd, environ_update=lang_env(module))
if rc == 0:
break
else:
# Out of retries
if rc == 2 and 'not found on keyserver' in out:
msg = 'Key %s not found on keyserver %s' % (key_id, keyserver)
module.fail_json(cmd=cmd, msg=msg, forced_environment=lang_env(module))
else:
msg = "Error fetching key %s from keyserver: %s" % (key_id, keyserver)
module.fail_json(cmd=cmd, msg=msg, forced_environment=lang_env(module), rc=rc, stdout=out, stderr=err)
return True
def add_key(module, keyfile, keyring, data=None):
if data is not None:
if keyring:
cmd = "%s --keyring %s add -" % (apt_key_bin, keyring)
else:
cmd = "%s add -" % apt_key_bin
(rc, out, err) = module.run_command(cmd, data=data, binary_data=True)
if rc != 0:
module.fail_json(
msg="Unable to add a key from binary data",
cmd=cmd,
rc=rc,
stdout=out,
stderr=err,
)
else:
if keyring:
cmd = "%s --keyring %s add %s" % (apt_key_bin, keyring, keyfile)
else:
cmd = "%s add %s" % (apt_key_bin, keyfile)
(rc, out, err) = module.run_command(cmd)
if rc != 0:
module.fail_json(
msg="Unable to add a key from file %s" % (keyfile),
cmd=cmd,
rc=rc,
keyfile=keyfile,
stdout=out,
stderr=err,
)
return True
def remove_key(module, key_id, keyring):
if keyring:
cmd = '%s --keyring %s del %s' % (apt_key_bin, keyring, key_id)
else:
cmd = '%s del %s' % (apt_key_bin, key_id)
(rc, out, err) = module.run_command(cmd)
if rc != 0:
module.fail_json(
msg="Unable to remove a key with id %s" % (key_id),
cmd=cmd,
rc=rc,
key_id=key_id,
stdout=out,
stderr=err,
)
return True
def main():
module = AnsibleModule(
argument_spec=dict(
id=dict(type='str'),
url=dict(type='str'),
data=dict(type='str'),
file=dict(type='path'),
keyring=dict(type='path'),
validate_certs=dict(type='bool', default=True),
keyserver=dict(type='str'),
state=dict(type='str', default='present', choices=['absent', 'present']),
),
supports_check_mode=True,
mutually_exclusive=(('data', 'file', 'keyserver', 'url'),),
)
# parameters
key_id = module.params['id']
url = module.params['url']
data = module.params['data']
filename = module.params['file']
keyring = module.params['keyring']
state = module.params['state']
keyserver = module.params['keyserver']
# internal vars
short_format = False
short_key_id = None
fingerprint = None
error_no_error = "apt-key did not return an error, but %s (check that the id is correct and *not* a subkey)"
# ensure we have requirements met
find_needed_binaries(module)
# initialize result dict
r = {'changed': False}
if not key_id:
if keyserver:
module.fail_json(msg="Missing key_id, required with keyserver.")
if url:
data = download_key(module, url)
if filename:
key_id = get_key_id_from_file(module, filename)
elif data:
key_id = get_key_id_from_data(module, data)
r['id'] = key_id
try:
short_key_id, fingerprint, key_id = parse_key_id(key_id)
r['short_id'] = short_key_id
r['fp'] = fingerprint
r['key_id'] = key_id
except ValueError:
module.fail_json(msg='Invalid key_id', **r)
if not fingerprint:
# invalid key should fail well before this point, but JIC ...
module.fail_json(msg="Unable to continue as we could not extract a valid fingerprint to compare against existing keys.", **r)
if len(key_id) == 8:
short_format = True
# get existing keys to verify if we need to change
r['before'] = keys = all_keys(module, keyring, short_format)
keys2 = []
if state == 'present':
if (short_format and short_key_id not in keys) or (not short_format and fingerprint not in keys):
r['changed'] = True
if not module.check_mode:
if filename:
add_key(module, filename, keyring)
elif keyserver:
import_key(module, keyring, keyserver, key_id)
elif data:
# this also takes care of url if key_id was not provided
add_key(module, "-", keyring, data)
elif url:
# we hit this branch only if key_id is supplied with url
data = download_key(module, url)
add_key(module, "-", keyring, data)
else:
module.fail_json(msg="No key to add ... how did i get here?!?!", **r)
# verify it got added
r['after'] = keys2 = all_keys(module, keyring, short_format)
if (short_format and short_key_id not in keys2) or (not short_format and fingerprint not in keys2):
module.fail_json(msg=error_no_error % 'failed to add the key', **r)
elif state == 'absent':
if not key_id:
module.fail_json(msg="key is required to remove a key", **r)
if fingerprint in keys:
r['changed'] = True
if not module.check_mode:
# we use the "short" id: key_id[-8:], short_format=True
# it's a workaround for https://bugs.launchpad.net/ubuntu/+source/apt/+bug/1481871
if short_key_id is not None and remove_key(module, short_key_id, keyring):
r['after'] = keys2 = all_keys(module, keyring, short_format)
if fingerprint in keys2:
module.fail_json(msg=error_no_error % 'the key was not removed', **r)
else:
module.fail_json(msg="error removing key_id", **r)
module.exit_json(**r)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,063 |
ansible.builtin.apt_key adds keys to a deprecated location
|
### Summary
When adding non-PPA repositories to Ubuntu 22.04 (Jammy) using `ansible.builtin.apt_key`, the key is added to a deprecated location (`/etc/apt/trusted.gpg`). I've seen a few reported issues where this was fixed for `ansible.builtin.apt_repository`, but that only applies to PPAs, which you can't always use (for example, Docker does not provide an official PPA).
### Issue Type
Bug Report
### Component Name
apt
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.6]
config file = /Users/ahrenstein/.ansible.cfg
configured module search path = ['/Users/ahrenstein/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/5.9.0/libexec/lib/python3.10/site-packages/ansible
ansible collection location = /Users/ahrenstein/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.10.4 (main, Apr 26 2022, 19:36:29) [Clang 13.1.6 (clang-1316.0.21.2)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
DEFAULT_REMOTE_USER(/Users/ahrenstein/.ansible.cfg) = root
HOST_KEY_CHECKING(/Users/ahrenstein/.ansible.cfg) = False
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/Users/ahrenstein/.ansible.cfg) = False
remote_user(/Users/ahrenstein/.ansible.cfg) = root
ssh:
___
host_key_checking(/Users/ahrenstein/.ansible.cfg) = False
remote_user(/Users/ahrenstein/.ansible.cfg) = root
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
Ansible Host: macOS 12.4 (ARM)
Target OS: Ubuntu 22.04 (x86_64)
### Steps to Reproduce
Deploy the Docker GPG key and repo
```yaml
- name: Add the Docker repository GPG key
ansible.builtin.apt_key:
url: https://download.docker.com/linux/ubuntu/gpg
when: ansible_distribution == 'Ubuntu'
- name: (Ubuntu) Add the Docker repository
apt_repository:
repo: 'deb https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable'
state: present
- name: Refresh apt cache
apt:
update-cache: yes
```
Run `apt-get update` on the server
### Expected Results
apt cache refreshes without deprecation warnings
### Actual Results
```console
$ apt-get update
Hit:1 http://us.archive.ubuntu.com/ubuntu jammy InRelease
Hit:2 http://us.archive.ubuntu.com/ubuntu jammy-updates InRelease
Get:3 http://us.archive.ubuntu.com/ubuntu jammy-backports InRelease [99.8 kB]
Hit:4 http://us.archive.ubuntu.com/ubuntu jammy-security InRelease
Hit:5 https://download.docker.com/linux/ubuntu jammy InRelease
Fetched 99.8 kB in 0s (247 kB/s)
Reading package lists... Done
W: https://download.docker.com/linux/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78063
|
https://github.com/ansible/ansible/pull/80872
|
9f4dfff69bfc9f33a487e1c7fee2fbce64c62c9c
|
0775e991d51e2fe9c38a4d862cd32a9f704d4915
| 2022-06-15T14:50:01Z |
python
| 2023-05-25T15:37:59Z |
lib/ansible/modules/apt_repository.py
|
# encoding: utf-8
# Copyright: (c) 2012, Matt Wright <[email protected]>
# Copyright: (c) 2013, Alexander Saltanov <[email protected]>
# Copyright: (c) 2014, Rutger Spiertz <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: apt_repository
short_description: Add and remove APT repositories
description:
    - Add or remove an APT repository in Ubuntu and Debian.
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: debian
notes:
- This module supports Debian Squeeze (version 6) as well as its successors and derivatives.
options:
repo:
description:
- A source string for the repository.
type: str
required: true
state:
description:
- A source string state.
type: str
choices: [ absent, present ]
default: "present"
mode:
description:
- The octal mode for newly created files in sources.list.d.
- Default is what system uses (probably 0644).
type: raw
version_added: "1.6"
update_cache:
description:
- Run the equivalent of C(apt-get update) when a change occurs. Cache updates are run after making changes.
type: bool
default: "yes"
aliases: [ update-cache ]
update_cache_retries:
description:
- Number of retries if the cache update fails. Also see I(update_cache_retry_max_delay).
type: int
default: 5
version_added: '2.10'
update_cache_retry_max_delay:
description:
- Use an exponential backoff delay for each retry (see I(update_cache_retries)) up to this max delay in seconds.
type: int
default: 12
version_added: '2.10'
validate_certs:
description:
- If C(false), SSL certificates for the target repo will not be validated. This should only be used
on personally controlled sites using self-signed certificates.
type: bool
default: 'yes'
version_added: '1.8'
filename:
description:
- Sets the name of the source list file in sources.list.d.
Defaults to a file name based on the repository source url.
The .list extension will be automatically added.
type: str
version_added: '2.1'
codename:
description:
- Override the distribution codename to use for PPA repositories.
Should usually only be set when working with a PPA on
a non-Ubuntu target (for example, Debian or Mint).
type: str
version_added: '2.3'
install_python_apt:
description:
- Whether to automatically try to install the Python apt library or not, if it is not already installed.
Without this library, the module does not work.
- Runs C(apt-get install python-apt) for Python 2, and C(apt-get install python3-apt) for Python 3.
- Only works with the system Python 2 or Python 3. If you are using a Python on the remote that is not
the system Python, set I(install_python_apt=false) and ensure that the Python apt library
for your Python version is installed some other way.
type: bool
default: true
author:
- Alexander Saltanov (@sashka)
version_added: "0.7"
requirements:
- python-apt (python 2)
- python3-apt (python 3)
- apt-key or gpg
'''
EXAMPLES = '''
- name: Add specified repository into sources list
ansible.builtin.apt_repository:
repo: deb http://archive.canonical.com/ubuntu hardy partner
state: present
- name: Add specified repository into sources list using specified filename
ansible.builtin.apt_repository:
repo: deb http://dl.google.com/linux/chrome/deb/ stable main
state: present
filename: google-chrome
- name: Add source repository into sources list
ansible.builtin.apt_repository:
repo: deb-src http://archive.canonical.com/ubuntu hardy partner
state: present
- name: Remove specified repository from sources list
ansible.builtin.apt_repository:
repo: deb http://archive.canonical.com/ubuntu hardy partner
state: absent
- name: Add nginx stable repository from PPA and install its signing key on Ubuntu target
ansible.builtin.apt_repository:
repo: ppa:nginx/stable
- name: Add nginx stable repository from PPA and install its signing key on Debian target
ansible.builtin.apt_repository:
repo: 'ppa:nginx/stable'
codename: trusty
- name: One way to avoid apt_key once it is removed from your distro
block:
- name: somerepo |no apt key
ansible.builtin.get_url:
url: https://download.example.com/linux/ubuntu/gpg
dest: /etc/apt/keyrings/somerepo.asc
- name: somerepo | apt source
ansible.builtin.apt_repository:
repo: "deb [arch=amd64 signed-by=/etc/apt/keyrings/myrepo.asc] https://download.example.com/linux/ubuntu {{ ansible_distribution_release }} stable"
state: present
'''
RETURN = '''
repo:
description: A source string for the repository
returned: always
type: str
sample: "deb https://artifacts.elastic.co/packages/6.x/apt stable main"
sources_added:
description: List of sources added
returned: success, sources were added
type: list
sample: ["/etc/apt/sources.list.d/artifacts_elastic_co_packages_6_x_apt.list"]
version_added: "2.15"
sources_removed:
description: List of sources removed
returned: success, sources were removed
type: list
sample: ["/etc/apt/sources.list.d/artifacts_elastic_co_packages_6_x_apt.list"]
version_added: "2.15"
'''
import copy
import glob
import json
import os
import re
import sys
import tempfile
import random
import time
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
from ansible.module_utils.common.text.converters import to_native
from ansible.module_utils.six import PY3
from ansible.module_utils.urls import fetch_url
from ansible.module_utils.common.locale import get_best_parsable_locale
try:
import apt
import apt_pkg
import aptsources.distro as aptsources_distro
distro = aptsources_distro.get_distro()
HAVE_PYTHON_APT = True
except ImportError:
apt = apt_pkg = aptsources_distro = distro = None
HAVE_PYTHON_APT = False
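# Known locations for apt gpg keys; also the preference order used when a new
# key file has to be written.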
APT_KEY_DIRS = ['/etc/apt/keyrings', '/etc/apt/trusted.gpg.d', '/usr/share/keyrings']
DEFAULT_SOURCES_PERM = 0o0644
VALID_SOURCE_TYPES = ('deb', 'deb-src')
def install_python_apt(module, apt_pkg_name):
if not module.check_mode:
apt_get_path = module.get_bin_path('apt-get')
if apt_get_path:
rc, so, se = module.run_command([apt_get_path, 'update'])
if rc != 0:
module.fail_json(msg="Failed to auto-install %s. Error was: '%s'" % (apt_pkg_name, se.strip()))
rc, so, se = module.run_command([apt_get_path, 'install', apt_pkg_name, '-y', '-q'])
if rc != 0:
module.fail_json(msg="Failed to auto-install %s. Error was: '%s'" % (apt_pkg_name, se.strip()))
else:
module.fail_json(msg="%s must be installed to use check mode" % apt_pkg_name)
class InvalidSource(Exception):
pass
# Simple version of aptsources.sourceslist.SourcesList.
# No advanced logic and no backups inside.
class SourcesList(object):
def __init__(self, module):
self.module = module
self.files = {} # group sources by file
# Repositories that we're adding -- used to implement mode param
self.new_repos = set()
self.default_file = self._apt_cfg_file('Dir::Etc::sourcelist')
# read sources.list if it exists
if os.path.isfile(self.default_file):
self.load(self.default_file)
# read sources.list.d
for file in glob.iglob('%s/*.list' % self._apt_cfg_dir('Dir::Etc::sourceparts')):
self.load(file)
def __iter__(self):
'''Simple iterator to go over all sources. Empty, non-source, and other invalid lines will be skipped.'''
for file, sources in self.files.items():
for n, valid, enabled, source, comment in sources:
if valid:
yield file, n, enabled, source, comment
def _expand_path(self, filename):
if '/' in filename:
return filename
else:
return os.path.abspath(os.path.join(self._apt_cfg_dir('Dir::Etc::sourceparts'), filename))
def _suggest_filename(self, line):
def _cleanup_filename(s):
filename = self.module.params['filename']
if filename is not None:
return filename
return '_'.join(re.sub('[^a-zA-Z0-9]', ' ', s).split())
def _strip_username_password(s):
if '@' in s:
s = s.split('@', 1)
s = s[-1]
return s
# Drop options and protocols.
line = re.sub(r'\[[^\]]+\]', '', line)
line = re.sub(r'\w+://', '', line)
# split line into valid keywords
parts = [part for part in line.split() if part not in VALID_SOURCE_TYPES]
# Drop usernames and passwords
parts[0] = _strip_username_password(parts[0])
return '%s.list' % _cleanup_filename(' '.join(parts[:1]))
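# Example (hypothetical values): for the line
#   'deb [arch=amd64] https://user:[email protected]/repo jammy main'
# the options block and protocol are stripped, credentials are dropped from
# the first remaining token, and only that token is kept, giving
#   'download_example_com_repo.list'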
def _parse(self, line, raise_if_invalid_or_disabled=False):
valid = False
enabled = True
source = ''
comment = ''
line = line.strip()
if line.startswith('#'):
enabled = False
line = line[1:]
# Check for another "#" in the line and treat a part after it as a comment.
i = line.find('#')
if i > 0:
comment = line[i + 1:].strip()
line = line[:i]
# Split a source into substring to make sure that it is source spec.
# Duplicated whitespaces in a valid source spec will be removed.
source = line.strip()
if source:
chunks = source.split()
if chunks[0] in VALID_SOURCE_TYPES:
valid = True
source = ' '.join(chunks)
if raise_if_invalid_or_disabled and (not valid or not enabled):
raise InvalidSource(line)
return valid, enabled, source, comment
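# Example (hypothetical values): parsing the line
#   '# deb http://example.com/ubuntu jammy main # mirror'
# yields valid=True, enabled=False,
# source='deb http://example.com/ubuntu jammy main', comment='mirror'.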
@staticmethod
def _apt_cfg_file(filespec):
'''
Wrapper for `apt_pkg` module for running with Python 2.5
'''
try:
result = apt_pkg.config.find_file(filespec)
except AttributeError:
result = apt_pkg.Config.FindFile(filespec)
return result
@staticmethod
def _apt_cfg_dir(dirspec):
'''
Wrapper for `apt_pkg` module for running with Python 2.5
'''
try:
result = apt_pkg.config.find_dir(dirspec)
except AttributeError:
result = apt_pkg.Config.FindDir(dirspec)
return result
def load(self, file):
    group = []
    with open(file, 'r') as f:
        for n, line in enumerate(f):
            valid, enabled, source, comment = self._parse(line)
            group.append((n, valid, enabled, source, comment))
    self.files[file] = group
def save(self):
for filename, sources in list(self.files.items()):
if sources:
d, fn = os.path.split(filename)
try:
os.makedirs(d)
except OSError as ex:
if not os.path.isdir(d):
self.module.fail_json("Failed to create directory %s: %s" % (d, to_native(ex)))
try:
fd, tmp_path = tempfile.mkstemp(prefix=".%s-" % fn, dir=d)
except (OSError, IOError) as e:
self.module.fail_json(msg='Unable to create temp file at "%s" for apt source: %s' % (d, to_native(e)))
f = os.fdopen(fd, 'w')
for n, valid, enabled, source, comment in sources:
chunks = []
if not enabled:
chunks.append('# ')
chunks.append(source)
if comment:
chunks.append(' # ')
chunks.append(comment)
chunks.append('\n')
line = ''.join(chunks)
try:
f.write(line)
except IOError as ex:
self.module.fail_json(msg="Failed to write to file %s: %s" % (tmp_path, to_native(ex)))
f.close()  # flush buffered writes before moving the file into place
self.module.atomic_move(tmp_path, filename)
# allow the user to override the default mode
if filename in self.new_repos:
this_mode = self.module.params.get('mode', DEFAULT_SOURCES_PERM)
self.module.set_mode_if_different(filename, this_mode, False)
else:
del self.files[filename]
if os.path.exists(filename):
os.remove(filename)
def dump(self):
dumpstruct = {}
for filename, sources in self.files.items():
if sources:
lines = []
for n, valid, enabled, source, comment in sources:
chunks = []
if not enabled:
chunks.append('# ')
chunks.append(source)
if comment:
chunks.append(' # ')
chunks.append(comment)
chunks.append('\n')
lines.append(''.join(chunks))
dumpstruct[filename] = ''.join(lines)
return dumpstruct
def _choice(self, new, old):
if new is None:
return old
return new
def modify(self, file, n, enabled=None, source=None, comment=None):
'''
This function is intended to be used with the iterator, so we don't care about invalid sources.
If source, enabled, or comment is None, original value from line ``n`` will be preserved.
'''
valid, enabled_old, source_old, comment_old = self.files[file][n][1:]
self.files[file][n] = (n, valid, self._choice(enabled, enabled_old), self._choice(source, source_old), self._choice(comment, comment_old))
def _add_valid_source(self, source_new, comment_new, file):
# We'll try to reuse disabled source if we have it.
# If we have more than one entry, we will enable them all - no advanced logic, remember.
self.module.log('adding source file: %s | %s | %s' % (source_new, comment_new, file))
found = False
for filename, n, enabled, source, comment in self:
if source == source_new:
self.modify(filename, n, enabled=True)
found = True
if not found:
if file is None:
file = self.default_file
else:
file = self._expand_path(file)
if file not in self.files:
self.files[file] = []
files = self.files[file]
files.append((len(files), True, True, source_new, comment_new))
self.new_repos.add(file)
def add_source(self, line, comment='', file=None):
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
# Prefer separate files for new sources.
self._add_valid_source(source, comment, file=file or self._suggest_filename(source))
def _remove_valid_source(self, source):
# If we have more than one entry, we will remove them all (not comment, remove!)
for filename, n, enabled, src, comment in self:
if source == src and enabled:
self.files[filename].pop(n)
def remove_source(self, line):
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
self._remove_valid_source(source)
class UbuntuSourcesList(SourcesList):
LP_API = 'https://launchpad.net/api/1.0/~%s/+archive/%s'
def __init__(self, module):
self.module = module
self.codename = module.params['codename'] or distro.codename
super(UbuntuSourcesList, self).__init__(module)
self.apt_key_bin = self.module.get_bin_path('apt-key', required=False)
self.gpg_bin = self.module.get_bin_path('gpg', required=False)
if not self.apt_key_bin and not self.gpg_bin:
self.module.fail_json(msg='Either apt-key or gpg binary is required, but neither could be found')
def __deepcopy__(self, memo=None):
return UbuntuSourcesList(self.module)
def _get_ppa_info(self, owner_name, ppa_name):
lp_api = self.LP_API % (owner_name, ppa_name)
headers = dict(Accept='application/json')
response, info = fetch_url(self.module, lp_api, headers=headers)
if info['status'] != 200:
self.module.fail_json(msg="failed to fetch PPA information, error was: %s" % info['msg'])
return json.loads(to_native(response.read()))
def _expand_ppa(self, path):
ppa = path.split(':')[1]
ppa_owner = ppa.split('/')[0]
try:
ppa_name = ppa.split('/')[1]
except IndexError:
ppa_name = 'ppa'
line = 'deb http://ppa.launchpad.net/%s/%s/ubuntu %s main' % (ppa_owner, ppa_name, self.codename)
return line, ppa_owner, ppa_name
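# Example (hypothetical values): on a 'jammy' host, _expand_ppa('ppa:nginx/stable')
# returns ('deb http://ppa.launchpad.net/nginx/stable/ubuntu jammy main',
#          'nginx', 'stable').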
def _key_already_exists(self, key_fingerprint):
if self.apt_key_bin:
locale = get_best_parsable_locale(self.module)
APT_ENV = dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LC_CTYPE=locale)
self.module.run_command_environ_update = APT_ENV
rc, out, err = self.module.run_command([self.apt_key_bin, 'export', key_fingerprint], check_rc=True)
found = bool(not err or 'nothing exported' not in err)
else:
found = self._gpg_key_exists(key_fingerprint)
return found
def _gpg_key_exists(self, key_fingerprint):
found = False
keyfiles = ['/etc/apt/trusted.gpg'] # main gpg repo for apt
for other_dir in APT_KEY_DIRS:
# add other known sources of gpg sigs for apt, skip hidden files
keyfiles.extend([os.path.join(other_dir, x) for x in os.listdir(other_dir) if not x.startswith('.')])
for key_file in keyfiles:
if os.path.exists(key_file):
try:
rc, out, err = self.module.run_command([self.gpg_bin, '--list-packets', key_file])
except (IOError, OSError) as e:
self.debug("Could check key against file %s: %s" % (key_file, to_native(e)))
continue
if key_fingerprint in out:
found = True
break
return found
# https://www.linuxuprising.com/2021/01/apt-key-is-deprecated-how-to-add.html
def add_source(self, line, comment='', file=None):
if line.startswith('ppa:'):
source, ppa_owner, ppa_name = self._expand_ppa(line)
if source in self.repos_urls:
# repository already exists
return
info = self._get_ppa_info(ppa_owner, ppa_name)
# add gpg sig if needed
if not self._key_already_exists(info['signing_key_fingerprint']):
# TODO: report file that would have been added if not check_mode
keyfile = ''
if not self.module.check_mode:
if self.apt_key_bin:
command = [self.apt_key_bin, 'adv', '--recv-keys', '--no-tty', '--keyserver', 'hkp://keyserver.ubuntu.com:80',
info['signing_key_fingerprint']]
else:
# use first available key dir, in order of preference
for keydir in APT_KEY_DIRS:
if os.path.exists(keydir):
break
else:
self.module.fail_json("Unable to find any existing apt gpgp repo directories, tried the following: %s" % ', '.join(APT_KEY_DIRS))
keyfile = '%s/%s-%s-%s.gpg' % (keydir, os.path.basename(source).replace(' ', '-'), ppa_owner, ppa_name)
command = [self.gpg_bin, '--no-tty', '--keyserver', 'hkp://keyserver.ubuntu.com:80', '--export', info['signing_key_fingerprint']]
rc, stdout, stderr = self.module.run_command(command, check_rc=True, encoding=None)
if keyfile:
# using gpg we must write keyfile ourselves
if len(stdout) == 0:
self.module.fail_json(msg='Unable to get required signing key', rc=rc, stderr=stderr, command=command)
try:
with open(keyfile, 'wb') as f:
f.write(stdout)
self.module.log('Added repo key "%s" for apt to file "%s"' % (info['signing_key_fingerprint'], keyfile))
except (OSError, IOError) as e:
self.module.fail_json(msg='Unable to add required signing key for %s' % source, rc=rc, stderr=stderr, error=to_native(e))
# apt source file
file = file or self._suggest_filename('%s_%s' % (line, self.codename))
else:
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
file = file or self._suggest_filename(source)
self._add_valid_source(source, comment, file)
def remove_source(self, line):
if line.startswith('ppa:'):
source = self._expand_ppa(line)[0]
else:
source = self._parse(line, raise_if_invalid_or_disabled=True)[2]
self._remove_valid_source(source)
@property
def repos_urls(self):
_repositories = []
for parsed_repos in self.files.values():
for parsed_repo in parsed_repos:
valid = parsed_repo[1]
enabled = parsed_repo[2]
source_line = parsed_repo[3]
if not valid or not enabled:
continue
if source_line.startswith('ppa:'):
source, ppa_owner, ppa_name = self._expand_ppa(source_line)
_repositories.append(source)
else:
_repositories.append(source_line)
return _repositories
def revert_sources_list(sources_before, sources_after, sourceslist_before):
'''Revert the sourcelist files to their previous state.'''
# First remove any new files that were created:
for filename in set(sources_after.keys()).difference(sources_before.keys()):
if os.path.exists(filename):
os.remove(filename)
# Now revert the existing files to their former state:
sourceslist_before.save()
def main():
module = AnsibleModule(
argument_spec=dict(
repo=dict(type='str', required=True),
state=dict(type='str', default='present', choices=['absent', 'present']),
mode=dict(type='raw'),
update_cache=dict(type='bool', default=True, aliases=['update-cache']),
update_cache_retries=dict(type='int', default=5),
update_cache_retry_max_delay=dict(type='int', default=12),
filename=dict(type='str'),
# This should not be needed, but exists as a failsafe
install_python_apt=dict(type='bool', default=True),
validate_certs=dict(type='bool', default=True),
codename=dict(type='str'),
),
supports_check_mode=True,
)
params = module.params
repo = module.params['repo']
state = module.params['state']
update_cache = module.params['update_cache']
# Note: mode is referenced in SourcesList class via the passed in module (self here)
sourceslist = None
if not HAVE_PYTHON_APT:
# This interpreter can't see the apt Python library- we'll do the following to try and fix that:
# 1) look in common locations for system-owned interpreters that can see it; if we find one, respawn under it
# 2) finding none, try to install a matching python-apt package for the current interpreter version;
# we limit to the current interpreter version to try and avoid installing a whole other Python just
# for apt support
# 3) if we installed a support package, try to respawn under what we think is the right interpreter (could be
# the current interpreter again, but we'll let it respawn anyway for simplicity)
# 4) if still not working, return an error and give up (some corner cases not covered, but this shouldn't be
# made any more complex than it already is to try and cover more, eg, custom interpreters taking over
# system locations)
apt_pkg_name = 'python3-apt' if PY3 else 'python-apt'
if has_respawned():
# this shouldn't be possible; short-circuit early if it happens...
module.fail_json(msg="{0} must be installed and visible from {1}.".format(apt_pkg_name, sys.executable))
interpreters = ['/usr/bin/python3', '/usr/bin/python2', '/usr/bin/python']
interpreter = probe_interpreters_for_module(interpreters, 'apt')
if interpreter:
# found the Python bindings; respawn this module under the interpreter where we found them
respawn_module(interpreter)
# this is the end of the line for this process, it will exit here once the respawned module has completed
# don't make changes if we're in check_mode
if module.check_mode:
module.fail_json(msg="%s must be installed to use check mode. "
"If run normally this module can auto-install it." % apt_pkg_name)
if params['install_python_apt']:
install_python_apt(module, apt_pkg_name)
else:
module.fail_json(msg='%s is not installed, and install_python_apt is False' % apt_pkg_name)
# try again to find the bindings in common places
interpreter = probe_interpreters_for_module(interpreters, 'apt')
if interpreter:
# found the Python bindings; respawn this module under the interpreter where we found them
# NB: respawn is somewhat wasteful if it's this interpreter, but simplifies the code
respawn_module(interpreter)
# this is the end of the line for this process, it will exit here once the respawned module has completed
else:
# we've done all we can do; just tell the user it's busted and get out
module.fail_json(msg="{0} must be installed and visible from {1}.".format(apt_pkg_name, sys.executable))
if not repo:
module.fail_json(msg='Please set argument \'repo\' to a non-empty value')
if isinstance(distro, aptsources_distro.Distribution):
sourceslist = UbuntuSourcesList(module)
else:
module.fail_json(msg='Module apt_repository is not supported on target.')
sourceslist_before = copy.deepcopy(sourceslist)
sources_before = sourceslist.dump()
try:
if state == 'present':
sourceslist.add_source(repo)
elif state == 'absent':
sourceslist.remove_source(repo)
except InvalidSource as ex:
module.fail_json(msg='Invalid repository string: %s' % to_native(ex))
sources_after = sourceslist.dump()
changed = sources_before != sources_after
diff = []
sources_added = set()
sources_removed = set()
if changed:
sources_added = set(sources_after.keys()).difference(sources_before.keys())
sources_removed = set(sources_before.keys()).difference(sources_after.keys())
if module._diff:
for filename in set(sources_added.union(sources_removed)):
diff.append({'before': sources_before.get(filename, ''),
'after': sources_after.get(filename, ''),
'before_header': (filename, '/dev/null')[filename not in sources_before],
'after_header': (filename, '/dev/null')[filename not in sources_after]})
if changed and not module.check_mode:
try:
sourceslist.save()
if update_cache:
err = ''
update_cache_retries = module.params.get('update_cache_retries')
update_cache_retry_max_delay = module.params.get('update_cache_retry_max_delay')
randomize = random.randint(0, 1000) / 1000.0
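# With the defaults the delays are roughly 1, 2, 4, 8 and then 16 seconds,
# capped at update_cache_retry_max_delay (12s), each offset by the same
# sub-second random value.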
for retry in range(update_cache_retries):
try:
cache = apt.Cache()
cache.update()
break
except apt.cache.FetchFailedException as e:
err = to_native(e)
# Use exponential backoff with a max fail count, plus a little bit of randomness
delay = 2 ** retry + randomize
if delay > update_cache_retry_max_delay:
delay = update_cache_retry_max_delay + randomize
time.sleep(delay)
else:
revert_sources_list(sources_before, sources_after, sourceslist_before)
module.fail_json(msg='Failed to update apt cache: %s' % (err if err else 'unknown reason'))
except (OSError, IOError) as ex:
revert_sources_list(sources_before, sources_after, sourceslist_before)
module.fail_json(msg=to_native(ex))
module.exit_json(changed=changed, repo=repo, sources_added=sources_added, sources_removed=sources_removed, state=state, diff=diff)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,887 |
dnf5 module broken due to API change
|
### Summary
This change to dnf5 is the cause:
https://github.com/rpm-software-management/dnf5/commit/69cc522e7cc8507ede07451fcd065084c85a3293
Potential fix:
```diff
diff --git a/lib/ansible/modules/dnf5.py b/lib/ansible/modules/dnf5.py
index 53dd57d49b..362a9a3d80 100644
--- a/lib/ansible/modules/dnf5.py
+++ b/lib/ansible/modules/dnf5.py
@@ -513,6 +513,8 @@ class Dnf5Module(YumDnf):
conf.installroot = self.installroot
conf.use_host_config = True # needed for installroot
conf.cacheonly = self.cacheonly
+ if self.download_dir:
+ conf.destdir = self.download_dir
base.setup()
@@ -667,7 +669,7 @@ class Dnf5Module(YumDnf):
if results:
msg = "Check mode: No changes made, but would have if not in check mode"
else:
- transaction.download(self.download_dir or "")
+ transaction.download()
if not self.download_only:
if not self.disable_gpg_check and not transaction.check_gpg_signatures():
self.module.fail_json(
```
I'm unsure if the `if` statement is needed; I haven't tested that yet.
Also, it's unclear whether we should attempt to support both variants, where `download` accepts the dir, versus not. I lean towards not worrying about it.
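A rough compatibility sketch for the "support both variants" idea (untested; assumes the binding raises `TypeError` on a signature mismatch):
```python
if self.download_dir:
    conf.destdir = self.download_dir  # honored by newer dnf5 via base config

try:
    # newer dnf5: no arguments, destination comes from conf.destdir
    transaction.download()
except TypeError:
    # older dnf5: download() takes the destination directory directly
    transaction.download(self.download_dir or "")
```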
### Issue Type
Bug Report
### Component Name
dnf5
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
fedora37
### Steps to Reproduce
Run dnf5 intg tests
### Expected Results
No traceback
### Actual Results
```console
Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-tmp-1685024184.2164207-10418-220684455568000/AnsiballZ_dnf5.py", line 133, in <module>
_ansiballz_main()
File "/root/.ansible/tmp/ansible-tmp-1685024184.2164207-10418-220684455568000/AnsiballZ_dnf5.py", line 125, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/root/.ansible/tmp/ansible-tmp-1685024184.2164207-10418-220684455568000/AnsiballZ_dnf5.py", line 73, in invoke_module
runpy.run_module(mod_name='ansible.modules.dnf5', init_globals=dict(_module_fqn='ansible.modules.dnf5', _modlib_path=modlib_path),
File "<frozen runpy>", line 226, in run_module
File "<frozen runpy>", line 98, in _run_module_code
File "<frozen runpy>", line 88, in _run_code
File "/tmp/ansible_ansible.legacy.dnf5_payload_n4l7o4kv/ansible_ansible.legacy.dnf5_payload.zip/ansible/modules/dnf5.py", line 708, in <module>
File "/tmp/ansible_ansible.legacy.dnf5_payload_n4l7o4kv/ansible_ansible.legacy.dnf5_payload.zip/ansible/modules/dnf5.py", line 704, in main
File "/tmp/ansible_ansible.legacy.dnf5_payload_n4l7o4kv/ansible_ansible.legacy.dnf5_payload.zip/ansible/modules/dnf5.py", line 670, in run
TypeError: Transaction.download() takes 1 positional argument but 2 were given
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80887
|
https://github.com/ansible/ansible/pull/80888
|
0775e991d51e2fe9c38a4d862cd32a9f704d4915
|
09387eaa24103581d7538ead8918ee5328a82697
| 2023-05-25T14:43:33Z |
python
| 2023-05-25T16:13:38Z |
changelogs/fragments/80887-dnf5-api-change.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,887 |
dnf5 module broken due to API change
|
### Summary
This change to dnf5 is the cause:
https://github.com/rpm-software-management/dnf5/commit/69cc522e7cc8507ede07451fcd065084c85a3293
Potential fix:
```diff
diff --git a/lib/ansible/modules/dnf5.py b/lib/ansible/modules/dnf5.py
index 53dd57d49b..362a9a3d80 100644
--- a/lib/ansible/modules/dnf5.py
+++ b/lib/ansible/modules/dnf5.py
@@ -513,6 +513,8 @@ class Dnf5Module(YumDnf):
conf.installroot = self.installroot
conf.use_host_config = True # needed for installroot
conf.cacheonly = self.cacheonly
+ if self.download_dir:
+ conf.destdir = self.download_dir
base.setup()
@@ -667,7 +669,7 @@ class Dnf5Module(YumDnf):
if results:
msg = "Check mode: No changes made, but would have if not in check mode"
else:
- transaction.download(self.download_dir or "")
+ transaction.download()
if not self.download_only:
if not self.disable_gpg_check and not transaction.check_gpg_signatures():
self.module.fail_json(
```
I'm unsure if the `if` statement is needed; I haven't tested that yet.
Also, it's unclear whether we should attempt to support both variants, where `download` accepts the dir, versus not. I lean towards not worrying about it.
### Issue Type
Bug Report
### Component Name
dnf5
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
fedora37
### Steps to Reproduce
Run dnf5 intg tests
### Expected Results
No traceback
### Actual Results
```console
Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-tmp-1685024184.2164207-10418-220684455568000/AnsiballZ_dnf5.py", line 133, in <module>
_ansiballz_main()
File "/root/.ansible/tmp/ansible-tmp-1685024184.2164207-10418-220684455568000/AnsiballZ_dnf5.py", line 125, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/root/.ansible/tmp/ansible-tmp-1685024184.2164207-10418-220684455568000/AnsiballZ_dnf5.py", line 73, in invoke_module
runpy.run_module(mod_name='ansible.modules.dnf5', init_globals=dict(_module_fqn='ansible.modules.dnf5', _modlib_path=modlib_path),
File "<frozen runpy>", line 226, in run_module
File "<frozen runpy>", line 98, in _run_module_code
File "<frozen runpy>", line 88, in _run_code
File "/tmp/ansible_ansible.legacy.dnf5_payload_n4l7o4kv/ansible_ansible.legacy.dnf5_payload.zip/ansible/modules/dnf5.py", line 708, in <module>
File "/tmp/ansible_ansible.legacy.dnf5_payload_n4l7o4kv/ansible_ansible.legacy.dnf5_payload.zip/ansible/modules/dnf5.py", line 704, in main
File "/tmp/ansible_ansible.legacy.dnf5_payload_n4l7o4kv/ansible_ansible.legacy.dnf5_payload.zip/ansible/modules/dnf5.py", line 670, in run
TypeError: Transaction.download() takes 1 positional argument but 2 were given
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80887
|
https://github.com/ansible/ansible/pull/80888
|
0775e991d51e2fe9c38a4d862cd32a9f704d4915
|
09387eaa24103581d7538ead8918ee5328a82697
| 2023-05-25T14:43:33Z |
python
| 2023-05-25T16:13:38Z |
lib/ansible/modules/dnf5.py
|
# -*- coding: utf-8 -*-
# Copyright 2023 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = """
module: dnf5
author: Ansible Core Team
description:
- Installs, upgrades, removes, and lists packages and groups with the I(dnf5) package manager.
- "WARNING: The I(dnf5) package manager is still under development and not all features that the existing I(dnf) module
provides are implemented in I(dnf5); please consult specific options for more information."
short_description: Manages packages with the I(dnf5) package manager
options:
name:
description:
- "A package name or package specifier with version, like C(name-1.0).
When using state=latest, this can be '*' which means run: dnf -y update.
You can also pass a url or a local path to a rpm file.
To operate on several packages this can accept a comma separated string of packages or a list of packages."
- Comparison operators for package version are valid here C(>), C(<), C(>=), C(<=). Example - C(name >= 1.0).
Spaces around the operator are required.
- You can also pass an absolute path for a binary which is provided by the package to install.
See examples for more information.
aliases:
- pkg
type: list
elements: str
default: []
list:
description:
- Various (non-idempotent) commands for usage with C(/usr/bin/ansible) and I(not) playbooks.
Use M(ansible.builtin.package_facts) instead of the C(list) argument as a best practice.
type: str
state:
description:
- Whether to install (C(present), C(latest)), or remove (C(absent)) a package.
- Default is C(None), however in effect the default action is C(present) unless the C(autoremove) option is
enabled for this module, then C(absent) is inferred.
choices: ['absent', 'present', 'installed', 'removed', 'latest']
type: str
enablerepo:
description:
- I(Repoid) of repositories to enable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
default: []
disablerepo:
description:
- I(Repoid) of repositories to disable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
default: []
conf_file:
description:
- The remote dnf configuration file to use for the transaction.
type: str
disable_gpg_check:
description:
- Whether to disable the GPG checking of signatures of packages being
installed. Has an effect only if state is I(present) or I(latest).
- This setting affects packages installed from a repository as well as
"local" packages installed from the filesystem or a URL.
type: bool
default: 'no'
installroot:
description:
- Specifies an alternative installroot, relative to which all packages
will be installed.
default: "/"
type: str
releasever:
description:
- Specifies an alternative release from which all packages will be
installed.
type: str
autoremove:
description:
- If C(true), removes all "leaf" packages from the system that were originally
installed as dependencies of user-installed packages but which are no longer
required by any such package. Should be used alone or when state is I(absent).
type: bool
default: "no"
exclude:
description:
- Package name(s) to exclude when state=present, or latest. This can be a
list or a comma separated string.
type: list
elements: str
default: []
skip_broken:
description:
- Skip all unavailable packages or packages with broken dependencies
without raising an error. Equivalent to passing the --skip-broken option.
type: bool
default: "no"
update_cache:
description:
- Force dnf to check if cache is out of date and redownload if needed.
Has an effect only if state is I(present) or I(latest).
type: bool
default: "no"
aliases: [ expire-cache ]
update_only:
description:
- When using latest, only update installed packages. Do not install packages.
- Has an effect only if state is I(latest)
default: "no"
type: bool
security:
description:
- If set to C(true), and C(state=latest) then only installs updates that have been marked security related.
- Note that, similar to C(dnf upgrade-minimal), this filter applies to dependencies as well.
type: bool
default: "no"
bugfix:
description:
- If set to C(true), and C(state=latest) then only installs updates that have been marked bugfix related.
- Note that, similar to C(dnf upgrade-minimal), this filter applies to dependencies as well.
default: "no"
type: bool
enable_plugin:
description:
- This is currently a no-op as dnf5 itself does not implement this feature.
- I(Plugin) name to enable for the install/update operation.
The enabled plugin will not persist beyond the transaction.
type: list
elements: str
default: []
disable_plugin:
description:
- This is currently a no-op as dnf5 itself does not implement this feature.
- I(Plugin) name to disable for the install/update operation.
The disabled plugins will not persist beyond the transaction.
type: list
default: []
elements: str
disable_excludes:
description:
- Disable the excludes defined in DNF config files.
- If set to C(all), disables all excludes.
- If set to C(main), disable excludes defined in [main] in dnf.conf.
- If set to C(repoid), disable excludes defined for given repo id.
type: str
validate_certs:
description:
- This is effectively a no-op in the dnf5 module as dnf5 itself handles downloading a https url as the source of the rpm,
but is an accepted parameter for feature parity/compatibility with the I(yum) module.
type: bool
default: "yes"
sslverify:
description:
- Disables SSL validation of the repository server for this transaction.
- This should be set to C(false) if one of the configured repositories is using an untrusted or self-signed certificate.
type: bool
default: "yes"
allow_downgrade:
description:
- Specify if the named package and version is allowed to downgrade
a maybe already installed higher version of that package.
Note that setting allow_downgrade=True can make this module
behave in a non-idempotent way. The task could end up with a set
of packages that does not match the complete list of specified
packages to install (because dependencies between the downgraded
package and others can cause changes to the packages which were
in the earlier transaction).
type: bool
default: "no"
install_repoquery:
description:
- This is effectively a no-op in DNF as it is not needed with DNF, but is an accepted parameter for feature
parity/compatibility with the I(yum) module.
type: bool
default: "yes"
download_only:
description:
- Only download the packages, do not install them.
default: "no"
type: bool
lock_timeout:
description:
- This is currently a no-op as dnf5 does not provide an option to configure it.
- Amount of time to wait for the dnf lockfile to be freed.
required: false
default: 30
type: int
install_weak_deps:
description:
- Will also install all packages linked by a weak dependency relation.
type: bool
default: "yes"
download_dir:
description:
- Specifies an alternate directory to store packages.
- Has an effect only if I(download_only) is specified.
type: str
allowerasing:
description:
- If C(true) it allows erasing of installed packages to resolve dependencies.
required: false
type: bool
default: "no"
nobest:
description:
- Set best option to False, so that transactions are not limited to best candidates only.
required: false
type: bool
default: "no"
cacheonly:
description:
- This is currently a no-op as dnf5 does not implement the feature.
- Tells dnf to run entirely from system cache; does not download or update metadata.
type: bool
default: "no"
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.flow
attributes:
action:
details: In the case of dnf, it has 2 action plugins that use it under the hood, M(ansible.builtin.yum) and M(ansible.builtin.package).
support: partial
async:
support: none
bypass_host_loop:
support: none
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: rhel
requirements:
- "python3"
- "python3-libdnf5"
version_added: 2.15
"""
EXAMPLES = """
- name: Install the latest version of Apache
ansible.builtin.dnf5:
name: httpd
state: latest
- name: Install Apache >= 2.4
ansible.builtin.dnf5:
name: httpd >= 2.4
state: present
- name: Install the latest version of Apache and MariaDB
ansible.builtin.dnf5:
name:
- httpd
- mariadb-server
state: latest
- name: Remove the Apache package
ansible.builtin.dnf5:
name: httpd
state: absent
- name: Install the latest version of Apache from the testing repo
ansible.builtin.dnf5:
name: httpd
enablerepo: testing
state: present
- name: Upgrade all packages
ansible.builtin.dnf5:
name: "*"
state: latest
- name: Update the webserver, depending on which is installed on the system. Do not install the other one
ansible.builtin.dnf5:
name:
- httpd
- nginx
state: latest
update_only: yes
- name: Install the nginx rpm from a remote repo
ansible.builtin.dnf5:
name: 'http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm'
state: present
- name: Install nginx rpm from a local file
ansible.builtin.dnf5:
name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: Install Package based upon the file it provides
ansible.builtin.dnf5:
name: /usr/bin/cowsay
state: present
- name: Install the 'Development tools' package group
ansible.builtin.dnf5:
name: '@Development tools'
state: present
- name: Autoremove unneeded packages installed as dependencies
ansible.builtin.dnf5:
autoremove: yes
- name: Uninstall httpd but keep its dependencies
ansible.builtin.dnf5:
name: httpd
state: absent
autoremove: no
"""
RETURN = """
msg:
description: Additional information about the result
returned: always
type: str
sample: "Nothing to do"
results:
description: A list of the dnf transaction results
returned: success
type: list
sample: ["Installed: lsof-4.94.0-4.fc37.x86_64"]
failures:
description: A list of the dnf transaction failures
returned: failure
type: list
sample: ["Argument 'lsof' matches only excluded packages."]
rc:
description: For compatibility, 0 for success, 1 for failure
returned: always
type: int
sample: 0
"""
import os
import sys
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
from ansible.module_utils.yumdnf import YumDnf, yumdnf_argument_spec
libdnf5 = None
def is_installed(base, spec):
settings = libdnf5.base.ResolveSpecSettings()
query = libdnf5.rpm.PackageQuery(base)
query.filter_installed()
match, nevra = query.resolve_pkg_spec(spec, settings, True)
return match
def is_newer_version_installed(base, spec):
try:
spec_nevra = next(iter(libdnf5.rpm.Nevra.parse(spec)))
except RuntimeError:
return False
spec_name = spec_nevra.get_name()
v = spec_nevra.get_version()
r = spec_nevra.get_release()
if not v or not r:
return False
spec_evr = "{}:{}-{}".format(spec_nevra.get_epoch() or "0", v, r)
query = libdnf5.rpm.PackageQuery(base)
query.filter_installed()
query.filter_name([spec_name])
query.filter_evr([spec_evr], libdnf5.common.QueryCmp_GT)
return query.size() > 0
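# Example (hypothetical values): for spec 'httpd-2.4.57-1.fc38' the parsed EVR
# is '0:2.4.57-1.fc38'; the query matches installed 'httpd' packages with a
# strictly greater EVR, so this returns True only if a newer httpd is installed.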
def package_to_dict(package):
return {
"nevra": package.get_nevra(),
"envra": package.get_nevra(), # dnf module compat
"name": package.get_name(),
"arch": package.get_arch(),
"epoch": str(package.get_epoch()),
"release": package.get_release(),
"version": package.get_version(),
"repo": package.get_repo_id(),
"yumstate": "installed" if package.is_installed() else "available",
}
def get_unneeded_pkgs(base):
query = libdnf5.rpm.PackageQuery(base)
query.filter_installed()
query.filter_unneeded()
for pkg in query:
yield pkg
class Dnf5Module(YumDnf):
def __init__(self, module):
super(Dnf5Module, self).__init__(module)
self._ensure_dnf()
# FIXME https://github.com/rpm-software-management/dnf5/issues/402
self.lockfile = ""
self.pkg_mgr_name = "dnf5"
# DNF specific args that are not part of YumDnf
self.allowerasing = self.module.params["allowerasing"]
self.nobest = self.module.params["nobest"]
def _ensure_dnf(self):
locale = get_best_parsable_locale(self.module)
os.environ["LC_ALL"] = os.environ["LC_MESSAGES"] = locale
os.environ["LANGUAGE"] = os.environ["LANG"] = locale
global libdnf5
has_dnf = True
try:
import libdnf5 # type: ignore[import]
except ImportError:
has_dnf = False
if has_dnf:
return
system_interpreters = [
"/usr/libexec/platform-python",
"/usr/bin/python3",
"/usr/bin/python2",
"/usr/bin/python",
]
if not has_respawned():
# probe well-known system Python locations for accessible bindings, favoring py3
interpreter = probe_interpreters_for_module(system_interpreters, "libdnf5")
if interpreter:
# respawn under the interpreter where the bindings should be found
respawn_module(interpreter)
# end of the line for this module, the process will exit here once the respawned module completes
# done all we can do, something is just broken (auto-install isn't useful anymore with respawn, so it was removed)
self.module.fail_json(
msg="Could not import the libdnf5 python module using {0} ({1}). "
"Please install python3-libdnf5 package or ensure you have specified the "
"correct ansible_python_interpreter. (attempted {2})".format(
sys.executable, sys.version.replace("\n", ""), system_interpreters
),
failures=[],
)
def is_lockfile_pid_valid(self):
# FIXME https://github.com/rpm-software-management/dnf5/issues/402
return True
def run(self):
if sys.version_info.major < 3:
self.module.fail_json(
msg="The dnf5 module requires Python 3.",
failures=[],
rc=1,
)
if not self.list and not self.download_only and os.geteuid() != 0:
self.module.fail_json(
msg="This command has to be run under the root user.",
failures=[],
rc=1,
)
if self.enable_plugin or self.disable_plugin:
self.module.fail_json(
msg="enable_plugin and disable_plugin options are not yet implemented in DNF5",
failures=[],
rc=1,
)
base = libdnf5.base.Base()
conf = base.get_config()
if self.conf_file:
conf.config_file_path = self.conf_file
try:
base.load_config_from_file()
except RuntimeError as e:
self.module.fail_json(
msg=str(e),
conf_file=self.conf_file,
failures=[],
rc=1,
)
if self.releasever is not None:
variables = base.get_vars()
variables.set("releasever", self.releasever)
if self.exclude:
conf.excludepkgs = self.exclude
if self.disable_excludes:
if self.disable_excludes == "all":
self.disable_excludes = "*"
conf.disable_excludes = self.disable_excludes
conf.skip_broken = self.skip_broken
conf.best = not self.nobest
conf.install_weak_deps = self.install_weak_deps
conf.gpgcheck = not self.disable_gpg_check
conf.localpkg_gpgcheck = not self.disable_gpg_check
conf.sslverify = self.sslverify
conf.clean_requirements_on_remove = self.autoremove
conf.installroot = self.installroot
conf.use_host_config = True # needed for installroot
conf.cacheonly = self.cacheonly
base.setup()
log_router = base.get_logger()
global_logger = libdnf5.logger.GlobalLogger()
global_logger.set(log_router.get(), libdnf5.logger.Logger.Level_DEBUG)
logger = libdnf5.logger.create_file_logger(base)
log_router.add_logger(logger)
if self.update_cache:
repo_query = libdnf5.repo.RepoQuery(base)
repo_query.filter_type(libdnf5.repo.Repo.Type_AVAILABLE)
for repo in repo_query:
repo_dir = repo.get_cachedir()
if os.path.exists(repo_dir):
repo_cache = libdnf5.repo.RepoCache(base, repo_dir)
repo_cache.write_attribute(libdnf5.repo.RepoCache.ATTRIBUTE_EXPIRED)
sack = base.get_repo_sack()
sack.create_repos_from_system_configuration()
repo_query = libdnf5.repo.RepoQuery(base)
if self.disablerepo:
repo_query.filter_id(self.disablerepo, libdnf5.common.QueryCmp_IGLOB)
for repo in repo_query:
repo.disable()
if self.enablerepo:
repo_query.filter_id(self.enablerepo, libdnf5.common.QueryCmp_IGLOB)
for repo in repo_query:
repo.enable()
sack.update_and_load_enabled_repos(True)
if self.update_cache and not self.names and not self.list:
self.module.exit_json(
msg="Cache updated",
changed=False,
results=[],
rc=0
)
if self.list:
command = self.list
if command == "updates":
command = "upgrades"
if command in {"installed", "upgrades", "available"}:
query = libdnf5.rpm.PackageQuery(base)
getattr(query, "filter_{}".format(command))()
results = [package_to_dict(package) for package in query]
elif command in {"repos", "repositories"}:
query = libdnf5.repo.RepoQuery(base)
query.filter_enabled(True)
results = [{"repoid": repo.get_id(), "state": "enabled"} for repo in query]
else:
resolve_spec_settings = libdnf5.base.ResolveSpecSettings()
query = libdnf5.rpm.PackageQuery(base)
query.resolve_pkg_spec(command, resolve_spec_settings, True)
results = [package_to_dict(package) for package in query]
self.module.exit_json(msg="", results=results, rc=0)
settings = libdnf5.base.GoalJobSettings()
settings.group_with_name = True
if self.bugfix or self.security:
advisory_query = libdnf5.advisory.AdvisoryQuery(base)
types = []
if self.bugfix:
types.append("bugfix")
if self.security:
types.append("security")
advisory_query.filter_type(types)
settings.set_advisory_filter(advisory_query)
goal = libdnf5.base.Goal(base)
results = []
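# Build the transaction goal: '*' with state=latest upgrades everything;
# otherwise each spec is installed, upgraded or skipped depending on whether
# it is already installed, whether a newer version is installed, and the
# allow_downgrade / update_only settings.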
if self.names == ["*"] and self.state == "latest":
goal.add_rpm_upgrade(settings)
elif self.state in {"install", "present", "latest"}:
upgrade = self.state == "latest"
for spec in self.names:
if is_newer_version_installed(base, spec):
if self.allow_downgrade:
if upgrade:
if is_installed(base, spec):
goal.add_upgrade(spec, settings)
else:
goal.add_install(spec, settings)
else:
goal.add_install(spec, settings)
elif is_installed(base, spec):
if upgrade:
goal.add_upgrade(spec, settings)
else:
if self.update_only:
results.append("Packages providing {} not installed due to update_only specified".format(spec))
else:
goal.add_install(spec, settings)
elif self.state in {"absent", "removed"}:
for spec in self.names:
try:
goal.add_remove(spec, settings)
except RuntimeError as e:
self.module.fail_json(msg=str(e), failures=[], rc=1)
if self.autoremove:
for pkg in get_unneeded_pkgs(base):
goal.add_rpm_remove(pkg, settings)
goal.set_allow_erasing(self.allowerasing)
try:
transaction = goal.resolve()
except RuntimeError as e:
self.module.fail_json(msg=str(e), failures=[], rc=1)
if transaction.get_problems():
failures = []
for log_event in transaction.get_resolve_logs():
if log_event.get_problem() == libdnf5.base.GoalProblem_NOT_FOUND and self.state in {"install", "present", "latest"}:
# NOTE dnf module compat
failures.append("No package {} available.".format(log_event.get_spec()))
else:
failures.append(log_event.to_string())
if transaction.get_problems() & libdnf5.base.GoalProblem_SOLVER_ERROR != 0:
msg = "Depsolve Error occurred"
else:
msg = "Failed to install some of the specified packages"
self.module.fail_json(
msg=msg,
failures=failures,
rc=1,
)
# NOTE dnf module compat
actions_compat_map = {
"Install": "Installed",
"Remove": "Removed",
"Replace": "Installed",
"Upgrade": "Installed",
"Replaced": "Removed",
}
changed = bool(transaction.get_transaction_packages())
for pkg in transaction.get_transaction_packages():
if self.download_only:
action = "Downloaded"
else:
action = libdnf5.base.transaction.transaction_item_action_to_string(pkg.get_action())
results.append("{}: {}".format(actions_compat_map.get(action, action), pkg.get_package().get_nevra()))
msg = ""
if self.module.check_mode:
if results:
msg = "Check mode: No changes made, but would have if not in check mode"
else:
transaction.download(self.download_dir or "")
if not self.download_only:
if not self.disable_gpg_check and not transaction.check_gpg_signatures():
self.module.fail_json(
msg="Failed to validate GPG signatures: {}".format(",".join(transaction.get_gpg_signature_problems())),
failures=[],
rc=1,
)
transaction.set_description("ansible dnf5 module")
result = transaction.run()
if result != libdnf5.base.Transaction.TransactionRunResult_SUCCESS:
self.module.fail_json(
msg="Failed to install some of the specified packages",
failures=["{}: {}".format(transaction.transaction_result_to_string(result), log) for log in transaction.get_transaction_problems()],
rc=1,
)
if not msg and not results:
msg = "Nothing to do"
self.module.exit_json(
results=results,
changed=changed,
msg=msg,
rc=0,
)
def main():
# Extend yumdnf_argument_spec with dnf-specific features that will never be
# backported to yum because yum is now in "maintenance mode" upstream
yumdnf_argument_spec["argument_spec"]["allowerasing"] = dict(default=False, type="bool")
yumdnf_argument_spec["argument_spec"]["nobest"] = dict(default=False, type="bool")
Dnf5Module(AnsibleModule(**yumdnf_argument_spec)).run()
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,126 |
More detailed description about how command module's argv parameters works
|
### Summary
Writing this following #79967, which I created because I thought I had encountered a bug, when in fact I didn't know some details about how the `argv` parameter works.
It would be great to have the information provided to me there in the documentation, in case anybody else encounters the same issue. This comment in particular was very helpful:
> precisely because you can use white space you need to separate what is not the same argument or the flag + argument are passed as a unit '--flag arg' vs '--flag' 'arg'
### Issue Type
Documentation Report
### Component Name
https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/command.py
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Linux
### Additional Information
When the improvement is applied, it makes it more straightforward to understand how the argv parameter works
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80126
|
https://github.com/ansible/ansible/pull/80933
|
79677c16f175e52a0dee9f2d366775e8ed0c8231
|
c069cf88debe9f1b5d306ee93db366325f4d16e1
| 2023-03-02T15:56:21Z |
python
| 2023-06-01T19:05:45Z |
lib/ansible/modules/command.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>, and others
# Copyright: (c) 2016, Toshio Kuratomi <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: command
short_description: Execute commands on targets
version_added: historical
description:
- The C(command) module takes the command name followed by a list of space-delimited arguments.
- The given command will be executed on all selected nodes.
- The command(s) will not be
processed through the shell, so variables like C($HOSTNAME) and operations
like C("*"), C("<"), C(">"), C("|"), C(";") and C("&") will not work.
Use the M(ansible.builtin.shell) module if you need these features.
- To create C(command) tasks that are easier to read than the ones using space-delimited
arguments, pass parameters using the C(args) L(task keyword,https://docs.ansible.com/ansible/latest/reference_appendices/playbooks_keywords.html#task)
or use C(cmd) parameter.
- Either a free form command or C(cmd) parameter is required, see the examples.
- For Windows targets, use the M(ansible.windows.win_command) module instead.
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.raw
attributes:
check_mode:
details: while the command itself is arbitrary and cannot be subject to the check mode semantics it adds C(creates)/C(removes) options as a workaround
support: partial
diff_mode:
support: none
platform:
support: full
platforms: posix
raw:
support: full
options:
expand_argument_vars:
description:
- Expands the arguments that are variables, for example C($HOME) will be expanded before being passed to the
command to run.
- Set to C(false) to disable expansion and treat the value as a literal argument.
type: bool
default: true
version_added: "2.16"
free_form:
description:
- The command module takes a free form string as a command to run.
- There is no actual parameter named 'free form'.
cmd:
type: str
description:
- The command to run.
argv:
type: list
elements: str
description:
- Passes the command as a list rather than a string.
- Use C(argv) to avoid quoting values that would otherwise be interpreted incorrectly (for example "user name").
- Each element of C(argv) is passed to the command as a single argument, so a flag and its value must be
  separate list items (C(--flag) and C(value)), not one item C(--flag value).
- Only the string (free form) or the list (argv) form can be provided, not both. One or the other must be provided.
version_added: "2.6"
creates:
type: path
description:
- A filename or (since 2.0) glob pattern. If a matching file already exists, this step B(will not) be run.
- This is checked before I(removes) is checked.
removes:
type: path
description:
- A filename or (since 2.0) glob pattern. If a matching file exists, this step B(will) be run.
- This is checked after I(creates) is checked.
version_added: "0.8"
chdir:
type: path
description:
- Change into this directory before running the command.
version_added: "0.6"
stdin:
description:
- Set the stdin of the command directly to the specified value.
type: str
version_added: "2.4"
stdin_add_newline:
type: bool
default: yes
description:
- If set to C(true), append a newline to stdin data.
version_added: "2.8"
strip_empty_ends:
description:
- Strip empty lines from the end of stdout/stderr in result.
version_added: "2.8"
type: bool
default: yes
notes:
- If you want to run a command through the shell (say you are using C(<), C(>), C(|), and so on),
you actually want the M(ansible.builtin.shell) module instead.
Parsing shell metacharacters can lead to unexpected commands being executed if quoting is not done correctly so it is more secure to
use the C(command) module when possible.
- C(creates), C(removes), and C(chdir) can be specified after the command.
For instance, if you only want to run a command if a certain file does not exist, use this.
- Check mode is supported when passing C(creates) or C(removes). If running in check mode and either of these are specified, the module will
check for the existence of the file and report the correct changed status. If these are not supplied, the task will be skipped.
- The C(executable) parameter is removed since version 2.4. If you have a need for this parameter, use the M(ansible.builtin.shell) module instead.
- For Windows targets, use the M(ansible.windows.win_command) module instead.
- For rebooting systems, use the M(ansible.builtin.reboot) or M(ansible.windows.win_reboot) module.
- If the command returns non UTF-8 data, it must be encoded to avoid issues. This may necessitate using M(ansible.builtin.shell) so the output
can be piped through C(base64).
seealso:
- module: ansible.builtin.raw
- module: ansible.builtin.script
- module: ansible.builtin.shell
- module: ansible.windows.win_command
author:
- Ansible Core Team
- Michael DeHaan
'''
EXAMPLES = r'''
- name: Return motd to registered var
ansible.builtin.command: cat /etc/motd
register: mymotd
# free-form (string) arguments, all arguments on one line
- name: Run command if /path/to/database does not exist (without 'args')
ansible.builtin.command: /usr/bin/make_database.sh db_user db_name creates=/path/to/database
# free-form (string) arguments, some arguments on separate lines with the 'args' keyword
# 'args' is a task keyword, passed at the same level as the module
- name: Run command if /path/to/database does not exist (with 'args' keyword)
ansible.builtin.command: /usr/bin/make_database.sh db_user db_name
args:
creates: /path/to/database
# 'cmd' is module parameter
- name: Run command if /path/to/database does not exist (with 'cmd' parameter)
ansible.builtin.command:
cmd: /usr/bin/make_database.sh db_user db_name
creates: /path/to/database
- name: Change the working directory to somedir/ and run the command as db_owner if /path/to/database does not exist
ansible.builtin.command: /usr/bin/make_database.sh db_user db_name
become: yes
become_user: db_owner
args:
chdir: somedir/
creates: /path/to/database
# argv (list) arguments, each argument on a separate line, 'args' keyword not necessary
# 'argv' is a parameter, indented one level from the module
- name: Use 'argv' to send a command as a list - leave 'command' empty
ansible.builtin.command:
argv:
- /usr/bin/make_database.sh
- Username with whitespace
- dbname with whitespace
creates: /path/to/database
- name: Safely use templated variable to run command. Always use the quote filter to avoid injection issues
ansible.builtin.command: cat {{ myfile|quote }}
register: myoutput
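# An extra, hedged example (not part of the original docs): disable variable
# expansion so "$HOME" reaches the command literally; uses the
# expand_argument_vars option documented above (added in 2.16).
- name: Pass $HOME through literally by disabling argument expansion
  ansible.builtin.command:
    cmd: echo $HOME
    expand_argument_vars: false
  register: literal_home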
'''
RETURN = r'''
msg:
description: changed
returned: always
type: bool
sample: True
start:
description: The command execution start time.
returned: always
type: str
sample: '2017-09-29 22:03:48.083128'
end:
description: The command execution end time.
returned: always
type: str
sample: '2017-09-29 22:03:48.084657'
delta:
description: The command execution delta time.
returned: always
type: str
sample: '0:00:00.001529'
stdout:
description: The command standard output.
returned: always
type: str
sample: 'Clustering node rabbit@slave1 with rabbit@master …'
stderr:
description: The command standard error.
returned: always
type: str
sample: 'ls cannot access foo: No such file or directory'
cmd:
description: The command executed by the task.
returned: always
type: list
sample:
- echo
- hello
rc:
description: The command return code (0 means success).
returned: always
type: int
sample: 0
stdout_lines:
description: The command standard output split in lines.
returned: always
type: list
sample: [u'Clustering node rabbit@slave1 with rabbit@master …']
stderr_lines:
description: The command standard error split in lines.
returned: always
type: list
sample: [u'ls cannot access foo: No such file or directory', u'ls …']
'''
import datetime
import glob
import os
import shlex
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.text.converters import to_native, to_bytes, to_text
from ansible.module_utils.common.collections import is_iterable
def main():
# the command module is the one ansible module that does not take key=value args
# hence don't copy this one if you are looking to build others!
# NOTE: ensure splitter.py is kept in sync for exceptions
module = AnsibleModule(
argument_spec=dict(
_raw_params=dict(),
_uses_shell=dict(type='bool', default=False),
argv=dict(type='list', elements='str'),
chdir=dict(type='path'),
executable=dict(),
expand_argument_vars=dict(type='bool', default=True),
creates=dict(type='path'),
removes=dict(type='path'),
# The default for this really comes from the action plugin
stdin=dict(required=False),
stdin_add_newline=dict(type='bool', default=True),
strip_empty_ends=dict(type='bool', default=True),
),
supports_check_mode=True,
)
shell = module.params['_uses_shell']
chdir = module.params['chdir']
executable = module.params['executable']
args = module.params['_raw_params']
argv = module.params['argv']
creates = module.params['creates']
removes = module.params['removes']
stdin = module.params['stdin']
stdin_add_newline = module.params['stdin_add_newline']
strip = module.params['strip_empty_ends']
expand_argument_vars = module.params['expand_argument_vars']
# we promised these in 'always' ( _lines get auto-added on action plugin)
r = {'changed': False, 'stdout': '', 'stderr': '', 'rc': None, 'cmd': None, 'start': None, 'end': None, 'delta': None, 'msg': ''}
if not shell and executable:
module.warn("As of Ansible 2.4, the parameter 'executable' is no longer supported with the 'command' module. Not using '%s'." % executable)
executable = None
if (not args or args.strip() == '') and not argv:
r['rc'] = 256
r['msg'] = "no command given"
module.fail_json(**r)
if args and argv:
r['rc'] = 256
r['msg'] = "only command or argv can be given, not both"
module.fail_json(**r)
if not shell and args:
args = shlex.split(args)
args = args or argv
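# args is now either the raw string (when _uses_shell is true) or a list
# of tokens (shlex-split free form, or argv passed through directly)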
# All args must be strings
if is_iterable(args, include_strings=False):
args = [to_native(arg, errors='surrogate_or_strict', nonstring='simplerepr') for arg in args]
r['cmd'] = args
if chdir:
chdir = to_bytes(chdir, errors='surrogate_or_strict')
try:
os.chdir(chdir)
except (IOError, OSError) as e:
r['msg'] = 'Unable to change directory before execution: %s' % to_text(e)
module.fail_json(**r)
# check_mode partial support, since it only really works in checking creates/removes
if module.check_mode:
shoulda = "Would"
else:
shoulda = "Did"
# special skips for idempotence if file exists (assumes command creates)
if creates:
if glob.glob(creates):
r['msg'] = "%s not run command since '%s' exists" % (shoulda, creates)
r['stdout'] = "skipped, since %s exists" % creates # TODO: deprecate
r['rc'] = 0
# special skips for idempotence if file does not exist (assumes command removes)
if not r['msg'] and removes:
if not glob.glob(removes):
r['msg'] = "%s not run command since '%s' does not exist" % (shoulda, removes)
r['stdout'] = "skipped, since %s does not exist" % removes # TODO: deprecate
r['rc'] = 0
if r['msg']:
module.exit_json(**r)
r['changed'] = True
# actually executes command (or not ...)
if not module.check_mode:
r['start'] = datetime.datetime.now()
r['rc'], r['stdout'], r['stderr'] = module.run_command(args, executable=executable, use_unsafe_shell=shell, encoding=None,
data=stdin, binary_data=(not stdin_add_newline),
expand_user_and_vars=expand_argument_vars)
r['end'] = datetime.datetime.now()
else:
# this is partial check_mode support, since we end up skipping if we get here
r['rc'] = 0
r['msg'] = "Command would have run if not in check mode"
if creates is None and removes is None:
r['skipped'] = True
# skipped=True and changed=True are mutually exclusive
r['changed'] = False
# convert to text for jsonization and usability
if r['start'] is not None and r['end'] is not None:
# these are datetime objects, but need them as strings to pass back
r['delta'] = to_text(r['end'] - r['start'])
r['end'] = to_text(r['end'])
r['start'] = to_text(r['start'])
if strip:
r['stdout'] = to_text(r['stdout']).rstrip("\r\n")
r['stderr'] = to_text(r['stderr']).rstrip("\r\n")
if r['rc'] != 0:
r['msg'] = 'non-zero return code'
module.fail_json(**r)
module.exit_json(**r)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,943 |
Ansible-galaxy cannot install subdir requirements when upgraded to 8.0.0
|
### Summary
When trying to install multiple collections from a local directory using `type: subdirs` in requirements.yml, `ansible-galaxy` throws a raw, unhandled Python exception.
The problem started occurring after updating ansible to `8.0.0`.
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu 22.04, Debian Bullseye
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```bash (paste below)
ansible-galaxy collection init collections.testa && ansible-galaxy collection init collections.testb \
&& echo """
collections:
- source: ./collections
type: subdirs
""" > requirements.yml && ansible-galaxy install -r requirements.yml
```
### Expected Results
Same as in previous versions: installation of all collections inside the directory, using the directory name as the namespace.
### Actual Results
```console
ERROR! Unexpected Exception, this is probably a bug: endswith first arg must be str or a tuple of str, not bytes
to see the full traceback, use -vvv
```
```pytb
Traceback (most recent call last):
File ".../ansible/lib/ansible/cli/__init__.py", line 659, in cli_executor
exit_code = cli.run()
^^^^^^^^^
File ".../ansible/bin/ansible-galaxy", line 715, in run
return context.CLIARGS['func']()
^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../ansible/bin/ansible-galaxy", line 117, in method_wrapper
return wrapped_method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../ansible/bin/ansible-galaxy", line 1369, in execute_install
self._execute_install_collection(
File ".../ansible/bin/ansible-galaxy", line 1409, in _execute_install_collection
install_collections(
File ".../ansible/lib/ansible/galaxy/collection/__init__.py", line 681, in install_collections
unsatisfied_requirements = set(
^^^^
File ".../ansible/lib/ansible/galaxy/collection/__init__.py", line 684, in <genexpr>
Requirement.from_dir_path(sub_coll, artifacts_manager)
File ".../ansible/lib/ansible/galaxy/dependency_resolution/dataclasses.py", line 221, in from_dir_path
if dir_path.endswith(to_bytes(os.path.sep)):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: endswith first arg must be str or a tuple of str, not bytes
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80943
|
https://github.com/ansible/ansible/pull/80949
|
c069cf88debe9f1b5d306ee93db366325f4d16e1
|
0982d5fa98e64d241249cfd6dd024e70ae20d0c3
| 2023-06-01T08:36:02Z |
python
| 2023-06-01T20:58:06Z |
changelogs/fragments/80943-ansible-galaxy-collection-subdir-install.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,943 |
Ansible-galaxy cannot install subdir requirements when upgraded to 8.0.0
|
### Summary
When trying to install multiple collections from a local directory using `type: subdirs` in requirements.yml, `ansible-galaxy` throws a raw, unhandled Python exception.
The problem started occurring after updating ansible to `8.0.0`.
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu 22.04, Debian Bullseye
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```bash (paste below)
ansible-galaxy collection init collections.testa && ansible-galaxy collection init collections.testb \
&& echo """
collections:
- source: ./collections
type: subdirs
""" > requirements.yml && ansible-galaxy install -r requirements.yml
```
### Expected Results
Same as in previous versions: installation of all collections inside the directory, using the directory name as the namespace.
### Actual Results
```console
ERROR! Unexpected Exception, this is probably a bug: endswith first arg must be str or a tuple of str, not bytes
to see the full traceback, use -vvv
```
```pytb
Traceback (most recent call last):
File ".../ansible/lib/ansible/cli/__init__.py", line 659, in cli_executor
exit_code = cli.run()
^^^^^^^^^
File ".../ansible/bin/ansible-galaxy", line 715, in run
return context.CLIARGS['func']()
^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../ansible/bin/ansible-galaxy", line 117, in method_wrapper
return wrapped_method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../ansible/bin/ansible-galaxy", line 1369, in execute_install
self._execute_install_collection(
File ".../ansible/bin/ansible-galaxy", line 1409, in _execute_install_collection
install_collections(
File ".../ansible/lib/ansible/galaxy/collection/__init__.py", line 681, in install_collections
unsatisfied_requirements = set(
^^^^
File ".../ansible/lib/ansible/galaxy/collection/__init__.py", line 684, in <genexpr>
Requirement.from_dir_path(sub_coll, artifacts_manager)
File ".../ansible/lib/ansible/galaxy/dependency_resolution/dataclasses.py", line 221, in from_dir_path
if dir_path.endswith(to_bytes(os.path.sep)):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: endswith first arg must be str or a tuple of str, not bytes
```
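The failure is a plain `str`/`bytes` mismatch: `from_dir_path` compares the incoming path against `to_bytes(os.path.sep)`, so a text path blows up in `endswith`. A minimal illustrative workaround (not necessarily the fix that landed; names follow the traceback above):
```python
from ansible.module_utils.common.text.converters import to_bytes

# Illustrative only: hand from_dir_path() a bytes path so its
# dir_path.endswith(to_bytes(os.path.sep)) comparison stays homogeneous.
b_sub_coll = to_bytes(sub_coll, errors='surrogate_or_strict')
requirement = Requirement.from_dir_path(b_sub_coll, artifacts_manager)
```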
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80943
|
https://github.com/ansible/ansible/pull/80949
|
c069cf88debe9f1b5d306ee93db366325f4d16e1
|
0982d5fa98e64d241249cfd6dd024e70ae20d0c3
| 2023-06-01T08:36:02Z |
python
| 2023-06-01T20:58:06Z |
lib/ansible/galaxy/collection/__init__.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Installed collections management package."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import errno
import fnmatch
import functools
import json
import os
import pathlib
import queue
import re
import shutil
import stat
import sys
import tarfile
import tempfile
import textwrap
import threading
import time
import typing as t
from collections import namedtuple
from contextlib import contextmanager
from dataclasses import dataclass, fields as dc_fields
from hashlib import sha256
from io import BytesIO
from importlib.metadata import distribution
from itertools import chain
try:
from packaging.requirements import Requirement as PkgReq
except ImportError:
class PkgReq: # type: ignore[no-redef]
pass
HAS_PACKAGING = False
else:
HAS_PACKAGING = True
try:
from distlib.manifest import Manifest # type: ignore[import]
from distlib import DistlibException # type: ignore[import]
except ImportError:
HAS_DISTLIB = False
else:
HAS_DISTLIB = True
if t.TYPE_CHECKING:
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
ManifestKeysType = t.Literal[
'collection_info', 'file_manifest_file', 'format',
]
FileMetaKeysType = t.Literal[
'name',
'ftype',
'chksum_type',
'chksum_sha256',
'format',
]
CollectionInfoKeysType = t.Literal[
# collection meta:
'namespace', 'name', 'version',
'authors', 'readme',
'tags', 'description',
'license', 'license_file',
'dependencies',
'repository', 'documentation',
'homepage', 'issues',
# files meta:
FileMetaKeysType,
]
ManifestValueType = t.Dict[CollectionInfoKeysType, t.Union[int, str, t.List[str], t.Dict[str, str], None]]
CollectionManifestType = t.Dict[ManifestKeysType, ManifestValueType]
FileManifestEntryType = t.Dict[FileMetaKeysType, t.Union[str, int, None]]
FilesManifestType = t.Dict[t.Literal['files', 'format'], t.Union[t.List[FileManifestEntryType], int]]
import ansible.constants as C
from ansible.compat.importlib_resources import files
from ansible.errors import AnsibleError
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection.concrete_artifact_manager import (
_consume_file,
_download_file,
_get_json_from_installed_dir,
_get_meta_from_src_dir,
_tarfile_extract,
)
from ansible.galaxy.collection.galaxy_api_proxy import MultiGalaxyAPIProxy
from ansible.galaxy.collection.gpg import (
run_gpg_verify,
parse_gpg_errors,
get_signature_from_source,
GPG_ERROR_MAP,
)
try:
from ansible.galaxy.dependency_resolution import (
build_collection_dependency_resolver,
)
from ansible.galaxy.dependency_resolution.errors import (
CollectionDependencyResolutionImpossible,
CollectionDependencyInconsistentCandidate,
)
from ansible.galaxy.dependency_resolution.providers import (
RESOLVELIB_VERSION,
RESOLVELIB_LOWERBOUND,
RESOLVELIB_UPPERBOUND,
)
except ImportError:
HAS_RESOLVELIB = False
else:
HAS_RESOLVELIB = True
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate, Requirement, _is_installed_collection_dir,
)
from ansible.galaxy.dependency_resolution.versioning import meets_requirements
from ansible.plugins.loader import get_all_plugin_loaders
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.common.yaml import yaml_dump
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash, secure_hash_s
from ansible.utils.sentinel import Sentinel
display = Display()
MANIFEST_FORMAT = 1
MANIFEST_FILENAME = 'MANIFEST.json'
ModifiedContent = namedtuple('ModifiedContent', ['filename', 'expected', 'installed'])
SIGNATURE_COUNT_RE = r"^(?P<strict>\+)?(?:(?P<count>\d+)|(?P<all>all))$"
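# Accepted forms for the required signature count (summarizing the regex
# above, as consumed by verify_file_signatures() below):
#   "1", "2", ... - that many signatures must verify successfully
#   "all"         - every provided signature must verify
#   "+1", "+all"  - strict variants that also fail when no signature verifies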
@dataclass
class ManifestControl:
directives: list[str] = None
omit_default_directives: bool = False
def __post_init__(self):
# Allow a dict representing this dataclass to be splatted directly.
# Requires attrs to have a default value, so anything with a default
# of None is swapped for its, potentially mutable, default
for field in dc_fields(self):
if getattr(self, field.name) is None:
super().__setattr__(field.name, field.type())
class CollectionSignatureError(Exception):
def __init__(self, reasons=None, stdout=None, rc=None, ignore=False):
self.reasons = reasons
self.stdout = stdout
self.rc = rc
self.ignore = ignore
self._reason_wrapper = None
def _report_unexpected(self, collection_name):
return (
f"Unexpected error for '{collection_name}': "
f"GnuPG signature verification failed with the return code {self.rc} and output {self.stdout}"
)
def _report_expected(self, collection_name):
header = f"Signature verification failed for '{collection_name}' (return code {self.rc}):"
return header + self._format_reasons()
def _format_reasons(self):
if self._reason_wrapper is None:
self._reason_wrapper = textwrap.TextWrapper(
initial_indent=" * ", # 6 chars
subsequent_indent=" ", # 6 chars
)
wrapped_reasons = [
'\n'.join(self._reason_wrapper.wrap(reason))
for reason in self.reasons
]
return '\n' + '\n'.join(wrapped_reasons)
def report(self, collection_name):
if self.reasons:
return self._report_expected(collection_name)
return self._report_unexpected(collection_name)
# FUTURE: expose actual verify result details for a collection on this object, maybe reimplement as dataclass on py3.8+
class CollectionVerifyResult:
def __init__(self, collection_name): # type: (str) -> None
self.collection_name = collection_name # type: str
self.success = True # type: bool
def verify_local_collection(local_collection, remote_collection, artifacts_manager):
# type: (Candidate, t.Optional[Candidate], ConcreteArtifactsManager) -> CollectionVerifyResult
"""Verify integrity of the locally installed collection.
:param local_collection: Collection being checked.
:param remote_collection: Upstream collection (optional, if None, only verify local artifact)
:param artifacts_manager: Artifacts manager.
:return: a collection verify result object.
"""
result = CollectionVerifyResult(local_collection.fqcn)
b_collection_path = to_bytes(local_collection.src, errors='surrogate_or_strict')
display.display("Verifying '{coll!s}'.".format(coll=local_collection))
display.display(
u"Installed collection found at '{path!s}'".
format(path=to_text(local_collection.src)),
)
modified_content = [] # type: list[ModifiedContent]
verify_local_only = remote_collection is None
# partial away the local FS detail so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_installed_dir, b_collection_path)
get_hash_from_validation_source = functools.partial(_get_file_hash, b_collection_path)
if not verify_local_only:
# Compare installed version versus requirement version
if local_collection.ver != remote_collection.ver:
err = (
"{local_fqcn!s} has the version '{local_ver!s}' but "
"is being compared to '{remote_ver!s}'".format(
local_fqcn=local_collection.fqcn,
local_ver=local_collection.ver,
remote_ver=remote_collection.ver,
)
)
display.display(err)
result.success = False
return result
manifest_file = os.path.join(to_text(b_collection_path, errors='surrogate_or_strict'), MANIFEST_FILENAME)
signatures = list(local_collection.signatures)
if verify_local_only and local_collection.source_info is not None:
signatures = [info["signature"] for info in local_collection.source_info["signatures"]] + signatures
elif not verify_local_only and remote_collection.signatures:
signatures = list(remote_collection.signatures) + signatures
keyring_configured = artifacts_manager.keyring is not None
if not keyring_configured and signatures:
display.warning(
"The GnuPG keyring used for collection signature "
"verification was not configured but signatures were "
"provided by the Galaxy server. "
"Configure a keyring for ansible-galaxy to verify "
"the origin of the collection. "
"Skipping signature verification."
)
elif keyring_configured:
if not verify_file_signatures(
local_collection.fqcn,
manifest_file,
signatures,
artifacts_manager.keyring,
artifacts_manager.required_successful_signature_count,
artifacts_manager.ignore_signature_errors,
):
result.success = False
return result
display.vvvv(f"GnuPG signature verification succeeded, verifying contents of {local_collection}")
if verify_local_only:
# since we're not downloading this, just seed it with the value from disk
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
elif keyring_configured and remote_collection.signatures:
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
else:
# fetch remote
b_temp_tar_path = ( # NOTE: AnsibleError is raised on URLError
artifacts_manager.get_artifact_path
if remote_collection.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(remote_collection)
display.vvv(
u"Remote collection cached as '{path!s}'".format(path=to_text(b_temp_tar_path))
)
# partial away the tarball details so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_tar_file, b_temp_tar_path)
get_hash_from_validation_source = functools.partial(_get_tar_file_hash, b_temp_tar_path)
# Verify the downloaded manifest hash matches the installed copy before verifying the file manifest
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
_verify_file_hash(b_collection_path, MANIFEST_FILENAME, manifest_hash, modified_content)
display.display('MANIFEST.json hash: {manifest_hash}'.format(manifest_hash=manifest_hash))
manifest = get_json_from_validation_source(MANIFEST_FILENAME)
# Use the manifest to verify the file manifest checksum
file_manifest_data = manifest['file_manifest_file']
file_manifest_filename = file_manifest_data['name']
expected_hash = file_manifest_data['chksum_%s' % file_manifest_data['chksum_type']]
# Verify the file manifest before using it to verify individual files
_verify_file_hash(b_collection_path, file_manifest_filename, expected_hash, modified_content)
file_manifest = get_json_from_validation_source(file_manifest_filename)
collection_dirs = set()
collection_files = {
os.path.join(b_collection_path, b'MANIFEST.json'),
os.path.join(b_collection_path, b'FILES.json'),
}
# Use the file manifest to verify individual file checksums
for manifest_data in file_manifest['files']:
name = manifest_data['name']
if manifest_data['ftype'] == 'file':
collection_files.add(
os.path.join(b_collection_path, to_bytes(name, errors='surrogate_or_strict'))
)
expected_hash = manifest_data['chksum_%s' % manifest_data['chksum_type']]
_verify_file_hash(b_collection_path, name, expected_hash, modified_content)
if manifest_data['ftype'] == 'dir':
collection_dirs.add(
os.path.join(b_collection_path, to_bytes(name, errors='surrogate_or_strict'))
)
# Find any paths not in the FILES.json
for root, dirs, files in os.walk(b_collection_path):
for name in files:
full_path = os.path.join(root, name)
path = to_text(full_path[len(b_collection_path) + 1::], errors='surrogate_or_strict')
if full_path not in collection_files:
modified_content.append(
ModifiedContent(filename=path, expected='the file does not exist', installed='the file exists')
)
for name in dirs:
full_path = os.path.join(root, name)
path = to_text(full_path[len(b_collection_path) + 1::], errors='surrogate_or_strict')
if full_path not in collection_dirs:
modified_content.append(
ModifiedContent(filename=path, expected='the directory does not exist', installed='the directory exists')
)
if modified_content:
result.success = False
display.display(
'Collection {fqcn!s} contains modified content '
'in the following files:'.
format(fqcn=to_text(local_collection.fqcn)),
)
for content_change in modified_content:
display.display(' %s' % content_change.filename)
display.v(" Expected: %s\n Found: %s" % (content_change.expected, content_change.installed))
else:
what = "are internally consistent with its manifest" if verify_local_only else "match the remote collection"
display.display(
"Successfully verified that checksums for '{coll!s}' {what!s}.".
format(coll=local_collection, what=what),
)
return result
def verify_file_signatures(fqcn, manifest_file, detached_signatures, keyring, required_successful_count, ignore_signature_errors):
# type: (str, str, list[str], str, str, list[str]) -> bool
successful = 0
error_messages = []
signature_count_requirements = re.match(SIGNATURE_COUNT_RE, required_successful_count).groupdict()
strict = signature_count_requirements['strict'] or False
require_all = signature_count_requirements['all']
require_count = signature_count_requirements['count']
if require_count is not None:
require_count = int(require_count)
for signature in detached_signatures:
signature = to_text(signature, errors='surrogate_or_strict')
try:
verify_file_signature(manifest_file, signature, keyring, ignore_signature_errors)
except CollectionSignatureError as error:
if error.ignore:
# Do not include ignored errors in either the failed or successful count
continue
error_messages.append(error.report(fqcn))
else:
successful += 1
if require_all:
continue
if successful == require_count:
break
if strict and not successful:
verified = False
display.display(f"Signature verification failed for '{fqcn}': no successful signatures")
elif require_all:
verified = not error_messages
if not verified:
display.display(f"Signature verification failed for '{fqcn}': some signatures failed")
else:
verified = not detached_signatures or require_count == successful
if not verified:
display.display(f"Signature verification failed for '{fqcn}': fewer successful signatures than required")
if not verified:
for msg in error_messages:
display.vvvv(msg)
return verified
def verify_file_signature(manifest_file, detached_signature, keyring, ignore_signature_errors):
# type: (str, str, str, list[str]) -> None
"""Run the gpg command and parse any errors. Raises CollectionSignatureError on failure."""
gpg_result, gpg_verification_rc = run_gpg_verify(manifest_file, detached_signature, keyring, display)
if gpg_result:
errors = parse_gpg_errors(gpg_result)
try:
error = next(errors)
except StopIteration:
pass
else:
reasons = []
ignored_reasons = 0
for error in chain([error], errors):
# Get error status (dict key) from the class (dict value)
status_code = list(GPG_ERROR_MAP.keys())[list(GPG_ERROR_MAP.values()).index(error.__class__)]
if status_code in ignore_signature_errors:
ignored_reasons += 1
reasons.append(error.get_gpg_error_description())
ignore = len(reasons) == ignored_reasons
raise CollectionSignatureError(reasons=set(reasons), stdout=gpg_result, rc=gpg_verification_rc, ignore=ignore)
if gpg_verification_rc:
raise CollectionSignatureError(stdout=gpg_result, rc=gpg_verification_rc)
# No errors and rc is 0, verify was successful
return None
def build_collection(u_collection_path, u_output_path, force):
# type: (str, str, bool) -> str
"""Creates the Ansible collection artifact in a .tar.gz file.
:param u_collection_path: The path to the collection to build. This should be the directory that contains the
galaxy.yml file.
:param u_output_path: The path to create the collection build artifact. This should be a directory.
:param force: Whether to overwrite an existing collection build artifact or fail.
:return: The path to the collection build artifact.
"""
b_collection_path = to_bytes(u_collection_path, errors='surrogate_or_strict')
try:
collection_meta = _get_meta_from_src_dir(b_collection_path)
except LookupError as lookup_err:
raise AnsibleError(to_native(lookup_err)) from lookup_err
collection_manifest = _build_manifest(**collection_meta)
file_manifest = _build_files_manifest(
b_collection_path,
collection_meta['namespace'], # type: ignore[arg-type]
collection_meta['name'], # type: ignore[arg-type]
collection_meta['build_ignore'], # type: ignore[arg-type]
collection_meta['manifest'], # type: ignore[arg-type]
collection_meta['license_file'], # type: ignore[arg-type]
)
artifact_tarball_file_name = '{ns!s}-{name!s}-{ver!s}.tar.gz'.format(
name=collection_meta['name'],
ns=collection_meta['namespace'],
ver=collection_meta['version'],
)
b_collection_output = os.path.join(
to_bytes(u_output_path),
to_bytes(artifact_tarball_file_name, errors='surrogate_or_strict'),
)
if os.path.exists(b_collection_output):
if os.path.isdir(b_collection_output):
raise AnsibleError("The output collection artifact '%s' already exists, "
"but is a directory - aborting" % to_native(b_collection_output))
elif not force:
raise AnsibleError("The file '%s' already exists. You can use --force to re-create "
"the collection artifact." % to_native(b_collection_output))
collection_output = _build_collection_tar(b_collection_path, b_collection_output, collection_manifest, file_manifest)
return collection_output
def download_collections(
collections, # type: t.Iterable[Requirement]
output_path, # type: str
apis, # type: t.Iterable[GalaxyAPI]
no_deps, # type: bool
allow_pre_release, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> None
"""Download Ansible collections as their tarball from a Galaxy server to the path specified and creates a requirements
file of the downloaded requirements to be used for an install.
:param collections: The collections to download, should be a list of tuples with (name, requirement, Galaxy Server).
:param output_path: The path to download the collections to.
:param apis: A list of GalaxyAPIs to query when search for a collection.
:param validate_certs: Whether to validate the certificate if downloading a tarball from a non-Galaxy host.
:param no_deps: Ignore any collection dependencies and only download the base requirements.
:param allow_pre_release: Do not ignore pre-release versions when selecting the latest.
"""
with _display_progress("Process download dependency map"):
dep_map = _resolve_depenency_map(
set(collections),
galaxy_apis=apis,
preferred_candidates=None,
concrete_artifacts_manager=artifacts_manager,
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=False,
# Avoid overhead getting signatures since they are not currently applicable to downloaded collections
include_signatures=False,
offline=False,
)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
requirements = []
with _display_progress(
"Starting collection download process to '{path!s}'".
format(path=output_path),
):
for fqcn, concrete_coll_pin in dep_map.copy().items(): # FIXME: move into the provider
if concrete_coll_pin.is_virtual:
display.display(
'Virtual collection {coll!s} is not downloadable'.
format(coll=to_text(concrete_coll_pin)),
)
continue
display.display(
u"Downloading collection '{coll!s}' to '{path!s}'".
format(coll=to_text(concrete_coll_pin), path=to_text(b_output_path)),
)
b_src_path = (
artifacts_manager.get_artifact_path
if concrete_coll_pin.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(concrete_coll_pin)
b_dest_path = os.path.join(
b_output_path,
os.path.basename(b_src_path),
)
if concrete_coll_pin.is_dir:
b_dest_path = to_bytes(
build_collection(
to_text(b_src_path, errors='surrogate_or_strict'),
to_text(output_path, errors='surrogate_or_strict'),
force=True,
),
errors='surrogate_or_strict',
)
else:
shutil.copy(to_native(b_src_path), to_native(b_dest_path))
display.display(
"Collection '{coll!s}' was downloaded successfully".
format(coll=concrete_coll_pin),
)
requirements.append({
# FIXME: Consider using a more specific upgraded format
# FIXME: having FQCN in the name field, with src field
# FIXME: pointing to the file path, and explicitly set
# FIXME: type. If version and name are set, it'd
# FIXME: perform validation against the actual metadata
# FIXME: in the artifact src points at.
'name': to_native(os.path.basename(b_dest_path)),
'version': concrete_coll_pin.ver,
})
requirements_path = os.path.join(output_path, 'requirements.yml')
b_requirements_path = to_bytes(
requirements_path, errors='surrogate_or_strict',
)
display.display(
u'Writing requirements.yml file of downloaded collections '
"to '{path!s}'".format(path=to_text(requirements_path)),
)
yaml_bytes = to_bytes(
yaml_dump({'collections': requirements}),
errors='surrogate_or_strict',
)
with open(b_requirements_path, mode='wb') as req_fd:
req_fd.write(yaml_bytes)
def publish_collection(collection_path, api, wait, timeout):
"""Publish an Ansible collection tarball into an Ansible Galaxy server.
:param collection_path: The path to the collection tarball to publish.
:param api: A GalaxyAPI to publish the collection to.
:param wait: Whether to wait until the import process is complete.
:param timeout: The time in seconds to wait for the import process to finish, 0 is indefinite.
"""
import_uri = api.publish_collection(collection_path)
if wait:
# Galaxy returns a url fragment which differs between v2 and v3. The second to last entry is
# always the task_id, though.
# v2: {"task": "https://galaxy-dev.ansible.com/api/v2/collection-imports/35573/"}
# v3: {"task": "/api/automation-hub/v3/imports/collections/838d1308-a8f4-402c-95cb-7823f3806cd8/"}
task_id = None
for path_segment in reversed(import_uri.split('/')):
if path_segment:
task_id = path_segment
break
if not task_id:
raise AnsibleError("Publishing the collection did not return valid task info. Cannot wait for task status. Returned task info: '%s'" % import_uri)
with _display_progress(
"Collection has been published to the Galaxy server "
"{api.name!s} {api.api_server!s}".format(api=api),
):
api.wait_import_task(task_id, timeout)
display.display("Collection has been successfully published and imported to the Galaxy server %s %s"
% (api.name, api.api_server))
else:
display.display("Collection has been pushed to the Galaxy server %s %s, not waiting until import has "
"completed due to --no-wait being set. Import task results can be found at %s"
% (api.name, api.api_server, import_uri))
def install_collections(
collections, # type: t.Iterable[Requirement]
output_path, # type: str
apis, # type: t.Iterable[GalaxyAPI]
ignore_errors, # type: bool
no_deps, # type: bool
force, # type: bool
force_deps, # type: bool
upgrade, # type: bool
allow_pre_release, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
disable_gpg_verify, # type: bool
offline, # type: bool
): # type: (...) -> None
"""Install Ansible collections to the path specified.
:param collections: The collections to install.
:param output_path: The path to install the collections to.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param ignore_errors: Whether to ignore any errors when installing the collection.
:param no_deps: Ignore any collection dependencies and only install the base requirements.
:param force: Re-install a collection if it has already been installed.
:param force_deps: Re-install a collection as well as its dependencies if they have already been installed.
"""
existing_collections = {
Requirement(coll.fqcn, coll.ver, coll.src, coll.type, None)
for coll in find_existing_collections(output_path, artifacts_manager)
}
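# Expand each 'subdirs' requirement into one Requirement per collection
# found beneath it; every other requirement passes through unchanged.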
unsatisfied_requirements = set(
chain.from_iterable(
(
Requirement.from_dir_path(sub_coll, artifacts_manager)
for sub_coll in (
artifacts_manager.
get_direct_collection_dependencies(install_req).
keys()
)
)
if install_req.is_subdirs else (install_req, )
for install_req in collections
),
)
requested_requirements_names = {req.fqcn for req in unsatisfied_requirements}
# NOTE: Don't attempt to reevaluate already installed deps
# NOTE: unless `--force` or `--force-with-deps` is passed
unsatisfied_requirements -= set() if force or force_deps else {
req
for req in unsatisfied_requirements
for exs in existing_collections
if req.fqcn == exs.fqcn and meets_requirements(exs.ver, req.ver)
}
if not unsatisfied_requirements and not upgrade:
display.display(
'Nothing to do. All requested collections are already '
'installed. If you want to reinstall them, '
'consider using `--force`.'
)
return
# FIXME: This probably needs to be improved to
# FIXME: properly match differing src/type.
existing_non_requested_collections = {
coll for coll in existing_collections
if coll.fqcn not in requested_requirements_names
}
preferred_requirements = (
[] if force_deps
else existing_non_requested_collections if force
else existing_collections
)
preferred_collections = {
# NOTE: No need to include signatures if the collection is already installed
Candidate(coll.fqcn, coll.ver, coll.src, coll.type, None)
for coll in preferred_requirements
}
with _display_progress("Process install dependency map"):
dependency_map = _resolve_depenency_map(
collections,
galaxy_apis=apis,
preferred_candidates=preferred_collections,
concrete_artifacts_manager=artifacts_manager,
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=upgrade,
include_signatures=not disable_gpg_verify,
offline=offline,
)
keyring_exists = artifacts_manager.keyring is not None
with _display_progress("Starting collection install process"):
for fqcn, concrete_coll_pin in dependency_map.items():
if concrete_coll_pin.is_virtual:
display.vvvv(
"'{coll!s}' is virtual, skipping.".
format(coll=to_text(concrete_coll_pin)),
)
continue
if concrete_coll_pin in preferred_collections:
display.display(
"'{coll!s}' is already installed, skipping.".
format(coll=to_text(concrete_coll_pin)),
)
continue
if not disable_gpg_verify and concrete_coll_pin.signatures and not keyring_exists:
# Duplicate warning msgs are not displayed
display.warning(
"The GnuPG keyring used for collection signature "
"verification was not configured but signatures were "
"provided by the Galaxy server to verify authenticity. "
"Configure a keyring for ansible-galaxy to use "
"or disable signature verification. "
"Skipping signature verification."
)
if concrete_coll_pin.type == 'galaxy':
concrete_coll_pin = concrete_coll_pin.with_signatures_repopulated()
try:
install(concrete_coll_pin, output_path, artifacts_manager)
except AnsibleError as err:
if ignore_errors:
display.warning(
'Failed to install collection {coll!s} but skipping '
'due to --ignore-errors being set. Error: {error!s}'.
format(
coll=to_text(concrete_coll_pin),
error=to_text(err),
)
)
else:
raise
# NOTE: imported in ansible.cli.galaxy
def validate_collection_name(name): # type: (str) -> str
"""Validates the collection name as an input from the user or a requirements file fit the requirements.
:param name: The input name with optional range specifier split by ':'.
:return: The input value, required for argparse validation.
"""
collection, dummy, dummy = name.partition(':')
if AnsibleCollectionRef.is_valid_collection_name(collection):
return name
raise AnsibleError("Invalid collection name '%s', "
"name must be in the format <namespace>.<collection>. \n"
"Please make sure namespace and collection name contains "
"characters from [a-zA-Z0-9_] only." % name)
# NOTE: imported in ansible.cli.galaxy
def validate_collection_path(collection_path): # type: (str) -> str
"""Ensure a given path ends with 'ansible_collections'
:param collection_path: The path that should end in 'ansible_collections'
:return: collection_path ending in 'ansible_collections' if it does not already.
"""
if os.path.split(collection_path)[1] != 'ansible_collections':
return os.path.join(collection_path, 'ansible_collections')
return collection_path
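# e.g. '/usr/share/ansible/collections' becomes
# '/usr/share/ansible/collections/ansible_collections'; a path already
# ending in 'ansible_collections' is returned unchanged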
def verify_collections(
collections, # type: t.Iterable[Requirement]
search_paths, # type: t.Iterable[str]
apis, # type: t.Iterable[GalaxyAPI]
ignore_errors, # type: bool
local_verify_only, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> list[CollectionVerifyResult]
r"""Verify the integrity of locally installed collections.
:param collections: The collections to check.
:param search_paths: Locations for the local collection lookup.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param ignore_errors: Whether to ignore any errors when verifying the collection.
:param local_verify_only: When True, skip downloads and only verify local manifests.
:param artifacts_manager: Artifacts manager.
:return: list of CollectionVerifyResult objects describing the results of each collection verification
"""
results = [] # type: list[CollectionVerifyResult]
api_proxy = MultiGalaxyAPIProxy(apis, artifacts_manager)
with _display_progress():
for collection in collections:
try:
if collection.is_concrete_artifact:
raise AnsibleError(
message="'{coll_type!s}' type is not supported. "
'The format namespace.name is expected.'.
format(coll_type=collection.type)
)
# NOTE: Verify local collection exists before
# NOTE: downloading its source artifact from
# NOTE: a galaxy server.
default_err = 'Collection %s is not installed in any of the collection paths.' % collection.fqcn
for search_path in search_paths:
b_search_path = to_bytes(
os.path.join(
search_path,
collection.namespace, collection.name,
),
errors='surrogate_or_strict',
)
if not os.path.isdir(b_search_path):
continue
if not _is_installed_collection_dir(b_search_path):
default_err = (
"Collection %s does not have a MANIFEST.json. "
"A MANIFEST.json is expected if the collection has been built "
"and installed via ansible-galaxy" % collection.fqcn
)
continue
local_collection = Candidate.from_dir_path(
b_search_path, artifacts_manager,
)
supplemental_signatures = [
get_signature_from_source(source, display)
for source in collection.signature_sources or []
]
local_collection = Candidate(
local_collection.fqcn,
local_collection.ver,
local_collection.src,
local_collection.type,
signatures=frozenset(supplemental_signatures),
)
break
else:
raise AnsibleError(message=default_err)
if local_verify_only:
remote_collection = None
else:
signatures = api_proxy.get_signatures(local_collection)
signatures.extend([
get_signature_from_source(source, display)
for source in collection.signature_sources or []
])
remote_collection = Candidate(
collection.fqcn,
collection.ver if collection.ver != '*'
else local_collection.ver,
None, 'galaxy',
frozenset(signatures),
)
# Download collection on a galaxy server for comparison
try:
# NOTE: If there are no signatures, trigger the lookup. If found,
# NOTE: it'll cache download URL and token in artifact manager.
# NOTE: If there are no Galaxy server signatures, only user-provided signature URLs,
# NOTE: those alone validate the MANIFEST.json and the remote collection is not downloaded.
# NOTE: The remote MANIFEST.json is only used in verification if there are no signatures.
if artifacts_manager.keyring is None or not signatures:
api_proxy.get_collection_version_metadata(
remote_collection,
)
except AnsibleError as e: # FIXME: does this actually emit any errors?
# FIXME: extract the actual message and adjust this:
expected_error_msg = (
'Failed to find collection {coll.fqcn!s}:{coll.ver!s}'.
format(coll=collection)
)
if e.message == expected_error_msg:
raise AnsibleError(
'Failed to find remote collection '
"'{coll!s}' on any of the galaxy servers".
format(coll=collection)
)
raise
result = verify_local_collection(local_collection, remote_collection, artifacts_manager)
results.append(result)
except AnsibleError as err:
if ignore_errors:
display.warning(
"Failed to verify collection '{coll!s}' but skipping "
'due to --ignore-errors being set. '
'Error: {err!s}'.
format(coll=collection, err=to_text(err)),
)
else:
raise
return results
@contextmanager
def _tempdir():
b_temp_path = tempfile.mkdtemp(dir=to_bytes(C.DEFAULT_LOCAL_TMP, errors='surrogate_or_strict'))
try:
yield b_temp_path
finally:
shutil.rmtree(b_temp_path)
@contextmanager
def _display_progress(msg=None):
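# While the spinner is active, the global display is swapped for a proxy
# (DisplayThread, defined below) that puts calls on a queue; the worker
# thread drains that queue so spinner characters and regular messages do
# not interleave on stdout.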
config_display = C.GALAXY_DISPLAY_PROGRESS
display_wheel = sys.stdout.isatty() if config_display is None else config_display
global display
if msg is not None:
display.display(msg)
if not display_wheel:
yield
return
def progress(display_queue, actual_display):
actual_display.debug("Starting display_progress display thread")
t = threading.current_thread()
while True:
for c in "|/-\\":
actual_display.display(c + "\b", newline=False)
time.sleep(0.1)
# Display a message from the main thread
while True:
try:
method, args, kwargs = display_queue.get(block=False, timeout=0.1)
except queue.Empty:
break
else:
func = getattr(actual_display, method)
func(*args, **kwargs)
if getattr(t, "finish", False):
actual_display.debug("Received end signal for display_progress display thread")
return
class DisplayThread(object):
def __init__(self, display_queue):
self.display_queue = display_queue
def __getattr__(self, attr):
def call_display(*args, **kwargs):
self.display_queue.put((attr, args, kwargs))
return call_display
# Temporarily override the global display class with our own, which adds the calls to a queue for the thread to process.
old_display = display
try:
display_queue = queue.Queue()
display = DisplayThread(display_queue)
t = threading.Thread(target=progress, args=(display_queue, old_display))
t.daemon = True
t.start()
try:
yield
finally:
t.finish = True
t.join()
except Exception:
# The exception is re-raised so we can be sure the thread is finished and not using the display anymore
raise
finally:
display = old_display
def _verify_file_hash(b_path, filename, expected_hash, error_queue):
b_file_path = to_bytes(os.path.join(to_text(b_path), filename), errors='surrogate_or_strict')
if not os.path.isfile(b_file_path):
actual_hash = None
else:
with open(b_file_path, mode='rb') as file_object:
actual_hash = _consume_file(file_object)
if expected_hash != actual_hash:
error_queue.append(ModifiedContent(filename=filename, expected=expected_hash, installed=actual_hash))
def _make_manifest():
return {
'files': [
{
'name': '.',
'ftype': 'dir',
'chksum_type': None,
'chksum_sha256': None,
'format': MANIFEST_FORMAT,
},
],
'format': MANIFEST_FORMAT,
}
def _make_entry(name, ftype, chksum_type='sha256', chksum=None):
return {
'name': name,
'ftype': ftype,
'chksum_type': chksum_type if chksum else None,
f'chksum_{chksum_type}': chksum,
'format': MANIFEST_FORMAT
}
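# Illustrative result for a regular file entry (with MANIFEST_FORMAT == 1;
# the file name here is hypothetical):
#   {'name': 'plugins/modules/foo.py', 'ftype': 'file',
#    'chksum_type': 'sha256', 'chksum_sha256': '<hex digest>', 'format': 1}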
def _build_files_manifest(b_collection_path, namespace, name, ignore_patterns,
manifest_control, license_file):
# type: (bytes, str, str, list[str], dict[str, t.Any], t.Optional[str]) -> FilesManifestType
if ignore_patterns and manifest_control is not Sentinel:
raise AnsibleError('"build_ignore" and "manifest" are mutually exclusive')
if manifest_control is not Sentinel:
return _build_files_manifest_distlib(
b_collection_path,
namespace,
name,
manifest_control,
license_file,
)
return _build_files_manifest_walk(b_collection_path, namespace, name, ignore_patterns)
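# For reference, a hedged sketch of the galaxy.yml shape this consumes
# (inferred from the arguments above; the exact schema may differ):
#
#   manifest:
#     directives:
#       - include bindep.txt
#     omit_default_directives: false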
def _build_files_manifest_distlib(b_collection_path, namespace, name, manifest_control,
license_file):
# type: (bytes, str, str, dict[str, t.Any], t.Optional[str]) -> FilesManifestType
if not HAS_DISTLIB:
raise AnsibleError('Use of "manifest" requires the python "distlib" library')
if manifest_control is None:
manifest_control = {}
try:
control = ManifestControl(**manifest_control)
except TypeError as ex:
raise AnsibleError(f'Invalid "manifest" provided: {ex}')
if not is_sequence(control.directives):
raise AnsibleError(f'"manifest.directives" must be a list, got: {control.directives.__class__.__name__}')
if not isinstance(control.omit_default_directives, bool):
raise AnsibleError(
'"manifest.omit_default_directives" is expected to be a boolean, got: '
f'{control.omit_default_directives.__class__.__name__}'
)
if control.omit_default_directives and not control.directives:
raise AnsibleError(
'"manifest.omit_default_directives" was set to True, but no directives were defined '
'in "manifest.directives". This would produce an empty collection artifact.'
)
directives = []
if control.omit_default_directives:
directives.extend(control.directives)
else:
directives.extend([
'include meta/*.yml',
'include *.txt *.md *.rst *.license COPYING LICENSE',
'recursive-include .reuse **',
'recursive-include LICENSES **',
'recursive-include tests **',
'recursive-include docs **.rst **.yml **.yaml **.json **.j2 **.txt **.license',
'recursive-include roles **.yml **.yaml **.json **.j2 **.license',
'recursive-include playbooks **.yml **.yaml **.json **.license',
'recursive-include changelogs **.yml **.yaml **.license',
'recursive-include plugins */**.py */**.license',
])
if license_file:
directives.append(f'include {license_file}')
plugins = set(l.package.split('.')[-1] for d, l in get_all_plugin_loaders())
for plugin in sorted(plugins):
if plugin in ('modules', 'module_utils'):
continue
elif plugin in C.DOCUMENTABLE_PLUGINS:
directives.append(
f'recursive-include plugins/{plugin} **.yml **.yaml'
)
directives.extend([
'recursive-include plugins/modules **.ps1 **.yml **.yaml **.license',
'recursive-include plugins/module_utils **.ps1 **.psm1 **.cs **.license',
])
directives.extend(control.directives)
directives.extend([
f'exclude galaxy.yml galaxy.yaml MANIFEST.json FILES.json {namespace}-{name}-*.tar.gz',
'recursive-exclude tests/output **',
'global-exclude /.* /__pycache__ *.pyc *.pyo *.bak *~ *.swp',
])
display.vvv('Manifest Directives:')
display.vvv(textwrap.indent('\n'.join(directives), ' '))
u_collection_path = to_text(b_collection_path, errors='surrogate_or_strict')
m = Manifest(u_collection_path)
for directive in directives:
try:
m.process_directive(directive)
except DistlibException as e:
raise AnsibleError(f'Invalid manifest directive: {e}')
except Exception as e:
raise AnsibleError(f'Unknown error processing manifest directive: {e}')
manifest = _make_manifest()
for abs_path in m.sorted(wantdirs=True):
rel_path = os.path.relpath(abs_path, u_collection_path)
if os.path.isdir(abs_path):
manifest_entry = _make_entry(rel_path, 'dir')
else:
manifest_entry = _make_entry(
rel_path,
'file',
chksum_type='sha256',
chksum=secure_hash(abs_path, hash_func=sha256)
)
manifest['files'].append(manifest_entry)
return manifest
def _build_files_manifest_walk(b_collection_path, namespace, name, ignore_patterns):
# type: (bytes, str, str, list[str]) -> FilesManifestType
# We always ignore .pyc and .retry files as well as some well known version control directories. The ignore
# patterns can be extended by the build_ignore key in galaxy.yml
b_ignore_patterns = [
b'MANIFEST.json',
b'FILES.json',
b'galaxy.yml',
b'galaxy.yaml',
b'.git',
b'*.pyc',
b'*.retry',
b'tests/output', # Ignore ansible-test result output directory.
to_bytes('{0}-{1}-*.tar.gz'.format(namespace, name)), # Ignores previously built artifacts in the root dir.
]
b_ignore_patterns += [to_bytes(p) for p in ignore_patterns]
b_ignore_dirs = frozenset([b'CVS', b'.bzr', b'.hg', b'.git', b'.svn', b'__pycache__', b'.tox'])
manifest = _make_manifest()
def _walk(b_path, b_top_level_dir):
for b_item in os.listdir(b_path):
b_abs_path = os.path.join(b_path, b_item)
b_rel_base_dir = b'' if b_path == b_top_level_dir else b_path[len(b_top_level_dir) + 1:]
b_rel_path = os.path.join(b_rel_base_dir, b_item)
rel_path = to_text(b_rel_path, errors='surrogate_or_strict')
if os.path.isdir(b_abs_path):
if b_item in b_ignore_dirs or \
any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
if os.path.islink(b_abs_path):
b_link_target = os.path.realpath(b_abs_path)
if not _is_child_path(b_link_target, b_top_level_dir):
display.warning("Skipping '%s' as it is a symbolic link to a directory outside the collection"
% to_text(b_abs_path))
continue
manifest['files'].append(_make_entry(rel_path, 'dir'))
if not os.path.islink(b_abs_path):
_walk(b_abs_path, b_top_level_dir)
else:
if any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
# Handling of file symlinks occurs in _build_collection_tar; the manifest entry for a symlink is the same as for
# a normal file.
manifest['files'].append(
_make_entry(
rel_path,
'file',
chksum_type='sha256',
chksum=secure_hash(b_abs_path, hash_func=sha256)
)
)
_walk(b_collection_path, b_collection_path)
return manifest
# FIXME: accept a dict produced from `galaxy.yml` instead of separate args
def _build_manifest(namespace, name, version, authors, readme, tags, description, license_file,
dependencies, repository, documentation, homepage, issues, **kwargs):
manifest = {
'collection_info': {
'namespace': namespace,
'name': name,
'version': version,
'authors': authors,
'readme': readme,
'tags': tags,
'description': description,
'license': kwargs['license'],
'license_file': license_file or None, # Handle galaxy.yml having an empty string (None)
'dependencies': dependencies,
'repository': repository,
'documentation': documentation,
'homepage': homepage,
'issues': issues,
},
'file_manifest_file': {
'name': 'FILES.json',
'ftype': 'file',
'chksum_type': 'sha256',
'chksum_sha256': None, # Filled out in _build_collection_tar
'format': MANIFEST_FORMAT
},
'format': MANIFEST_FORMAT,
}
return manifest
def _build_collection_tar(
b_collection_path, # type: bytes
b_tar_path, # type: bytes
collection_manifest, # type: CollectionManifestType
file_manifest, # type: FilesManifestType
): # type: (...) -> str
"""Build a tar.gz collection artifact from the manifest data."""
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
with _tempdir() as b_temp_path:
b_tar_filepath = os.path.join(b_temp_path, os.path.basename(b_tar_path))
with tarfile.open(b_tar_filepath, mode='w:gz') as tar_file:
# Add the MANIFEST.json and FILES.json file to the archive
for name, b in [(MANIFEST_FILENAME, collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_io = BytesIO(b)
tar_info = tarfile.TarInfo(name)
tar_info.size = len(b)
tar_info.mtime = int(time.time())
tar_info.mode = 0o0644
tar_file.addfile(tarinfo=tar_info, fileobj=b_io)
for file_info in file_manifest['files']: # type: ignore[union-attr]
if file_info['name'] == '.':
continue
# arcname expects a native string, cannot be bytes
filename = to_native(file_info['name'], errors='surrogate_or_strict')
b_src_path = os.path.join(b_collection_path, to_bytes(filename, errors='surrogate_or_strict'))
def reset_stat(tarinfo):
if tarinfo.type != tarfile.SYMTYPE:
existing_is_exec = tarinfo.mode & stat.S_IXUSR
tarinfo.mode = 0o0755 if existing_is_exec or tarinfo.isdir() else 0o0644
tarinfo.uid = tarinfo.gid = 0
tarinfo.uname = tarinfo.gname = ''
return tarinfo
if os.path.islink(b_src_path):
b_link_target = os.path.realpath(b_src_path)
if _is_child_path(b_link_target, b_collection_path):
b_rel_path = os.path.relpath(b_link_target, start=os.path.dirname(b_src_path))
tar_info = tarfile.TarInfo(filename)
tar_info.type = tarfile.SYMTYPE
tar_info.linkname = to_native(b_rel_path, errors='surrogate_or_strict')
tar_info = reset_stat(tar_info)
tar_file.addfile(tarinfo=tar_info)
continue
# Dealing with a normal file, just add it by name.
tar_file.add(
to_native(os.path.realpath(b_src_path)),
arcname=filename,
recursive=False,
filter=reset_stat,
)
shutil.copy(to_native(b_tar_filepath), to_native(b_tar_path))
collection_name = "%s.%s" % (collection_manifest['collection_info']['namespace'],
collection_manifest['collection_info']['name'])
tar_path = to_text(b_tar_path)
display.display(u'Created collection for %s at %s' % (collection_name, tar_path))
return tar_path
def _build_collection_dir(b_collection_path, b_collection_output, collection_manifest, file_manifest):
"""Build a collection directory from the manifest data.
This should follow the same pattern as _build_collection_tar.
"""
os.makedirs(b_collection_output, mode=0o0755)
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
# Write contents to the files
for name, b in [(MANIFEST_FILENAME, collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_path = os.path.join(b_collection_output, to_bytes(name, errors='surrogate_or_strict'))
with open(b_path, 'wb') as file_obj, BytesIO(b) as b_io:
shutil.copyfileobj(b_io, file_obj)
os.chmod(b_path, 0o0644)
base_directories = []
for file_info in sorted(file_manifest['files'], key=lambda x: x['name']):
if file_info['name'] == '.':
continue
src_file = os.path.join(b_collection_path, to_bytes(file_info['name'], errors='surrogate_or_strict'))
dest_file = os.path.join(b_collection_output, to_bytes(file_info['name'], errors='surrogate_or_strict'))
existing_is_exec = os.stat(src_file, follow_symlinks=False).st_mode & stat.S_IXUSR
mode = 0o0755 if existing_is_exec else 0o0644
# ensure symlinks to dirs are not translated to empty dirs
if os.path.isdir(src_file) and not os.path.islink(src_file):
mode = 0o0755
base_directories.append(src_file)
os.mkdir(dest_file, mode)
else:
# do not follow symlinks to ensure the original link is used
shutil.copyfile(src_file, dest_file, follow_symlinks=False)
        # avoid setting specific permissions on symlinks since chmod does not
        # support skipping symlink resolution here and will throw an exception if
        # the symlink target does not exist
if not os.path.islink(dest_file):
os.chmod(dest_file, mode)
collection_output = to_text(b_collection_output)
return collection_output
def _normalize_collection_path(path):
str_path = path.as_posix() if isinstance(path, pathlib.Path) else path
return pathlib.Path(
# This is annoying, but GalaxyCLI._resolve_path did it
os.path.expandvars(str_path)
).expanduser().absolute()
def find_existing_collections(path_filter, artifacts_manager, namespace_filter=None, collection_filter=None, dedupe=True):
"""Locate all collections under a given path.
    :param path_filter: Collection dirs layout search path, or a sequence of such paths.
:param artifacts_manager: Artifacts manager.
"""
if files is None:
raise AnsibleError('importlib_resources is not installed and is required')
if path_filter and not is_sequence(path_filter):
path_filter = [path_filter]
paths = set()
for path in files('ansible_collections').glob('*/*/'):
path = _normalize_collection_path(path)
if not path.is_dir():
continue
if path_filter:
for pf in path_filter:
try:
path.relative_to(_normalize_collection_path(pf))
except ValueError:
continue
break
else:
continue
paths.add(path)
seen = set()
for path in paths:
namespace = path.parent.name
name = path.name
if namespace_filter and namespace != namespace_filter:
continue
if collection_filter and name != collection_filter:
continue
if dedupe:
try:
collection_path = files(f'ansible_collections.{namespace}.{name}')
except ImportError:
continue
if collection_path in seen:
continue
seen.add(collection_path)
else:
collection_path = path
b_collection_path = to_bytes(collection_path.as_posix())
try:
req = Candidate.from_dir_path_as_unknown(b_collection_path, artifacts_manager)
except ValueError as val_err:
display.warning(f'{val_err}')
continue
display.vvv(
u"Found installed collection {coll!s} at '{path!s}'".
format(coll=to_text(req), path=to_text(req.src))
)
yield req
def install(collection, path, artifacts_manager): # FIXME: mv to dataclasses?
# type: (Candidate, str, ConcreteArtifactsManager) -> None
"""Install a collection under a given path.
:param collection: Collection to be installed.
:param path: Collection dirs layout path.
:param artifacts_manager: Artifacts manager.
"""
b_artifact_path = (
artifacts_manager.get_artifact_path if collection.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(collection)
collection_path = os.path.join(path, collection.namespace, collection.name)
b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
display.display(
u"Installing '{coll!s}' to '{path!s}'".
format(coll=to_text(collection), path=collection_path),
)
if os.path.exists(b_collection_path):
shutil.rmtree(b_collection_path)
if collection.is_dir:
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
else:
install_artifact(
b_artifact_path,
b_collection_path,
artifacts_manager._b_working_directory,
collection.signatures,
artifacts_manager.keyring,
artifacts_manager.required_successful_signature_count,
artifacts_manager.ignore_signature_errors,
)
if (collection.is_online_index_pointer and isinstance(collection.src, GalaxyAPI)):
write_source_metadata(
collection,
b_collection_path,
artifacts_manager
)
display.display(
'{coll!s} was installed successfully'.
format(coll=to_text(collection)),
)
def write_source_metadata(collection, b_collection_path, artifacts_manager):
# type: (Candidate, bytes, ConcreteArtifactsManager) -> None
source_data = artifacts_manager.get_galaxy_artifact_source_info(collection)
b_yaml_source_data = to_bytes(yaml_dump(source_data), errors='surrogate_or_strict')
b_info_dest = collection.construct_galaxy_info_path(b_collection_path)
b_info_dir = os.path.split(b_info_dest)[0]
if os.path.exists(b_info_dir):
shutil.rmtree(b_info_dir)
try:
os.mkdir(b_info_dir, mode=0o0755)
with open(b_info_dest, mode='w+b') as fd:
fd.write(b_yaml_source_data)
os.chmod(b_info_dest, 0o0644)
except Exception:
# Ensure we don't leave the dir behind in case of a failure.
if os.path.isdir(b_info_dir):
shutil.rmtree(b_info_dir)
raise
def verify_artifact_manifest(manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors):
# type: (str, list[str], str, str, list[str]) -> None
failed_verify = False
coll_path_parts = to_text(manifest_file, errors='surrogate_or_strict').split(os.path.sep)
collection_name = '%s.%s' % (coll_path_parts[-3], coll_path_parts[-2]) # get 'ns' and 'coll' from /path/to/ns/coll/MANIFEST.json
if not verify_file_signatures(collection_name, manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors):
raise AnsibleError(f"Not installing {collection_name} because GnuPG signature verification failed.")
display.vvvv(f"GnuPG signature verification succeeded for {collection_name}")
def install_artifact(b_coll_targz_path, b_collection_path, b_temp_path, signatures, keyring, required_signature_count, ignore_signature_errors):
"""Install a collection from tarball under a given path.
:param b_coll_targz_path: Collection tarball to be installed.
:param b_collection_path: Collection dirs layout path.
:param b_temp_path: Temporary dir path.
:param signatures: frozenset of signatures to verify the MANIFEST.json
:param keyring: The keyring used during GPG verification
:param required_signature_count: The number of signatures that must successfully verify the collection
:param ignore_signature_errors: GPG errors to ignore during signature verification
"""
try:
with tarfile.open(b_coll_targz_path, mode='r') as collection_tar:
# Verify the signature on the MANIFEST.json before extracting anything else
_extract_tar_file(collection_tar, MANIFEST_FILENAME, b_collection_path, b_temp_path)
if keyring is not None:
manifest_file = os.path.join(to_text(b_collection_path, errors='surrogate_or_strict'), MANIFEST_FILENAME)
verify_artifact_manifest(manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors)
files_member_obj = collection_tar.getmember('FILES.json')
with _tarfile_extract(collection_tar, files_member_obj) as (dummy, files_obj):
files = json.loads(to_text(files_obj.read(), errors='surrogate_or_strict'))
_extract_tar_file(collection_tar, 'FILES.json', b_collection_path, b_temp_path)
for file_info in files['files']:
file_name = file_info['name']
if file_name == '.':
continue
if file_info['ftype'] == 'file':
_extract_tar_file(collection_tar, file_name, b_collection_path, b_temp_path,
expected_hash=file_info['chksum_sha256'])
else:
_extract_tar_dir(collection_tar, file_name, b_collection_path)
except Exception:
# Ensure we don't leave the dir behind in case of a failure.
shutil.rmtree(b_collection_path)
b_namespace_path = os.path.dirname(b_collection_path)
if not os.listdir(b_namespace_path):
os.rmdir(b_namespace_path)
raise
def install_src(collection, b_collection_path, b_collection_output_path, artifacts_manager):
r"""Install the collection from source control into given dir.
Generates the Ansible collection artifact data from a galaxy.yml and
installs the artifact to a directory.
This should follow the same pattern as build_collection, but instead
of creating an artifact, install it.
:param collection: Collection to be installed.
:param b_collection_path: Collection dirs layout path.
:param b_collection_output_path: The installation directory for the \
collection artifact.
:param artifacts_manager: Artifacts manager.
:raises AnsibleError: If no collection metadata found.
"""
collection_meta = artifacts_manager.get_direct_collection_meta(collection)
if 'build_ignore' not in collection_meta: # installed collection, not src
# FIXME: optimize this? use a different process? copy instead of build?
collection_meta['build_ignore'] = []
collection_meta['manifest'] = Sentinel
collection_manifest = _build_manifest(**collection_meta)
file_manifest = _build_files_manifest(
b_collection_path,
collection_meta['namespace'], collection_meta['name'],
collection_meta['build_ignore'],
collection_meta['manifest'],
collection_meta['license_file'],
)
collection_output_path = _build_collection_dir(
b_collection_path, b_collection_output_path,
collection_manifest, file_manifest,
)
display.display(
'Created collection for {coll!s} at {path!s}'.
format(coll=collection, path=collection_output_path)
)
def _extract_tar_dir(tar, dirname, b_dest):
""" Extracts a directory from a collection tar. """
member_names = [to_native(dirname, errors='surrogate_or_strict')]
# Create list of members with and without trailing separator
if not member_names[-1].endswith(os.path.sep):
member_names.append(member_names[-1] + os.path.sep)
    # Try all of the member names and stop at the first one we are able to successfully retrieve
for member in member_names:
try:
tar_member = tar.getmember(member)
except KeyError:
continue
break
else:
# If we still can't find the member, raise a nice error.
raise AnsibleError("Unable to extract '%s' from collection" % to_native(member, errors='surrogate_or_strict'))
b_dir_path = os.path.join(b_dest, to_bytes(dirname, errors='surrogate_or_strict'))
b_parent_path = os.path.dirname(b_dir_path)
try:
os.makedirs(b_parent_path, mode=0o0755)
except OSError as e:
if e.errno != errno.EEXIST:
raise
if tar_member.type == tarfile.SYMTYPE:
b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict')
if not _is_child_path(b_link_path, b_dest, link_name=b_dir_path):
raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of "
"collection '%s'" % (to_native(dirname), b_link_path))
os.symlink(b_link_path, b_dir_path)
else:
if not os.path.isdir(b_dir_path):
os.mkdir(b_dir_path, 0o0755)
def _extract_tar_file(tar, filename, b_dest, b_temp_path, expected_hash=None):
""" Extracts a file from a collection tar. """
with _get_tar_file_member(tar, filename) as (tar_member, tar_obj):
if tar_member.type == tarfile.SYMTYPE:
actual_hash = _consume_file(tar_obj)
else:
with tempfile.NamedTemporaryFile(dir=b_temp_path, delete=False) as tmpfile_obj:
actual_hash = _consume_file(tar_obj, tmpfile_obj)
if expected_hash and actual_hash != expected_hash:
raise AnsibleError("Checksum mismatch for '%s' inside collection at '%s'"
% (to_native(filename, errors='surrogate_or_strict'), to_native(tar.name)))
b_dest_filepath = os.path.abspath(os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict')))
b_parent_dir = os.path.dirname(b_dest_filepath)
if not _is_child_path(b_parent_dir, b_dest):
raise AnsibleError("Cannot extract tar entry '%s' as it will be placed outside the collection directory"
% to_native(filename, errors='surrogate_or_strict'))
if not os.path.exists(b_parent_dir):
# Seems like Galaxy does not validate if all file entries have a corresponding dir ftype entry. This check
# makes sure we create the parent directory even if it wasn't set in the metadata.
os.makedirs(b_parent_dir, mode=0o0755)
if tar_member.type == tarfile.SYMTYPE:
b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict')
if not _is_child_path(b_link_path, b_dest, link_name=b_dest_filepath):
raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of "
"collection '%s'" % (to_native(filename), b_link_path))
os.symlink(b_link_path, b_dest_filepath)
else:
shutil.move(to_bytes(tmpfile_obj.name, errors='surrogate_or_strict'), b_dest_filepath)
# Default to rw-r--r-- and only add execute if the tar file has execute.
tar_member = tar.getmember(to_native(filename, errors='surrogate_or_strict'))
new_mode = 0o644
if stat.S_IMODE(tar_member.mode) & stat.S_IXUSR:
new_mode |= 0o0111
os.chmod(b_dest_filepath, new_mode)
def _get_tar_file_member(tar, filename):
n_filename = to_native(filename, errors='surrogate_or_strict')
try:
member = tar.getmember(n_filename)
except KeyError:
raise AnsibleError("Collection tar at '%s' does not contain the expected file '%s'." % (
to_native(tar.name),
n_filename))
return _tarfile_extract(tar, member)
def _get_json_from_tar_file(b_path, filename):
file_contents = ''
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj):
bufsize = 65536
data = tar_obj.read(bufsize)
while data:
file_contents += to_text(data)
data = tar_obj.read(bufsize)
return json.loads(file_contents)
def _get_tar_file_hash(b_path, filename):
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj):
return _consume_file(tar_obj)
def _get_file_hash(b_path, filename): # type: (bytes, str) -> str
filepath = os.path.join(b_path, to_bytes(filename, errors='surrogate_or_strict'))
with open(filepath, 'rb') as fp:
return _consume_file(fp)
def _is_child_path(path, parent_path, link_name=None):
""" Checks that path is a path within the parent_path specified. """
b_path = to_bytes(path, errors='surrogate_or_strict')
if link_name and not os.path.isabs(b_path):
# If link_name is specified, path is the source of the link and we need to resolve the absolute path.
b_link_dir = os.path.dirname(to_bytes(link_name, errors='surrogate_or_strict'))
b_path = os.path.abspath(os.path.join(b_link_dir, b_path))
b_parent_path = to_bytes(parent_path, errors='surrogate_or_strict')
return b_path == b_parent_path or b_path.startswith(b_parent_path + to_bytes(os.path.sep))
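# Editor's sketch (POSIX separators assumed) of how the check behaves:
#   _is_child_path(b'/colls/ns/name/plugins', b'/colls/ns/name')  -> True
#   _is_child_path(b'/colls/ns/name', b'/colls/ns/name')          -> True (same path)
#   _is_child_path(b'/etc/passwd', b'/colls/ns/name')             -> False
# When link_name is given, a relative symlink target such as b'../sibling' is
# first resolved against the directory containing the link before the test runs.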
def _resolve_depenency_map(
requested_requirements, # type: t.Iterable[Requirement]
galaxy_apis, # type: t.Iterable[GalaxyAPI]
concrete_artifacts_manager, # type: ConcreteArtifactsManager
preferred_candidates, # type: t.Iterable[Candidate] | None
no_deps, # type: bool
allow_pre_release, # type: bool
upgrade, # type: bool
include_signatures, # type: bool
offline, # type: bool
): # type: (...) -> dict[str, Candidate]
"""Return the resolved dependency map."""
if not HAS_RESOLVELIB:
raise AnsibleError("Failed to import resolvelib, check that a supported version is installed")
if not HAS_PACKAGING:
raise AnsibleError("Failed to import packaging, check that a supported version is installed")
req = None
try:
dist = distribution('ansible-core')
except Exception:
pass
else:
req = next((rr for r in (dist.requires or []) if (rr := PkgReq(r)).name == 'resolvelib'), None)
finally:
if req is None:
# TODO: replace the hardcoded versions with a warning if the dist info is missing
# display.warning("Unable to find 'ansible-core' distribution requirements to verify the resolvelib version is supported.")
if not RESOLVELIB_LOWERBOUND <= RESOLVELIB_VERSION < RESOLVELIB_UPPERBOUND:
raise AnsibleError(
f"ansible-galaxy requires resolvelib<{RESOLVELIB_UPPERBOUND.vstring},>={RESOLVELIB_LOWERBOUND.vstring}"
)
elif not req.specifier.contains(RESOLVELIB_VERSION.vstring):
raise AnsibleError(f"ansible-galaxy requires {req.name}{req.specifier}")
collection_dep_resolver = build_collection_dependency_resolver(
galaxy_apis=galaxy_apis,
concrete_artifacts_manager=concrete_artifacts_manager,
user_requirements=requested_requirements,
preferred_candidates=preferred_candidates,
with_deps=not no_deps,
with_pre_releases=allow_pre_release,
upgrade=upgrade,
include_signatures=include_signatures,
offline=offline,
)
try:
return collection_dep_resolver.resolve(
requested_requirements,
max_rounds=2000000, # NOTE: same constant pip uses
).mapping
except CollectionDependencyResolutionImpossible as dep_exc:
conflict_causes = (
'* {req.fqcn!s}:{req.ver!s} ({dep_origin!s})'.format(
req=req_inf.requirement,
dep_origin='direct request'
if req_inf.parent is None
else 'dependency of {parent!s}'.
format(parent=req_inf.parent),
)
for req_inf in dep_exc.causes
)
error_msg_lines = list(chain(
(
'Failed to resolve the requested '
'dependencies map. Could not satisfy the following '
'requirements:',
),
conflict_causes,
))
raise AnsibleError('\n'.join(error_msg_lines)) from dep_exc
except CollectionDependencyInconsistentCandidate as dep_exc:
parents = [
"%s.%s:%s" % (p.namespace, p.name, p.ver)
for p in dep_exc.criterion.iter_parent()
if p is not None
]
error_msg_lines = [
(
'Failed to resolve the requested dependencies map. '
'Got the candidate {req.fqcn!s}:{req.ver!s} ({dep_origin!s}) '
'which didn\'t satisfy all of the following requirements:'.
format(
req=dep_exc.candidate,
dep_origin='direct request'
if not parents else 'dependency of {parent!s}'.
format(parent=', '.join(parents))
)
)
]
for req in dep_exc.criterion.iter_requirement():
error_msg_lines.append(
'* {req.fqcn!s}:{req.ver!s}'.format(req=req)
)
raise AnsibleError('\n'.join(error_msg_lines)) from dep_exc
except ValueError as exc:
raise AnsibleError(to_native(exc)) from exc
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,943 |
Ansible-galaxy cannot install subdir requirements when upgraded to 8.0.0
|
### Summary
When trying to install multiple collections from a local directory using `type: subdirs` in requirements.yml, `ansible-galaxy` throws a raw, unhandled Python exception.
The problem started to occur after updating ansible to `8.0.0`.
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu 22.04, Debian Bullseye
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```bash (paste below)
ansible-galaxy collection init collections.testa && ansible-galaxy collection init collections.testb \
&& echo """
collections:
- source: ./collections
type: subdirs
""" > requirements.yml && ansible-galaxy install -r requirements.yml
```
### Expected Results
Same as in previous versions: installation of all collections inside the directory, using the directory name as the namespace.
### Actual Results
```console
ERROR! Unexpected Exception, this is probably a bug: endswith first arg must be str or a tuple of str, not bytes
to see the full traceback, use -vvv
```
```pytb
Traceback (most recent call last):
File ".../ansible/lib/ansible/cli/__init__.py", line 659, in cli_executor
exit_code = cli.run()
^^^^^^^^^
File ".../ansible/bin/ansible-galaxy", line 715, in run
return context.CLIARGS['func']()
^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../ansible/bin/ansible-galaxy", line 117, in method_wrapper
return wrapped_method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../ansible/bin/ansible-galaxy", line 1369, in execute_install
self._execute_install_collection(
File ".../ansible/bin/ansible-galaxy", line 1409, in _execute_install_collection
install_collections(
File ".../ansible/lib/ansible/galaxy/collection/__init__.py", line 681, in install_collections
unsatisfied_requirements = set(
^^^^
File ".../ansible/lib/ansible/galaxy/collection/__init__.py", line 684, in <genexpr>
Requirement.from_dir_path(sub_coll, artifacts_manager)
File ".../ansible/lib/ansible/galaxy/dependency_resolution/dataclasses.py", line 221, in from_dir_path
if dir_path.endswith(to_bytes(os.path.sep)):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: endswith first arg must be str or a tuple of str, not bytes
```
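For illustration, a minimal sketch of the underlying str/bytes mismatch (editor's note; the example path and the normalization shown here are assumptions, not the shipped fix):
```python
import os

dir_path = './collections/testa'  # subdir requirements arrive as a text (str) path

# str.endswith() rejects a bytes argument, reproducing the reported TypeError:
#   dir_path.endswith(os.path.sep.encode())  # TypeError: endswith first arg ...

# One way to strip a trailing separator that tolerates both str and bytes:
sep = os.path.sep.encode() if isinstance(dir_path, bytes) else os.path.sep
if dir_path.endswith(sep):
    dir_path = dir_path.rstrip(sep)
```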
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80943
|
https://github.com/ansible/ansible/pull/80949
|
c069cf88debe9f1b5d306ee93db366325f4d16e1
|
0982d5fa98e64d241249cfd6dd024e70ae20d0c3
| 2023-06-01T08:36:02Z |
python
| 2023-06-01T20:58:06Z |
lib/ansible/galaxy/collection/concrete_artifact_manager.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2020-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Concrete collection candidate management helper module."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import tarfile
import subprocess
import typing as t
from contextlib import contextmanager
from hashlib import sha256
from urllib.error import URLError
from urllib.parse import urldefrag
from shutil import rmtree
from tempfile import mkdtemp
if t.TYPE_CHECKING:
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate, Requirement,
)
from ansible.galaxy.token import GalaxyToken
from ansible.errors import AnsibleError
from ansible.galaxy import get_collections_galaxy_meta_info
from ansible.galaxy.api import should_retry_error
from ansible.galaxy.dependency_resolution.dataclasses import _GALAXY_YAML
from ansible.galaxy.user_agent import user_agent
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.module_utils.api import retry_with_delays_and_condition
from ansible.module_utils.api import generate_jittered_backoff
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.yaml import yaml_load
from ansible.module_utils.urls import open_url
from ansible.utils.display import Display
from ansible.utils.sentinel import Sentinel
import yaml
display = Display()
MANIFEST_FILENAME = 'MANIFEST.json'
class ConcreteArtifactsManager:
"""Manager for on-disk collection artifacts.
It is responsible for:
* downloading remote collections from Galaxy-compatible servers and
direct links to tarballs or SCM repositories
* keeping track of local ones
* keeping track of Galaxy API tokens for downloads from Galaxy'ish
as well as the artifact hashes
* keeping track of Galaxy API signatures for downloads from Galaxy'ish
    * caching all of the above
* retrieving the metadata out of the downloaded artifacts
"""
def __init__(self, b_working_directory, validate_certs=True, keyring=None, timeout=60, required_signature_count=None, ignore_signature_errors=None):
# type: (bytes, bool, str, int, str, list[str]) -> None
"""Initialize ConcreteArtifactsManager caches and costraints."""
self._validate_certs = validate_certs # type: bool
self._artifact_cache = {} # type: dict[bytes, bytes]
self._galaxy_artifact_cache = {} # type: dict[Candidate | Requirement, bytes]
self._artifact_meta_cache = {} # type: dict[bytes, dict[str, str | list[str] | dict[str, str] | None | t.Type[Sentinel]]]
self._galaxy_collection_cache = {} # type: dict[Candidate | Requirement, tuple[str, str, GalaxyToken]]
self._galaxy_collection_origin_cache = {} # type: dict[Candidate, tuple[str, list[dict[str, str]]]]
self._b_working_directory = b_working_directory # type: bytes
self._supplemental_signature_cache = {} # type: dict[str, str]
self._keyring = keyring # type: str
self.timeout = timeout # type: int
self._required_signature_count = required_signature_count # type: str
self._ignore_signature_errors = ignore_signature_errors # type: list[str]
self._require_build_metadata = True # type: bool
@property
def keyring(self):
return self._keyring
@property
def required_successful_signature_count(self):
return self._required_signature_count
@property
def ignore_signature_errors(self):
if self._ignore_signature_errors is None:
return []
return self._ignore_signature_errors
@property
def require_build_metadata(self):
# type: () -> bool
return self._require_build_metadata
@require_build_metadata.setter
def require_build_metadata(self, value):
# type: (bool) -> None
self._require_build_metadata = value
def get_galaxy_artifact_source_info(self, collection):
# type: (Candidate) -> dict[str, t.Union[str, list[dict[str, str]]]]
server = collection.src.api_server
try:
download_url = self._galaxy_collection_cache[collection][0]
signatures_url, signatures = self._galaxy_collection_origin_cache[collection]
except KeyError as key_err:
raise RuntimeError(
                'There is no known source for {coll!s}'.
format(coll=collection),
) from key_err
return {
"format_version": "1.0.0",
"namespace": collection.namespace,
"name": collection.name,
"version": collection.ver,
"server": server,
"version_url": signatures_url,
"download_url": download_url,
"signatures": signatures,
}
def get_galaxy_artifact_path(self, collection):
# type: (t.Union[Candidate, Requirement]) -> bytes
"""Given a Galaxy-stored collection, return a cached path.
If it's not yet on disk, this method downloads the artifact first.
"""
try:
return self._galaxy_artifact_cache[collection]
except KeyError:
pass
try:
url, sha256_hash, token = self._galaxy_collection_cache[collection]
except KeyError as key_err:
raise RuntimeError(
                'There is no known source for {coll!s}'.
format(coll=collection),
) from key_err
display.vvvv(
"Fetching a collection tarball for '{collection!s}' from "
'Ansible Galaxy'.format(collection=collection),
)
try:
b_artifact_path = _download_file(
url,
self._b_working_directory,
expected_hash=sha256_hash,
validate_certs=self._validate_certs,
token=token,
) # type: bytes
except URLError as err:
raise AnsibleError(
'Failed to download collection tar '
"from '{coll_src!s}': {download_err!s}".
format(
coll_src=to_native(collection.src),
download_err=to_native(err),
),
) from err
except Exception as err:
raise AnsibleError(
'Failed to download collection tar '
"from '{coll_src!s}' due to the following unforeseen error: "
'{download_err!s}'.
format(
coll_src=to_native(collection.src),
download_err=to_native(err),
),
) from err
else:
display.vvv(
"Collection '{coll!s}' obtained from "
'server {server!s} {url!s}'.format(
coll=collection, server=collection.src or 'Galaxy',
url=collection.src.api_server if collection.src is not None
else '',
)
)
self._galaxy_artifact_cache[collection] = b_artifact_path
return b_artifact_path
def get_artifact_path(self, collection):
# type: (t.Union[Candidate, Requirement]) -> bytes
"""Given a concrete collection pointer, return a cached path.
If it's not yet on disk, this method downloads the artifact first.
"""
try:
return self._artifact_cache[collection.src]
except KeyError:
pass
# NOTE: SCM needs to be special-cased as it may contain either
# NOTE: one collection in its root, or a number of top-level
# NOTE: collection directories instead.
# NOTE: The idea is to store the SCM collection as unpacked
# NOTE: directory structure under the temporary location and use
# NOTE: a "virtual" collection that has pinned requirements on
# NOTE: the directories under that SCM checkout that correspond
# NOTE: to collections.
# NOTE: This brings us to the idea that we need two separate
# NOTE: virtual Requirement/Candidate types --
# NOTE: (single) dir + (multidir) subdirs
if collection.is_url:
display.vvvv(
"Collection requirement '{collection!s}' is a URL "
'to a tar artifact'.format(collection=collection.fqcn),
)
try:
b_artifact_path = _download_file(
collection.src,
self._b_working_directory,
expected_hash=None, # NOTE: URLs don't support checksums
validate_certs=self._validate_certs,
timeout=self.timeout
)
except Exception as err:
raise AnsibleError(
'Failed to download collection tar '
"from '{coll_src!s}': {download_err!s}".
format(
coll_src=to_native(collection.src),
download_err=to_native(err),
),
) from err
elif collection.is_scm:
b_artifact_path = _extract_collection_from_git(
collection.src,
collection.ver,
self._b_working_directory,
)
elif collection.is_file or collection.is_dir or collection.is_subdirs:
b_artifact_path = to_bytes(collection.src)
else:
# NOTE: This may happen `if collection.is_online_index_pointer`
raise RuntimeError(
'The artifact is of an unexpected type {art_type!s}'.
format(art_type=collection.type)
)
self._artifact_cache[collection.src] = b_artifact_path
return b_artifact_path
def _get_direct_collection_namespace(self, collection):
# type: (Candidate) -> t.Optional[str]
return self.get_direct_collection_meta(collection)['namespace'] # type: ignore[return-value]
def _get_direct_collection_name(self, collection):
# type: (Candidate) -> t.Optional[str]
return self.get_direct_collection_meta(collection)['name'] # type: ignore[return-value]
def get_direct_collection_fqcn(self, collection):
# type: (Candidate) -> t.Optional[str]
"""Extract FQCN from the given on-disk collection artifact.
If the collection is virtual, ``None`` is returned instead
of a string.
"""
if collection.is_virtual:
# NOTE: should it be something like "<virtual>"?
return None
return '.'.join(( # type: ignore[type-var]
self._get_direct_collection_namespace(collection), # type: ignore[arg-type]
self._get_direct_collection_name(collection),
))
def get_direct_collection_version(self, collection):
# type: (t.Union[Candidate, Requirement]) -> str
"""Extract version from the given on-disk collection artifact."""
return self.get_direct_collection_meta(collection)['version'] # type: ignore[return-value]
def get_direct_collection_dependencies(self, collection):
# type: (t.Union[Candidate, Requirement]) -> dict[str, str]
"""Extract deps from the given on-disk collection artifact."""
collection_dependencies = self.get_direct_collection_meta(collection)['dependencies']
if collection_dependencies is None:
collection_dependencies = {}
return collection_dependencies # type: ignore[return-value]
def get_direct_collection_meta(self, collection):
# type: (t.Union[Candidate, Requirement]) -> dict[str, t.Union[str, dict[str, str], list[str], None, t.Type[Sentinel]]]
"""Extract meta from the given on-disk collection artifact."""
try: # FIXME: use unique collection identifier as a cache key?
return self._artifact_meta_cache[collection.src]
except KeyError:
b_artifact_path = self.get_artifact_path(collection)
if collection.is_url or collection.is_file:
collection_meta = _get_meta_from_tar(b_artifact_path)
elif collection.is_dir: # should we just build a coll instead?
# FIXME: what if there's subdirs?
try:
collection_meta = _get_meta_from_dir(b_artifact_path, self.require_build_metadata)
except LookupError as lookup_err:
raise AnsibleError(
'Failed to find the collection dir deps: {err!s}'.
format(err=to_native(lookup_err)),
) from lookup_err
elif collection.is_scm:
collection_meta = {
'name': None,
'namespace': None,
'dependencies': {to_native(b_artifact_path): '*'},
'version': '*',
}
elif collection.is_subdirs:
collection_meta = {
'name': None,
'namespace': None,
# NOTE: Dropping b_artifact_path since it's based on src anyway
'dependencies': dict.fromkeys(
map(to_native, collection.namespace_collection_paths),
'*',
),
'version': '*',
}
else:
raise RuntimeError
self._artifact_meta_cache[collection.src] = collection_meta
return collection_meta
def save_collection_source(self, collection, url, sha256_hash, token, signatures_url, signatures):
# type: (Candidate, str, str, GalaxyToken, str, list[dict[str, str]]) -> None
"""Store collection URL, SHA256 hash and Galaxy API token.
This is a hook that is supposed to be called before attempting to
download Galaxy-based collections with ``get_galaxy_artifact_path()``.
"""
self._galaxy_collection_cache[collection] = url, sha256_hash, token
self._galaxy_collection_origin_cache[collection] = signatures_url, signatures
@classmethod
@contextmanager
def under_tmpdir(
cls,
temp_dir_base, # type: str
validate_certs=True, # type: bool
keyring=None, # type: str
required_signature_count=None, # type: str
ignore_signature_errors=None, # type: list[str]
require_build_metadata=True, # type: bool
): # type: (...) -> t.Iterator[ConcreteArtifactsManager]
"""Custom ConcreteArtifactsManager constructor with temp dir.
This method returns a context manager that allocates and cleans
up a temporary directory for caching the collection artifacts
during the dependency resolution process.
"""
# NOTE: Can't use `with tempfile.TemporaryDirectory:`
# NOTE: because it's not in Python 2 stdlib.
temp_path = mkdtemp(
dir=to_bytes(temp_dir_base, errors='surrogate_or_strict'),
)
b_temp_path = to_bytes(temp_path, errors='surrogate_or_strict')
try:
yield cls(
b_temp_path,
validate_certs,
keyring=keyring,
required_signature_count=required_signature_count,
ignore_signature_errors=ignore_signature_errors
)
finally:
rmtree(b_temp_path)
def parse_scm(collection, version):
"""Extract name, version, path and subdir out of the SCM pointer."""
if ',' in collection:
collection, version = collection.split(',', 1)
elif version == '*' or not version:
version = 'HEAD'
if collection.startswith('git+'):
path = collection[4:]
else:
path = collection
path, fragment = urldefrag(path)
fragment = fragment.strip(os.path.sep)
if path.endswith(os.path.sep + '.git'):
name = path.split(os.path.sep)[-2]
elif '://' not in path and '@' not in path:
name = path
else:
name = path.split('/')[-1]
if name.endswith('.git'):
name = name[:-4]
return name, version, path, fragment
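# Editor's illustration ('/' separator assumed), not part of the original module:
#   parse_scm('git+https://github.com/org/repo.git#subdir,1.2.3', '*')
# returns ('repo', '1.2.3', 'https://github.com/org/repo.git', 'subdir'), where
# the URL fragment picks a collection subdirectory inside the checkout and the
# part after the comma pins the Git revision to check out.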
def _extract_collection_from_git(repo_url, coll_ver, b_path):
name, version, git_url, fragment = parse_scm(repo_url, coll_ver)
b_checkout_path = mkdtemp(
dir=b_path,
prefix=to_bytes(name, errors='surrogate_or_strict'),
) # type: bytes
try:
git_executable = get_bin_path('git')
except ValueError as err:
raise AnsibleError(
"Could not find git executable to extract the collection from the Git repository `{repo_url!s}`.".
format(repo_url=to_native(git_url))
) from err
# Perform a shallow clone if simply cloning HEAD
if version == 'HEAD':
git_clone_cmd = git_executable, 'clone', '--depth=1', git_url, to_text(b_checkout_path)
else:
git_clone_cmd = git_executable, 'clone', git_url, to_text(b_checkout_path)
# FIXME: '--branch', version
try:
subprocess.check_call(git_clone_cmd)
except subprocess.CalledProcessError as proc_err:
raise AnsibleError( # should probably be LookupError
'Failed to clone a Git repository from `{repo_url!s}`.'.
format(repo_url=to_native(git_url)),
) from proc_err
git_switch_cmd = git_executable, 'checkout', to_text(version)
try:
subprocess.check_call(git_switch_cmd, cwd=b_checkout_path)
except subprocess.CalledProcessError as proc_err:
raise AnsibleError( # should probably be LookupError
'Failed to switch a cloned Git repo `{repo_url!s}` '
'to the requested revision `{commitish!s}`.'.
format(
commitish=to_native(version),
repo_url=to_native(git_url),
),
) from proc_err
return (
os.path.join(b_checkout_path, to_bytes(fragment))
if fragment else b_checkout_path
)
# FIXME: use random subdirs while preserving the file names
@retry_with_delays_and_condition(
backoff_iterator=generate_jittered_backoff(retries=6, delay_base=2, delay_threshold=40),
should_retry_error=should_retry_error
)
def _download_file(url, b_path, expected_hash, validate_certs, token=None, timeout=60):
# type: (str, bytes, t.Optional[str], bool, GalaxyToken, int) -> bytes
# ^ NOTE: used in download and verify_collections ^
b_tarball_name = to_bytes(
url.rsplit('/', 1)[1], errors='surrogate_or_strict',
)
b_file_name = b_tarball_name[:-len('.tar.gz')]
b_tarball_dir = mkdtemp(
dir=b_path,
prefix=b'-'.join((b_file_name, b'')),
) # type: bytes
b_file_path = os.path.join(b_tarball_dir, b_tarball_name)
display.display("Downloading %s to %s" % (url, to_text(b_tarball_dir)))
# NOTE: Galaxy redirects downloads to S3 which rejects the request
# NOTE: if an Authorization header is attached so don't redirect it
try:
resp = open_url(
to_native(url, errors='surrogate_or_strict'),
validate_certs=validate_certs,
headers=None if token is None else token.headers(),
unredirected_headers=['Authorization'], http_agent=user_agent(),
timeout=timeout
)
except Exception as err:
raise AnsibleError(to_native(err), orig_exc=err)
with open(b_file_path, 'wb') as download_file: # type: t.BinaryIO
actual_hash = _consume_file(resp, write_to=download_file)
if expected_hash:
display.vvvv(
'Validating downloaded file hash {actual_hash!s} with '
'expected hash {expected_hash!s}'.
format(actual_hash=actual_hash, expected_hash=expected_hash)
)
if expected_hash != actual_hash:
raise AnsibleError('Mismatch artifact hash with downloaded file')
return b_file_path
def _consume_file(read_from, write_to=None):
# type: (t.BinaryIO, t.BinaryIO) -> str
bufsize = 65536
sha256_digest = sha256()
data = read_from.read(bufsize)
while data:
if write_to is not None:
write_to.write(data)
write_to.flush()
sha256_digest.update(data)
data = read_from.read(bufsize)
return sha256_digest.hexdigest()
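# Editor's usage sketch (illustrative file names): hash a stream while copying
# it to disk in a single pass, e.g.
#   with open('src.tar.gz', 'rb') as src, open('dst.tar.gz', 'wb') as dst:
#       hex_digest = _consume_file(src, write_to=dst)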
def _normalize_galaxy_yml_manifest(
galaxy_yml, # type: dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
b_galaxy_yml_path, # type: bytes
require_build_metadata=True, # type: bool
):
# type: (...) -> dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
galaxy_yml_schema = (
get_collections_galaxy_meta_info()
) # type: list[dict[str, t.Any]] # FIXME: <--
# FIXME: 👆maybe precise type: list[dict[str, t.Union[bool, str, list[str]]]]
mandatory_keys = set()
string_keys = set() # type: set[str]
list_keys = set() # type: set[str]
dict_keys = set() # type: set[str]
sentinel_keys = set() # type: set[str]
for info in galaxy_yml_schema:
if info.get('required', False):
mandatory_keys.add(info['key'])
key_list_type = {
'str': string_keys,
'list': list_keys,
'dict': dict_keys,
'sentinel': sentinel_keys,
}[info.get('type', 'str')]
key_list_type.add(info['key'])
all_keys = frozenset(mandatory_keys | string_keys | list_keys | dict_keys | sentinel_keys)
set_keys = set(galaxy_yml.keys())
missing_keys = mandatory_keys.difference(set_keys)
if missing_keys:
msg = (
"The collection galaxy.yml at '%s' is missing the following mandatory keys: %s"
% (to_native(b_galaxy_yml_path), ", ".join(sorted(missing_keys)))
)
if require_build_metadata:
raise AnsibleError(msg)
display.warning(msg)
raise ValueError(msg)
extra_keys = set_keys.difference(all_keys)
if len(extra_keys) > 0:
display.warning("Found unknown keys in collection galaxy.yml at '%s': %s"
% (to_text(b_galaxy_yml_path), ", ".join(extra_keys)))
# Add the defaults if they have not been set
for optional_string in string_keys:
if optional_string not in galaxy_yml:
galaxy_yml[optional_string] = None
for optional_list in list_keys:
list_val = galaxy_yml.get(optional_list, None)
if list_val is None:
galaxy_yml[optional_list] = []
elif not isinstance(list_val, list):
galaxy_yml[optional_list] = [list_val] # type: ignore[list-item]
for optional_dict in dict_keys:
if optional_dict not in galaxy_yml:
galaxy_yml[optional_dict] = {}
for optional_sentinel in sentinel_keys:
if optional_sentinel not in galaxy_yml:
galaxy_yml[optional_sentinel] = Sentinel
# NOTE: `version: null` is only allowed for `galaxy.yml`
# NOTE: and not `MANIFEST.json`. The use-case for it is collections
# NOTE: that generate the version from Git before building a
# NOTE: distributable tarball artifact.
if not galaxy_yml.get('version'):
galaxy_yml['version'] = '*'
return galaxy_yml
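# Editor's sketch (key names assume the standard galaxy.yml schema): given a
# minimal mapping such as
#   {'namespace': 'acme', 'name': 'demo', 'version': None,
#    'readme': 'README.md', 'authors': ['A. Author']}
# the normalization backfills missing string keys with None, list keys with [],
# dict keys with {}, sentinel keys with Sentinel, and rewrites the null version
# to '*' (a galaxy.yml-only allowance, per the NOTE above).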
def _get_meta_from_dir(
b_path, # type: bytes
require_build_metadata=True, # type: bool
): # type: (...) -> dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
try:
return _get_meta_from_installed_dir(b_path)
except LookupError:
return _get_meta_from_src_dir(b_path, require_build_metadata)
def _get_meta_from_src_dir(
b_path, # type: bytes
require_build_metadata=True, # type: bool
): # type: (...) -> dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
galaxy_yml = os.path.join(b_path, _GALAXY_YAML)
if not os.path.isfile(galaxy_yml):
raise LookupError(
"The collection galaxy.yml path '{path!s}' does not exist.".
format(path=to_native(galaxy_yml))
)
with open(galaxy_yml, 'rb') as manifest_file_obj:
try:
manifest = yaml_load(manifest_file_obj)
except yaml.error.YAMLError as yaml_err:
raise AnsibleError(
"Failed to parse the galaxy.yml at '{path!s}' with "
'the following error:\n{err_txt!s}'.
format(
path=to_native(galaxy_yml),
err_txt=to_native(yaml_err),
),
) from yaml_err
if not isinstance(manifest, dict):
if require_build_metadata:
raise AnsibleError(f"The collection galaxy.yml at '{to_native(galaxy_yml)}' is incorrectly formatted.")
# Valid build metadata is not required by ansible-galaxy list. Raise ValueError to fall back to implicit metadata.
display.warning(f"The collection galaxy.yml at '{to_native(galaxy_yml)}' is incorrectly formatted.")
raise ValueError(f"The collection galaxy.yml at '{to_native(galaxy_yml)}' is incorrectly formatted.")
return _normalize_galaxy_yml_manifest(manifest, galaxy_yml, require_build_metadata)
def _get_json_from_installed_dir(
b_path, # type: bytes
filename, # type: str
): # type: (...) -> dict
b_json_filepath = os.path.join(b_path, to_bytes(filename, errors='surrogate_or_strict'))
try:
with open(b_json_filepath, 'rb') as manifest_fd:
b_json_text = manifest_fd.read()
except (IOError, OSError):
raise LookupError(
"The collection {manifest!s} path '{path!s}' does not exist.".
format(
manifest=filename,
path=to_native(b_json_filepath),
)
)
manifest_txt = to_text(b_json_text, errors='surrogate_or_strict')
try:
manifest = json.loads(manifest_txt)
except ValueError:
raise AnsibleError(
'Collection tar file member {member!s} does not '
'contain a valid json string.'.
format(member=filename),
)
return manifest
def _get_meta_from_installed_dir(
b_path, # type: bytes
): # type: (...) -> dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
manifest = _get_json_from_installed_dir(b_path, MANIFEST_FILENAME)
collection_info = manifest['collection_info']
version = collection_info.get('version')
if not version:
raise AnsibleError(
u'Collection metadata file `{manifest_filename!s}` at `{meta_file!s}` is expected '
u'to have a valid SemVer version value but got {version!s}'.
format(
manifest_filename=MANIFEST_FILENAME,
meta_file=to_text(b_path),
version=to_text(repr(version)),
),
)
return collection_info
def _get_meta_from_tar(
b_path, # type: bytes
): # type: (...) -> dict[str, t.Union[str, list[str], dict[str, str], None, t.Type[Sentinel]]]
if not tarfile.is_tarfile(b_path):
raise AnsibleError(
"Collection artifact at '{path!s}' is not a valid tar file.".
format(path=to_native(b_path)),
)
with tarfile.open(b_path, mode='r') as collection_tar: # type: tarfile.TarFile
try:
member = collection_tar.getmember(MANIFEST_FILENAME)
except KeyError:
raise AnsibleError(
"Collection at '{path!s}' does not contain the "
'required file {manifest_file!s}.'.
format(
path=to_native(b_path),
manifest_file=MANIFEST_FILENAME,
),
)
with _tarfile_extract(collection_tar, member) as (_member, member_obj):
if member_obj is None:
raise AnsibleError(
'Collection tar file does not contain '
'member {member!s}'.format(member=MANIFEST_FILENAME),
)
text_content = to_text(
member_obj.read(),
errors='surrogate_or_strict',
)
try:
manifest = json.loads(text_content)
except ValueError:
raise AnsibleError(
'Collection tar file member {member!s} does not '
'contain a valid json string.'.
format(member=MANIFEST_FILENAME),
)
return manifest['collection_info']
@contextmanager
def _tarfile_extract(
tar, # type: tarfile.TarFile
member, # type: tarfile.TarInfo
):
# type: (...) -> t.Iterator[tuple[tarfile.TarInfo, t.Optional[t.IO[bytes]]]]
tar_obj = tar.extractfile(member)
try:
yield member, tar_obj
finally:
if tar_obj is not None:
tar_obj.close()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,943 |
Ansible-galaxy cannot install subdir requirements when upgraded to 8.0.0
|
### Summary
When trying to install multiple collections from a local directory using `type: subdirs` in requirements.yml, `ansible-galaxy` throws a raw, unhandled Python exception.
The problem started to occur after updating ansible to `8.0.0`.
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu 22.04, Debian Bullseye
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```bash (paste below)
ansible-galaxy collection init collections.testa && ansible-galaxy collection init collections.testb \
&& echo """
collections:
- source: ./collections
type: subdirs
""" > requirements.yml && ansible-galaxy install -r requirements.yml
```
### Expected Results
Same as in previous versions: installation of all collections inside the directory, using the directory name as the namespace.
### Actual Results
```console
ERROR! Unexpected Exception, this is probably a bug: endswith first arg must be str or a tuple of str, not bytes
to see the full traceback, use -vvv
```
```pytb
Traceback (most recent call last):
File ".../ansible/lib/ansible/cli/__init__.py", line 659, in cli_executor
exit_code = cli.run()
^^^^^^^^^
File ".../ansible/bin/ansible-galaxy", line 715, in run
return context.CLIARGS['func']()
^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../ansible/bin/ansible-galaxy", line 117, in method_wrapper
return wrapped_method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../ansible/bin/ansible-galaxy", line 1369, in execute_install
self._execute_install_collection(
File ".../ansible/bin/ansible-galaxy", line 1409, in _execute_install_collection
install_collections(
File ".../ansible/lib/ansible/galaxy/collection/__init__.py", line 681, in install_collections
unsatisfied_requirements = set(
^^^^
File ".../ansible/lib/ansible/galaxy/collection/__init__.py", line 684, in <genexpr>
Requirement.from_dir_path(sub_coll, artifacts_manager)
File ".../ansible/lib/ansible/galaxy/dependency_resolution/dataclasses.py", line 221, in from_dir_path
if dir_path.endswith(to_bytes(os.path.sep)):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: endswith first arg must be str or a tuple of str, not bytes
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80943
|
https://github.com/ansible/ansible/pull/80949
|
c069cf88debe9f1b5d306ee93db366325f4d16e1
|
0982d5fa98e64d241249cfd6dd024e70ae20d0c3
| 2023-06-01T08:36:02Z |
python
| 2023-06-01T20:58:06Z |
lib/ansible/galaxy/dependency_resolution/dataclasses.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2020-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Dependency structs."""
# FIXME: add caching all over the place
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import typing as t
from collections import namedtuple
from collections.abc import MutableSequence, MutableMapping
from glob import iglob
from urllib.parse import urlparse
from yaml import safe_load
if t.TYPE_CHECKING:
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
Collection = t.TypeVar(
'Collection',
'Candidate', 'Requirement',
'_ComputedReqKindsMixin',
)
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection import HAS_PACKAGING, PkgReq
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.module_utils.common.arg_spec import ArgumentSpecValidator
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
_ALLOW_CONCRETE_POINTER_IN_SOURCE = False # NOTE: This is a feature flag
_GALAXY_YAML = b'galaxy.yml'
_MANIFEST_JSON = b'MANIFEST.json'
_SOURCE_METADATA_FILE = b'GALAXY.yml'
display = Display()
def get_validated_source_info(b_source_info_path, namespace, name, version):
source_info_path = to_text(b_source_info_path, errors='surrogate_or_strict')
if not os.path.isfile(b_source_info_path):
return None
try:
with open(b_source_info_path, mode='rb') as fd:
metadata = safe_load(fd)
except OSError as e:
display.warning(
f"Error getting collection source information at '{source_info_path}': {to_text(e, errors='surrogate_or_strict')}"
)
return None
if not isinstance(metadata, MutableMapping):
display.warning(f"Error getting collection source information at '{source_info_path}': expected a YAML dictionary")
return None
schema_errors = _validate_v1_source_info_schema(namespace, name, version, metadata)
if schema_errors:
display.warning(f"Ignoring source metadata file at {source_info_path} due to the following errors:")
display.warning("\n".join(schema_errors))
display.warning("Correct the source metadata file by reinstalling the collection.")
return None
return metadata
def _validate_v1_source_info_schema(namespace, name, version, provided_arguments):
argument_spec_data = dict(
format_version=dict(choices=["1.0.0"]),
download_url=dict(),
version_url=dict(),
server=dict(),
signatures=dict(
type=list,
suboptions=dict(
signature=dict(),
pubkey_fingerprint=dict(),
signing_service=dict(),
pulp_created=dict(),
)
),
name=dict(choices=[name]),
namespace=dict(choices=[namespace]),
version=dict(choices=[version]),
)
if not isinstance(provided_arguments, dict):
raise AnsibleError(
f'Invalid offline source info for {namespace}.{name}:{version}, expected a dict and got {type(provided_arguments)}'
)
validator = ArgumentSpecValidator(argument_spec_data)
validation_result = validator.validate(provided_arguments)
return validation_result.error_messages
def _is_collection_src_dir(dir_path):
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
return os.path.isfile(os.path.join(b_dir_path, _GALAXY_YAML))
def _is_installed_collection_dir(dir_path):
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
return os.path.isfile(os.path.join(b_dir_path, _MANIFEST_JSON))
def _is_collection_dir(dir_path):
return (
_is_installed_collection_dir(dir_path) or
_is_collection_src_dir(dir_path)
)
def _find_collections_in_subdirs(dir_path):
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
subdir_glob_pattern = os.path.join(
b_dir_path,
# b'*', # namespace is supposed to be top-level per spec
b'*', # collection name
)
for subdir in iglob(subdir_glob_pattern):
if os.path.isfile(os.path.join(subdir, _MANIFEST_JSON)):
yield subdir
elif os.path.isfile(os.path.join(subdir, _GALAXY_YAML)):
yield subdir
def _is_collection_namespace_dir(tested_str):
return any(_find_collections_in_subdirs(tested_str))
def _is_file_path(tested_str):
return os.path.isfile(to_bytes(tested_str, errors='surrogate_or_strict'))
def _is_http_url(tested_str):
return urlparse(tested_str).scheme.lower() in {'http', 'https'}
def _is_git_url(tested_str):
return tested_str.startswith(('git+', 'git@'))
def _is_concrete_artifact_pointer(tested_str):
return any(
predicate(tested_str)
for predicate in (
# NOTE: Maintain the checks to be sorted from light to heavy:
_is_git_url,
_is_http_url,
_is_file_path,
_is_collection_dir,
_is_collection_namespace_dir,
)
)
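# Editor's sketch of the light-to-heavy ordering (results assume no matching
# local files or directories exist in the working directory):
#   _is_concrete_artifact_pointer('git+https://github.com/org/repo.git')  -> True
#   _is_concrete_artifact_pointer('https://example.com/coll.tar.gz')      -> True
#   _is_concrete_artifact_pointer('community.general')                    -> False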
class _ComputedReqKindsMixin:
UNIQUE_ATTRS = ('fqcn', 'ver', 'src', 'type')
def __init__(self, *args, **kwargs):
if not self.may_have_offline_galaxy_info:
self._source_info = None
else:
info_path = self.construct_galaxy_info_path(to_bytes(self.src, errors='surrogate_or_strict'))
self._source_info = get_validated_source_info(
info_path,
self.namespace,
self.name,
self.ver
)
def __hash__(self):
return hash(tuple(getattr(self, attr) for attr in _ComputedReqKindsMixin.UNIQUE_ATTRS))
def __eq__(self, candidate):
return hash(self) == hash(candidate)
@classmethod
def from_dir_path_as_unknown( # type: ignore[misc]
cls, # type: t.Type[Collection]
dir_path, # type: bytes
art_mgr, # type: ConcreteArtifactsManager
): # type: (...) -> Collection
"""Make collection from an unspecified dir type.
This alternative constructor attempts to grab metadata from the
given path if it's a directory. If there's no metadata, it
falls back to guessing the FQCN based on the directory path and
sets the version to "*".
It raises a ValueError immediately if the input is not an
existing directory path.
"""
if not os.path.isdir(dir_path):
raise ValueError(
"The collection directory '{path!s}' doesn't exist".
format(path=to_native(dir_path)),
)
try:
return cls.from_dir_path(dir_path, art_mgr)
except ValueError:
return cls.from_dir_path_implicit(dir_path)
@classmethod
def from_dir_path(cls, dir_path, art_mgr):
"""Make collection from an directory with metadata."""
if dir_path.endswith(to_bytes(os.path.sep)):
dir_path = dir_path.rstrip(to_bytes(os.path.sep))
b_dir_path = to_bytes(dir_path, errors='surrogate_or_strict')
if not _is_collection_dir(b_dir_path):
display.warning(
u"Collection at '{path!s}' does not have a {manifest_json!s} "
u'file, nor has it {galaxy_yml!s}: cannot detect version.'.
format(
galaxy_yml=to_text(_GALAXY_YAML),
manifest_json=to_text(_MANIFEST_JSON),
path=to_text(dir_path, errors='surrogate_or_strict'),
),
)
raise ValueError(
'`dir_path` argument must be an installed or a source'
' collection directory.',
)
tmp_inst_req = cls(None, None, dir_path, 'dir', None)
req_version = art_mgr.get_direct_collection_version(tmp_inst_req)
try:
req_name = art_mgr.get_direct_collection_fqcn(tmp_inst_req)
except TypeError as err:
# Looks like installed/source dir but isn't: doesn't have valid metadata.
display.warning(
u"Collection at '{path!s}' has a {manifest_json!s} "
u"or {galaxy_yml!s} file but it contains invalid metadata.".
format(
galaxy_yml=to_text(_GALAXY_YAML),
manifest_json=to_text(_MANIFEST_JSON),
path=to_text(dir_path, errors='surrogate_or_strict'),
),
)
raise ValueError(
"Collection at '{path!s}' has invalid metadata".
format(path=to_text(dir_path, errors='surrogate_or_strict'))
) from err
return cls(req_name, req_version, dir_path, 'dir', None)
@classmethod
def from_dir_path_implicit( # type: ignore[misc]
cls, # type: t.Type[Collection]
dir_path, # type: bytes
): # type: (...) -> Collection
"""Construct a collection instance based on an arbitrary dir.
This alternative constructor infers the FQCN based on the parent
and current directory names. It also sets the version to "*"
regardless of whether any of known metadata files are present.
"""
# There is no metadata, but it isn't required for a functional collection. Determine the namespace.name from the path.
if dir_path.endswith(to_bytes(os.path.sep)):
dir_path = dir_path.rstrip(to_bytes(os.path.sep))
u_dir_path = to_text(dir_path, errors='surrogate_or_strict')
path_list = u_dir_path.split(os.path.sep)
req_name = '.'.join(path_list[-2:])
return cls(req_name, '*', dir_path, 'dir', None) # type: ignore[call-arg]
@classmethod
def from_string(cls, collection_input, artifacts_manager, supplemental_signatures):
req = {}
if _is_concrete_artifact_pointer(collection_input) or AnsibleCollectionRef.is_valid_collection_name(collection_input):
# Arg is a file path or URL to a collection, or just a collection
req['name'] = collection_input
elif ':' in collection_input:
req['name'], _sep, req['version'] = collection_input.partition(':')
if not req['version']:
del req['version']
else:
if not HAS_PACKAGING:
raise AnsibleError("Failed to import packaging, check that a supported version is installed")
try:
pkg_req = PkgReq(collection_input)
except Exception as e:
# packaging doesn't know what this is, let it fly, better errors happen in from_requirement_dict
req['name'] = collection_input
else:
req['name'] = pkg_req.name
if pkg_req.specifier:
req['version'] = to_text(pkg_req.specifier)
req['signatures'] = supplemental_signatures
return cls.from_requirement_dict(req, artifacts_manager)
@classmethod
def from_requirement_dict(cls, collection_req, art_mgr, validate_signature_options=True):
req_name = collection_req.get('name', None)
req_version = collection_req.get('version', '*')
req_type = collection_req.get('type')
# TODO: decide how to deprecate the old src API behavior
req_source = collection_req.get('source', None)
req_signature_sources = collection_req.get('signatures', None)
if req_signature_sources is not None:
if validate_signature_options and art_mgr.keyring is None:
raise AnsibleError(
f"Signatures were provided to verify {req_name} but no keyring was configured."
)
if not isinstance(req_signature_sources, MutableSequence):
req_signature_sources = [req_signature_sources]
req_signature_sources = frozenset(req_signature_sources)
if req_type is None:
if ( # FIXME: decide on the future behavior:
_ALLOW_CONCRETE_POINTER_IN_SOURCE
and req_source is not None
and _is_concrete_artifact_pointer(req_source)
):
src_path = req_source
elif (
req_name is not None
and AnsibleCollectionRef.is_valid_collection_name(req_name)
):
req_type = 'galaxy'
elif (
req_name is not None
and _is_concrete_artifact_pointer(req_name)
):
src_path, req_name = req_name, None
else:
dir_tip_tmpl = ( # NOTE: leading LFs are for concat
'\n\nTip: Make sure you are pointing to the right '
'subdirectory — `{src!s}` looks like a directory '
'but it is neither a collection, nor a namespace '
'dir.'
)
if req_source is not None and os.path.isdir(req_source):
tip = dir_tip_tmpl.format(src=req_source)
elif req_name is not None and os.path.isdir(req_name):
tip = dir_tip_tmpl.format(src=req_name)
elif req_name:
tip = '\n\nCould not find {0}.'.format(req_name)
else:
tip = ''
raise AnsibleError( # NOTE: I'd prefer a ValueError instead
'Neither the collection requirement entry key '
"'name', nor 'source' point to a concrete "
"resolvable collection artifact. Also 'name' is "
'not an FQCN. A valid collection name must be in '
'the format <namespace>.<collection>. Please make '
'sure that the namespace and the collection name '
'contain characters from [a-zA-Z0-9_] only.'
'{extra_tip!s}'.format(extra_tip=tip),
)
if req_type is None:
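# Infer the artifact type from the shape of the source: Git URL, then HTTP(S)
# URL, then plain file path, then collection dir, and finally a namespace dir.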
if _is_git_url(src_path):
req_type = 'git'
req_source = src_path
elif _is_http_url(src_path):
req_type = 'url'
req_source = src_path
elif _is_file_path(src_path):
req_type = 'file'
req_source = src_path
elif _is_collection_dir(src_path):
if _is_installed_collection_dir(src_path) and _is_collection_src_dir(src_path):
# Note that ``download`` requires a dir with a ``galaxy.yml`` and fails if it
# doesn't exist, but if a ``MANIFEST.json`` also exists, it would be used
# instead of the ``galaxy.yml``.
raise AnsibleError(
u"Collection requirement at '{path!s}' has both a {manifest_json!s} "
u"file and a {galaxy_yml!s}.\nThe requirement must either be an installed "
u"collection directory or a source collection directory, not both.".
format(
path=to_text(src_path, errors='surrogate_or_strict'),
manifest_json=to_text(_MANIFEST_JSON),
galaxy_yml=to_text(_GALAXY_YAML),
)
)
req_type = 'dir'
req_source = src_path
elif _is_collection_namespace_dir(src_path):
req_name = None # No name for a virtual req or "namespace."?
req_type = 'subdirs'
req_source = src_path
else:
raise AnsibleError( # NOTE: this is never supposed to be hit
'Failed to automatically detect the collection '
'requirement type.',
)
if req_type not in {'file', 'galaxy', 'git', 'url', 'dir', 'subdirs'}:
raise AnsibleError(
"The collection requirement entry key 'type' must be "
'one of file, galaxy, git, dir, subdirs, or url.'
)
if req_name is None and req_type == 'galaxy':
raise AnsibleError(
'Collections requirement entry should contain '
"the key 'name' if it's requested from a Galaxy-like "
'index server.',
)
if req_type != 'galaxy' and req_source is None:
req_source, req_name = req_name, None
if (
req_type == 'galaxy' and
isinstance(req_source, GalaxyAPI) and
not _is_http_url(req_source.api_server)
):
raise AnsibleError(
"Collections requirement 'source' entry should contain "
'a valid Galaxy API URL but it does not: {not_url!s} '
'is not an HTTP URL.'.
format(not_url=req_source.api_server),
)
if req_type == 'dir' and req_source.endswith(os.path.sep):
req_source = req_source.rstrip(os.path.sep)
tmp_inst_req = cls(req_name, req_version, req_source, req_type, req_signature_sources)
if req_type not in {'galaxy', 'subdirs'} and req_name is None:
req_name = art_mgr.get_direct_collection_fqcn(tmp_inst_req) # TODO: fix the cache key in artifacts manager?
if req_type not in {'galaxy', 'subdirs'} and req_version == '*':
req_version = art_mgr.get_direct_collection_version(tmp_inst_req)
return cls(
req_name, req_version,
req_source, req_type,
req_signature_sources,
)
def __repr__(self):
return (
'<{self!s} of type {coll_type!r} from {src!s}>'.
format(self=self, coll_type=self.type, src=self.src or 'Galaxy')
)
def __str__(self):
return to_native(self.__unicode__())
def __unicode__(self):
if self.fqcn is None:
return (
u'"virtual collection Git repo"' if self.is_scm
else u'"virtual collection namespace"'
)
return (
u'{fqcn!s}:{ver!s}'.
format(fqcn=to_text(self.fqcn), ver=to_text(self.ver))
)
@property
def may_have_offline_galaxy_info(self):
if self.fqcn is None:
# Virtual collection
return False
elif not self.is_dir or self.src is None or not _is_collection_dir(self.src):
# Not a dir or isn't on-disk
return False
return True
def construct_galaxy_info_path(self, b_collection_path):
if not self.may_have_offline_galaxy_info and not self.type == 'galaxy':
raise TypeError('Only installed collections from a Galaxy server have offline Galaxy info')
# Store Galaxy metadata adjacent to the namespace of the collection
# Chop off the last two parts of the path (/ns/coll) to get the dir containing the ns
b_src = to_bytes(b_collection_path, errors='surrogate_or_strict')
b_path_parts = b_src.split(to_bytes(os.path.sep))[0:-2]
b_metadata_dir = to_bytes(os.path.sep).join(b_path_parts)
# ns.coll-1.0.0.info
b_dir_name = to_bytes(f"{self.namespace}.{self.name}-{self.ver}.info", errors="surrogate_or_strict")
# collections/ansible_collections/ns.coll-1.0.0.info/GALAXY.yml
return os.path.join(b_metadata_dir, b_dir_name, _SOURCE_METADATA_FILE)
def _get_separate_ns_n_name(self): # FIXME: use LRU cache
return self.fqcn.split('.')
@property
def namespace(self):
if self.is_virtual:
raise TypeError('Virtual collections do not have a namespace')
return self._get_separate_ns_n_name()[0]
@property
def name(self):
if self.is_virtual:
raise TypeError('Virtual collections do not have a name')
return self._get_separate_ns_n_name()[-1]
@property
def canonical_package_id(self):
if not self.is_virtual:
return to_native(self.fqcn)
return (
'<virtual namespace from {src!s} of type {src_type!s}>'.
format(src=to_native(self.src), src_type=to_native(self.type))
)
@property
def is_virtual(self):
return self.is_scm or self.is_subdirs
@property
def is_file(self):
return self.type == 'file'
@property
def is_dir(self):
return self.type == 'dir'
@property
def namespace_collection_paths(self):
return [
to_native(path)
for path in _find_collections_in_subdirs(self.src)
]
@property
def is_subdirs(self):
return self.type == 'subdirs'
@property
def is_url(self):
return self.type == 'url'
@property
def is_scm(self):
return self.type == 'git'
@property
def is_concrete_artifact(self):
return self.type in {'git', 'url', 'file', 'dir', 'subdirs'}
@property
def is_online_index_pointer(self):
return not self.is_concrete_artifact
@property
def source_info(self):
return self._source_info
RequirementNamedTuple = namedtuple('Requirement', ('fqcn', 'ver', 'src', 'type', 'signature_sources')) # type: ignore[name-match]
CandidateNamedTuple = namedtuple('Candidate', ('fqcn', 'ver', 'src', 'type', 'signatures')) # type: ignore[name-match]
class Requirement(
_ComputedReqKindsMixin,
RequirementNamedTuple,
):
"""An abstract requirement request."""
def __new__(cls, *args, **kwargs):
self = RequirementNamedTuple.__new__(cls, *args, **kwargs)
return self
def __init__(self, *args, **kwargs):
super(Requirement, self).__init__()
class Candidate(
_ComputedReqKindsMixin,
CandidateNamedTuple,
):
"""A concrete collection candidate with its version resolved."""
def __new__(cls, *args, **kwargs):
self = CandidateNamedTuple.__new__(cls, *args, **kwargs)
return self
def __init__(self, *args, **kwargs):
super(Candidate, self).__init__()
def with_signatures_repopulated(self): # type: (Candidate) -> Candidate
"""Populate a new Candidate instance with Galaxy signatures.
:raises AnsibleAssertionError: If the supplied candidate is not sourced from a Galaxy-like index.
"""
if self.type != 'galaxy':
raise AnsibleAssertionError(f"Invalid collection type for {self!r}: unable to get signatures from a galaxy server.")
signatures = self.src.get_collection_signatures(self.namespace, self.name, self.ver)
return self.__class__(self.fqcn, self.ver, self.src, self.type, frozenset([*self.signatures, *signatures]))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,914 |
Please document `argument_spec`
|
### Summary
The [`validate_argument_spec` module](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/validate_argument_spec_module.html) accepts the `argument_spec` argument.
This is all the page states:
> argument_spec : A dictionary like AnsibleModule argument_spec
But I can't find the definition of `argument_spec` anywhere.
It would be appreciated if it were documented. Thank you.
### Issue Type
Documentation Report
### Component Name
validate_argument_spec
### Ansible Version
```console
2.14.5
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
It is not clear what goes into that argument. There are two examples, but presumably they are not comprehensive.
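For illustration, going by the module's two examples, the spec seems to be a mapping of option names to AnsibleModule-style entries (`type`, `required`, `default`, `choices`, `description`, ...). The option names below are invented:
```yaml
- name: sketch of a validate_argument_spec call (illustrative only)
  ansible.builtin.validate_argument_spec:
    argument_spec:
      port:                # hypothetical option
        type: int
        required: true
      protocol:            # hypothetical option
        type: str
        choices: [tcp, udp]
        default: tcp
```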
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80914
|
https://github.com/ansible/ansible/pull/80967
|
9a87ae44068710be427da6dd05d678a3ad6241c3
|
9e14a85fe38631714b3a4e1a4ef1ab74ab63b430
| 2023-05-29T14:03:50Z |
python
| 2023-06-07T15:51:26Z |
lib/ansible/modules/validate_argument_spec.py
|
# -*- coding: utf-8 -*-
# Copyright 2021 Red Hat
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: validate_argument_spec
short_description: Validate role argument specs.
description:
- This module validates role arguments with a defined argument specification.
version_added: "2.11"
options:
argument_spec:
description:
- A dictionary like AnsibleModule argument_spec
required: true
provided_arguments:
description:
- A dictionary of the arguments that will be validated according to argument_spec
author:
- Ansible Core Team
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.conn
- action_common_attributes.flow
attributes:
action:
support: full
async:
support: none
become:
support: none
bypass_host_loop:
support: none
connection:
support: none
check_mode:
support: full
delegation:
support: none
diff_mode:
support: none
platform:
platforms: all
'''
EXAMPLES = r'''
- name: verify vars needed for this task file are present when included
ansible.builtin.validate_argument_spec:
argument_spec: '{{required_data}}'
vars:
required_data:
# unlike spec file, just put the options in directly
stuff:
description: stuff
type: str
choices: ['who', 'knows', 'what']
default: what
but:
description: i guess we need one
type: str
required: true
- name: verify vars needed for this task file are present when included, with spec from a spec file
ansible.builtin.validate_argument_spec:
argument_spec: "{{(lookup('ansible.builtin.file', 'myargspec.yml') | from_yaml )['specname']['options']}}"
- name: verify vars needed for next include and not from inside it, also with params I'll only define there
block:
- ansible.builtin.validate_argument_spec:
argument_spec: "{{lookup('ansible.builtin.file', 'nakedoptions.yml'}}"
provided_arguments:
but: "that i can define on the include itself, like in it's C(vars:) keyword"
- name: the include itself
vars:
stuff: knows
but: nobuts!
'''
RETURN = r'''
argument_errors:
description: A list of arg validation errors.
returned: failure
type: list
elements: str
sample:
- "error message 1"
- "error message 2"
argument_spec_data:
description: A dict of the data from the 'argument_spec' arg.
returned: failure
type: dict
sample:
some_arg:
type: "str"
some_other_arg:
type: "int"
required: true
validate_args_context:
description: A dict of info about where validate_args_spec was used
type: dict
returned: always
sample:
name: my_role
type: role
path: /home/user/roles/my_role/
argument_spec_name: main
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,063 |
/dev/null must be allowed for skeleton in user module
|
### Summary
If you set `skeleton` to `/dev/null` in `user` module you get an error saying that it's not a directory. However it's perfectly fine to do `useradd --skel /dev/null`, so ansible shouldn't be limiting the functionality of the `useradd` command.
`useradd --skel /dev/null` will cause nothing to be added to the new home directory. It used to work correctly in older versions of ansible and should be fixed in this version.
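A minimal sketch of the intended usage (the account name is made up; the option names come from the module docs):
```yaml
- name: create a user whose new home directory gets no skel files
  ansible.builtin.user:
    name: appuser          # hypothetical account
    create_home: yes
    skeleton: /dev/null    # mirrors `useradd --skel /dev/null`
```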
### Issue Type
Bug Report
### Component Name
user
### Ansible Version
```console
$ ansible --version
2.11.1
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
linux
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
`/dev/null` should be allowed and result in no files being added to new home directory
### Actual Results
```console
It caused an error saying that /dev/null is not a directory
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75063
|
https://github.com/ansible/ansible/pull/75948
|
1ecc62ba0609cd75d798e309c6c8dd14958dd01a
|
25b3d3a6f78616534276d2559f952e5073a3ef60
| 2021-06-20T23:53:18Z |
python
| 2023-06-07T16:10:21Z |
changelogs/fragments/75063-allow-dev-nul-as-skeleton-for-new-homedir.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,063 |
/dev/null must be allowed for skeleton in user module
|
### Summary
If you set `skeleton` to `/dev/null` in `user` module you get an error saying that it's not a directory. However it's perfectly fine to do `useradd --skel /dev/null`, so ansible shouldn't be limiting the functionality of the `useradd` command.
`useradd --skel /dev/null` will cause nothing to be added to the new home directory. It used to work correctly in older versions of ansible and should be fixed in this version.
### Issue Type
Bug Report
### Component Name
user
### Ansible Version
```console
$ ansible --version
2.11.1
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
linux
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Expected Results
`/dev/null` should be allowed and result in no files being added to new home directory
### Actual Results
```console
It caused an error saying that /dev/null is not a directory
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75063
|
https://github.com/ansible/ansible/pull/75948
|
1ecc62ba0609cd75d798e309c6c8dd14958dd01a
|
25b3d3a6f78616534276d2559f952e5073a3ef60
| 2021-06-20T23:53:18Z |
python
| 2023-06-07T16:10:21Z |
lib/ansible/modules/user.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Stephen Fromm <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
module: user
version_added: "0.2"
short_description: Manage user accounts
description:
- Manage user accounts and user attributes.
- For Windows targets, use the M(ansible.windows.win_user) module instead.
options:
name:
description:
- Name of the user to create, remove or modify.
type: str
required: true
aliases: [ user ]
uid:
description:
- Optionally sets the I(UID) of the user.
type: int
comment:
description:
- Optionally sets the description (aka I(GECOS)) of user account.
type: str
hidden:
description:
- macOS only, optionally hide the user from the login window and system preferences.
- The default will be C(true) if the I(system) option is used.
type: bool
version_added: "2.6"
non_unique:
description:
- Optionally when used with the -u option, this option allows changing the user ID to a non-unique value.
type: bool
default: no
version_added: "1.1"
seuser:
description:
- Optionally sets the seuser type (user_u) on selinux enabled systems.
type: str
version_added: "2.1"
group:
description:
- Optionally sets the user's primary group (takes a group name).
type: str
groups:
description:
- List of groups user will be added to.
- By default, the user is removed from all other groups. Configure C(append) to modify this.
- When set to an empty string C(''),
the user is removed from all groups except the primary group.
- Before Ansible 2.3, the only input format allowed was a comma separated string.
type: list
elements: str
append:
description:
- If C(true), add the user to the groups specified in C(groups).
- If C(false), user will only be added to the groups specified in C(groups),
removing them from all other groups.
type: bool
default: no
shell:
description:
- Optionally set the user's shell.
- On macOS, before Ansible 2.5, the default shell for non-system users was C(/usr/bin/false).
Since Ansible 2.5, the default shell for non-system users on macOS is C(/bin/bash).
- See notes for details on how other operating systems determine the default shell by
the underlying tool.
type: str
home:
description:
- Optionally set the user's home directory.
type: path
skeleton:
description:
- Optionally set a home skeleton directory.
- Requires C(create_home) option!
type: str
version_added: "2.0"
password:
description:
- If provided, set the user's password to the provided encrypted hash (Linux) or plain text password (macOS).
- B(Linux/Unix/POSIX:) Enter the hashed password as the value.
- See L(FAQ entry,https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#how-do-i-generate-encrypted-passwords-for-the-user-module)
for details on various ways to generate the hash of a password.
- To create an account with a locked/disabled password on Linux systems, set this to C('!') or C('*').
- To create an account with a locked/disabled password on OpenBSD, set this to C('*************').
- B(OS X/macOS:) Enter the cleartext password as the value. Be sure to take relevant security precautions.
type: str
state:
description:
- Whether the account should exist or not, taking action if the state is different from what is stated.
- See this L(FAQ entry,https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#running-on-macos-as-a-target)
for additional requirements when removing users on macOS systems.
type: str
choices: [ absent, present ]
default: present
create_home:
description:
- Unless set to C(false), a home directory will be made for the user
when the account is created or if the home directory does not exist.
- Changed from C(createhome) to C(create_home) in Ansible 2.5.
type: bool
default: yes
aliases: [ createhome ]
move_home:
description:
- "If set to C(true) when used with C(home: ), attempt to move the user's old home
directory to the specified directory if it isn't there already and the old home exists."
type: bool
default: no
system:
description:
- When creating an account C(state=present), setting this to C(true) makes the user a system account.
- This setting cannot be changed on existing users.
type: bool
default: no
force:
description:
- This only affects C(state=absent), it forces removal of the user and associated directories on supported platforms.
- The behavior is the same as C(userdel --force), check the man page for C(userdel) on your system for details and support.
- When used with C(generate_ssh_key=yes) this forces an existing key to be overwritten.
type: bool
default: no
remove:
description:
- This only affects C(state=absent), it attempts to remove directories associated with the user.
- The behavior is the same as C(userdel --remove), check the man page for details and support.
type: bool
default: no
login_class:
description:
- Optionally sets the user's login class, a feature of most BSD OSs.
type: str
generate_ssh_key:
description:
- Whether to generate a SSH key for the user in question.
- This will B(not) overwrite an existing SSH key unless used with C(force=yes).
type: bool
default: no
version_added: "0.9"
ssh_key_bits:
description:
- Optionally specify number of bits in SSH key to create.
- The default value depends on ssh-keygen.
type: int
version_added: "0.9"
ssh_key_type:
description:
- Optionally specify the type of SSH key to generate.
- Available SSH key types will depend on implementation
present on target host.
type: str
default: rsa
version_added: "0.9"
ssh_key_file:
description:
- Optionally specify the SSH key filename.
- If this is a relative filename then it will be relative to the user's home directory.
- This parameter defaults to I(.ssh/id_rsa).
type: path
version_added: "0.9"
ssh_key_comment:
description:
- Optionally define the comment for the SSH key.
type: str
default: ansible-generated on $HOSTNAME
version_added: "0.9"
ssh_key_passphrase:
description:
- Set a passphrase for the SSH key.
- If no passphrase is provided, the SSH key will default to having no passphrase.
type: str
version_added: "0.9"
update_password:
description:
- C(always) will update passwords if they differ.
- C(on_create) will only set the password for newly created users.
type: str
choices: [ always, on_create ]
default: always
version_added: "1.3"
expires:
description:
- An expiry time for the user in epoch, it will be ignored on platforms that do not support this.
- Currently supported on GNU/Linux, FreeBSD, and DragonFlyBSD.
- Since Ansible 2.6 you can remove the expiry time by specifying a negative value.
Currently supported on GNU/Linux and FreeBSD.
type: float
version_added: "1.9"
password_lock:
description:
- Lock the password (C(usermod -L), C(usermod -U), C(pw lock)).
- Implementation differs by platform. This option does not always mean the user cannot login using other methods.
- This option does not disable the user, only lock the password.
- This must be set to C(False) in order to unlock a currently locked password. The absence of this parameter will not unlock a password.
- Currently supported on Linux, FreeBSD, DragonFlyBSD, NetBSD, OpenBSD.
type: bool
version_added: "2.6"
local:
description:
- Forces the use of "local" command alternatives on platforms that implement it.
- This is useful in environments that use centralized authentication when you want to manipulate the local users
(in other words, it uses C(luseradd) instead of C(useradd)).
- This will check C(/etc/passwd) for an existing account before invoking commands. If the local account database
exists somewhere other than C(/etc/passwd), this setting will not work properly.
- This requires that the above commands as well as C(/etc/passwd) must exist on the target host, otherwise it will be a fatal error.
type: bool
default: no
version_added: "2.4"
profile:
description:
- Sets the profile of the user.
- Does nothing when used with other platforms.
- Can set multiple profiles using comma separation.
- To delete all the profiles, use C(profile='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
authorization:
description:
- Sets the authorization of the user.
- Does nothing when used with other platforms.
- Can set multiple authorizations using comma separation.
- To delete all authorizations, use C(authorization='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
role:
description:
- Sets the role of the user.
- Does nothing when used with other platforms.
- Can set multiple roles using comma separation.
- To delete all roles, use C(role='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
password_expire_max:
description:
- Maximum number of days between password change.
- Supported on Linux only.
type: int
version_added: "2.11"
password_expire_min:
description:
- Minimum number of days between password change.
- Supported on Linux only.
type: int
version_added: "2.11"
umask:
description:
- Sets the umask of the user.
- Does nothing when used with other platforms.
- Currently supported on Linux.
- Requires C(local) is omitted or False.
type: str
version_added: "2.12"
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: posix
notes:
- There are specific requirements per platform on user management utilities. However
they generally come pre-installed with the system and Ansible will require they
are present at runtime. If they are not, a descriptive error message will be shown.
- On SunOS platforms, the shadow file is backed up automatically since this module edits it directly.
On other platforms, the shadow file is backed up by the underlying tools used by this module.
- On macOS, this module uses C(dscl) to create, modify, and delete accounts. C(dseditgroup) is used to
modify group membership. Accounts are hidden from the login window by modifying
C(/Library/Preferences/com.apple.loginwindow.plist).
- On FreeBSD, this module uses C(pw useradd) and C(chpass) to create, C(pw usermod) and C(chpass) to modify,
C(pw userdel) to remove, C(pw lock) to lock, and C(pw unlock) to unlock accounts.
- On all other platforms, this module uses C(useradd) to create, C(usermod) to modify, and
C(userdel) to remove accounts.
seealso:
- module: ansible.posix.authorized_key
- module: ansible.builtin.group
- module: ansible.windows.win_user
author:
- Stephen Fromm (@sfromm)
'''
EXAMPLES = r'''
- name: Add the user 'johnd' with a specific uid and a primary group of 'admin'
ansible.builtin.user:
name: johnd
comment: John Doe
uid: 1040
group: admin
- name: Add the user 'james' with a bash shell, appending the group 'admins' and 'developers' to the user's groups
ansible.builtin.user:
name: james
shell: /bin/bash
groups: admins,developers
append: yes
- name: Remove the user 'johnd'
ansible.builtin.user:
name: johnd
state: absent
remove: yes
- name: Create a 2048-bit SSH key for user jsmith in ~jsmith/.ssh/id_rsa
ansible.builtin.user:
name: jsmith
generate_ssh_key: yes
ssh_key_bits: 2048
ssh_key_file: .ssh/id_rsa
- name: Added a consultant whose account you want to expire
ansible.builtin.user:
name: james18
shell: /bin/zsh
groups: developers
expires: 1422403387
- name: Starting at Ansible 2.6, modify user, remove expiry time
ansible.builtin.user:
name: james18
expires: -1
- name: Set maximum expiration date for password
ansible.builtin.user:
name: ram19
password_expire_max: 10
- name: Set minimum expiration date for password
ansible.builtin.user:
name: pushkar15
password_expire_min: 5
'''
RETURN = r'''
append:
description: Whether or not to append the user to groups.
returned: When state is C(present) and the user exists
type: bool
sample: True
comment:
description: Comment section from passwd file, usually the user name.
returned: When user exists
type: str
sample: Agent Smith
create_home:
description: Whether or not to create the home directory.
returned: When user does not exist and not check mode
type: bool
sample: True
force:
description: Whether or not a user account was forcibly deleted.
returned: When I(state) is C(absent) and user exists
type: bool
sample: False
group:
description: Primary user group ID
returned: When user exists
type: int
sample: 1001
groups:
description: List of groups of which the user is a member.
returned: When I(groups) is not empty and I(state) is C(present)
type: str
sample: 'chrony,apache'
home:
description: "Path to user's home directory."
returned: When I(state) is C(present)
type: str
sample: '/home/asmith'
move_home:
description: Whether or not to move an existing home directory.
returned: When I(state) is C(present) and user exists
type: bool
sample: False
name:
description: User account name.
returned: always
type: str
sample: asmith
password:
description: Masked value of the password.
returned: When I(state) is C(present) and I(password) is not empty
type: str
sample: 'NOT_LOGGING_PASSWORD'
remove:
description: Whether or not to remove the user account.
returned: When I(state) is C(absent) and user exists
type: bool
sample: True
shell:
description: User login shell.
returned: When I(state) is C(present)
type: str
sample: '/bin/bash'
ssh_fingerprint:
description: Fingerprint of generated SSH key.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: '2048 SHA256:aYNHYcyVm87Igh0IMEDMbvW0QDlRQfE0aJugp684ko8 ansible-generated on host (RSA)'
ssh_key_file:
description: Path to generated SSH private key file.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: /home/asmith/.ssh/id_rsa
ssh_public_key:
description: Generated SSH public key file.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: >
'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC95opt4SPEC06tOYsJQJIuN23BbLMGmYo8ysVZQc4h2DZE9ugbjWWGS1/pweUGjVstgzMkBEeBCByaEf/RJKNecKRPeGd2Bw9DCj/bn5Z6rGfNENKBmo
618mUJBvdlEgea96QGjOwSB7/gmonduC7gsWDMNcOdSE3wJMTim4lddiBx4RgC9yXsJ6Tkz9BHD73MXPpT5ETnse+A3fw3IGVSjaueVnlUyUmOBf7fzmZbhlFVXf2Zi2rFTXqvbdGHKkzpw1U8eB8xFPP7y
d5u1u0e6Acju/8aZ/l17IDFiLke5IzlqIMRTEbDwLNeO84YQKWTm9fODHzhYe0yvxqLiK07 ansible-generated on host'
stderr:
description: Standard error from running commands.
returned: When stderr is returned by a command that is run
type: str
sample: Group wheels does not exist
stdout:
description: Standard output from running commands.
returned: When standard output is returned by the command that is run
type: str
sample:
system:
description: Whether or not the account is a system account.
returned: When I(system) is passed to the module and the account does not exist
type: bool
sample: True
uid:
description: User ID of the user account.
returned: When I(uid) is passed to the module
type: int
sample: 1044
'''
import ctypes.util
import grp
import calendar
import os
import re
import pty
import pwd
import select
import shutil
import socket
import subprocess
import time
import math
from ansible.module_utils import distro
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.sys_info import get_platform_subclass
import ansible.module_utils.compat.typing as t
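# ctypes mirror of <shadow.h> struct spwd, letting us call libc getspnam() directly to read shadow entries.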
class StructSpwdType(ctypes.Structure):
_fields_ = [
('sp_namp', ctypes.c_char_p),
('sp_pwdp', ctypes.c_char_p),
('sp_lstchg', ctypes.c_long),
('sp_min', ctypes.c_long),
('sp_max', ctypes.c_long),
('sp_warn', ctypes.c_long),
('sp_inact', ctypes.c_long),
('sp_expire', ctypes.c_long),
('sp_flag', ctypes.c_ulong),
]
try:
_LIBC = ctypes.cdll.LoadLibrary(
t.cast(
str,
ctypes.util.find_library('c')
)
)
_LIBC.getspnam.argtypes = (ctypes.c_char_p,)
_LIBC.getspnam.restype = ctypes.POINTER(StructSpwdType)
HAVE_SPWD = True
except AttributeError:
HAVE_SPWD = False
_HASH_RE = re.compile(r'[^a-zA-Z0-9./=]')
def getspnam(b_name):
return _LIBC.getspnam(b_name).contents
class User(object):
"""
This is a generic User manipulation class that is subclassed
based on platform.
A subclass may wish to override the following action methods:-
- create_user()
- remove_user()
- modify_user()
- ssh_key_gen()
- ssh_key_fingerprint()
- user_exists()
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None # type: str | None
PASSWORDFILE = '/etc/passwd'
SHADOWFILE = '/etc/shadow' # type: str | None
SHADOWFILE_EXPIRE_INDEX = 7
LOGIN_DEFS = '/etc/login.defs'
DATE_FORMAT = '%Y-%m-%d'
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(User)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.name = module.params['name']
self.uid = module.params['uid']
self.hidden = module.params['hidden']
self.non_unique = module.params['non_unique']
self.seuser = module.params['seuser']
self.group = module.params['group']
self.comment = module.params['comment']
self.shell = module.params['shell']
self.password = module.params['password']
self.force = module.params['force']
self.remove = module.params['remove']
self.create_home = module.params['create_home']
self.move_home = module.params['move_home']
self.skeleton = module.params['skeleton']
self.system = module.params['system']
self.login_class = module.params['login_class']
self.append = module.params['append']
self.sshkeygen = module.params['generate_ssh_key']
self.ssh_bits = module.params['ssh_key_bits']
self.ssh_type = module.params['ssh_key_type']
self.ssh_comment = module.params['ssh_key_comment']
self.ssh_passphrase = module.params['ssh_key_passphrase']
self.update_password = module.params['update_password']
self.home = module.params['home']
self.expires = None
self.password_lock = module.params['password_lock']
self.groups = None
self.local = module.params['local']
self.profile = module.params['profile']
self.authorization = module.params['authorization']
self.role = module.params['role']
self.password_expire_max = module.params['password_expire_max']
self.password_expire_min = module.params['password_expire_min']
self.umask = module.params['umask']
if self.umask is not None and self.local:
module.fail_json(msg="'umask' can not be used with 'local'")
if module.params['groups'] is not None:
self.groups = ','.join(module.params['groups'])
if module.params['expires'] is not None:
try:
self.expires = time.gmtime(module.params['expires'])
except Exception as e:
module.fail_json(msg="Invalid value for 'expires' %s: %s" % (self.expires, to_native(e)))
if module.params['ssh_key_file'] is not None:
self.ssh_file = module.params['ssh_key_file']
else:
self.ssh_file = os.path.join('.ssh', 'id_%s' % self.ssh_type)
if self.groups is None and self.append:
# Change the argument_spec in 2.14 and remove this warning
# required_by={'append': ['groups']}
module.warn("'append' is set, but no 'groups' are specified. Use 'groups' for appending new groups."
"This will change to an error in Ansible 2.14.")
def check_password_encrypted(self):
# Darwin needs cleartext password, so skip validation
if self.module.params['password'] and self.platform != 'Darwin':
maybe_invalid = False
# Allow setting certain passwords in order to disable the account
if self.module.params['password'] in set(['*', '!', '*************']):
maybe_invalid = False
else:
# : for delimiter, * for disable user, ! for lock user
# these characters are invalid in the password
if any(char in self.module.params['password'] for char in ':*!'):
maybe_invalid = True
if '$' not in self.module.params['password']:
maybe_invalid = True
else:
fields = self.module.params['password'].split("$")
if len(fields) >= 3:
# contains character outside the crypto constraint
if bool(_HASH_RE.search(fields[-1])):
maybe_invalid = True
# md5
if fields[1] == '1' and len(fields[-1]) != 22:
maybe_invalid = True
# sha256
if fields[1] == '5' and len(fields[-1]) != 43:
maybe_invalid = True
# sha512
if fields[1] == '6' and len(fields[-1]) != 86:
maybe_invalid = True
else:
maybe_invalid = True
if maybe_invalid:
self.module.warn("The input password appears not to have been hashed. "
"The 'password' argument must be encrypted for this module to work properly.")
def execute_command(self, cmd, use_unsafe_shell=False, data=None, obey_checkmode=True):
if self.module.check_mode and obey_checkmode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
else:
# cast all args to strings ansible-modules-core/issues/4397
cmd = [str(x) for x in cmd]
return self.module.run_command(cmd, use_unsafe_shell=use_unsafe_shell, data=data)
def backup_shadow(self):
if not self.module.check_mode and self.SHADOWFILE:
return self.module.backup_local(self.SHADOWFILE)
def remove_user_userdel(self):
if self.local:
command_name = 'luserdel'
else:
command_name = 'userdel'
cmd = [self.module.get_bin_path(command_name, True)]
if self.force and not self.local:
cmd.append('-f')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self):
if self.local:
command_name = 'luseradd'
lgroupmod_cmd = self.module.get_bin_path('lgroupmod', True)
lchage_cmd = self.module.get_bin_path('lchage', True)
else:
command_name = 'useradd'
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.seuser is not None:
cmd.append('-Z')
cmd.append(self.seuser)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
elif self.group_exists(self.name):
# use the -N option (no user group) if a group already
# exists with the same name as the user to prevent
# errors from useradd trying to create a group when
# USERGROUPS_ENAB is set in /etc/login.defs.
if self.local:
# luseradd uses -n instead of -N
cmd.append('-n')
else:
if os.path.exists('/etc/redhat-release'):
dist = distro.version()
major_release = int(dist.split('.')[0])
if major_release <= 5:
cmd.append('-n')
else:
cmd.append('-N')
elif os.path.exists('/etc/SuSE-release'):
# -N did not exist in useradd before SLE 11 and did not
# automatically create a group
dist = distro.version()
major_release = int(dist.split('.')[0])
if major_release >= 12:
cmd.append('-N')
else:
cmd.append('-N')
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
if not self.local:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
# If the specified path to the user home contains parent directories that
# do not exist and create_home is True first create the parent directory
# since useradd cannot create it.
if self.create_home:
parent = os.path.dirname(self.home)
if not os.path.isdir(parent):
self.create_homedir(self.home)
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None and not self.local:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('')
else:
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
if self.password is not None:
cmd.append('-p')
if self.password_lock:
cmd.append('!%s' % self.password)
else:
cmd.append(self.password)
if self.create_home:
if not self.local:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if not self.local or rc != 0:
return (rc, out, err)
if self.expires is not None:
if self.expires < time.gmtime(0):
lexpires = -1
else:
# Convert seconds since Epoch to days since Epoch
lexpires = int(math.floor(self.module.params['expires'])) // 86400
(rc, _out, _err) = self.execute_command([lchage_cmd, '-E', to_native(lexpires), self.name])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
if self.groups is None or len(self.groups) == 0:
return (rc, out, err)
for add_group in groups:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-M', self.name, add_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
return (rc, out, err)
def _check_usermod_append(self):
# check if this version of usermod can append groups
if self.local:
command_name = 'lusermod'
else:
command_name = 'usermod'
usermod_path = self.module.get_bin_path(command_name, True)
# for some reason, usermod --help cannot be used by non root
# on RH/Fedora, due to lack of execute bit for others
if not os.access(usermod_path, os.X_OK):
return False
cmd = [usermod_path, '--help']
(rc, data1, data2) = self.execute_command(cmd, obey_checkmode=False)
helpout = data1 + data2
# check if --append exists
lines = to_native(helpout).split('\n')
for line in lines:
if line.strip().startswith('-a, --append'):
return True
return False
def modify_user_usermod(self):
if self.local:
command_name = 'lusermod'
lgroupmod_cmd = self.module.get_bin_path('lgroupmod', True)
lgroupmod_add = set()
lgroupmod_del = set()
lchage_cmd = self.module.get_bin_path('lchage', True)
lexpires = None
else:
command_name = 'usermod'
cmd = [self.module.get_bin_path(command_name, True)]
info = self.user_info()
has_append = self._check_usermod_append()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(ginfo[2])
if self.groups is not None:
# get a list of all groups for the user, including the primary
current_groups = self.user_group_membership(exclude_primary=False)
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False, names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
if has_append:
cmd.append('-a')
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if self.local:
if self.append:
lgroupmod_add = set(groups).difference(current_groups)
lgroupmod_del = set()
else:
lgroupmod_add = set(groups).difference(current_groups)
lgroupmod_del = set(current_groups).difference(groups)
else:
if self.append and not has_append:
cmd.append('-A')
cmd.append(','.join(group_diff))
else:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None:
current_expires = int(self.user_password()[1])
if self.expires < time.gmtime(0):
if current_expires >= 0:
if self.local:
lexpires = -1
else:
cmd.append('-e')
cmd.append('')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires * 86400)
# Current expires is negative or we compare year, month, and day only
if current_expires < 0 or current_expire_date[:3] != self.expires[:3]:
if self.local:
# Convert seconds since Epoch to days since Epoch
lexpires = int(math.floor(self.module.params['expires'])) // 86400
else:
cmd.append('-e')
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
# Lock if no password or unlocked, unlock only if locked
if self.password_lock and not info[1].startswith('!'):
cmd.append('-L')
elif self.password_lock is False and info[1].startswith('!'):
# usermod will refuse to unlock a user with no password, module shows 'changed' regardless
cmd.append('-U')
if self.update_password == 'always' and self.password is not None and info[1].lstrip('!') != self.password.lstrip('!'):
# Remove options that are mutually exclusive with -p
cmd = [c for c in cmd if c not in ['-U', '-L']]
cmd.append('-p')
if self.password_lock:
# Lock the account and set the hash in a single command
cmd.append('!%s' % self.password)
else:
cmd.append(self.password)
(rc, out, err) = (None, '', '')
# skip if no usermod changes to be made
if len(cmd) > 1:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if not self.local or not (rc is None or rc == 0):
return (rc, out, err)
if lexpires is not None:
(rc, _out, _err) = self.execute_command([lchage_cmd, '-E', to_native(lexpires), self.name])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
if len(lgroupmod_add) == 0 and len(lgroupmod_del) == 0:
return (rc, out, err)
for add_group in lgroupmod_add:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-M', self.name, add_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
for del_group in lgroupmod_del:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-m', self.name, del_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
return (rc, out, err)
def group_exists(self, group):
try:
# Try group as a gid first
grp.getgrgid(int(group))
return True
except (ValueError, KeyError):
try:
grp.getgrnam(group)
return True
except KeyError:
return False
def group_info(self, group):
if not self.group_exists(group):
return False
try:
# Try group as a gid first
return list(grp.getgrgid(int(group)))
except (ValueError, KeyError):
return list(grp.getgrnam(group))
def get_groups_set(self, remove_existing=True, names_only=False):
if self.groups is None:
return None
info = self.user_info()
groups = set(x.strip() for x in self.groups.split(',') if x)
group_names = set()
for g in groups.copy():
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
group_info = self.group_info(g)
if info and remove_existing and group_info[2] == info[3]:
groups.remove(g)
elif names_only:
group_names.add(group_info[0])
if names_only:
return group_names
return groups
def user_group_membership(self, exclude_primary=True):
''' Return a list of groups the user belongs to '''
groups = []
info = self.get_pwd_info()
for group in grp.getgrall():
if self.name in group.gr_mem:
# Exclude the user's primary group by default
if not exclude_primary:
groups.append(group[0])
else:
if info[3] != group.gr_gid:
groups.append(group[0])
return groups
def user_exists(self):
# The pwd module does not distinguish between local and directory accounts.
# Its output cannot be used to determine whether or not an account exists locally.
# It returns True if the account exists locally or in the directory, so instead
# look in the local PASSWORD file for an existing account.
if self.local:
if not os.path.exists(self.PASSWORDFILE):
self.module.fail_json(msg="'local: true' specified but unable to find local account file {0} to parse.".format(self.PASSWORDFILE))
exists = False
name_test = '{0}:'.format(self.name)
with open(self.PASSWORDFILE, 'rb') as f:
reversed_lines = f.readlines()[::-1]
for line in reversed_lines:
if line.startswith(to_bytes(name_test)):
exists = True
break
if not exists:
self.module.warn(
"'local: true' specified and user '{name}' was not found in {file}. "
"The local user account may already exist if the local account database exists "
"somewhere other than {file}.".format(file=self.PASSWORDFILE, name=self.name))
return exists
else:
try:
if pwd.getpwnam(self.name):
return True
except KeyError:
return False
def get_pwd_info(self):
if not self.user_exists():
return False
return list(pwd.getpwnam(self.name))
def user_info(self):
if not self.user_exists():
return False
info = self.get_pwd_info()
if len(info[1]) == 1 or len(info[1]) == 0:
info[1] = self.user_password()[0]
return info
def set_password_expire(self):
min_needs_change = self.password_expire_min is not None
max_needs_change = self.password_expire_max is not None
if HAVE_SPWD:
try:
shadow_info = getspnam(to_bytes(self.name))
except ValueError:
return None, '', ''
min_needs_change &= self.password_expire_min != shadow_info.sp_min
max_needs_change &= self.password_expire_max != shadow_info.sp_max
if not (min_needs_change or max_needs_change):
return (None, '', '') # target state already reached
command_name = 'chage'
cmd = [self.module.get_bin_path(command_name, True)]
if min_needs_change:
cmd.extend(["-m", self.password_expire_min])
if max_needs_change:
cmd.extend(["-M", self.password_expire_max])
cmd.append(self.name)
return self.execute_command(cmd)
def user_password(self):
passwd = ''
expires = ''
if HAVE_SPWD:
try:
shadow_info = getspnam(to_bytes(self.name))
passwd = to_native(shadow_info.sp_pwdp)
expires = shadow_info.sp_expire
return passwd, expires
except ValueError:
return passwd, expires
if not self.user_exists():
return passwd, expires
elif self.SHADOWFILE:
passwd, expires = self.parse_shadow_file()
return passwd, expires
def parse_shadow_file(self):
passwd = ''
expires = ''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
passwd = line.split(':')[1]
expires = line.split(':')[self.SHADOWFILE_EXPIRE_INDEX] or -1
return passwd, expires
def get_ssh_key_path(self):
info = self.user_info()
if os.path.isabs(self.ssh_file):
ssh_key_file = self.ssh_file
else:
if not os.path.exists(info[5]) and not self.module.check_mode:
raise Exception('User %s home directory does not exist' % self.name)
ssh_key_file = os.path.join(info[5], self.ssh_file)
return ssh_key_file
def ssh_key_gen(self):
info = self.user_info()
overwrite = None
try:
ssh_key_file = self.get_ssh_key_path()
except Exception as e:
return (1, '', to_native(e))
ssh_dir = os.path.dirname(ssh_key_file)
if not os.path.exists(ssh_dir):
if self.module.check_mode:
return (0, '', '')
try:
os.mkdir(ssh_dir, int('0700', 8))
os.chown(ssh_dir, info[2], info[3])
except OSError as e:
return (1, '', 'Failed to create %s: %s' % (ssh_dir, to_native(e)))
if os.path.exists(ssh_key_file):
if self.force:
# ssh-keygen doesn't support overwriting the key interactively, so send 'y' to confirm
overwrite = 'y'
else:
return (None, 'Key already exists, use "force: yes" to overwrite', '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-t')
cmd.append(self.ssh_type)
if self.ssh_bits > 0:
cmd.append('-b')
cmd.append(self.ssh_bits)
cmd.append('-C')
cmd.append(self.ssh_comment)
cmd.append('-f')
cmd.append(ssh_key_file)
if self.ssh_passphrase is not None:
if self.module.check_mode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
master_in_fd, slave_in_fd = pty.openpty()
master_out_fd, slave_out_fd = pty.openpty()
master_err_fd, slave_err_fd = pty.openpty()
env = os.environ.copy()
env['LC_ALL'] = get_best_parsable_locale(self.module)
try:
p = subprocess.Popen([to_bytes(c) for c in cmd],
stdin=slave_in_fd,
stdout=slave_out_fd,
stderr=slave_err_fd,
preexec_fn=os.setsid,
env=env)
out_buffer = b''
err_buffer = b''
while p.poll() is None:
r_list = select.select([master_out_fd, master_err_fd], [], [], 1)[0]
first_prompt = b'Enter passphrase (empty for no passphrase):'
second_prompt = b'Enter same passphrase again'
prompt = first_prompt
for fd in r_list:
if fd == master_out_fd:
chunk = os.read(master_out_fd, 10240)
out_buffer += chunk
if prompt in out_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
else:
chunk = os.read(master_err_fd, 10240)
err_buffer += chunk
if prompt in err_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
if b'Overwrite (y/n)?' in out_buffer or b'Overwrite (y/n)?' in err_buffer:
# The key was created between us checking for existence and now
return (None, 'Key already exists', '')
rc = p.returncode
out = to_native(out_buffer)
err = to_native(err_buffer)
except OSError as e:
return (1, '', to_native(e))
else:
cmd.append('-N')
cmd.append('')
(rc, out, err) = self.execute_command(cmd, data=overwrite)
if rc == 0 and not self.module.check_mode:
# If the keys were successfully created, we should be able
# to tweak ownership.
os.chown(ssh_key_file, info[2], info[3])
os.chown('%s.pub' % ssh_key_file, info[2], info[3])
return (rc, out, err)
def ssh_key_fingerprint(self):
ssh_key_file = self.get_ssh_key_path()
if not os.path.exists(ssh_key_file):
return (1, 'SSH Key file %s does not exist' % ssh_key_file, '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-l')
cmd.append('-f')
cmd.append(ssh_key_file)
return self.execute_command(cmd, obey_checkmode=False)
def get_ssh_public_key(self):
ssh_public_key_file = '%s.pub' % self.get_ssh_key_path()
try:
with open(ssh_public_key_file, 'r') as f:
ssh_public_key = f.read().strip()
except IOError:
return None
return ssh_public_key
def create_user(self):
# by default we use the create_user_useradd method
return self.create_user_useradd()
def remove_user(self):
# by default we use the remove_user_userdel method
return self.remove_user_userdel()
def modify_user(self):
# by default we use the modify_user_usermod method
return self.modify_user_usermod()
def create_homedir(self, path):
if not os.path.exists(path):
if self.skeleton is not None:
skeleton = self.skeleton
else:
skeleton = '/etc/skel'
if os.path.exists(skeleton):
try:
shutil.copytree(skeleton, path, symlinks=True)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
else:
try:
os.makedirs(path)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# get umask from /etc/login.defs and set correct home mode
if os.path.exists(self.LOGIN_DEFS):
with open(self.LOGIN_DEFS, 'r') as f:
for line in f:
m = re.match(r'^UMASK\s+(\d+)$', line)
if m:
umask = int(m.group(1), 8)
mode = 0o777 & ~umask
try:
os.chmod(path, mode)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
def chown_homedir(self, uid, gid, path):
try:
os.chown(path, uid, gid)
for root, dirs, files in os.walk(path):
for d in dirs:
os.chown(os.path.join(root, d), uid, gid)
for f in files:
os.chown(os.path.join(root, f), uid, gid)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# ===========================================
class FreeBsdUser(User):
"""
This is a FreeBSD User manipulation class - it uses the pw command
to manipulate the user database, followed by the chpass command
to change the password.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'FreeBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
SHADOWFILE_EXPIRE_INDEX = 6
DATE_FORMAT = '%d-%b-%Y'
def _handle_lock(self):
info = self.user_info()
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'lock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'unlock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
return (None, '', '')
def remove_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'userdel',
'-n',
self.name
]
if self.remove:
cmd.append('-r')
return self.execute_command(cmd)
def create_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'useradd',
'-n',
self.name,
]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.expires is not None:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('0')
else:
cmd.append(str(calendar.timegm(self.expires)))
# system cannot be handled currently - should we error if it's requested?
# create the user
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.password is not None:
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
_rc, _out, _err = self.execute_command(cmd)
if rc is None:
rc = _rc
out += _out
err += _err
# we have to lock/unlock the password in a distinct command
_rc, _out, _err = self._handle_lock()
if rc is None:
rc = _rc
out += _out
err += _err
return (rc, out, err)
def modify_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'usermod',
'-n',
self.name
]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
if (info[5] != self.home and self.move_home) or (not os.path.exists(self.home) and self.create_home):
cmd.append('-m')
if info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
user_login_class = line.split(':')[4]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set(names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.expires is not None:
current_expires = int(self.user_password()[1])
# If expiration is negative or zero and the current expiration is greater than zero, disable expiration.
# In OpenBSD, setting expiration to zero disables expiration. It does not expire the account.
if self.expires <= time.gmtime(0):
if current_expires > 0:
cmd.append('-e')
cmd.append('0')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires)
# Current expires is negative or we compare year, month, and day only
if current_expires <= 0 or current_expire_date[:3] != self.expires[:3]:
cmd.append('-e')
cmd.append(str(calendar.timegm(self.expires)))
(rc, out, err) = (None, '', '')
# modify the user if cmd will do anything
if cmd_len != len(cmd):
(rc, _out, _err) = self.execute_command(cmd)
out += _out
err += _err
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.update_password == 'always' and self.password is not None and info[1].lstrip('*LOCKED*') != self.password.lstrip('*LOCKED*'):
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
_rc, _out, _err = self.execute_command(cmd)
if rc is None:
rc = _rc
out += _out
err += _err
# we have to lock/unlock the password in a distinct command
_rc, _out, _err = self._handle_lock()
if rc is None:
rc = _rc
out += _out
err += _err
return (rc, out, err)
class DragonFlyBsdUser(FreeBsdUser):
"""
This is a DragonFlyBSD User manipulation class - it inherits the
FreeBsdUser class behaviors, such as using the pw command to
manipulate the user database, followed by the chpass command
to change the password.
"""
platform = 'DragonFly'
class OpenBSDUser(User):
"""
This is an OpenBSD User manipulation class.
Main differences are that OpenBSD:-
- has no concept of "system" account.
- has no force delete user
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'OpenBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None and self.password != '*':
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups_option = '-S'
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_option = '-G'
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append(groups_option)
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
userinfo_cmd = [self.module.get_bin_path('userinfo', True), self.name]
(rc, out, err) = self.execute_command(userinfo_cmd, obey_checkmode=False)
for line in out.splitlines():
tokens = line.split()
if tokens[0] == 'class' and len(tokens) == 2:
user_login_class = tokens[1]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.password_lock and not info[1].startswith('*'):
cmd.append('-Z')
elif self.password_lock is False and info[1].startswith('*'):
cmd.append('-U')
if self.update_password == 'always' and self.password is not None \
and self.password != '*' and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class NetBSDUser(User):
"""
This is a NetBSD User manipulation class.
Main differences are that NetBSD:-
- has no concept of "system" account.
- has no force delete user
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'NetBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups = set(current_groups).union(groups)
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd.append('-C yes')
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd.append('-C no')
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class SunOS(User):
"""
This is a SunOS User manipulation class - the main difference between
this class and the generic user class is that Solaris-type distros
don't support the concept of a "system" account and we need to
edit the /etc/shadow file manually to set a password. (Ugh)
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- user_info()
"""
platform = 'SunOS'
distribution = None
SHADOWFILE = '/etc/shadow'
USER_ATTR = '/etc/user_attr'
def get_password_defaults(self):
# Read password aging defaults
try:
minweeks = ''
maxweeks = ''
warnweeks = ''
with open("/etc/default/passwd", 'r') as f:
for line in f:
line = line.strip()
if (line.startswith('#') or line == ''):
continue
m = re.match(r'^([^#]*)#(.*)$', line)
if m: # The line contains a hash / comment
line = m.group(1)
key, value = line.split('=')
if key == "MINWEEKS":
minweeks = value.rstrip('\n')
elif key == "MAXWEEKS":
maxweeks = value.rstrip('\n')
elif key == "WARNWEEKS":
warnweeks = value.rstrip('\n')
except Exception as err:
self.module.fail_json(msg="failed to read /etc/default/passwd: %s" % to_native(err))
return (minweeks, maxweeks, warnweeks)
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.profile is not None:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None:
cmd.append('-R')
cmd.append(self.role)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if not self.module.check_mode:
# we have to set the password by editing the /etc/shadow file
if self.password is not None:
self.backup_shadow()
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
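# field 2 of the shadow entry (lastchg) is the number of days since the epoch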
if minweeks:
try:
fields[3] = str(int(minweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if maxweeks:
try:
fields[4] = str(int(maxweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if warnweeks:
try:
fields[5] = str(int(warnweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set(names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups.update(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.profile is not None and info[7] != self.profile:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None and info[8] != self.authorization:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None and info[9] != self.role:
cmd.append('-R')
cmd.append(self.role)
# modify the user if cmd will do anything
if cmd_len != len(cmd):
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
else:
(rc, out, err) = (None, '', '')
# we have to set the password by editing the /etc/shadow file
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
self.backup_shadow()
(rc, out, err) = (0, '', '')
if not self.module.check_mode:
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
fields[3] = str(int(minweeks) * 7)
if maxweeks:
fields[4] = str(int(maxweeks) * 7)
if warnweeks:
fields[5] = str(int(warnweeks) * 7)
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
rc = 0
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def user_info(self):
info = super(SunOS, self).user_info()
if info:
info += self._user_attr_info()
return info
def _user_attr_info(self):
info = [''] * 3
with open(self.USER_ATTR, 'r') as file_handler:
for line in file_handler:
lines = line.strip().split('::::')
if lines[0] == self.name:
tmp = dict(x.split('=') for x in lines[1].split(';'))
info[0] = tmp.get('profiles', '')
info[1] = tmp.get('auths', '')
info[2] = tmp.get('roles', '')
return info
class DarwinUser(User):
"""
This is a Darwin macOS User manipulation class.
Main differences are that Darwin:-
- Handles accounts in a database managed by dscl(1)
- Has no useradd/groupadd
- Does not create home directories
- User password must be cleartext
- UID must be given
- System users must be under 500
This overrides the following methods from the generic class:-
- user_exists()
- create_user()
- remove_user()
- modify_user()
"""
platform = 'Darwin'
distribution = None
SHADOWFILE = None
dscl_directory = '.'
fields = [
('comment', 'RealName'),
('home', 'NFSHomeDirectory'),
('shell', 'UserShell'),
('uid', 'UniqueID'),
('group', 'PrimaryGroupID'),
('hidden', 'IsHidden'),
]
def __init__(self, module):
super(DarwinUser, self).__init__(module)
# make the user hidden if option is set or defer to system option
if self.hidden is None:
if self.system:
self.hidden = 1
elif self.hidden:
self.hidden = 1
else:
self.hidden = 0
# add hidden to processing if set
if self.hidden is not None:
self.fields.append(('hidden', 'IsHidden'))
def _get_dscl(self):
return [self.module.get_bin_path('dscl', True), self.dscl_directory]
def _list_user_groups(self):
cmd = self._get_dscl()
cmd += ['-search', '/Groups', 'GroupMembership', self.name]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
groups = []
for line in out.splitlines():
if line.startswith(' ') or line.startswith(')'):
continue
groups.append(line.split()[0])
return groups
def _get_user_property(self, property):
'''Return user PROPERTY as given by dscl(1) read or None if not found.'''
cmd = self._get_dscl()
cmd += ['-read', '/Users/%s' % self.name, property]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
return None
# from dscl(1)
# if property contains embedded spaces, the list will instead be
# displayed one entry per line, starting on the line after the key.
lines = out.splitlines()
# sys.stderr.write('*** |%s| %s -> %s\n' % (property, out, lines))
if len(lines) == 1:
return lines[0].split(': ')[1]
if len(lines) > 2:
return '\n'.join([lines[1].strip()] + lines[2:])
if len(lines) == 2:
return lines[1].strip()
return None
def _get_next_uid(self, system=None):
'''
Return the next available uid. If system=True, then
uid should be below 500, if possible.
'''
cmd = self._get_dscl()
cmd += ['-list', '/Users', 'UniqueID']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
self.module.fail_json(
msg="Unable to get the next available uid",
rc=rc,
out=out,
err=err
)
max_uid = 0
max_system_uid = 0
for line in out.splitlines():
current_uid = int(line.split(' ')[-1])
if max_uid < current_uid:
max_uid = current_uid
if max_system_uid < current_uid and current_uid < 500:
max_system_uid = current_uid
if system and (0 < max_system_uid < 499):
return max_system_uid + 1
return max_uid + 1
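# e.g. with existing UniqueIDs 498 and 501: system=True returns 499, otherwise 502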
def _change_user_password(self):
'''Change password for SELF.NAME against SELF.PASSWORD.
Please note that password must be cleartext.
'''
# some documentation on how is stored passwords on OSX:
# http://blog.lostpassword.com/2012/07/cracking-mac-os-x-lion-accounts-passwords/
# http://null-byte.wonderhowto.com/how-to/hack-mac-os-x-lion-passwords-0130036/
# http://pastebin.com/RYqxi7Ca
# on OSX 10.8+ hash is SALTED-SHA512-PBKDF2
# https://pythonhosted.org/passlib/lib/passlib.hash.pbkdf2_digest.html
# https://gist.github.com/nueh/8252572
cmd = self._get_dscl()
if self.password:
cmd += ['-passwd', '/Users/%s' % self.name, self.password]
else:
cmd += ['-create', '/Users/%s' % self.name, 'Password', '*']
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Error when changing password', err=err, out=out, rc=rc)
return (rc, out, err)
def _make_group_numerical(self):
'''Convert SELF.GROUP to its stringified numerical value suitable for dscl.'''
if self.group is None:
self.group = 'nogroup'
try:
self.group = grp.getgrnam(self.group).gr_gid
except KeyError:
self.module.fail_json(msg='Group "%s" not found. Try to create it first using "group" module.' % self.group)
# We need to pass a string to dscl
self.group = str(self.group)
def __modify_group(self, group, action):
'''Add or remove SELF.NAME to or from GROUP depending on ACTION.
ACTION can be 'add' or 'remove' otherwise 'remove' is assumed. '''
if action == 'add':
option = '-a'
else:
option = '-d'
cmd = ['dseditgroup', '-o', 'edit', option, self.name, '-t', 'user', group]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot %s user "%s" to group "%s".'
% (action, self.name, group), err=err, out=out, rc=rc)
return (rc, out, err)
def _modify_group(self):
'''Synchronize SELF.NAME's group membership with SELF.GROUPS,
adding and removing memberships as needed.'''
rc = 0
out = ''
err = ''
changed = False
current = set(self._list_user_groups())
if self.groups is not None:
target = self.get_groups_set(names_only=True)
else:
target = set([])
if self.append is False:
for remove in current - target:
(_rc, _out, _err) = self.__modify_group(remove, 'delete')
rc += _rc
out += _out
err += _err
changed = True
for add in target - current:
(_rc, _out, _err) = self.__modify_group(add, 'add')
rc += _rc
out += _out
err += _err
changed = True
return (rc, out, err, changed)
def _update_system_user(self):
'''Hide or show user on login window according to SELF.SYSTEM.
Returns 0 if a change has been made, None otherwise.'''
plist_file = '/Library/Preferences/com.apple.loginwindow.plist'
# http://support.apple.com/kb/HT5017?viewlocale=en_US
cmd = ['defaults', 'read', plist_file, 'HiddenUsersList']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
# returned value is
# (
# "_userA",
# "_UserB",
# userc
# )
hidden_users = []
for x in out.splitlines()[1:-1]:
try:
x = x.split('"')[1]
except IndexError:
x = x.strip()
hidden_users.append(x)
if self.system:
if self.name not in hidden_users:
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array-add', self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot add user "%s" to hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
else:
if self.name in hidden_users:
del (hidden_users[hidden_users.index(self.name)])
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array'] + hidden_users
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot remove user "%s" from hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
def user_exists(self):
'''Check if SELF.NAME is a known user on the system.'''
cmd = self._get_dscl()
cmd += ['-read', '/Users/%s' % self.name, 'UniqueID']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
return rc == 0
def remove_user(self):
'''Delete SELF.NAME. If SELF.FORCE is true, remove its home directory.'''
info = self.user_info()
cmd = self._get_dscl()
cmd += ['-delete', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot delete user "%s".' % self.name, err=err, out=out, rc=rc)
if self.force:
if os.path.exists(info[5]):
shutil.rmtree(info[5])
out += "Removed %s" % info[5]
return (rc, out, err)
def create_user(self, command_name='dscl'):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot create user "%s".' % self.name, err=err, out=out, rc=rc)
self._make_group_numerical()
if self.uid is None:
self.uid = str(self._get_next_uid(self.system))
# Homedir is not created by default
if self.create_home:
if self.home is None:
self.home = '/Users/%s' % self.name
if not self.module.check_mode:
if not os.path.exists(self.home):
os.makedirs(self.home)
self.chown_homedir(int(self.uid), int(self.group), self.home)
# dscl sets shell to /usr/bin/false when UserShell is not specified
# so set the shell to /bin/bash when the user is not a system user
if not self.system and self.shell is None:
self.shell = '/bin/bash'
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot add property "%s" to user "%s".' % (field[0], self.name), err=err, out=out, rc=rc)
out += _out
err += _err
if rc != 0:
return (rc, _out, _err)
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
self._update_system_user()
# here we don't care about change status since it is a creation,
# thus changed is always true.
if self.groups:
(rc, _out, _err, changed) = self._modify_group()
out += _out
err += _err
return (rc, out, err)
def modify_user(self):
changed = None
out = ''
err = ''
if self.group:
self._make_group_numerical()
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
current = self._get_user_property(field[1])
if current is None or current != to_text(self.__dict__[field[0]]):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(
msg='Cannot update property "%s" for user "%s".'
% (field[0], self.name), err=err, out=out, rc=rc)
changed = rc
out += _out
err += _err
if self.update_password == 'always' and self.password is not None:
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
changed = rc
if self.groups:
(rc, _out, _err, _changed) = self._modify_group()
out += _out
err += _err
if _changed is True:
changed = rc
rc = self._update_system_user()
if rc == 0:
changed = rc
return (changed, out, err)
class AIX(User):
"""
This is an AIX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- parse_shadow_file()
"""
platform = 'AIX'
distribution = None
SHADOWFILE = '/etc/security/passwd'
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self, command_name='useradd'):
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.password is not None:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
# skip if no changes to be made
if len(cmd) == 1:
(rc, out, err) = (None, '', '')
else:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
(rc2, out2, err2) = self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
else:
(rc2, out2, err2) = (None, '', '')
if rc is not None:
return (rc, out + out2, err + err2)
else:
return (rc2, out + out2, err + err2)
def parse_shadow_file(self):
"""Example AIX shadowfile data:
nobody:
password = *
operator1:
password = {ssha512}06$xxxxxxxxxxxx....
lastupdate = 1549558094
test1:
password = *
lastupdate = 1553695126
"""
b_name = to_bytes(self.name)
b_passwd = b''
b_expires = b''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'rb') as bf:
b_lines = bf.readlines()
b_passwd_line = b''
b_expires_line = b''
try:
for index, b_line in enumerate(b_lines):
# Get password and lastupdate lines which come after the username
if b_line.startswith(b'%s:' % b_name):
b_passwd_line = b_lines[index + 1]
b_expires_line = b_lines[index + 2]
break
# Sanity check the lines because sometimes both are not present
if b' = ' in b_passwd_line:
b_passwd = b_passwd_line.split(b' = ', 1)[-1].strip()
if b' = ' in b_expires_line:
b_expires = b_expires_line.split(b' = ', 1)[-1].strip()
except IndexError:
self.module.fail_json(msg='Failed to parse shadow file %s' % self.SHADOWFILE)
passwd = to_native(b_passwd)
expires = to_native(b_expires) or -1
return passwd, expires
class HPUX(User):
"""
This is an HP-UX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'HP-UX'
distribution = None
SHADOWFILE = '/etc/shadow'
def create_user(self):
cmd = ['/usr/sam/lbin/useradd.sam']
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user(self):
cmd = ['/usr/sam/lbin/userdel.sam']
if self.force:
cmd.append('-F')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = ['/usr/sam/lbin/usermod.sam']
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False, names_only=True)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-F')
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class BusyBox(User):
"""
This is the BusyBox class for use on systems that have adduser, deluser,
and delgroup commands. It overrides the following methods:
- create_user()
- remove_user()
- modify_user()
"""
def create_user(self):
cmd = [self.module.get_bin_path('adduser', True)]
cmd.append('-D')
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg='Group {0} does not exist'.format(self.group))
cmd.append('-G')
cmd.append(self.group)
if self.comment is not None:
cmd.append('-g')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-h')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if not self.create_home:
cmd.append('-H')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.umask is not None:
cmd.append('-K')
cmd.append('UMASK=' + self.umask)
if self.system:
cmd.append('-S')
cmd.append(self.name)
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if self.password is not None:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Add to additional groups
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
add_cmd_bin = self.module.get_bin_path('adduser', True)
for group in groups:
cmd = [add_cmd_bin, self.name, group]
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
def remove_user(self):
cmd = [
self.module.get_bin_path('deluser', True),
self.name
]
if self.remove:
cmd.append('--remove-home')
return self.execute_command(cmd)
def modify_user(self):
current_groups = self.user_group_membership()
groups = []
rc = None
out = ''
err = ''
info = self.user_info()
add_cmd_bin = self.module.get_bin_path('adduser', True)
remove_cmd_bin = self.module.get_bin_path('delgroup', True)
# Manage group membership
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
for g in groups:
if g in group_diff:
add_cmd = [add_cmd_bin, self.name, g]
rc, out, err = self.execute_command(add_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
for g in group_diff:
if g not in groups and not self.append:
remove_cmd = [remove_cmd_bin, self.name, g]
rc, out, err = self.execute_command(remove_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Manage password
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
class Alpine(BusyBox):
"""
This is the Alpine User manipulation class. It inherits the BusyBox class
behaviors such as using adduser and deluser commands.
"""
platform = 'Linux'
distribution = 'Alpine'
def main():
ssh_defaults = dict(
bits=0,
type='rsa',
passphrase=None,
comment='ansible-generated on %s' % socket.gethostname()
)
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'present']),
name=dict(type='str', required=True, aliases=['user']),
uid=dict(type='int'),
non_unique=dict(type='bool', default=False),
group=dict(type='str'),
groups=dict(type='list', elements='str'),
comment=dict(type='str'),
home=dict(type='path'),
shell=dict(type='str'),
password=dict(type='str', no_log=True),
login_class=dict(type='str'),
password_expire_max=dict(type='int', no_log=False),
password_expire_min=dict(type='int', no_log=False),
# following options are specific to macOS
hidden=dict(type='bool'),
# following options are specific to selinux
seuser=dict(type='str'),
# following options are specific to userdel
force=dict(type='bool', default=False),
remove=dict(type='bool', default=False),
# following options are specific to useradd
create_home=dict(type='bool', default=True, aliases=['createhome']),
skeleton=dict(type='str'),
system=dict(type='bool', default=False),
# following options are specific to usermod
move_home=dict(type='bool', default=False),
append=dict(type='bool', default=False),
# following are specific to ssh key generation
generate_ssh_key=dict(type='bool'),
ssh_key_bits=dict(type='int', default=ssh_defaults['bits']),
ssh_key_type=dict(type='str', default=ssh_defaults['type']),
ssh_key_file=dict(type='path'),
ssh_key_comment=dict(type='str', default=ssh_defaults['comment']),
ssh_key_passphrase=dict(type='str', no_log=True),
update_password=dict(type='str', default='always', choices=['always', 'on_create'], no_log=False),
expires=dict(type='float'),
password_lock=dict(type='bool', no_log=False),
local=dict(type='bool'),
profile=dict(type='str'),
authorization=dict(type='str'),
role=dict(type='str'),
umask=dict(type='str'),
),
supports_check_mode=True,
)
user = User(module)
user.check_password_encrypted()
module.debug('User instantiated - platform %s' % user.platform)
if user.distribution:
module.debug('User instantiated - distribution %s' % user.distribution)
rc = None
out = ''
err = ''
result = {}
result['name'] = user.name
result['state'] = user.state
if user.state == 'absent':
if user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = user.remove_user()
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
result['force'] = user.force
result['remove'] = user.remove
elif user.state == 'present':
if not user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
# Check to see if the provided home path contains parent directories
# that do not exist.
path_needs_parents = False
if user.home and user.create_home:
parent = os.path.dirname(user.home)
if not os.path.isdir(parent):
path_needs_parents = True
(rc, out, err) = user.create_user()
# If the home path had parent directories that needed to be created,
# make sure file permissions are correct in the created home directory.
if path_needs_parents:
info = user.user_info()
if info is not False:
user.chown_homedir(info[2], info[3], user.home)
if module.check_mode:
result['system'] = user.name
else:
result['system'] = user.system
result['create_home'] = user.create_home
else:
# modify user (note: this function is check mode aware)
(rc, out, err) = user.modify_user()
result['append'] = user.append
result['move_home'] = user.move_home
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if user.password is not None:
result['password'] = 'NOT_LOGGING_PASSWORD'
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
if user.user_exists() and user.state == 'present':
info = user.user_info()
if info is False:
result['msg'] = "failed to look up user name: %s" % user.name
result['failed'] = True
result['uid'] = info[2]
result['group'] = info[3]
result['comment'] = info[4]
result['home'] = info[5]
result['shell'] = info[6]
if user.groups is not None:
result['groups'] = user.groups
# handle missing homedirs
info = user.user_info()
if user.home is None:
user.home = info[5]
if not os.path.exists(user.home) and user.create_home:
if not module.check_mode:
user.create_homedir(user.home)
user.chown_homedir(info[2], info[3], user.home)
result['changed'] = True
# deal with ssh key
if user.sshkeygen:
# generate ssh key (note: this function is check mode aware)
(rc, out, err) = user.ssh_key_gen()
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if rc == 0:
result['changed'] = True
(rc, out, err) = user.ssh_key_fingerprint()
if rc == 0:
result['ssh_fingerprint'] = out.strip()
else:
result['ssh_fingerprint'] = err.strip()
result['ssh_key_file'] = user.get_ssh_key_path()
result['ssh_public_key'] = user.get_ssh_public_key()
(rc, out, err) = user.set_password_expire()
if rc is None:
pass # target state reached, nothing to do
else:
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
else:
result['changed'] = True
module.exit_json(**result)
# import module snippets
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,063 |
/dev/null must be allowed for skeleton in user module
|
### Summary
If you set `skeleton` to `/dev/null` in `user` module you get an error saying that it's not a directory. However it's perfectly fine to do `useradd --skel /dev/null`, so ansible shouldn't be limiting the functionality of the `useradd` command.
`useradd --skel /dev/null` will cause nothing to be added to the new home directory. It used to work correctly in older versions of ansible and should be fixed in this version.
### Issue Type
Bug Report
### Component Name
user
### Ansible Version
```console
$ ansible --version
2.11.1
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
linux
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
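A minimal task that triggers the error (illustrative only; the original report left this section empty):
```yaml
- name: create user with an empty skeleton
  ansible.builtin.user:
    name: testuser
    create_home: yes
    skeleton: /dev/null
```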
### Expected Results
`/dev/null` should be allowed and result in no files being added to new home directory
### Actual Results
```console
It caused an error saying that /dev/null is not a directory
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75063
|
https://github.com/ansible/ansible/pull/75948
|
1ecc62ba0609cd75d798e309c6c8dd14958dd01a
|
25b3d3a6f78616534276d2559f952e5073a3ef60
| 2021-06-20T23:53:18Z |
python
| 2023-06-07T16:10:21Z |
test/integration/targets/user/tasks/test_create_user_home.yml
|
# https://github.com/ansible/ansible/issues/42484
# Skipping macOS for now since there is a bug when changing home directory
- name: Test home directory creation
when: ansible_facts.system != 'Darwin'
block:
- name: create user specifying home
user:
name: ansibulluser
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/ansibulluser"
register: user_test3_0
- name: create user again specifying home
user:
name: ansibulluser
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/ansibulluser"
register: user_test3_1
- name: change user home
user:
name: ansibulluser
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/ansibulluser-mod"
register: user_test3_2
- name: change user home back
user:
name: ansibulluser
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/ansibulluser"
register: user_test3_3
- name: validate results for testcase 3
assert:
that:
- user_test3_0 is not changed
- user_test3_1 is not changed
- user_test3_2 is changed
- user_test3_3 is changed
# https://github.com/ansible/ansible/issues/41393
# Create a new user account with a path that has parent directories that do not exist
- name: Create user with home path that has parents that do not exist
user:
name: ansibulluser2
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/ansibulluser2"
register: create_home_with_no_parent_1
- name: Create user with home path that has parents that do not exist again
user:
name: ansibulluser2
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/ansibulluser2"
register: create_home_with_no_parent_2
- name: Check the created home directory
stat:
path: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/ansibulluser2"
register: home_with_no_parent_3
- name: Ensure user with non-existing parent paths was created successfully
assert:
that:
- create_home_with_no_parent_1 is changed
- create_home_with_no_parent_1.home == user_home_prefix[ansible_facts.system] ~ '/in2deep/ansibulluser2'
- create_home_with_no_parent_2 is not changed
- home_with_no_parent_3.stat.uid == create_home_with_no_parent_1.uid
- home_with_no_parent_3.stat.gr_name == default_user_group[ansible_facts.distribution] | default('ansibulluser2')
- name: Cleanup test account
user:
name: ansibulluser2
home: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/ansibulluser2"
state: absent
remove: yes
- name: Remove testing dir
file:
path: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/"
state: absent
# https://github.com/ansible/ansible/issues/60307
# Make sure we can create a user when the home directory is missing
- name: Create user with home path that does not exist
user:
name: ansibulluser3
state: present
home: "{{ user_home_prefix[ansible_facts.system] }}/nosuchdir"
createhome: no
- name: Cleanup test account
user:
name: ansibulluser3
state: absent
remove: yes
# https://github.com/ansible/ansible/issues/70589
# Create user with create_home: no and parent directory does not exist.
- name: "Check if parent dir for home dir for user exists (before)"
stat:
path: "{{ user_home_prefix[ansible_facts.system] }}/thereisnodir"
register: create_user_no_create_home_with_no_parent_parent_dir_before
- name: "Create user with create_home == no and home path parent dir does not exist"
user:
name: randomuser
state: present
create_home: false
home: "{{ user_home_prefix[ansible_facts.system] }}/thereisnodir/randomuser"
register: create_user_no_create_home_with_no_parent
- name: "Check if parent dir for home dir for user exists (after)"
stat:
path: "{{ user_home_prefix[ansible_facts.system] }}/thereisnodir"
register: create_user_no_create_home_with_no_parent_parent_dir_after
- name: "Check if home for user is created"
stat:
path: "{{ user_home_prefix[ansible_facts.system] }}/thereisnodir/randomuser"
register: create_user_no_create_home_with_no_parent_home_dir
- name: "Ensure user with non-existing parent paths with create_home: no was created successfully"
assert:
that:
- not create_user_no_create_home_with_no_parent_parent_dir_before.stat.exists
- not create_user_no_create_home_with_no_parent_parent_dir_after.stat.isdir is defined
- not create_user_no_create_home_with_no_parent_home_dir.stat.exists
- name: Cleanup test account
user:
name: randomuser
state: absent
remove: yes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,567 |
Add links to module index from module page
|
### Summary
Requested by reddit poster:
https://docs.ansible.com/ansible/latest/module_plugin_guide/modules_intro.html
It's been suggested already to create an index; I'd like to see groups of modules, something similar to collections, included on this page.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/module_plugin_guide/modules_intro.rst
### Ansible Version
```console
$ ansible --version
2.16
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80567
|
https://github.com/ansible/ansible/pull/80588
|
ef56284f9d4515c3c3f07308b2226435bffe90e1
|
519caca70cf9408a796be419fd6a67e0bea9ab7c
| 2023-04-19T14:00:36Z |
python
| 2023-06-07T21:05:41Z |
docs/docsite/rst/module_plugin_guide/modules_intro.rst
|
.. _intro_modules:
Introduction to modules
=======================
Modules (also referred to as "task plugins" or "library plugins") are discrete units of code that can be used from the command line or in a playbook task. Ansible executes each module, usually on the remote managed node, and collects return values. In Ansible 2.10 and later, most modules are hosted in collections.
You can execute modules from the command line.
.. code-block:: shell-session
ansible webservers -m service -a "name=httpd state=started"
ansible webservers -m ping
ansible webservers -m command -a "/sbin/reboot -t now"
Each module supports taking arguments. Nearly all modules take ``key=value`` arguments, space delimited. Some modules take no arguments, and the command/shell modules simply take the string of the command you want to run.
From playbooks, Ansible modules are executed in a very similar way.
.. code-block:: yaml
- name: reboot the servers
command: /sbin/reboot -t now
Another way to pass arguments to a module is using YAML syntax, also called 'complex args'.
.. code-block:: yaml
- name: restart webserver
service:
name: httpd
state: restarted
All modules return JSON format data. This means modules can be written in any programming language. Modules should be idempotent, and should avoid making any changes if they detect that the current state matches the desired final state. When used in an Ansible playbook, modules can trigger 'change events' in the form of notifying :ref:`handlers <handlers>` to run additional tasks.
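For example, an ad-hoc ping task returns JSON similar to the following (exact fields vary by Ansible version):
.. code-block:: shell-session
   $ ansible localhost -m ping
   localhost | SUCCESS => {
       "changed": false,
       "ping": "pong"
   }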
You can access the documentation for each module from the command line with the ansible-doc tool.
.. code-block:: shell-session
ansible-doc yum
For a list of all available modules, see the :ref:`Collection docs <list_of_collections>`, or run the following at a command prompt.
.. code-block:: shell-session
ansible-doc -l
.. seealso::
:ref:`intro_adhoc`
Examples of using modules in /usr/bin/ansible
:ref:`working_with_playbooks`
Examples of using modules with /usr/bin/ansible-playbook
:ref:`developing_modules`
How to write your own modules
:ref:`developing_api`
Examples of using modules with the Python API
`Mailing List <https://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,892 |
Add checksum check for apt_key example for improved security
|
### Summary
Based on feedback from a mastodon ansible user:
The first example of https://docs.ansible.com/ansible/latest/collections/ansible/builtin/apt_key_module.html#examples (the apt_key replacement) shows how a pgp key is downloaded and declared as trusted for a repository, but there is no validation of the key going on. Maybe the get_url task could include the checksum argument, to show that (and how) the key should be validated against a known good. Further down there is also an example of apt_key downloading a key without the id argument specified.
### Issue Type
Documentation Report
### Component Name
apt_key
### Ansible Version
```console
$ ansible --version
2.15
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
The examples all use https URLs, so they do not expose you to just arbitrary MITM; but many projects use untrusted public mirrors to distribute packages and public keys, and instead post the fingerprint of the public key on the non-mirrored first-party website. You are then supposed to verify the downloaded key against that fingerprint (unfortunately a manual action since they post a PGP fingerprint and not a checksum), in order to protect against rogue mirror operators (AFAIK).
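A sketch of the pattern being requested, using a placeholder URL and checksum (the real value must be taken from the project's first-party site):
```yaml
- name: somerepo | fetch signing key and verify it against a known-good checksum
  ansible.builtin.get_url:
    url: https://download.example.com/linux/ubuntu/myrepo.asc
    dest: /etc/apt/keyrings/myrepo.asc
    checksum: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```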
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79892
|
https://github.com/ansible/ansible/pull/81017
|
91177623581490af0455307f6c8e26312b04b4a0
|
ce55e0faf5911d267fc7c649ff3d6304657ccb25
| 2023-02-02T20:39:57Z |
python
| 2023-06-13T15:17:42Z |
lib/ansible/modules/apt_key.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2012, Jayson Vantuyl <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: apt_key
author:
- Jayson Vantuyl (@jvantuyl)
version_added: "1.0"
short_description: Add or remove an apt key
description:
- Add or remove an I(apt) key, optionally downloading it.
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: debian
notes:
- The apt-key command used by this module has been deprecated. See the L(Debian wiki,https://wiki.debian.org/DebianRepository/UseThirdParty) for details.
This module is kept for backwards compatibility for systems that still use apt-key as the main way to manage apt repository keys.
- As a sanity check, downloaded key id must match the one specified.
- "Use full fingerprint (40 characters) key ids to avoid key collisions.
To generate a full-fingerprint imported key: C(apt-key adv --list-public-keys --with-fingerprint --with-colons)."
- If you specify both the key id and the URL with C(state=present), the task can verify or add the key as needed.
- Adding a new key requires an apt cache update (e.g. using the M(ansible.builtin.apt) module's update_cache option).
requirements:
- gpg
seealso:
- module: ansible.builtin.deb822_repository
options:
id:
description:
- The identifier of the key.
- Including this allows check mode to correctly report the changed state.
- If specifying a subkey's id be aware that apt-key does not understand how to remove keys via a subkey id. Specify the primary key's id instead.
- This parameter is required when C(state) is set to C(absent).
type: str
data:
description:
- The keyfile contents to add to the keyring.
type: str
file:
description:
- The path to a keyfile on the remote server to add to the keyring.
type: path
keyring:
description:
- The full path to specific keyring file in C(/etc/apt/trusted.gpg.d/).
type: path
version_added: "1.3"
url:
description:
- The URL to retrieve key from.
type: str
keyserver:
description:
- The keyserver to retrieve key from.
type: str
version_added: "1.6"
state:
description:
- Ensures that the key is present (added) or absent (revoked).
type: str
choices: [ absent, present ]
default: present
validate_certs:
description:
- If C(false), SSL certificates for the target url will not be validated. This should only be used
on personally controlled sites using self-signed certificates.
type: bool
default: 'yes'
'''
EXAMPLES = '''
- name: One way to avoid apt_key once it is removed from your distro; armored keys should use the .asc extension, binary keys .gpg
block:
- name: somerepo | no apt key
ansible.builtin.get_url:
url: https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x36a1d7869245c8950f966e92d8576a8ba88d21e9
dest: /etc/apt/keyrings/myrepo.asc
- name: somerepo | apt source
ansible.builtin.apt_repository:
repo: "deb [arch=amd64 signed-by=/etc/apt/keyrings/myrepo.asc] https://download.example.com/linux/ubuntu {{ ansible_distribution_release }} stable"
state: present
- name: Add an apt key by id from a keyserver
ansible.builtin.apt_key:
keyserver: keyserver.ubuntu.com
id: 36A1D7869245C8950F966E92D8576A8BA88D21E9
- name: Add an Apt signing key, uses whichever key is at the URL
ansible.builtin.apt_key:
url: https://ftp-master.debian.org/keys/archive-key-6.0.asc
state: present
- name: Add an Apt signing key, will not download if present
ansible.builtin.apt_key:
id: 9FED2BCBDCD29CDF762678CBAED4B06F473041FA
url: https://ftp-master.debian.org/keys/archive-key-6.0.asc
state: present
- name: Remove an Apt specific signing key, leading 0x is valid
ansible.builtin.apt_key:
id: 0x9FED2BCBDCD29CDF762678CBAED4B06F473041FA
state: absent
# Use armored file since utf-8 string is expected. Must be of "PGP PUBLIC KEY BLOCK" type.
- name: Add a key from a file on the Ansible server
ansible.builtin.apt_key:
data: "{{ lookup('ansible.builtin.file', 'apt.asc') }}"
state: present
- name: Add an Apt signing key to a specific keyring file
ansible.builtin.apt_key:
id: 9FED2BCBDCD29CDF762678CBAED4B06F473041FA
url: https://ftp-master.debian.org/keys/archive-key-6.0.asc
keyring: /etc/apt/trusted.gpg.d/debian.gpg
- name: Add Apt signing key on remote server to keyring
ansible.builtin.apt_key:
id: 9FED2BCBDCD29CDF762678CBAED4B06F473041FA
file: /tmp/apt.gpg
state: present
'''
RETURN = '''
after:
description: List of apt key ids or fingerprints after any modification
returned: on change
type: list
sample: ["D8576A8BA88D21E9", "3B4FE6ACC0B21F32", "D94AA3F0EFE21092", "871920D1991BC93C"]
before:
description: List of apt key ids or fingerprints before any modifications
returned: always
type: list
sample: ["3B4FE6ACC0B21F32", "D94AA3F0EFE21092", "871920D1991BC93C"]
fp:
description: Fingerprint of the key to import
returned: always
type: str
sample: "D8576A8BA88D21E9"
id:
description: key id from source
returned: always
type: str
sample: "36A1D7869245C8950F966E92D8576A8BA88D21E9"
key_id:
description: calculated key id; it should be the same as 'id', but can be different
returned: always
type: str
sample: "36A1D7869245C8950F966E92D8576A8BA88D21E9"
short_id:
description: calculated short key id
returned: always
type: str
sample: "A88D21E9"
'''
import os
# FIXME: standardize into module_common
from traceback import format_exc
from ansible.module_utils.common.text.converters import to_native
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.urls import fetch_url
apt_key_bin = None
gpg_bin = None
locale = None
def lang_env(module):
if not hasattr(lang_env, 'result'):
locale = get_best_parsable_locale(module)
lang_env.result = dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale)
return lang_env.result
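# Editorial note (not part of the original module): lang_env memoizes its
# result on the function object, so get_best_parsable_locale runs at most
# once per process regardless of how many commands are executed.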
def find_needed_binaries(module):
global apt_key_bin
global gpg_bin
apt_key_bin = module.get_bin_path('apt-key', required=True)
gpg_bin = module.get_bin_path('gpg', required=True)
def add_http_proxy(cmd):
for envvar in ('HTTPS_PROXY', 'https_proxy', 'HTTP_PROXY', 'http_proxy'):
proxy = os.environ.get(envvar)
if proxy:
break
if proxy:
cmd += ' --keyserver-options http-proxy=%s' % proxy
return cmd
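# Editorial note (not part of the original module): the env var scan above
# prefers HTTPS_PROXY over https_proxy over HTTP_PROXY over http_proxy, so
# e.g. HTTPS_PROXY=http://proxy:3128 appends
# '--keyserver-options http-proxy=http://proxy:3128' to the command.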
def parse_key_id(key_id):
"""validate the key_id and break it into segments
:arg key_id: The key_id as supplied by the user. A valid key_id will be
8, 16, or more hexadecimal chars with an optional leading ``0x``.
:returns: The portion of key_id suitable for apt-key del, the portion
suitable for comparisons with --list-public-keys, and the portion that
can be used with --recv-key. If key_id is long enough, these will be
the last 8 characters of key_id, the last 16 characters, and all of
key_id. If key_id is not long enough, some of the values will be the
same.
* apt-key del <= 1.10 has a bug with key_id != 8 chars
* apt-key adv --list-public-keys prints 16 chars
* apt-key adv --recv-key can take more chars
"""
# Make sure the key_id is valid hexadecimal
int(to_native(key_id), 16)
key_id = key_id.upper()
if key_id.startswith('0X'):
key_id = key_id[2:]
key_id_len = len(key_id)
if (key_id_len != 8 and key_id_len != 16) and key_id_len <= 16:
raise ValueError('key_id must be 8, 16, or 16+ hexadecimal characters in length')
short_key_id = key_id[-8:]
fingerprint = key_id
if key_id_len > 16:
fingerprint = key_id[-16:]
return short_key_id, fingerprint, key_id
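# Editorial illustration using the key id from the EXAMPLES section
# (not part of the original module):
#   parse_key_id('0x9FED2BCBDCD29CDF762678CBAED4B06F473041FA')
#   -> ('473041FA', 'AED4B06F473041FA',
#       '9FED2BCBDCD29CDF762678CBAED4B06F473041FA')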
def parse_output_for_keys(output, short_format=False):
found = []
lines = to_native(output).split('\n')
for line in lines:
if (line.startswith("pub") or line.startswith("sub")) and "expired" not in line:
try:
# apt key format
tokens = line.split()
code = tokens[1]
(len_type, real_code) = code.split("/")
except (IndexError, ValueError):
# gpg format
try:
tokens = line.split(':')
real_code = tokens[4]
except (IndexError, ValueError):
# invalid line, skip
continue
found.append(real_code)
if found and short_format:
found = shorten_key_ids(found)
return found
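# Editorial illustration of the two line formats handled above (samples are
# schematic, not captured output):
#   apt-key style:           'pub   4096R/A88D21E9 2012-05-01'      -> 'A88D21E9'
#   gpg --with-colons style: 'pub:u:4096:1:D8576A8BA88D21E9:...'    -> 'D8576A8BA88D21E9'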
def all_keys(module, keyring, short_format):
if keyring is not None:
cmd = "%s --keyring %s adv --list-public-keys --keyid-format=long" % (apt_key_bin, keyring)
else:
cmd = "%s adv --list-public-keys --keyid-format=long" % apt_key_bin
(rc, out, err) = module.run_command(cmd)
if rc != 0:
module.fail_json(msg="Unable to list public keys", cmd=cmd, rc=rc, stdout=out, stderr=err)
return parse_output_for_keys(out, short_format)
def shorten_key_ids(key_id_list):
"""
Takes a list of key ids, and converts them to the 'short' format,
by reducing them to their last 8 characters.
"""
short = []
for key in key_id_list:
short.append(key[-8:])
return short
def download_key(module, url):
try:
# note: validate_certs and other args are pulled from module directly
rsp, info = fetch_url(module, url, use_proxy=True)
if info['status'] != 200:
module.fail_json(msg="Failed to download key at %s: %s" % (url, info['msg']))
return rsp.read()
except Exception:
module.fail_json(msg="error getting key id from url: %s" % url, traceback=format_exc())
def get_key_id_from_file(module, filename, data=None):
native_data = to_native(data)
is_armored = native_data.find("-----BEGIN PGP PUBLIC KEY BLOCK-----") >= 0
key = None
cmd = [gpg_bin, '--with-colons', filename]
(rc, out, err) = module.run_command(cmd, environ_update=lang_env(module), data=(native_data if is_armored else data), binary_data=not is_armored)
if rc != 0:
module.fail_json(msg="Unable to extract key from '%s'" % ('inline data' if data is not None else filename), stdout=out, stderr=err)
keys = parse_output_for_keys(out)
# assume we only want first key?
if keys:
key = keys[0]
return key
def get_key_id_from_data(module, data):
return get_key_id_from_file(module, '-', data)
def import_key(module, keyring, keyserver, key_id):
if keyring:
cmd = "%s --keyring %s adv --no-tty --keyserver %s" % (apt_key_bin, keyring, keyserver)
else:
cmd = "%s adv --no-tty --keyserver %s" % (apt_key_bin, keyserver)
# check for proxy
cmd = add_http_proxy(cmd)
# add recv argument as last one
cmd = "%s --recv %s" % (cmd, key_id)
for retry in range(5):
(rc, out, err) = module.run_command(cmd, environ_update=lang_env(module))
if rc == 0:
break
else:
# Out of retries
if rc == 2 and 'not found on keyserver' in out:
msg = 'Key %s not found on keyserver %s' % (key_id, keyserver)
module.fail_json(cmd=cmd, msg=msg, forced_environment=lang_env(module))
else:
msg = "Error fetching key %s from keyserver: %s" % (key_id, keyserver)
module.fail_json(cmd=cmd, msg=msg, forced_environment=lang_env(module), rc=rc, stdout=out, stderr=err)
return True
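# Editorial note (not part of the original module): the for/else above uses
# Python's loop-else semantics -- the else branch runs only when all five
# attempts complete without hitting `break`, i.e. every retry failed.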
def add_key(module, keyfile, keyring, data=None):
if data is not None:
if keyring:
cmd = "%s --keyring %s add -" % (apt_key_bin, keyring)
else:
cmd = "%s add -" % apt_key_bin
(rc, out, err) = module.run_command(cmd, data=data, binary_data=True)
if rc != 0:
module.fail_json(
msg="Unable to add a key from binary data",
cmd=cmd,
rc=rc,
stdout=out,
stderr=err,
)
else:
if keyring:
cmd = "%s --keyring %s add %s" % (apt_key_bin, keyring, keyfile)
else:
cmd = "%s add %s" % (apt_key_bin, keyfile)
(rc, out, err) = module.run_command(cmd)
if rc != 0:
module.fail_json(
msg="Unable to add a key from file %s" % (keyfile),
cmd=cmd,
rc=rc,
keyfile=keyfile,
stdout=out,
stderr=err,
)
return True
def remove_key(module, key_id, keyring):
if keyring:
cmd = '%s --keyring %s del %s' % (apt_key_bin, keyring, key_id)
else:
cmd = '%s del %s' % (apt_key_bin, key_id)
(rc, out, err) = module.run_command(cmd)
if rc != 0:
module.fail_json(
msg="Unable to remove a key with id %s" % (key_id),
cmd=cmd,
rc=rc,
key_id=key_id,
stdout=out,
stderr=err,
)
return True
def main():
module = AnsibleModule(
argument_spec=dict(
id=dict(type='str'),
url=dict(type='str'),
data=dict(type='str'),
file=dict(type='path'),
keyring=dict(type='path'),
validate_certs=dict(type='bool', default=True),
keyserver=dict(type='str'),
state=dict(type='str', default='present', choices=['absent', 'present']),
),
supports_check_mode=True,
mutually_exclusive=(('data', 'file', 'keyserver', 'url'),),
)
# parameters
key_id = module.params['id']
url = module.params['url']
data = module.params['data']
filename = module.params['file']
keyring = module.params['keyring']
state = module.params['state']
keyserver = module.params['keyserver']
# internal vars
short_format = False
short_key_id = None
fingerprint = None
error_no_error = "apt-key did not return an error, but %s (check that the id is correct and *not* a subkey)"
# ensure we have requirements met
find_needed_binaries(module)
# initialize result dict
r = {'changed': False}
if not key_id:
if keyserver:
module.fail_json(msg="Missing key_id, required with keyserver.")
if url:
data = download_key(module, url)
if filename:
key_id = get_key_id_from_file(module, filename)
elif data:
key_id = get_key_id_from_data(module, data)
r['id'] = key_id
try:
short_key_id, fingerprint, key_id = parse_key_id(key_id)
r['short_id'] = short_key_id
r['fp'] = fingerprint
r['key_id'] = key_id
except ValueError:
module.fail_json(msg='Invalid key_id', **r)
if not fingerprint:
# invalid key should fail well before this point, but JIC ...
module.fail_json(msg="Unable to continue as we could not extract a valid fingerprint to compare against existing keys.", **r)
if len(key_id) == 8:
short_format = True
# get existing keys to verify if we need to change
r['before'] = keys = all_keys(module, keyring, short_format)
keys2 = []
if state == 'present':
if (short_format and short_key_id not in keys) or (not short_format and fingerprint not in keys):
r['changed'] = True
if not module.check_mode:
if filename:
add_key(module, filename, keyring)
elif keyserver:
import_key(module, keyring, keyserver, key_id)
elif data:
# this also takes care of url if key_id was not provided
add_key(module, "-", keyring, data)
elif url:
# we hit this branch only if key_id is supplied with url
data = download_key(module, url)
add_key(module, "-", keyring, data)
else:
module.fail_json(msg="No key to add ... how did i get here?!?!", **r)
# verify it got added
r['after'] = keys2 = all_keys(module, keyring, short_format)
if (short_format and short_key_id not in keys2) or (not short_format and fingerprint not in keys2):
module.fail_json(msg=error_no_error % 'failed to add the key', **r)
elif state == 'absent':
if not key_id:
module.fail_json(msg="key is required to remove a key", **r)
if fingerprint in keys:
r['changed'] = True
if not module.check_mode:
# we use the "short" id: key_id[-8:], short_format=True
# it's a workaround for https://bugs.launchpad.net/ubuntu/+source/apt/+bug/1481871
if short_key_id is not None and remove_key(module, short_key_id, keyring):
r['after'] = keys2 = all_keys(module, keyring, short_format)
if fingerprint in keys2:
module.fail_json(msg=error_no_error % 'the key was not removed', **r)
else:
module.fail_json(msg="error removing key_id", **r)
module.exit_json(**r)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,880 |
Problem with handlers notifying another handlers
|
### Summary
ansible-core 2.15 ignores handlers notified by other handlers once the notification chain goes more than one level deep.
With ansible-core 2.14, all handlers notified by other handlers are run.
### Issue Type
Bug Report
### Component Name
notify
### Ansible Version
```console
ansible [core 2.15.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.3 (main, Apr 5 2023, 15:52:25) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /etc/ansible/ansible.cfg
EDITOR(env: EDITOR) = vim
PAGER(env: PAGER) = less
```
### OS / Environment
Archlinux
ansible 7.6.0-1
ansible-core 2.15.0-1
ansible-lint 6.15.0.r45.g2fca3fe-2
python-ansible-compat 4.0.2-1
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- name: Testing
hosts: localhost
tasks:
- name: Trigger handlers
debug:
msg: Task 1
changed_when: true
notify: Handler 1
handlers:
- name: Handler 1
debug:
msg: Handler 1
changed_when: true
notify: Handler 2
- name: Handler 2
debug:
msg: Handler 2
changed_when: true
notify: Handler 3
- name: Handler 3
debug:
msg: Handler 3
```
### Expected Results
All handlers must be notified.
```console
PLAY [Testing] ********************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************
ok: [localhost]
TASK [Trigger handlers] ********************************************************************************************************
changed: [localhost] => {
"msg": "Task 1"
}
RUNNING HANDLER [Handler 1] ********************************************************************************************************
changed: [localhost] => {
"msg": "Handler 1"
}
RUNNING HANDLER [Handler 2] ********************************************************************************************************
changed: [localhost] => {
"msg": "Handler 2"
}
RUNNING HANDLER [Handler 3] ********************************************************************************************************
changed: [localhost] => {
"msg": "Handler 3"
}
PLAY RECAP ********************************************************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
Only the two first handlers were notified.
```console
PLAY [Testing] *************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************
ok: [localhost]
TASK [Trigger handlers] *************************************************************************************************
changed: [localhost] => {
"msg": "Task 1"
}
RUNNING HANDLER [Handler 1] *************************************************************************************************
changed: [localhost] => {
"msg": "Handler 1"
}
RUNNING HANDLER [Handler 2] *************************************************************************************************
changed: [localhost] => {
"msg": "Handler 2"
}
PLAY RECAP *************************************************************************************************
localhost : ok=4 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80880
|
https://github.com/ansible/ansible/pull/80898
|
73e04ef2d6103bad2519b55f04a9c2865b8c93fe
|
660f1726c814e9d7502cdb7ba046ee8ad9014e63
| 2023-05-24T19:18:49Z |
python
| 2023-06-14T15:39:20Z |
changelogs/fragments/80880-register-handlers-immediately-if-iterating-handlers.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,880 |
Problem with handlers notifying another handlers
|
### Summary
ansible-core 2.15 ignores handlers notified by other handlers once the notification chain goes more than one level deep.
With ansible-core 2.14, all handlers notified by other handlers are run.
### Issue Type
Bug Report
### Component Name
notify
### Ansible Version
```console
ansible [core 2.15.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.3 (main, Apr 5 2023, 15:52:25) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /etc/ansible/ansible.cfg
EDITOR(env: EDITOR) = vim
PAGER(env: PAGER) = less
```
### OS / Environment
Archlinux
ansible 7.6.0-1
ansible-core 2.15.0-1
ansible-lint 6.15.0.r45.g2fca3fe-2
python-ansible-compat 4.0.2-1
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- name: Testing
hosts: localhost
tasks:
- name: Trigger handlers
debug:
msg: Task 1
changed_when: true
notify: Handler 1
handlers:
- name: Handler 1
debug:
msg: Handler 1
changed_when: true
notify: Handler 2
- name: Handler 2
debug:
msg: Handler 2
changed_when: true
notify: Handler 3
- name: Handler 3
debug:
msg: Handler 3
```
### Expected Results
All handlers must be notified.
```console
PLAY [Testing] ********************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************
ok: [localhost]
TASK [Trigger handlers] ********************************************************************************************************
changed: [localhost] => {
"msg": "Task 1"
}
RUNNING HANDLER [Handler 1] ********************************************************************************************************
changed: [localhost] => {
"msg": "Handler 1"
}
RUNNING HANDLER [Handler 2] ********************************************************************************************************
changed: [localhost] => {
"msg": "Handler 2"
}
RUNNING HANDLER [Handler 3] ********************************************************************************************************
changed: [localhost] => {
"msg": "Handler 3"
}
PLAY RECAP ********************************************************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
Only the two first handlers were notified.
```console
PLAY [Testing] *************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************
ok: [localhost]
TASK [Trigger handlers] *************************************************************************************************
changed: [localhost] => {
"msg": "Task 1"
}
RUNNING HANDLER [Handler 1] *************************************************************************************************
changed: [localhost] => {
"msg": "Handler 1"
}
RUNNING HANDLER [Handler 2] *************************************************************************************************
changed: [localhost] => {
"msg": "Handler 2"
}
PLAY RECAP *************************************************************************************************
localhost : ok=4 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80880
|
https://github.com/ansible/ansible/pull/80898
|
73e04ef2d6103bad2519b55f04a9c2865b8c93fe
|
660f1726c814e9d7502cdb7ba046ee8ad9014e63
| 2023-05-24T19:18:49Z |
python
| 2023-06-14T15:39:20Z |
lib/ansible/plugins/strategy/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import cmd
import functools
import os
import pprint
import queue
import sys
import threading
import time
import typing as t
from collections import deque
from multiprocessing import Lock
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleUndefinedVariable, AnsibleParserError
from ansible.executor import action_write_locks
from ansible.executor.play_iterator import IteratingStates, PlayIterator
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_result import TaskResult
from ansible.executor.task_queue_manager import CallbackSend, DisplaySend, PromptSend
from ansible.module_utils.six import string_types
from ansible.module_utils.common.text.converters import to_text
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.playbook.conditional import Conditional
from ansible.playbook.handler import Handler
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.task import Task
from ansible.playbook.task_include import TaskInclude
from ansible.plugins import loader as plugin_loader
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.fqcn import add_internal_fqcns
from ansible.utils.multiprocessing import context as multiprocessing_context
from ansible.utils.unsafe_proxy import wrap_var
from ansible.utils.vars import combine_vars, isidentifier
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
display = Display()
__all__ = ['StrategyBase']
# This list can be an exact match, or start of string bound
# does not accept regex
ALWAYS_DELEGATE_FACT_PREFIXES = frozenset((
'discovered_interpreter_',
))
class StrategySentinel:
pass
_sentinel = StrategySentinel()
def post_process_whens(result, task, templar, task_vars):
cond = None
if task.changed_when:
with templar.set_temporary_context(available_variables=task_vars):
cond = Conditional(loader=templar._loader)
cond.when = task.changed_when
result['changed'] = cond.evaluate_conditional(templar, templar.available_variables)
if task.failed_when:
with templar.set_temporary_context(available_variables=task_vars):
if cond is None:
cond = Conditional(loader=templar._loader)
cond.when = task.failed_when
failed_when_result = cond.evaluate_conditional(templar, templar.available_variables)
result['failed_when_result'] = result['failed'] = failed_when_result
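# Editorial illustration (schematic, not part of the original source):
# for a task with `changed_when: result.rc != 0`, post_process_whens
# re-evaluates that conditional against task_vars and overwrites
# result['changed']; failed_when is handled the same way and additionally
# records the outcome under result['failed_when_result'].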
def _get_item_vars(result, task):
item_vars = {}
if task.loop or task.loop_with:
loop_var = result.get('ansible_loop_var', 'item')
index_var = result.get('ansible_index_var')
if loop_var in result:
item_vars[loop_var] = result[loop_var]
if index_var and index_var in result:
item_vars[index_var] = result[index_var]
if '_ansible_item_label' in result:
item_vars['_ansible_item_label'] = result['_ansible_item_label']
if 'ansible_loop' in result:
item_vars['ansible_loop'] = result['ansible_loop']
return item_vars
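# Editorial illustration (schematic, not part of the original source):
# for a loop task, a per-item result such as
#   {'item': 'pkg-a', 'ansible_loop_var': 'item', '_ansible_item_label': 'pkg-a'}
# yields item_vars == {'item': 'pkg-a', '_ansible_item_label': 'pkg-a'}.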
def results_thread_main(strategy):
while True:
try:
result = strategy._final_q.get()
if isinstance(result, StrategySentinel):
break
elif isinstance(result, DisplaySend):
display.display(*result.args, **result.kwargs)
elif isinstance(result, CallbackSend):
for arg in result.args:
if isinstance(arg, TaskResult):
strategy.normalize_task_result(arg)
break
strategy._tqm.send_callback(result.method_name, *result.args, **result.kwargs)
elif isinstance(result, TaskResult):
strategy.normalize_task_result(result)
with strategy._results_lock:
strategy._results.append(result)
elif isinstance(result, PromptSend):
try:
value = display.prompt_until(
result.prompt,
private=result.private,
seconds=result.seconds,
complete_input=result.complete_input,
interrupt_input=result.interrupt_input,
)
except AnsibleError as e:
value = e
except BaseException as e:
# relay unexpected errors so bugs in display are reported and don't cause workers to hang
try:
raise AnsibleError(f"{e}") from e
except AnsibleError as e:
value = e
strategy._workers[result.worker_id].worker_queue.put(value)
else:
display.warning('Received an invalid object (%s) in the result queue: %r' % (type(result), result))
except (IOError, EOFError):
break
except queue.Empty:
pass
def debug_closure(func):
"""Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger"""
@functools.wraps(func)
def inner(self, iterator, one_pass=False, max_passes=None):
status_to_stats_map = (
('is_failed', 'failures'),
('is_unreachable', 'dark'),
('is_changed', 'changed'),
('is_skipped', 'skipped'),
)
# We don't know the host yet, copy the previous states, for lookup after we process new results
prev_host_states = iterator.host_states.copy()
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes)
_processed_results = []
for result in results:
task = result._task
host = result._host
_queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None)
task_vars = _queued_task_args['task_vars']
play_context = _queued_task_args['play_context']
# Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state
try:
prev_host_state = prev_host_states[host.name]
except KeyError:
prev_host_state = iterator.get_host_state(host)
while result.needs_debugger(globally_enabled=self.debugger_active):
next_action = NextAction()
dbg = Debugger(task, host, task_vars, play_context, result, next_action)
dbg.cmdloop()
if next_action.result == NextAction.REDO:
# rollback host state
self._tqm.clear_failed_hosts()
if task.run_once and iterator._play.strategy in add_internal_fqcns(('linear',)) and result.is_failed():
for host_name, state in prev_host_states.items():
if host_name == host.name:
continue
iterator.set_state_for_host(host_name, state)
iterator._play._removed_hosts.remove(host_name)
iterator.set_state_for_host(host.name, prev_host_state)
for method, what in status_to_stats_map:
if getattr(result, method)():
self._tqm._stats.decrement(what, host.name)
self._tqm._stats.decrement('ok', host.name)
# redo
self._queue_task(host, task, task_vars, play_context)
_processed_results.extend(debug_closure(func)(self, iterator, one_pass))
break
elif next_action.result == NextAction.CONTINUE:
_processed_results.append(result)
break
elif next_action.result == NextAction.EXIT:
# Matches KeyboardInterrupt from bin/ansible
sys.exit(99)
else:
_processed_results.append(result)
return _processed_results
return inner
class StrategyBase:
'''
This is the base class for strategy plugins, which contains some common
code useful to all strategies like running handlers, cleanup actions, etc.
'''
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
ALLOW_BASE_THROTTLING = True
def __init__(self, tqm):
self._tqm = tqm
self._inventory = tqm.get_inventory()
self._workers = tqm._workers
self._variable_manager = tqm.get_variable_manager()
self._loader = tqm.get_loader()
self._final_q = tqm._final_q
self._step = context.CLIARGS.get('step', False)
self._diff = context.CLIARGS.get('diff', False)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
self._queued_task_cache = {}
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
# internal counters
self._pending_results = 0
self._cur_worker = 0
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
self._blocked_hosts = dict()
self._results = deque()
self._results_lock = threading.Condition(threading.Lock())
self._worker_queues = dict()
# create the result processing thread for reading results in the background
self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
self._results_thread.daemon = True
self._results_thread.start()
# holds the list of active (persistent) connections to be shutdown at
# play completion
self._active_connections = dict()
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
self._hosts_cache = []
self._hosts_cache_all = []
self.debugger_active = C.ENABLE_TASK_DEBUGGER
def _set_hosts_cache(self, play, refresh=True):
"""Responsible for setting _hosts_cache and _hosts_cache_all
See comment in ``__init__`` for the purpose of these caches
"""
if not refresh and all((self._hosts_cache, self._hosts_cache_all)):
return
if not play.finalized and Templar(None).is_template(play.hosts):
_pattern = 'all'
else:
_pattern = play.hosts or 'all'
self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)]
self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)]
def cleanup(self):
# close active persistent connections
for sock in self._active_connections.values():
try:
conn = Connection(sock)
conn.reset()
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
self._final_q.put(_sentinel)
self._results_thread.join()
def run(self, iterator, play_context, result=0):
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be IteratingStates.COMPLETE by
# this point, though the strategy may not advance the hosts itself.
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
iterator.get_next_task_for_host(self._inventory.hosts[host])
except KeyError:
iterator.get_next_task_for_host(self._inventory.get_host(host))
# return the appropriate code, depending on the status hosts after the run
if not isinstance(result, bool) and result != self._tqm.RUN_OK:
return result
elif len(self._tqm._unreachable_hosts.keys()) > 0:
return self._tqm.RUN_UNREACHABLE_HOSTS
elif len(iterator.get_failed_hosts()) > 0:
return self._tqm.RUN_FAILED_HOSTS
else:
return self._tqm.RUN_OK
def get_hosts_remaining(self, play):
self._set_hosts_cache(play, refresh=False)
ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
return [host for host in self._hosts_cache if host not in ignore]
def get_failed_hosts(self, play):
self._set_hosts_cache(play, refresh=False)
return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]
def add_tqm_variables(self, vars, play):
'''
Base class method to add extra variables/information to the list of task
vars sent through the executor engine regarding the task queue manager state.
'''
vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
vars['ansible_failed_hosts'] = self.get_failed_hosts(play)
def _queue_task(self, host, task, task_vars, play_context):
''' handles queueing the task up to be sent to a worker '''
display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))
# Add a write lock for tasks.
# Maybe this should be added somewhere further up the call stack but
# this is the earliest in the code where we have task (1) extracted
# into its own variable and (2) there's only a single code path
# leading to the module being run. This is called by two
# functions: linear.py::run(), and
# free.py::run() so we'd have to add to both to do it there.
# The next common higher level is __init__.py::run() and that has
# tasks inside of play_iterator so we'd have to extract them to do it
# there.
if task.action not in action_write_locks.action_write_locks:
display.debug('Creating lock for %s' % task.action)
action_write_locks.action_write_locks[task.action] = Lock()
# create a templar and template things we need later for the queuing process
templar = Templar(loader=self._loader, variables=task_vars)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
# and then queue the new task
try:
# Determine the "rewind point" of the worker list. This means we start
# iterating over the list of workers until the end of the list is found.
# Normally, that is simply the length of the workers list (as determined
# by the forks or serial setting), however a task/block/play may "throttle"
# that limit down.
rewind_point = len(self._workers)
if throttle > 0 and self.ALLOW_BASE_THROTTLING:
if task.run_once:
display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
else:
if throttle <= rewind_point:
display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
rewind_point = throttle
queued = False
starting_worker = self._cur_worker
while True:
if self._cur_worker >= rewind_point:
self._cur_worker = 0
worker_prc = self._workers[self._cur_worker]
if worker_prc is None or not worker_prc.is_alive():
self._queued_task_cache[(host.name, task._uuid)] = {
'host': host,
'task': task,
'task_vars': task_vars,
'play_context': play_context
}
# Pass WorkerProcess its strategy worker number so it can send an identifier along with intra-task requests
worker_prc = WorkerProcess(
self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader, self._cur_worker,
)
self._workers[self._cur_worker] = worker_prc
self._tqm.send_callback('v2_runner_on_start', host, task)
worker_prc.start()
display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
queued = True
self._cur_worker += 1
if self._cur_worker >= rewind_point:
self._cur_worker = 0
if queued:
break
elif self._cur_worker == starting_worker:
time.sleep(0.0001)
self._pending_results += 1
except (EOFError, IOError, AssertionError) as e:
# most likely an abort
display.debug("got an error while queuing: %s" % e)
return
display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))
def get_task_hosts(self, iterator, task_host, task):
if task.run_once:
host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
else:
host_list = [task_host.name]
return host_list
def get_delegated_hosts(self, result, task):
host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
return [host_name or task.delegate_to]
def _set_always_delegated_facts(self, result, task):
"""Sets host facts for ``delegate_to`` hosts for facts that should
always be delegated
This operation mutates ``result`` to remove the always delegated facts
See ``ALWAYS_DELEGATE_FACT_PREFIXES``
"""
if task.delegate_to is None:
return
facts = result['ansible_facts']
always_keys = set()
_add = always_keys.add
for fact_key in facts:
for always_key in ALWAYS_DELEGATE_FACT_PREFIXES:
if fact_key.startswith(always_key):
_add(fact_key)
if always_keys:
_pop = facts.pop
always_facts = {
'ansible_facts': dict((k, _pop(k)) for k in list(facts) if k in always_keys)
}
host_list = self.get_delegated_hosts(result, task)
_set_host_facts = self._variable_manager.set_host_facts
for target_host in host_list:
_set_host_facts(target_host, always_facts)
def normalize_task_result(self, task_result):
"""Normalize a TaskResult to reference actual Host and Task objects
when only given the ``Host.name``, or the ``Task._uuid``
Only the ``Host.name`` and ``Task._uuid`` are commonly sent back from
the ``TaskExecutor`` or ``WorkerProcess`` due to performance concerns
Mutates the original object
"""
if isinstance(task_result._host, string_types):
# If the value is a string, it is ``Host.name``
task_result._host = self._inventory.get_host(to_text(task_result._host))
if isinstance(task_result._task, string_types):
# If the value is a string, it is ``Task._uuid``
queue_cache_entry = (task_result._host.name, task_result._task)
try:
found_task = self._queued_task_cache[queue_cache_entry]['task']
except KeyError:
# This should only happen due to an implicit task created by the
# TaskExecutor, restrict this behavior to the explicit use case
# of an implicit async_status task
if task_result._task_fields.get('action') != 'async_status':
raise
original_task = Task()
else:
original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
original_task._parent = found_task._parent
original_task.from_attrs(task_result._task_fields)
task_result._task = original_task
return task_result
def search_handlers_by_notification(self, notification: str, iterator: PlayIterator) -> t.Generator[Handler, None, None]:
templar = Templar(None)
# iterate in reversed order since last handler loaded with the same name wins
for handler in (h for b in reversed(iterator._play.handlers) for h in b.block if h.name):
if not handler.cached_name:
if templar.is_template(handler.name):
templar.available_variables = self._variable_manager.get_vars(
play=iterator._play,
task=handler,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all
)
try:
handler.name = templar.template(handler.name)
except (UndefinedError, AnsibleUndefinedVariable) as e:
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
if not handler.listen:
display.warning(
"Handler '%s' is unusable because it has no listen topics and "
"the name could not be templated (host-specific variables are "
"not supported in handler names). The error: %s" % (handler.name, to_text(e))
)
continue
handler.cached_name = True
# first we check with the full result of get_name(), which may
# include the role name (if the handler is from a role). If that
# is not found, we resort to the simple name field, which doesn't
# have anything extra added to it.
if notification in {
handler.name,
handler.get_name(include_role_fqcn=False),
handler.get_name(include_role_fqcn=True),
}:
yield handler
break
templar.available_variables = {}
for handler in (h for b in iterator._play.handlers for h in b.block):
if listeners := handler.listen:
if notification in handler.get_validated_value(
'listen',
handler.fattributes.get('listen'),
listeners,
templar,
):
yield handler
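# Editorial illustration tied to the handler-chaining report above
# (schematic, not part of the original source): given the issue's play,
# search_handlers_by_notification('Handler 2', iterator) scans handler
# names in reverse definition order and yields the 'Handler 2' object,
# then a second pass yields any handlers whose 'listen' topics include
# 'Handler 2' (none exist in the example play).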
@debug_closure
def _process_pending_results(self, iterator, one_pass=False, max_passes=None):
'''
Reads results off the final queue and takes appropriate action
based on the result (executing callbacks, updating state, etc.).
'''
ret_results = []
handler_templar = Templar(self._loader)
cur_pass = 0
while True:
try:
self._results_lock.acquire()
task_result = self._results.popleft()
except IndexError:
break
finally:
self._results_lock.release()
original_host = task_result._host
original_task = task_result._task
# all host status messages contain 2 entries: (msg, task_result)
role_ran = False
if task_result.is_failed():
role_ran = True
ignore_errors = original_task.ignore_errors
if not ignore_errors:
# save the current state before failing it for later inspection
state_when_failed = iterator.get_state_for_host(original_host.name)
display.debug("marking %s as failed" % original_host.name)
if original_task.run_once:
# if we're using run_once, we have to fail every host here
for h in self._inventory.get_hosts(iterator._play.hosts):
if h.name not in self._tqm._unreachable_hosts:
iterator.mark_host_failed(h)
else:
iterator.mark_host_failed(original_host)
state, dummy = iterator.get_next_task_for_host(original_host, peek=True)
if iterator.is_failed(original_host) and state and state.run_state == IteratingStates.COMPLETE:
self._tqm._failed_hosts[original_host.name] = True
# if we're iterating on the rescue portion of a block then
# we save the failed task in a special var for use
# within the rescue/always
if iterator.is_any_block_rescuing(state_when_failed):
self._tqm._stats.increment('rescued', original_host.name)
iterator._play._removed_hosts.remove(original_host.name)
self._variable_manager.set_nonpersistent_facts(
original_host.name,
dict(
ansible_failed_task=wrap_var(original_task.serialize()),
ansible_failed_result=task_result._result,
),
)
else:
self._tqm._stats.increment('failures', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
elif task_result.is_unreachable():
ignore_unreachable = original_task.ignore_unreachable
if not ignore_unreachable:
self._tqm._unreachable_hosts[original_host.name] = True
iterator._play._removed_hosts.append(original_host.name)
self._tqm._stats.increment('dark', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
self._tqm.send_callback('v2_runner_on_unreachable', task_result)
elif task_result.is_skipped():
self._tqm._stats.increment('skipped', original_host.name)
self._tqm.send_callback('v2_runner_on_skipped', task_result)
else:
role_ran = True
if original_task.loop:
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
result_items = task_result._result.get('results', [])
else:
result_items = [task_result._result]
for result_item in result_items:
if '_ansible_notify' in result_item and task_result.is_changed():
# only ensure that notified handlers exist, if so save the notifications for when
# handlers are actually flushed so the last defined handlers are executed,
# otherwise depending on the setting either error or warn
for notification in result_item['_ansible_notify']:
if any(self.search_handlers_by_notification(notification, iterator)):
iterator.add_notification(original_host.name, notification)
display.vv(f"Notification for handler {notification} has been saved.")
continue
msg = (
f"The requested handler '{notification}' was not found in either the main handlers"
" list nor in the listening handlers list"
)
if C.ERROR_ON_MISSING_HANDLER:
raise AnsibleError(msg)
else:
display.warning(msg)
if 'add_host' in result_item:
# this task added a new host (add_host module)
new_host_info = result_item.get('add_host', dict())
self._inventory.add_dynamic_host(new_host_info, result_item)
# ensure host is available for subsequent plays
if result_item.get('changed') and new_host_info['host_name'] not in self._hosts_cache_all:
self._hosts_cache_all.append(new_host_info['host_name'])
elif 'add_group' in result_item:
# this task added a new group (group_by module)
self._inventory.add_dynamic_group(original_host, result_item)
if 'add_host' in result_item or 'add_group' in result_item:
item_vars = _get_item_vars(result_item, original_task)
found_task_vars = self._queued_task_cache.get((original_host.name, task_result._task._uuid))['task_vars']
if item_vars:
all_task_vars = combine_vars(found_task_vars, item_vars)
else:
all_task_vars = found_task_vars
all_task_vars[original_task.register] = wrap_var(result_item)
post_process_whens(result_item, original_task, handler_templar, all_task_vars)
if original_task.loop or original_task.loop_with:
new_item_result = TaskResult(
task_result._host,
task_result._task,
result_item,
task_result._task_fields,
)
self._tqm.send_callback('v2_runner_item_on_ok', new_item_result)
if result_item.get('changed', False):
task_result._result['changed'] = True
if result_item.get('failed', False):
task_result._result['failed'] = True
if 'ansible_facts' in result_item and original_task.action not in C._ACTION_DEBUG:
# if delegated fact and we are delegating facts, we need to change target host for them
if original_task.delegate_to is not None and original_task.delegate_facts:
host_list = self.get_delegated_hosts(result_item, original_task)
else:
# Set facts that should always be on the delegated hosts
self._set_always_delegated_facts(result_item, original_task)
host_list = self.get_task_hosts(iterator, original_host, original_task)
if original_task.action in C._ACTION_INCLUDE_VARS:
for (var_name, var_value) in result_item['ansible_facts'].items():
# find the host we're actually referring too here, which may
# be a host that is not really in inventory at all
for target_host in host_list:
self._variable_manager.set_host_variable(target_host, var_name, var_value)
else:
cacheable = result_item.pop('_ansible_facts_cacheable', False)
for target_host in host_list:
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for playbook setting it the 'higher' precedence is kept
is_set_fact = original_task.action in C._ACTION_SET_FACT
if not is_set_fact or cacheable:
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
if is_set_fact:
self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())
if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:
if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
host_list = self.get_task_hosts(iterator, original_host, original_task)
else:
host_list = [None]
data = result_item['ansible_stats']['data']
aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
for myhost in host_list:
for k in data.keys():
if aggregate:
self._tqm._stats.update_custom_stats(k, data[k], myhost)
else:
self._tqm._stats.set_custom_stats(k, data[k], myhost)
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
if not isinstance(original_task, TaskInclude):
self._tqm._stats.increment('ok', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
# finally, send the ok for this task
self._tqm.send_callback('v2_runner_on_ok', task_result)
# register final results
if original_task.register:
if not isidentifier(original_task.register):
raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % original_task.register)
host_list = self.get_task_hosts(iterator, original_host, original_task)
clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
if 'invocation' in clean_copy:
del clean_copy['invocation']
for target_host in host_list:
self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})
self._pending_results -= 1
if original_host.name in self._blocked_hosts:
del self._blocked_hosts[original_host.name]
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
if original_task._role is not None and role_ran: # TODO: and original_task.action not in C._ACTION_INCLUDE_ROLE:?
# lookup the role in the role cache to make sure we're dealing
# with the correct object and mark it as executed
role_obj = self._get_cached_role(original_task, iterator._play)
role_obj._had_task_run[original_host.name] = True
ret_results.append(task_result)
if isinstance(original_task, Handler):
for handler in (h for b in iterator._play.handlers for h in b.block if h._uuid == original_task._uuid):
handler.remove_host(original_host)
if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
break
cur_pass += 1
return ret_results
def _wait_on_pending_results(self, iterator):
'''
Wait for the shared counter to drop to zero, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
display.debug("waiting for pending results...")
while self._pending_results > 0 and not self._tqm._terminated:
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending results, returning what we have")
return ret_results
def _copy_included_file(self, included_file):
'''
A proven safe and performant way to create a copy of an included file
'''
ti_copy = included_file._task.copy(exclude_parent=True)
ti_copy._parent = included_file._task._parent
temp_vars = ti_copy.vars | included_file._vars
ti_copy.vars = temp_vars
return ti_copy
def _load_included_file(self, included_file, iterator, is_handler=False):
'''
Loads an included YAML file of tasks, applying the optional set of variables.
Raises AnsibleError exception in case of a failure during including a file,
in such case the caller is responsible for marking the host(s) as failed
using PlayIterator.mark_host_failed().
'''
display.debug("loading included file: %s" % included_file._filename)
try:
data = self._loader.load_from_file(included_file._filename)
if data is None:
return []
elif not isinstance(data, list):
raise AnsibleError("included task files must contain a list of tasks")
ti_copy = self._copy_included_file(included_file)
block_list = load_list_of_blocks(
data,
play=iterator._play,
parent_block=ti_copy.build_parent_block(),
role=included_file._task._role,
use_handlers=is_handler,
loader=self._loader,
variable_manager=self._variable_manager,
)
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
for host in included_file._hosts:
self._tqm._stats.increment('ok', host.name)
except AnsibleParserError:
raise
except AnsibleError as e:
if isinstance(e, AnsibleFileNotFound):
reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
else:
reason = to_text(e)
for r in included_file._results:
r._result['failed'] = True
for host in included_file._hosts:
tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
self._tqm._stats.increment('failures', host.name)
self._tqm.send_callback('v2_runner_on_failed', tr)
raise AnsibleError(reason) from e
# finally, send the callback and return the list of blocks loaded
self._tqm.send_callback('v2_playbook_on_include', included_file)
display.debug("done processing included file")
return block_list
def _take_step(self, task, host=None):
ret = False
msg = u'Perform task: %s ' % task
if host:
msg += u'on %s ' % host
msg += u'(N)o/(y)es/(c)ontinue: '
resp = display.prompt(msg)
if resp.lower() in ['y', 'yes']:
display.debug("User ran task")
ret = True
elif resp.lower() in ['c', 'continue']:
display.debug("User ran task and canceled step mode")
self._step = False
ret = True
else:
display.debug("User skipped task")
display.banner(msg)
return ret
def _cond_not_supported_warn(self, task_name):
display.warning("%s task does not support when conditional" % task_name)
def _execute_meta(self, task, play_context, iterator, target_host):
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
meta_action = task.args.get('_raw_params')
def _evaluate_conditional(h):
all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
return task.evaluate_conditional(templar, all_vars)
skipped = False
msg = meta_action
skip_reason = '%s conditional evaluated to False' % meta_action
if isinstance(task, Handler):
self._tqm.send_callback('v2_playbook_on_handler_task_start', task)
else:
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
# These don't support "when" conditionals
if meta_action in ('noop', 'refresh_inventory', 'reset_connection') and task.when:
self._cond_not_supported_warn(meta_action)
if meta_action == 'noop':
msg = "noop"
elif meta_action == 'flush_handlers':
if _evaluate_conditional(target_host):
host_state = iterator.get_state_for_host(target_host.name)
# actually notify proper handlers based on all notifications up to this point
for notification in list(host_state.handler_notifications):
for handler in self.search_handlers_by_notification(notification, iterator):
if handler.notify_host(target_host):
# NOTE even with notifications deduplicated this can still happen in case of handlers being
# notified multiple times using different names, like role name or fqcn
self._tqm.send_callback('v2_playbook_on_notify', handler, target_host)
iterator.clear_notification(target_host.name, notification)
if host_state.run_state == IteratingStates.HANDLERS:
raise AnsibleError('flush_handlers cannot be used as a handler')
if target_host.name not in self._tqm._unreachable_hosts:
host_state.pre_flushing_run_state = host_state.run_state
host_state.run_state = IteratingStates.HANDLERS
msg = "triggered running handlers for %s" % target_host.name
else:
skipped = True
skip_reason += ', not running handlers for %s' % target_host.name
elif meta_action == 'refresh_inventory':
self._inventory.refresh_inventory()
self._set_hosts_cache(iterator._play)
msg = "inventory successfully refreshed"
elif meta_action == 'clear_facts':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
hostname = host.get_name()
self._variable_manager.clear_facts(hostname)
msg = "facts cleared"
else:
skipped = True
skip_reason += ', not clearing facts and fact cache for %s' % target_host.name
elif meta_action == 'clear_host_errors':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
self._tqm._failed_hosts.pop(host.name, False)
self._tqm._unreachable_hosts.pop(host.name, False)
iterator.clear_host_errors(host)
msg = "cleared host errors"
else:
skipped = True
skip_reason += ', not clearing host error state for %s' % target_host.name
elif meta_action == 'end_batch':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
msg = "ending batch"
else:
skipped = True
skip_reason += ', continuing current batch'
elif meta_action == 'end_play':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator.set_run_state_for_host(host.name, IteratingStates.COMPLETE)
# end_play is used in PlaybookExecutor/TQM to indicate that
# the whole play is supposed to be ended as opposed to just a batch
iterator.end_play = True
msg = "ending play"
else:
skipped = True
skip_reason += ', continuing play'
elif meta_action == 'end_host':
if _evaluate_conditional(target_host):
iterator.set_run_state_for_host(target_host.name, IteratingStates.COMPLETE)
iterator._play._removed_hosts.append(target_host.name)
msg = "ending play for %s" % target_host.name
else:
skipped = True
skip_reason += ", continuing execution for %s" % target_host.name
# TODO: Nix msg here? Left for historical reasons, but skip_reason exists now.
msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
elif meta_action == 'role_complete':
# Allow users to use this in a play as reported in https://github.com/ansible/ansible/issues/22286?
# How would this work with allow_duplicates??
if task.implicit:
role_obj = self._get_cached_role(task, iterator._play)
role_obj._completed[target_host.name] = True
msg = 'role_complete for %s' % target_host.name
elif meta_action == 'reset_connection':
all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not play_context.remote_addr:
play_context.remote_addr = target_host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist. This 'mostly' works here because meta
# disregards the loop, but should not really use play_context at all
play_context.update_vars(all_vars)
if target_host in self._active_connections:
connection = Connection(self._active_connections[target_host])
del self._active_connections[target_host]
else:
connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
connection.set_options(task_keys=task.dump_attrs(), var_options=all_vars)
play_context.set_attributes_from_plugin(connection)
if connection:
try:
connection.reset()
msg = 'reset connection'
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
else:
msg = 'no connection, nothing to reset'
else:
raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)
result = {'msg': msg}
if skipped:
result['skipped'] = True
result['skip_reason'] = skip_reason
else:
result['changed'] = False
if not task.implicit:
header = skip_reason if skipped else msg
display.vv(f"META: {header}")
if isinstance(task, Handler):
task.remove_host(target_host)
res = TaskResult(target_host, task, result)
if skipped:
self._tqm.send_callback('v2_runner_on_skipped', res)
return [res]
def _get_cached_role(self, task, play):
role_path = task._role.get_role_path()
role_cache = play.role_cache[role_path]
try:
idx = role_cache.index(task._role)
return role_cache[idx]
except ValueError:
raise AnsibleError(f'Cannot locate {task._role.get_name()} in role cache')
def get_hosts_left(self, iterator):
''' returns list of available hosts for this iterator by filtering out unreachables '''
hosts_left = []
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
hosts_left.append(self._inventory.hosts[host])
except KeyError:
hosts_left.append(self._inventory.get_host(host))
return hosts_left
def update_active_connections(self, results):
''' updates the current active persistent connections '''
for r in results:
if 'args' in r._task_fields:
socket_path = r._task_fields['args'].get('_ansible_socket')
if socket_path:
if r._host not in self._active_connections:
self._active_connections[r._host] = socket_path
class NextAction(object):
""" The next action after an interpreter's exit. """
REDO = 1
CONTINUE = 2
EXIT = 3
def __init__(self, result=EXIT):
self.result = result
class Debugger(cmd.Cmd):
prompt_continuous = '> ' # multiple lines
def __init__(self, task, host, task_vars, play_context, result, next_action):
# cmd.Cmd is old-style class
cmd.Cmd.__init__(self)
self.prompt = '[%s] %s (debug)> ' % (host, task)
self.intro = None
self.scope = {}
self.scope['task'] = task
self.scope['task_vars'] = task_vars
self.scope['host'] = host
self.scope['play_context'] = play_context
self.scope['result'] = result
self.next_action = next_action
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
pass
do_h = cmd.Cmd.do_help
def do_EOF(self, args):
"""Quit"""
return self.do_quit(args)
def do_quit(self, args):
"""Quit"""
display.display('User interrupted execution')
self.next_action.result = NextAction.EXIT
return True
do_q = do_quit
def do_continue(self, args):
"""Continue to next result"""
self.next_action.result = NextAction.CONTINUE
return True
do_c = do_continue
def do_redo(self, args):
"""Schedule task for re-execution. The re-execution may not be the next result"""
self.next_action.result = NextAction.REDO
return True
do_r = do_redo
def do_update_task(self, args):
"""Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
templar = Templar(None, variables=self.scope['task_vars'])
task = self.scope['task']
task = task.load_data(task._ds)
task.post_validate(templar)
self.scope['task'] = task
do_u = do_update_task
def evaluate(self, args):
try:
return eval(args, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def do_pprint(self, args):
"""Pretty Print"""
try:
result = self.evaluate(args)
display.display(pprint.pformat(result))
except Exception:
pass
do_p = do_pprint
def execute(self, args):
try:
code = compile(args + '\n', '<stdin>', 'single')
exec(code, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def default(self, line):
try:
self.execute(line)
except Exception:
pass
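The Debugger above is a thin wrapper over the standard library's cmd.Cmd. As a reading aid, here is a minimal, self-contained sketch of the same quit/continue/evaluate flow; MiniDebugger and its scope argument are illustrative names, not part of ansible-core.
```python
import cmd

class MiniDebugger(cmd.Cmd):
    """Tiny REPL mirroring the quit/continue/pprint flow above (illustrative only)."""
    prompt = '(mini-debug) '

    def __init__(self, scope):
        super().__init__()
        self.scope = scope        # names exposed to eval(), like task_vars above
        self.result = 'exit'      # stands in for NextAction.EXIT

    def do_quit(self, args):
        """Quit; returning True ends cmdloop()."""
        self.result = 'exit'
        return True

    def do_continue(self, args):
        """Continue to the next result."""
        self.result = 'continue'
        return True

    def do_pprint(self, args):
        """Evaluate an expression against the scope and print it."""
        try:
            print(eval(args, {}, self.scope))
        except Exception as e:
            print('***%s: %r' % (type(e).__name__, e))

# MiniDebugger({'task_vars': {'x': 1}}).cmdloop() prompts until quit or continue.
```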
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80880 |
Problem with handlers notifying other handlers
|
### Summary
ansible-core 2.15 ignores handlers that are notified by other handlers once the notification chain is more than one level deep.
With ansible-core 2.14, every handler notified by another handler is run.
### Issue Type
Bug Report
### Component Name
notify
### Ansible Version
```console
ansible [core 2.15.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.3 (main, Apr 5 2023, 15:52:25) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /etc/ansible/ansible.cfg
EDITOR(env: EDITOR) = vim
PAGER(env: PAGER) = less
```
### OS / Environment
Archlinux
ansible 7.6.0-1
ansible-core 2.15.0-1
ansible-lint 6.15.0.r45.g2fca3fe-2
python-ansible-compat 4.0.2-1
### Steps to Reproduce
```yaml (paste below)
---
- name: Testing
hosts: localhost
tasks:
- name: Trigger handlers
debug:
msg: Task 1
changed_when: true
notify: Handler 1
handlers:
- name: Handler 1
debug:
msg: Handler 1
changed_when: true
notify: Handler 2
- name: Handler 2
debug:
msg: Handler 2
changed_when: true
notify: Handler 3
- name: Handler 3
debug:
msg: Handler 3
```
### Expected Results
All handlers must be notified.
```console
PLAY [Testing] ********************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************
ok: [localhost]
TASK [Trigger handlers] ********************************************************************************************************
changed: [localhost] => {
"msg": "Task 1"
}
RUNNING HANDLER [Handler 1] ********************************************************************************************************
changed: [localhost] => {
"msg": "Handler 1"
}
RUNNING HANDLER [Handler 2] ********************************************************************************************************
changed: [localhost] => {
"msg": "Handler 2"
}
RUNNING HANDLER [Handler 3] ********************************************************************************************************
changed: [localhost] => {
"msg": "Handler 3"
}
PLAY RECAP ********************************************************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
Only the first two handlers were run.
```console
PLAY [Testing] *************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************
ok: [localhost]
TASK [Trigger handlers] *************************************************************************************************
changed: [localhost] => {
"msg": "Task 1"
}
RUNNING HANDLER [Handler 1] *************************************************************************************************
changed: [localhost] => {
"msg": "Handler 1"
}
RUNNING HANDLER [Handler 2] *************************************************************************************************
changed: [localhost] => {
"msg": "Handler 2"
}
PLAY RECAP *************************************************************************************************
localhost : ok=4 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80880
|
https://github.com/ansible/ansible/pull/80898
|
73e04ef2d6103bad2519b55f04a9c2865b8c93fe
|
660f1726c814e9d7502cdb7ba046ee8ad9014e63
| 2023-05-24T19:18:49Z |
python
| 2023-06-14T15:39:20Z |
test/integration/targets/handlers/80880.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80880 |
Problem with handlers notifying other handlers
|
### Summary
ansible-core 2.15 ignores handlers that are notified by other handlers once the notification chain is more than one level deep.
With ansible-core 2.14, every handler notified by another handler is run.
### Issue Type
Bug Report
### Component Name
notify
### Ansible Version
```console
ansible [core 2.15.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.3 (main, Apr 5 2023, 15:52:25) [GCC 12.2.1 20230201] (/usr/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONFIG_FILE() = /etc/ansible/ansible.cfg
EDITOR(env: EDITOR) = vim
PAGER(env: PAGER) = less
```
### OS / Environment
Archlinux
ansible 7.6.0-1
ansible-core 2.15.0-1
ansible-lint 6.15.0.r45.g2fca3fe-2
python-ansible-compat 4.0.2-1
### Steps to Reproduce
```yaml (paste below)
---
- name: Testing
hosts: localhost
tasks:
- name: Trigger handlers
debug:
msg: Task 1
changed_when: true
notify: Handler 1
handlers:
- name: Handler 1
debug:
msg: Handler 1
changed_when: true
notify: Handler 2
- name: Handler 2
debug:
msg: Handler 2
changed_when: true
notify: Handler 3
- name: Handler 3
debug:
msg: Handler 3
```
### Expected Results
All handlers must be notified.
```console
PLAY [Testing] ********************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************
ok: [localhost]
TASK [Trigger handlers] ********************************************************************************************************
changed: [localhost] => {
"msg": "Task 1"
}
RUNNING HANDLER [Handler 1] ********************************************************************************************************
changed: [localhost] => {
"msg": "Handler 1"
}
RUNNING HANDLER [Handler 2] ********************************************************************************************************
changed: [localhost] => {
"msg": "Handler 2"
}
RUNNING HANDLER [Handler 3] ********************************************************************************************************
changed: [localhost] => {
"msg": "Handler 3"
}
PLAY RECAP ********************************************************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
Only the first two handlers were run.
```console
PLAY [Testing] *************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************
ok: [localhost]
TASK [Trigger handlers] *************************************************************************************************
changed: [localhost] => {
"msg": "Task 1"
}
RUNNING HANDLER [Handler 1] *************************************************************************************************
changed: [localhost] => {
"msg": "Handler 1"
}
RUNNING HANDLER [Handler 2] *************************************************************************************************
changed: [localhost] => {
"msg": "Handler 2"
}
PLAY RECAP *************************************************************************************************
localhost : ok=4 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/80880
|
https://github.com/ansible/ansible/pull/80898
|
73e04ef2d6103bad2519b55f04a9c2865b8c93fe
|
660f1726c814e9d7502cdb7ba046ee8ad9014e63
| 2023-05-24T19:18:49Z |
python
| 2023-06-14T15:39:20Z |
test/integration/targets/handlers/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_FORCE_HANDLERS
ANSIBLE_FORCE_HANDLERS=false
# simple handler test
ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
# simple from_handlers test
ansible-playbook from_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
ansible-playbook test_listening_handlers.yml -i inventory.handlers -v "$@"
[ "$(ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario2 -l A \
| grep -E -o 'RUNNING HANDLER \[test_handlers : .*]')" = "RUNNING HANDLER [test_handlers : test handler]" ]
# Test forcing handlers using the linear and free strategy
for strategy in linear free; do
export ANSIBLE_STRATEGY=$strategy
# Not forcing, should only run on successful host
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# Forcing from command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from command line, should only run later tasks on unfailed hosts
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_TASK_. | sort | uniq | xargs)" = "CALLED_TASK_B CALLED_TASK_D CALLED_TASK_E" ]
# Forcing from command line, should call handlers even if all hosts fail
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers -e fail_all=yes \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from ansible.cfg
[ "$(ANSIBLE_FORCE_HANDLERS=true ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing true in play
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_true_in_play \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing false in play, which overrides command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_false_in_play --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
unset ANSIBLE_STRATEGY
done
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags playbook_include_handlers \
| grep -E -o 'RUNNING HANDLER \[.*]')" = "RUNNING HANDLER [test handler]" ]
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags role_include_handlers \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include : .*]')" = "RUNNING HANDLER [test_handlers_include : test handler]" ]
[ "$(ansible-playbook test_handlers_include_role.yml -i ../../inventory -v "$@" \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include_role : .*]')" = "RUNNING HANDLER [test_handlers_include_role : test handler]" ]
# Notify handler listen
ansible-playbook test_handlers_listen.yml -i inventory.handlers -v "$@"
# Notifying nonexistent handlers results in an error
set +e
result="$(ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "ERROR! The requested handler 'notify_inexistent_handler' was not found in either the main handlers list nor in the listening handlers list" <<< "$result"
# Notifying nonexistent handlers produces no error when ANSIBLE_ERROR_ON_MISSING_HANDLER=false
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers -v "$@"
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_templating_in_handlers.yml -v "$@"
# https://github.com/ansible/ansible/issues/36649
output_dir=/tmp
set +e
result="$(ansible-playbook test_handlers_any_errors_fatal.yml -e output_dir=$output_dir -i inventory.handlers -v "$@" 2>&1)"
set -e
[ ! -f $output_dir/should_not_exist_B ] || (rm -f $output_dir/should_not_exist_B && exit 1)
# https://github.com/ansible/ansible/issues/47287
[ "$(ansible-playbook test_handlers_including_task.yml -i ../../inventory -v "$@" | grep -E -o 'failed=[0-9]+')" = "failed=0" ]
# https://github.com/ansible/ansible/issues/71222
ansible-playbook test_role_handlers_including_tasks.yml -i ../../inventory -v "$@"
# https://github.com/ansible/ansible/issues/27237
set +e
result="$(ansible-playbook test_handlers_template_run_once.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "handler A" <<< "$result"
grep -q "handler B" <<< "$result"
# Test an undefined variable in another handler name isn't a failure
ansible-playbook 58841.yml "$@" --tags lazy_evaluation 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test templating a handler name with a defined variable
ansible-playbook 58841.yml "$@" --tags evaluation_time -e test_var=myvar | tee out.txt ; cat out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "1" ]
# Test the handler is not found when the variable is undefined
ansible-playbook 58841.yml "$@" --tags evaluation_time 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "ERROR! The requested handler 'handler name with myvar' was not found"
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test include_role and import_role cannot be used as handlers
ansible-playbook test_role_as_handler.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using 'include_role' as a handler is not supported."
# Test notifying a handler from within include_tasks does not work anymore
ansible-playbook test_notify_included.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'I was included')" = "1" ]
grep out.txt -e "ERROR! The requested handler 'handler_from_include' was not found in either the main handlers list nor in the listening handlers list"
ansible-playbook test_handlers_meta.yml -i inventory.handlers -vv "$@" | tee out.txt
[ "$(grep out.txt -ce 'RUNNING HANDLER \[noop_handler\]')" = "1" ]
[ "$(grep out.txt -ce 'META: noop')" = "1" ]
# https://github.com/ansible/ansible/issues/46447
set +e
test "$(ansible-playbook 46447.yml -i inventory.handlers -vv "$@" 2>&1 | grep -c 'SHOULD NOT GET HERE')"
set -e
# https://github.com/ansible/ansible/issues/52561
ansible-playbook 52561.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler1 ran')" = "1" ]
# Test flush_handlers meta task does not imply any_errors_fatal
ansible-playbook 54991.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "4" ]
ansible-playbook order.yml -i inventory.handlers "$@" 2>&1
set +e
ansible-playbook order.yml --force-handlers -e test_force_handlers=true -i inventory.handlers "$@" 2>&1
set -e
ansible-playbook include_handlers_fail_force.yml --force-handlers -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'included handler ran')" = "1" ]
ansible-playbook test_flush_handlers_as_handler.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! flush_handlers cannot be used as a handler"
ansible-playbook test_skip_flush.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
ansible-playbook test_flush_in_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran in rescue')" = "1" ]
[ "$(grep out.txt -ce 'handler ran in always')" = "2" ]
[ "$(grep out.txt -ce 'lockstep works')" = "2" ]
ansible-playbook test_handlers_infinite_loop.yml -i inventory.handlers "$@" 2>&1
ansible-playbook test_flush_handlers_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'rescue ran')" = "1" ]
[ "$(grep out.txt -ce 'always ran')" = "2" ]
[ "$(grep out.txt -ce 'should run for both hosts')" = "2" ]
ansible-playbook test_fqcn_meta_flush_handlers.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "handler ran"
grep out.txt -e "after flush"
ansible-playbook 79776.yml -i inventory.handlers "$@"
ansible-playbook test_block_as_handler.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_block_as_handler-include.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_block_as_handler-import.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using a block as a handler is not supported."
ansible-playbook test_include_role_handler_once.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80413 |
Add RHEL 8.8 to ansible-test
|
### Summary
RHEL 8.8 Beta was [announced](https://access.redhat.com/announcements/7003578) on March 29th. Based on past releases, it could be available in May. This is a remote VM addition.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80413
|
https://github.com/ansible/ansible/pull/80990
|
c1bc445aa708eab0656b660eb699db4c735d86ec
|
cde15f3c8158467a96023a9cffcba4bc0a207b0f
| 2023-04-05T21:27:42Z |
python
| 2023-06-15T01:16:28Z |
.azure-pipelines/azure-pipelines.yml
|
trigger:
batch: true
branches:
include:
- devel
- stable-*
pr:
autoCancel: true
branches:
include:
- devel
- stable-*
schedules:
- cron: 0 7 * * *
displayName: Nightly
always: true
branches:
include:
- devel
- stable-*
variables:
- name: checkoutPath
value: ansible
- name: coverageBranches
value: devel
- name: entryPoint
value: .azure-pipelines/commands/entry-point.sh
- name: fetchDepth
value: 500
- name: defaultContainer
value: quay.io/ansible/azure-pipelines-test-container:4.0.1
pool: Standard
stages:
- stage: Sanity
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Test {0}
testFormat: sanity/{0}
targets:
- test: 1
- test: 2
- test: 3
- stage: Units
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: units/{0}
targets:
- test: 2.7
- test: 3.6
- test: 3.7
- test: 3.8
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Windows
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Server {0}
testFormat: windows/{0}/1
targets:
- test: 2016
- test: 2019
- test: 2022
- stage: Remote
dependsOn: []
jobs:
- template: templates/matrix.yml # context/target
parameters:
targets:
- name: macOS 13.2
test: macos/13.2
- name: RHEL 7.9
test: rhel/7.9
- name: RHEL 8.7 py36
test: rhel/[email protected]
- name: RHEL 8.7 py39
test: rhel/[email protected]
- name: RHEL 9.2
test: rhel/9.2
- name: FreeBSD 12.4
test: freebsd/12.4
- name: FreeBSD 13.1
test: freebsd/13.1
- name: FreeBSD 13.2
test: freebsd/13.2
groups:
- 1
- 2
- template: templates/matrix.yml # context/controller
parameters:
targets:
- name: macOS 13.2
test: macos/13.2
- name: RHEL 8.7
test: rhel/8.7
- name: RHEL 9.2
test: rhel/9.2
- name: FreeBSD 13.1
test: freebsd/13.1
- name: FreeBSD 13.2
test: freebsd/13.2
groups:
- 3
- 4
- 5
- template: templates/matrix.yml # context/controller (ansible-test container management)
parameters:
targets:
- name: Alpine 3.17
test: alpine/3.17
- name: Fedora 37
test: fedora/37
- name: RHEL 8.7
test: rhel/8.7
- name: RHEL 9.2
test: rhel/9.2
- name: Ubuntu 20.04
test: ubuntu/20.04
- name: Ubuntu 22.04
test: ubuntu/22.04
groups:
- 6
- stage: Docker
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: linux/{0}
targets:
- name: Alpine 3
test: alpine3
- name: CentOS 7
test: centos7
- name: Fedora 37
test: fedora37
- name: openSUSE 15
test: opensuse15
- name: Ubuntu 20.04
test: ubuntu2004
- name: Ubuntu 22.04
test: ubuntu2204
groups:
- 1
- 2
- template: templates/matrix.yml
parameters:
testFormat: linux/{0}
targets:
- name: Alpine 3
test: alpine3
- name: Fedora 37
test: fedora37
- name: Ubuntu 22.04
test: ubuntu2204
groups:
- 3
- 4
- 5
- stage: Galaxy
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: galaxy/{0}/1
targets:
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Generic
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: generic/{0}/1
targets:
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Incidental_Windows
displayName: Incidental Windows
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Server {0}
testFormat: i/windows/{0}
targets:
- test: 2016
- test: 2019
- test: 2022
- stage: Incidental
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: i/{0}/1
targets:
- name: IOS Python
test: ios/csr1000v/
- name: VyOS Python
test: vyos/1.1.8/
- stage: Summary
condition: succeededOrFailed()
dependsOn:
- Sanity
- Units
- Windows
- Remote
- Docker
- Galaxy
- Generic
- Incidental_Windows
- Incidental
jobs:
- template: templates/coverage.yml
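For intuition, the matrix template referenced above appears to expand testFormat with each target's test value; a rough Python sketch of that expansion (an assumption on my part, since templates/matrix.yml itself is not shown here):
```python
def expand(test_format, targets):
    """Hypothetical stand-in for the templates/matrix.yml expansion."""
    return [test_format.format(t['test']) for t in targets]

jobs = expand('units/{0}', [{'test': '2.7'}, {'test': '3.11'}])
assert jobs == ['units/2.7', 'units/3.11']
```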
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80413 |
Add RHEL 8.8 to ansible-test
|
### Summary
RHEL 8.8 Beta was [announced](https://access.redhat.com/announcements/7003578) on March 29th. Based on past releases, it could be available in May. This is a remote VM addition.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80413
|
https://github.com/ansible/ansible/pull/80990
|
c1bc445aa708eab0656b660eb699db4c735d86ec
|
cde15f3c8158467a96023a9cffcba4bc0a207b0f
| 2023-04-05T21:27:42Z |
python
| 2023-06-15T01:16:28Z |
changelogs/fragments/ansible-test-rhel-9.2-python-3.11.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80413 |
Add RHEL 8.8 to ansible-test
|
### Summary
RHEL 8.8 Beta was [announced](https://access.redhat.com/announcements/7003578) on March 29th. Based on past releases, it could be available in May. This is a remote VM addition.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80413
|
https://github.com/ansible/ansible/pull/80990
|
c1bc445aa708eab0656b660eb699db4c735d86ec
|
cde15f3c8158467a96023a9cffcba4bc0a207b0f
| 2023-04-05T21:27:42Z |
python
| 2023-06-15T01:16:28Z |
test/integration/targets/setup_rpm_repo/tasks/main.yml
|
- block:
- name: Install epel repo which is missing on rhel-7 and is needed for rpmfluff
include_role:
name: setup_epel
when:
- ansible_distribution in ['RedHat', 'CentOS']
- ansible_distribution_major_version is version('7', '==')
- name: Include distribution specific variables
include_vars: "{{ lookup('first_found', params) }}"
vars:
params:
files:
- "{{ ansible_facts.distribution }}-{{ ansible_facts.distribution_version }}.yml"
- "{{ ansible_facts.os_family }}-{{ ansible_facts.distribution_major_version }}.yml"
- "{{ ansible_facts.distribution }}.yml"
- "{{ ansible_facts.os_family }}.yml"
- default.yml
paths:
- "{{ role_path }}/vars"
- name: Install rpmfluff and deps
action: "{{ ansible_facts.pkg_mgr }}"
args:
name: "{{ rpm_repo_packages }}"
- name: Install rpmfluff via pip
pip:
name: rpmfluff
when: ansible_facts.os_family == 'RedHat' and ansible_distribution_major_version is version('9', '==')
- set_fact:
repos:
- "fake-{{ ansible_architecture }}"
- "fake-i686"
- "fake-ppc64"
changed_when: yes
notify: remove repos
- name: Create RPMs and put them into a repo
create_repo:
arch: "{{ ansible_architecture }}"
tempdir: "{{ remote_tmp_dir }}"
register: repo
- set_fact:
repodir: "{{ repo.repo_dir }}"
- name: Install the repo
yum_repository:
name: "fake-{{ ansible_architecture }}"
description: "fake-{{ ansible_architecture }}"
baseurl: "file://{{ repodir }}"
gpgcheck: no
when: install_repos | bool
- name: Copy comps.xml file
copy:
src: comps.xml
dest: "{{ repodir }}"
register: repodir_comps
- name: Register comps.xml on repo
command: createrepo -g {{ repodir_comps.dest | quote }} {{ repodir | quote }}
- name: Create RPMs and put them into a repo (i686)
create_repo:
arch: i686
tempdir: "{{ remote_tmp_dir }}"
register: repo_i686
- set_fact:
repodir_i686: "{{ repo_i686.repo_dir }}"
- name: Install the repo (i686)
yum_repository:
name: "fake-i686"
description: "fake-i686"
baseurl: "file://{{ repodir_i686 }}"
gpgcheck: no
when: install_repos | bool
- name: Create RPMs and put them into a repo (ppc64)
create_repo:
arch: ppc64
tempdir: "{{ remote_tmp_dir }}"
register: repo_ppc64
- set_fact:
repodir_ppc64: "{{ repo_ppc64.repo_dir }}"
- name: Install the repo (ppc64)
yum_repository:
name: "fake-ppc64"
description: "fake-ppc64"
baseurl: "file://{{ repodir_ppc64 }}"
gpgcheck: no
when: install_repos | bool
when: ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux', 'Fedora']
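The first_found lookup above walks candidate filenames from most to least specific and returns the first one that exists; a rough Python equivalent (illustrative, not the lookup plugin's actual implementation):
```python
import os

def first_found(files, paths):
    """Return the first existing candidate, most specific filename first (sketch)."""
    for name in files:
        for path in paths:
            candidate = os.path.join(path, name)
            if os.path.exists(candidate):
                return candidate
    raise FileNotFoundError('no vars file matched')

# e.g. first_found(['RedHat-9.2.yml', 'RedHat-9.yml', 'RedHat.yml', 'default.yml'],
#                  ['roles/setup_rpm_repo/vars'])
```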
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80413 |
Add RHEL 8.8 to ansible-test
|
### Summary
RHEL 8.8 Beta was [announced](https://access.redhat.com/announcements/7003578) on March 29th. Based on past releases, it could be available in May. This is a remote VM addition.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80413
|
https://github.com/ansible/ansible/pull/80990
|
c1bc445aa708eab0656b660eb699db4c735d86ec
|
cde15f3c8158467a96023a9cffcba4bc0a207b0f
| 2023-04-05T21:27:42Z |
python
| 2023-06-15T01:16:28Z |
test/lib/ansible_test/_data/completion/remote.txt
|
alpine/3.17 python=3.10 become=doas_sudo provider=aws arch=x86_64
alpine become=doas_sudo provider=aws arch=x86_64
fedora/37 python=3.11 become=sudo provider=aws arch=x86_64
fedora become=sudo provider=aws arch=x86_64
freebsd/12.4 python=3.9 python_dir=/usr/local/bin become=su_sudo provider=aws arch=x86_64
freebsd/13.1 python=3.8,3.7,3.9,3.10 python_dir=/usr/local/bin become=su_sudo provider=aws arch=x86_64
freebsd/13.2 python=3.9,3.11 python_dir=/usr/local/bin become=su_sudo provider=aws arch=x86_64
freebsd python_dir=/usr/local/bin become=su_sudo provider=aws arch=x86_64
macos/13.2 python=3.11 python_dir=/usr/local/bin become=sudo provider=parallels arch=x86_64
macos python_dir=/usr/local/bin become=sudo provider=parallels arch=x86_64
rhel/7.9 python=2.7 become=sudo provider=aws arch=x86_64
rhel/8.7 python=3.6,3.8,3.9 become=sudo provider=aws arch=x86_64
rhel/9.1 python=3.9 become=sudo provider=aws arch=x86_64
rhel/9.2 python=3.9 become=sudo provider=aws arch=x86_64
rhel become=sudo provider=aws arch=x86_64
ubuntu/20.04 python=3.8,3.9 become=sudo provider=aws arch=x86_64
ubuntu/22.04 python=3.10 become=sudo provider=aws arch=x86_64
ubuntu become=sudo provider=aws arch=x86_64
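Each line of the completion file above is a platform name followed by key=value settings; a minimal parser sketch (function name assumed, not ansible-test's actual loader):
```python
def parse_completion_line(line):
    name, _, rest = line.partition(' ')
    settings = dict(item.split('=', 1) for item in rest.split())
    return name, settings

name, settings = parse_completion_line(
    'rhel/9.2 python=3.9 become=sudo provider=aws arch=x86_64')
assert settings['python'] == '3.9'
```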
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80413 |
Add RHEL 8.8 to ansible-test
|
### Summary
RHEL 8.8 Beta was [announced](https://access.redhat.com/announcements/7003578) on March 29th. Based on past releases, it could be available in May. This is a remote VM addition.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80413
|
https://github.com/ansible/ansible/pull/80990
|
c1bc445aa708eab0656b660eb699db4c735d86ec
|
cde15f3c8158467a96023a9cffcba4bc0a207b0f
| 2023-04-05T21:27:42Z |
python
| 2023-06-15T01:16:28Z |
test/lib/ansible_test/_util/target/setup/bootstrap.sh
|
# shellcheck shell=sh
set -eu
install_ssh_keys()
{
if [ ! -f "${ssh_private_key_path}" ]; then
# write public/private ssh key pair
public_key_path="${ssh_private_key_path}.pub"
# shellcheck disable=SC2174
mkdir -m 0700 -p "${ssh_path}"
touch "${public_key_path}" "${ssh_private_key_path}"
chmod 0600 "${public_key_path}" "${ssh_private_key_path}"
echo "${ssh_public_key}" > "${public_key_path}"
echo "${ssh_private_key}" > "${ssh_private_key_path}"
# add public key to authorized_keys
authorized_keys_path="${HOME}/.ssh/authorized_keys"
# the existing file is overwritten to avoid conflicts (ex: RHEL on EC2 blocks root login)
cat "${public_key_path}" > "${authorized_keys_path}"
chmod 0600 "${authorized_keys_path}"
# add localhost's server keys to known_hosts
known_hosts_path="${HOME}/.ssh/known_hosts"
for key in /etc/ssh/ssh_host_*_key.pub; do
echo "localhost $(cat "${key}")" >> "${known_hosts_path}"
done
fi
}
customize_bashrc()
{
true > ~/.bashrc
# Show color `ls` results when available.
if ls --color > /dev/null 2>&1; then
echo "alias ls='ls --color'" >> ~/.bashrc
elif ls -G > /dev/null 2>&1; then
echo "alias ls='ls -G'" >> ~/.bashrc
fi
# Improve shell prompts for interactive use.
echo "export PS1='\[\e]0;\u@\h: \w\a\]\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '" >> ~/.bashrc
}
install_pip() {
if ! "${python_interpreter}" -m pip.__main__ --version --disable-pip-version-check 2>/dev/null; then
case "${python_version}" in
"2.7")
pip_bootstrap_url="https://ci-files.testing.ansible.com/ansible-test/get-pip-20.3.4.py"
;;
*)
pip_bootstrap_url="https://ci-files.testing.ansible.com/ansible-test/get-pip-21.3.1.py"
;;
esac
while true; do
curl --silent --show-error "${pip_bootstrap_url}" -o /tmp/get-pip.py && \
"${python_interpreter}" /tmp/get-pip.py --disable-pip-version-check --quiet && \
rm /tmp/get-pip.py \
&& break
echo "Failed to install packages. Sleeping before trying again..."
sleep 10
done
fi
}
pip_install() {
pip_packages="$1"
while true; do
# shellcheck disable=SC2086
"${python_interpreter}" -m pip install --disable-pip-version-check ${pip_packages} \
&& break
echo "Failed to install packages. Sleeping before trying again..."
sleep 10
done
}
bootstrap_remote_alpine()
{
py_pkg_prefix="py3"
packages="
acl
bash
gcc
python3-dev
${py_pkg_prefix}-pip
sudo
"
if [ "${controller}" ]; then
packages="
${packages}
${py_pkg_prefix}-cryptography
${py_pkg_prefix}-packaging
${py_pkg_prefix}-yaml
${py_pkg_prefix}-jinja2
${py_pkg_prefix}-resolvelib
"
fi
while true; do
# shellcheck disable=SC2086
apk add -q ${packages} \
&& break
echo "Failed to install packages. Sleeping before trying again..."
sleep 10
done
}
bootstrap_remote_fedora()
{
py_pkg_prefix="python3"
packages="
acl
gcc
${py_pkg_prefix}-devel
"
if [ "${controller}" ]; then
packages="
${packages}
${py_pkg_prefix}-cryptography
${py_pkg_prefix}-jinja2
${py_pkg_prefix}-packaging
${py_pkg_prefix}-pyyaml
${py_pkg_prefix}-resolvelib
"
fi
while true; do
# shellcheck disable=SC2086
dnf install -q -y ${packages} \
&& break
echo "Failed to install packages. Sleeping before trying again..."
sleep 10
done
}
bootstrap_remote_freebsd()
{
packages="
python${python_package_version}
py${python_package_version}-sqlite3
py${python_package_version}-setuptools
bash
curl
gtar
sudo
"
if [ "${controller}" ]; then
jinja2_pkg="py${python_package_version}-jinja2"
cryptography_pkg="py${python_package_version}-cryptography"
pyyaml_pkg="py${python_package_version}-yaml"
# Declare platform/python version combinations which do not have supporting OS packages available.
# For these combinations ansible-test will use pip to install the requirements instead.
case "${platform_version}/${python_version}" in
"12.4/3.9")
;;
*)
jinja2_pkg="" # not available
cryptography_pkg="" # not available
pyyaml_pkg="" # not available
;;
esac
packages="
${packages}
libyaml
${pyyaml_pkg}
${jinja2_pkg}
${cryptography_pkg}
"
fi
while true; do
# shellcheck disable=SC2086
env ASSUME_ALWAYS_YES=YES pkg bootstrap && \
pkg install -q -y ${packages} \
&& break
echo "Failed to install packages. Sleeping before trying again..."
sleep 10
done
install_pip
if ! grep '^PermitRootLogin yes$' /etc/ssh/sshd_config > /dev/null; then
sed -i '' 's/^# *PermitRootLogin.*$/PermitRootLogin yes/;' /etc/ssh/sshd_config
service sshd restart
fi
# make additional wheels available for packages which lack them for this platform
echo "# generated by ansible-test
[global]
extra-index-url = https://spare-tire.testing.ansible.com/simple/
prefer-binary = yes
" > /etc/pip.conf
# enable ACL support on the root filesystem (required for become between unprivileged users)
fs_path="/"
fs_device="$(mount -v "${fs_path}" | cut -w -f 1)"
# shellcheck disable=SC2001
fs_device_escaped=$(echo "${fs_device}" | sed 's|/|\\/|g')
mount -o acls "${fs_device}" "${fs_path}"
awk 'BEGIN{FS=" "}; /'"${fs_device_escaped}"'/ {gsub(/^rw$/,"rw,acls", $4); print; next} // {print}' /etc/fstab > /etc/fstab.new
mv /etc/fstab.new /etc/fstab
# enable sudo without a password for the wheel group, allowing ansible to use the sudo become plugin
echo '%wheel ALL=(ALL:ALL) NOPASSWD: ALL' > /usr/local/etc/sudoers.d/ansible-test
}
bootstrap_remote_macos()
{
# Silence macOS deprecation warning for bash.
echo "export BASH_SILENCE_DEPRECATION_WARNING=1" >> ~/.bashrc
# Make sure ~/ansible/ is the starting directory for interactive shells on the control node.
# The root home directory is under a symlink. Without this the real path will be displayed instead.
if [ "${controller}" ]; then
echo "cd ~/ansible/" >> ~/.bashrc
fi
# Make sure commands like 'brew' can be found.
# This affects users with the 'zsh' shell, as well as 'root' accessed using 'sudo' from a user with 'zsh' for a shell.
# shellcheck disable=SC2016
echo 'PATH="/usr/local/bin:$PATH"' > /etc/zshenv
}
bootstrap_remote_rhel_7()
{
packages="
gcc
python-devel
python-virtualenv
"
while true; do
# shellcheck disable=SC2086
yum install -q -y ${packages} \
&& break
echo "Failed to install packages. Sleeping before trying again..."
sleep 10
done
install_pip
bootstrap_remote_rhel_pinned_pip_packages
}
bootstrap_remote_rhel_8()
{
if [ "${python_version}" = "3.6" ]; then
py_pkg_prefix="python3"
else
py_pkg_prefix="python${python_package_version}"
fi
packages="
gcc
${py_pkg_prefix}-devel
"
# Jinja2 is not installed with an OS package since the provided version is too old.
# Instead, ansible-test will install it using pip.
if [ "${controller}" ]; then
packages="
${packages}
${py_pkg_prefix}-cryptography
"
fi
while true; do
# shellcheck disable=SC2086
yum module install -q -y "python${python_package_version}" && \
yum install -q -y ${packages} \
&& break
echo "Failed to install packages. Sleeping before trying again..."
sleep 10
done
bootstrap_remote_rhel_pinned_pip_packages
}
bootstrap_remote_rhel_9()
{
py_pkg_prefix="python3"
packages="
gcc
${py_pkg_prefix}-devel
"
# Jinja2 is not installed with an OS package since the provided version is too old.
# Instead, ansible-test will install it using pip.
if [ "${controller}" ]; then
packages="
${packages}
${py_pkg_prefix}-cryptography
${py_pkg_prefix}-packaging
${py_pkg_prefix}-pyyaml
${py_pkg_prefix}-resolvelib
"
fi
while true; do
# shellcheck disable=SC2086
dnf install -q -y ${packages} \
&& break
echo "Failed to install packages. Sleeping before trying again..."
sleep 10
done
}
bootstrap_remote_rhel()
{
case "${platform_version}" in
7.*) bootstrap_remote_rhel_7 ;;
8.*) bootstrap_remote_rhel_8 ;;
9.*) bootstrap_remote_rhel_9 ;;
esac
}
bootstrap_remote_rhel_pinned_pip_packages()
{
# pin packaging and pyparsing to match the downstream vendored versions
pip_packages="
packaging==20.4
pyparsing==2.4.7
"
pip_install "${pip_packages}"
}
bootstrap_remote_ubuntu()
{
py_pkg_prefix="python3"
packages="
acl
gcc
python${python_version}-dev
python3-pip
python${python_version}-venv
"
if [ "${controller}" ]; then
cryptography_pkg="${py_pkg_prefix}-cryptography"
jinja2_pkg="${py_pkg_prefix}-jinja2"
packaging_pkg="${py_pkg_prefix}-packaging"
pyyaml_pkg="${py_pkg_prefix}-yaml"
resolvelib_pkg="${py_pkg_prefix}-resolvelib"
# Declare platforms which do not have supporting OS packages available.
# For these ansible-test will use pip to install the requirements instead.
# Only the platform is checked since Ubuntu shares Python packages across Python versions.
case "${platform_version}" in
"20.04")
jinja2_pkg="" # too old
resolvelib_pkg="" # not available
;;
esac
packages="
${packages}
${cryptography_pkg}
${jinja2_pkg}
${packaging_pkg}
${pyyaml_pkg}
${resolvelib_pkg}
"
fi
while true; do
# shellcheck disable=SC2086
apt-get update -qq -y && \
DEBIAN_FRONTEND=noninteractive apt-get install -qq -y --no-install-recommends ${packages} \
&& break
echo "Failed to install packages. Sleeping before trying again..."
sleep 10
done
if [ "${controller}" ]; then
if [ "${platform_version}/${python_version}" = "20.04/3.9" ]; then
# Install pyyaml using pip so libyaml support is available on Python 3.9.
# The OS package install (which is installed by default) only has a .so file for Python 3.8.
pip_install "--upgrade pyyaml"
fi
fi
}
bootstrap_docker()
{
# Required for newer mysql-server packages to install/upgrade on Ubuntu 16.04.
rm -f /usr/sbin/policy-rc.d
}
bootstrap_remote()
{
for python_version in ${python_versions}; do
echo "Bootstrapping Python ${python_version}"
python_interpreter="python${python_version}"
python_package_version="$(echo "${python_version}" | tr -d '.')"
case "${platform}" in
"alpine") bootstrap_remote_alpine ;;
"fedora") bootstrap_remote_fedora ;;
"freebsd") bootstrap_remote_freebsd ;;
"macos") bootstrap_remote_macos ;;
"rhel") bootstrap_remote_rhel ;;
"ubuntu") bootstrap_remote_ubuntu ;;
esac
done
}
bootstrap()
{
ssh_path="${HOME}/.ssh"
ssh_private_key_path="${ssh_path}/id_${ssh_key_type}"
install_ssh_keys
customize_bashrc
# allow tests to detect ansible-test bootstrapped instances, as well as the bootstrap type
echo "${bootstrap_type}" > /etc/ansible-test.bootstrap
case "${bootstrap_type}" in
"docker") bootstrap_docker ;;
"remote") bootstrap_remote ;;
esac
}
# These variables will be templated before sending the script to the host.
# They are at the end of the script to maintain line numbers for debugging purposes.
bootstrap_type=#{bootstrap_type}
controller=#{controller}
platform=#{platform}
platform_version=#{platform_version}
python_versions=#{python_versions}
ssh_key_type=#{ssh_key_type}
ssh_private_key=#{ssh_private_key}
ssh_public_key=#{ssh_public_key}
bootstrap
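All of the package installs above share one retry idiom: loop until the command succeeds, sleeping 10 seconds between attempts. A Python rendering of the same pattern, purely illustrative:
```python
import subprocess
import time

def retry_until_success(command, delay=10):
    """Re-run command until it exits 0, mirroring the shell while/break loops."""
    while True:
        if subprocess.run(command).returncode == 0:
            return
        print('Failed to install packages. Sleeping before trying again...')
        time.sleep(delay)

# retry_until_success(['dnf', 'install', '-q', '-y', 'gcc'])
```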
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80412 |
Add RHEL 9.2 to ansible-test
|
### Summary
RHEL 9.2 Beta was [announced](https://access.redhat.com/announcements/7003578) on March 29th. Based on past releases, it could be available in May. This is a remote VM addition.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80412
|
https://github.com/ansible/ansible/pull/80990
|
c1bc445aa708eab0656b660eb699db4c735d86ec
|
cde15f3c8158467a96023a9cffcba4bc0a207b0f
| 2023-04-05T21:27:41Z |
python
| 2023-06-15T01:16:28Z |
.azure-pipelines/azure-pipelines.yml
|
trigger:
batch: true
branches:
include:
- devel
- stable-*
pr:
autoCancel: true
branches:
include:
- devel
- stable-*
schedules:
- cron: 0 7 * * *
displayName: Nightly
always: true
branches:
include:
- devel
- stable-*
variables:
- name: checkoutPath
value: ansible
- name: coverageBranches
value: devel
- name: entryPoint
value: .azure-pipelines/commands/entry-point.sh
- name: fetchDepth
value: 500
- name: defaultContainer
value: quay.io/ansible/azure-pipelines-test-container:4.0.1
pool: Standard
stages:
- stage: Sanity
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Test {0}
testFormat: sanity/{0}
targets:
- test: 1
- test: 2
- test: 3
- stage: Units
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: units/{0}
targets:
- test: 2.7
- test: 3.6
- test: 3.7
- test: 3.8
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Windows
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Server {0}
testFormat: windows/{0}/1
targets:
- test: 2016
- test: 2019
- test: 2022
- stage: Remote
dependsOn: []
jobs:
- template: templates/matrix.yml # context/target
parameters:
targets:
- name: macOS 13.2
test: macos/13.2
- name: RHEL 7.9
test: rhel/7.9
- name: RHEL 8.7 py36
test: rhel/[email protected]
- name: RHEL 8.7 py39
test: rhel/[email protected]
- name: RHEL 9.2
test: rhel/9.2
- name: FreeBSD 12.4
test: freebsd/12.4
- name: FreeBSD 13.1
test: freebsd/13.1
- name: FreeBSD 13.2
test: freebsd/13.2
groups:
- 1
- 2
- template: templates/matrix.yml # context/controller
parameters:
targets:
- name: macOS 13.2
test: macos/13.2
- name: RHEL 8.7
test: rhel/8.7
- name: RHEL 9.2
test: rhel/9.2
- name: FreeBSD 13.1
test: freebsd/13.1
- name: FreeBSD 13.2
test: freebsd/13.2
groups:
- 3
- 4
- 5
- template: templates/matrix.yml # context/controller (ansible-test container management)
parameters:
targets:
- name: Alpine 3.17
test: alpine/3.17
- name: Fedora 37
test: fedora/37
- name: RHEL 8.7
test: rhel/8.7
- name: RHEL 9.2
test: rhel/9.2
- name: Ubuntu 20.04
test: ubuntu/20.04
- name: Ubuntu 22.04
test: ubuntu/22.04
groups:
- 6
- stage: Docker
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: linux/{0}
targets:
- name: Alpine 3
test: alpine3
- name: CentOS 7
test: centos7
- name: Fedora 37
test: fedora37
- name: openSUSE 15
test: opensuse15
- name: Ubuntu 20.04
test: ubuntu2004
- name: Ubuntu 22.04
test: ubuntu2204
groups:
- 1
- 2
- template: templates/matrix.yml
parameters:
testFormat: linux/{0}
targets:
- name: Alpine 3
test: alpine3
- name: Fedora 37
test: fedora37
- name: Ubuntu 22.04
test: ubuntu2204
groups:
- 3
- 4
- 5
- stage: Galaxy
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: galaxy/{0}/1
targets:
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Generic
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: generic/{0}/1
targets:
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Incidental_Windows
displayName: Incidental Windows
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Server {0}
testFormat: i/windows/{0}
targets:
- test: 2016
- test: 2019
- test: 2022
- stage: Incidental
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: i/{0}/1
targets:
- name: IOS Python
test: ios/csr1000v/
- name: VyOS Python
test: vyos/1.1.8/
- stage: Summary
condition: succeededOrFailed()
dependsOn:
- Sanity
- Units
- Windows
- Remote
- Docker
- Galaxy
- Generic
- Incidental_Windows
- Incidental
jobs:
- template: templates/coverage.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80412 |
Add RHEL 9.2 to ansible-test
|
### Summary
RHEL 9.2 Beta was [announced](https://access.redhat.com/announcements/7003578) on March 29th. Based on past releases, it could be available in May. This is a remote VM addition.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80412
|
https://github.com/ansible/ansible/pull/80990
|
c1bc445aa708eab0656b660eb699db4c735d86ec
|
cde15f3c8158467a96023a9cffcba4bc0a207b0f
| 2023-04-05T21:27:41Z |
python
| 2023-06-15T01:16:28Z |
changelogs/fragments/ansible-test-rhel-9.2-python-3.11.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80412 |
Add RHEL 9.2 to ansible-test
|
### Summary
RHEL 9.2 Beta was [announced](https://access.redhat.com/announcements/7003578) on March 29th. Based on past releases, it could be available in May. This is a remote VM addition.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80412
|
https://github.com/ansible/ansible/pull/80990
|
c1bc445aa708eab0656b660eb699db4c735d86ec
|
cde15f3c8158467a96023a9cffcba4bc0a207b0f
| 2023-04-05T21:27:41Z |
python
| 2023-06-15T01:16:28Z |
test/integration/targets/setup_rpm_repo/tasks/main.yml
|
- block:
- name: Install epel repo which is missing on rhel-7 and is needed for rpmfluff
include_role:
name: setup_epel
when:
- ansible_distribution in ['RedHat', 'CentOS']
- ansible_distribution_major_version is version('7', '==')
- name: Include distribution specific variables
include_vars: "{{ lookup('first_found', params) }}"
vars:
params:
files:
- "{{ ansible_facts.distribution }}-{{ ansible_facts.distribution_version }}.yml"
- "{{ ansible_facts.os_family }}-{{ ansible_facts.distribution_major_version }}.yml"
- "{{ ansible_facts.distribution }}.yml"
- "{{ ansible_facts.os_family }}.yml"
- default.yml
paths:
- "{{ role_path }}/vars"
- name: Install rpmfluff and deps
action: "{{ ansible_facts.pkg_mgr }}"
args:
name: "{{ rpm_repo_packages }}"
- name: Install rpmfluff via pip
pip:
name: rpmfluff
when: ansible_facts.os_family == 'RedHat' and ansible_distribution_major_version is version('9', '==')
- set_fact:
repos:
- "fake-{{ ansible_architecture }}"
- "fake-i686"
- "fake-ppc64"
changed_when: yes
notify: remove repos
- name: Create RPMs and put them into a repo
create_repo:
arch: "{{ ansible_architecture }}"
tempdir: "{{ remote_tmp_dir }}"
register: repo
- set_fact:
repodir: "{{ repo.repo_dir }}"
- name: Install the repo
yum_repository:
name: "fake-{{ ansible_architecture }}"
description: "fake-{{ ansible_architecture }}"
baseurl: "file://{{ repodir }}"
gpgcheck: no
when: install_repos | bool
- name: Copy comps.xml file
copy:
src: comps.xml
dest: "{{ repodir }}"
register: repodir_comps
- name: Register comps.xml on repo
command: createrepo -g {{ repodir_comps.dest | quote }} {{ repodir | quote }}
- name: Create RPMs and put them into a repo (i686)
create_repo:
arch: i686
tempdir: "{{ remote_tmp_dir }}"
register: repo_i686
- set_fact:
repodir_i686: "{{ repo_i686.repo_dir }}"
- name: Install the repo (i686)
yum_repository:
name: "fake-i686"
description: "fake-i686"
baseurl: "file://{{ repodir_i686 }}"
gpgcheck: no
when: install_repos | bool
- name: Create RPMs and put them into a repo (ppc64)
create_repo:
arch: ppc64
tempdir: "{{ remote_tmp_dir }}"
register: repo_ppc64
- set_fact:
repodir_ppc64: "{{ repo_ppc64.repo_dir }}"
- name: Install the repo (ppc64)
yum_repository:
name: "fake-ppc64"
description: "fake-ppc64"
baseurl: "file://{{ repodir_ppc64 }}"
gpgcheck: no
when: install_repos | bool
when: ansible_distribution in ['RedHat', 'CentOS', 'ScientificLinux', 'Fedora']
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80412 |
Add RHEL 9.2 to ansible-test
|
### Summary
RHEL 9.2 Beta was [announced](https://access.redhat.com/announcements/7003578) on March 29th. Based on past releases, it could be available in May. This is a remote VM addition.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80412
|
https://github.com/ansible/ansible/pull/80990
|
c1bc445aa708eab0656b660eb699db4c735d86ec
|
cde15f3c8158467a96023a9cffcba4bc0a207b0f
| 2023-04-05T21:27:41Z |
python
| 2023-06-15T01:16:28Z |
test/lib/ansible_test/_data/completion/remote.txt
|
alpine/3.17 python=3.10 become=doas_sudo provider=aws arch=x86_64
alpine become=doas_sudo provider=aws arch=x86_64
fedora/37 python=3.11 become=sudo provider=aws arch=x86_64
fedora become=sudo provider=aws arch=x86_64
freebsd/12.4 python=3.9 python_dir=/usr/local/bin become=su_sudo provider=aws arch=x86_64
freebsd/13.1 python=3.8,3.7,3.9,3.10 python_dir=/usr/local/bin become=su_sudo provider=aws arch=x86_64
freebsd/13.2 python=3.9,3.11 python_dir=/usr/local/bin become=su_sudo provider=aws arch=x86_64
freebsd python_dir=/usr/local/bin become=su_sudo provider=aws arch=x86_64
macos/13.2 python=3.11 python_dir=/usr/local/bin become=sudo provider=parallels arch=x86_64
macos python_dir=/usr/local/bin become=sudo provider=parallels arch=x86_64
rhel/7.9 python=2.7 become=sudo provider=aws arch=x86_64
rhel/8.7 python=3.6,3.8,3.9 become=sudo provider=aws arch=x86_64
rhel/9.1 python=3.9 become=sudo provider=aws arch=x86_64
rhel/9.2 python=3.9 become=sudo provider=aws arch=x86_64
rhel become=sudo provider=aws arch=x86_64
ubuntu/20.04 python=3.8,3.9 become=sudo provider=aws arch=x86_64
ubuntu/22.04 python=3.10 become=sudo provider=aws arch=x86_64
ubuntu become=sudo provider=aws arch=x86_64
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80412 |
Add RHEL 9.2 to ansible-test
|
### Summary
RHEL 9.2 Beta was [announced](https://access.redhat.com/announcements/7003578) on March 29th. Based on past releases, it could be available in May. This is a remote VM addition.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80412
|
https://github.com/ansible/ansible/pull/80990
|
c1bc445aa708eab0656b660eb699db4c735d86ec
|
cde15f3c8158467a96023a9cffcba4bc0a207b0f
| 2023-04-05T21:27:41Z |
python
| 2023-06-15T01:16:28Z |
test/lib/ansible_test/_util/target/setup/bootstrap.sh
|
# shellcheck shell=sh
set -eu
install_ssh_keys()
{
if [ ! -f "${ssh_private_key_path}" ]; then
# write public/private ssh key pair
public_key_path="${ssh_private_key_path}.pub"
# shellcheck disable=SC2174
mkdir -m 0700 -p "${ssh_path}"
touch "${public_key_path}" "${ssh_private_key_path}"
chmod 0600 "${public_key_path}" "${ssh_private_key_path}"
echo "${ssh_public_key}" > "${public_key_path}"
echo "${ssh_private_key}" > "${ssh_private_key_path}"
# add public key to authorized_keys
authorized_keys_path="${HOME}/.ssh/authorized_keys"
# the existing file is overwritten to avoid conflicts (ex: RHEL on EC2 blocks root login)
cat "${public_key_path}" > "${authorized_keys_path}"
chmod 0600 "${authorized_keys_path}"
# add localhost's server keys to known_hosts
known_hosts_path="${HOME}/.ssh/known_hosts"
for key in /etc/ssh/ssh_host_*_key.pub; do
echo "localhost $(cat "${key}")" >> "${known_hosts_path}"
done
fi
}
customize_bashrc()
{
true > ~/.bashrc
# Show color `ls` results when available.
if ls --color > /dev/null 2>&1; then
echo "alias ls='ls --color'" >> ~/.bashrc
elif ls -G > /dev/null 2>&1; then
echo "alias ls='ls -G'" >> ~/.bashrc
fi
# Improve shell prompts for interactive use.
echo "export PS1='\[\e]0;\u@\h: \w\a\]\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '" >> ~/.bashrc
}
install_pip() {
if ! "${python_interpreter}" -m pip.__main__ --version --disable-pip-version-check 2>/dev/null; then
case "${python_version}" in
"2.7")
pip_bootstrap_url="https://ci-files.testing.ansible.com/ansible-test/get-pip-20.3.4.py"
;;
*)
pip_bootstrap_url="https://ci-files.testing.ansible.com/ansible-test/get-pip-21.3.1.py"
;;
esac
while true; do
curl --silent --show-error "${pip_bootstrap_url}" -o /tmp/get-pip.py && \
"${python_interpreter}" /tmp/get-pip.py --disable-pip-version-check --quiet && \
rm /tmp/get-pip.py \
&& break
echo "Failed to install packages. Sleeping before trying again..."
sleep 10
done
fi
}
pip_install() {
pip_packages="$1"
while true; do
# shellcheck disable=SC2086
"${python_interpreter}" -m pip install --disable-pip-version-check ${pip_packages} \
&& break
echo "Failed to install packages. Sleeping before trying again..."
sleep 10
done
}
bootstrap_remote_alpine()
{
py_pkg_prefix="py3"
packages="
acl
bash
gcc
python3-dev
${py_pkg_prefix}-pip
sudo
"
if [ "${controller}" ]; then
packages="
${packages}
${py_pkg_prefix}-cryptography
${py_pkg_prefix}-packaging
${py_pkg_prefix}-yaml
${py_pkg_prefix}-jinja2
${py_pkg_prefix}-resolvelib
"
fi
while true; do
# shellcheck disable=SC2086
apk add -q ${packages} \
&& break
echo "Failed to install packages. Sleeping before trying again..."
sleep 10
done
}
bootstrap_remote_fedora()
{
py_pkg_prefix="python3"
packages="
acl
gcc
${py_pkg_prefix}-devel
"
if [ "${controller}" ]; then
packages="
${packages}
${py_pkg_prefix}-cryptography
${py_pkg_prefix}-jinja2
${py_pkg_prefix}-packaging
${py_pkg_prefix}-pyyaml
${py_pkg_prefix}-resolvelib
"
fi
while true; do
# shellcheck disable=SC2086
dnf install -q -y ${packages} \
&& break
echo "Failed to install packages. Sleeping before trying again..."
sleep 10
done
}
bootstrap_remote_freebsd()
{
packages="
python${python_package_version}
py${python_package_version}-sqlite3
py${python_package_version}-setuptools
bash
curl
gtar
sudo
"
if [ "${controller}" ]; then
jinja2_pkg="py${python_package_version}-jinja2"
cryptography_pkg="py${python_package_version}-cryptography"
pyyaml_pkg="py${python_package_version}-yaml"
# Declare platform/python version combinations which do not have supporting OS packages available.
# For these combinations ansible-test will use pip to install the requirements instead.
case "${platform_version}/${python_version}" in
"12.4/3.9")
;;
*)
jinja2_pkg="" # not available
cryptography_pkg="" # not available
pyyaml_pkg="" # not available
;;
esac
packages="
${packages}
libyaml
${pyyaml_pkg}
${jinja2_pkg}
${cryptography_pkg}
"
fi
while true; do
# shellcheck disable=SC2086
env ASSUME_ALWAYS_YES=YES pkg bootstrap && \
pkg install -q -y ${packages} \
&& break
echo "Failed to install packages. Sleeping before trying again..."
sleep 10
done
install_pip
if ! grep '^PermitRootLogin yes$' /etc/ssh/sshd_config > /dev/null; then
sed -i '' 's/^# *PermitRootLogin.*$/PermitRootLogin yes/;' /etc/ssh/sshd_config
service sshd restart
fi
# make additional wheels available for packages which lack them for this platform
echo "# generated by ansible-test
[global]
extra-index-url = https://spare-tire.testing.ansible.com/simple/
prefer-binary = yes
" > /etc/pip.conf
# enable ACL support on the root filesystem (required for become between unprivileged users)
fs_path="/"
fs_device="$(mount -v "${fs_path}" | cut -w -f 1)"
# shellcheck disable=SC2001
fs_device_escaped=$(echo "${fs_device}" | sed 's|/|\\/|g')
mount -o acls "${fs_device}" "${fs_path}"
awk 'BEGIN{FS=" "}; /'"${fs_device_escaped}"'/ {gsub(/^rw$/,"rw,acls", $4); print; next} // {print}' /etc/fstab > /etc/fstab.new
mv /etc/fstab.new /etc/fstab
# enable sudo without a password for the wheel group, allowing ansible to use the sudo become plugin
echo '%wheel ALL=(ALL:ALL) NOPASSWD: ALL' > /usr/local/etc/sudoers.d/ansible-test
}
bootstrap_remote_macos()
{
# Silence macOS deprecation warning for bash.
echo "export BASH_SILENCE_DEPRECATION_WARNING=1" >> ~/.bashrc
# Make sure ~/ansible/ is the starting directory for interactive shells on the control node.
# The root home directory is under a symlink. Without this the real path will be displayed instead.
if [ "${controller}" ]; then
echo "cd ~/ansible/" >> ~/.bashrc
fi
# Make sure commands like 'brew' can be found.
# This affects users with the 'zsh' shell, as well as 'root' accessed using 'sudo' from a user with 'zsh' for a shell.
# shellcheck disable=SC2016
echo 'PATH="/usr/local/bin:$PATH"' > /etc/zshenv
}
bootstrap_remote_rhel_7()
{
packages="
gcc
python-devel
python-virtualenv
"
while true; do
# shellcheck disable=SC2086
yum install -q -y ${packages} \
&& break
echo "Failed to install packages. Sleeping before trying again..."
sleep 10
done
install_pip
bootstrap_remote_rhel_pinned_pip_packages
}
bootstrap_remote_rhel_8()
{
if [ "${python_version}" = "3.6" ]; then
py_pkg_prefix="python3"
else
py_pkg_prefix="python${python_package_version}"
fi
packages="
gcc
${py_pkg_prefix}-devel
"
# Jinja2 is not installed with an OS package since the provided version is too old.
# Instead, ansible-test will install it using pip.
if [ "${controller}" ]; then
packages="
${packages}
${py_pkg_prefix}-cryptography
"
fi
while true; do
# shellcheck disable=SC2086
yum module install -q -y "python${python_package_version}" && \
yum install -q -y ${packages} \
&& break
echo "Failed to install packages. Sleeping before trying again..."
sleep 10
done
bootstrap_remote_rhel_pinned_pip_packages
}
bootstrap_remote_rhel_9()
{
py_pkg_prefix="python3"
packages="
gcc
${py_pkg_prefix}-devel
"
# Jinja2 is not installed with an OS package since the provided version is too old.
# Instead, ansible-test will install it using pip.
if [ "${controller}" ]; then
packages="
${packages}
${py_pkg_prefix}-cryptography
${py_pkg_prefix}-packaging
${py_pkg_prefix}-pyyaml
${py_pkg_prefix}-resolvelib
"
fi
while true; do
# shellcheck disable=SC2086
dnf install -q -y ${packages} \
&& break
echo "Failed to install packages. Sleeping before trying again..."
sleep 10
done
}
bootstrap_remote_rhel()
{
case "${platform_version}" in
7.*) bootstrap_remote_rhel_7 ;;
8.*) bootstrap_remote_rhel_8 ;;
9.*) bootstrap_remote_rhel_9 ;;
esac
}
bootstrap_remote_rhel_pinned_pip_packages()
{
# pin packaging and pyparsing to match the downstream vendored versions
pip_packages="
packaging==20.4
pyparsing==2.4.7
"
pip_install "${pip_packages}"
}
bootstrap_remote_ubuntu()
{
py_pkg_prefix="python3"
packages="
acl
gcc
python${python_version}-dev
python3-pip
python${python_version}-venv
"
if [ "${controller}" ]; then
cryptography_pkg="${py_pkg_prefix}-cryptography"
jinja2_pkg="${py_pkg_prefix}-jinja2"
packaging_pkg="${py_pkg_prefix}-packaging"
pyyaml_pkg="${py_pkg_prefix}-yaml"
resolvelib_pkg="${py_pkg_prefix}-resolvelib"
# Declare platforms which do not have supporting OS packages available.
# For these ansible-test will use pip to install the requirements instead.
# Only the platform is checked since Ubuntu shares Python packages across Python versions.
case "${platform_version}" in
"20.04")
jinja2_pkg="" # too old
resolvelib_pkg="" # not available
;;
esac
packages="
${packages}
${cryptography_pkg}
${jinja2_pkg}
${packaging_pkg}
${pyyaml_pkg}
${resolvelib_pkg}
"
fi
while true; do
# shellcheck disable=SC2086
apt-get update -qq -y && \
DEBIAN_FRONTEND=noninteractive apt-get install -qq -y --no-install-recommends ${packages} \
&& break
echo "Failed to install packages. Sleeping before trying again..."
sleep 10
done
if [ "${controller}" ]; then
if [ "${platform_version}/${python_version}" = "20.04/3.9" ]; then
# Install pyyaml using pip so libyaml support is available on Python 3.9.
# The OS package install (which is installed by default) only has a .so file for Python 3.8.
pip_install "--upgrade pyyaml"
fi
fi
}
bootstrap_docker()
{
# Required for newer mysql-server packages to install/upgrade on Ubuntu 16.04.
rm -f /usr/sbin/policy-rc.d
}
bootstrap_remote()
{
for python_version in ${python_versions}; do
echo "Bootstrapping Python ${python_version}"
python_interpreter="python${python_version}"
python_package_version="$(echo "${python_version}" | tr -d '.')"
case "${platform}" in
"alpine") bootstrap_remote_alpine ;;
"fedora") bootstrap_remote_fedora ;;
"freebsd") bootstrap_remote_freebsd ;;
"macos") bootstrap_remote_macos ;;
"rhel") bootstrap_remote_rhel ;;
"ubuntu") bootstrap_remote_ubuntu ;;
esac
done
}
bootstrap()
{
ssh_path="${HOME}/.ssh"
ssh_private_key_path="${ssh_path}/id_${ssh_key_type}"
install_ssh_keys
customize_bashrc
# allow tests to detect ansible-test bootstrapped instances, as well as the bootstrap type
echo "${bootstrap_type}" > /etc/ansible-test.bootstrap
case "${bootstrap_type}" in
"docker") bootstrap_docker ;;
"remote") bootstrap_remote ;;
esac
}
# These variables will be templated before sending the script to the host.
# They are at the end of the script to maintain line numbers for debugging purposes.
bootstrap_type=#{bootstrap_type}
controller=#{controller}
platform=#{platform}
platform_version=#{platform_version}
python_versions=#{python_versions}
ssh_key_type=#{ssh_key_type}
ssh_private_key=#{ssh_private_key}
ssh_public_key=#{ssh_public_key}
bootstrap
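As the closing comment notes, the #{name} placeholders above are replaced by ansible-test before the script is sent to the host, and keeping them at the end preserves the script's line numbers for debugging. A minimal sketch of that substitution step, assuming plain string replacement (hypothetical; the actual templating code lives inside ansible-test and may differ):

# Hypothetical sketch of the "#{name}" templating; the real implementation
# inside ansible-test may differ.
def template_script(script, variables):
    for name, value in variables.items():
        script = script.replace("#{" + name + "}", value)
    return script

rendered = template_script(
    "platform=#{platform}\nplatform_version=#{platform_version}\n",
    {"platform": "rhel", "platform_version": "9.2"},
)
print(rendered)
# platform=rhel
# platform_version=9.2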
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,408 |
Add Fedora 38 to ansible-test
|
### Summary
Fedora 38 is [expected](https://fedorapeople.org/groups/schedule/f-38/f-38-key-tasks.html) to be released on April 18th. This is a remote VM and container addition.
Fedora 38 has been [released](https://www.redhat.com/en/blog/announcing-fedora-linux-38).
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80408
|
https://github.com/ansible/ansible/pull/81074
|
56b67cccc52312366b9ceed02a6906452864e04d
|
bc68ae8b977b21cf3b4636c252fe97d1bc8d917b
| 2023-04-05T21:22:03Z |
python
| 2023-06-20T18:24:08Z |
.azure-pipelines/azure-pipelines.yml
|
trigger:
batch: true
branches:
include:
- devel
- stable-*
pr:
autoCancel: true
branches:
include:
- devel
- stable-*
schedules:
- cron: 0 7 * * *
displayName: Nightly
always: true
branches:
include:
- devel
- stable-*
variables:
- name: checkoutPath
value: ansible
- name: coverageBranches
value: devel
- name: entryPoint
value: .azure-pipelines/commands/entry-point.sh
- name: fetchDepth
value: 500
- name: defaultContainer
value: quay.io/ansible/azure-pipelines-test-container:4.0.1
pool: Standard
stages:
- stage: Sanity
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Test {0}
testFormat: sanity/{0}
targets:
- test: 1
- test: 2
- test: 3
- stage: Units
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: units/{0}
targets:
- test: 2.7
- test: 3.6
- test: 3.7
- test: 3.8
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Windows
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Server {0}
testFormat: windows/{0}/1
targets:
- test: 2016
- test: 2019
- test: 2022
- stage: Remote
dependsOn: []
jobs:
- template: templates/matrix.yml # context/target
parameters:
targets:
- name: macOS 13.2
test: macos/13.2
- name: RHEL 7.9
test: rhel/7.9
- name: RHEL 8.8 py36
test: rhel/[email protected]
- name: RHEL 8.8 py311
test: rhel/[email protected]
- name: RHEL 9.2 py39
test: rhel/[email protected]
- name: RHEL 9.2 py311
test: rhel/[email protected]
- name: FreeBSD 12.4
test: freebsd/12.4
- name: FreeBSD 13.2
test: freebsd/13.2
groups:
- 1
- 2
- template: templates/matrix.yml # context/controller
parameters:
targets:
- name: macOS 13.2
test: macos/13.2
- name: RHEL 8.8
test: rhel/8.8
- name: RHEL 9.2
test: rhel/9.2
- name: FreeBSD 13.2
test: freebsd/13.2
groups:
- 3
- 4
- 5
- template: templates/matrix.yml # context/controller (ansible-test container management)
parameters:
targets:
- name: Alpine 3.17
test: alpine/3.17
- name: Fedora 37
test: fedora/37
- name: RHEL 8.8
test: rhel/8.8
- name: RHEL 9.2
test: rhel/9.2
- name: Ubuntu 20.04
test: ubuntu/20.04
- name: Ubuntu 22.04
test: ubuntu/22.04
groups:
- 6
- stage: Docker
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: linux/{0}
targets:
- name: Alpine 3
test: alpine3
- name: CentOS 7
test: centos7
- name: Fedora 37
test: fedora37
- name: openSUSE 15
test: opensuse15
- name: Ubuntu 20.04
test: ubuntu2004
- name: Ubuntu 22.04
test: ubuntu2204
groups:
- 1
- 2
- template: templates/matrix.yml
parameters:
testFormat: linux/{0}
targets:
- name: Alpine 3
test: alpine3
- name: Fedora 37
test: fedora37
- name: Ubuntu 22.04
test: ubuntu2204
groups:
- 3
- 4
- 5
- stage: Galaxy
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: galaxy/{0}/1
targets:
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Generic
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: generic/{0}/1
targets:
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Incidental_Windows
displayName: Incidental Windows
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Server {0}
testFormat: i/windows/{0}
targets:
- test: 2016
- test: 2019
- test: 2022
- stage: Incidental
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: i/{0}/1
targets:
- name: IOS Python
test: ios/csr1000v/
- name: VyOS Python
test: vyos/1.1.8/
- stage: Summary
condition: succeededOrFailed()
dependsOn:
- Sanity
- Units
- Windows
- Remote
- Docker
- Galaxy
- Generic
- Incidental_Windows
- Incidental
jobs:
- template: templates/coverage.yml
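For readers unfamiliar with these pipeline templates: nameFormat and testFormat contain a {0} placeholder that the matrix template fills with each target's test value to produce the job's display name and the argument passed to the entry point. A rough illustration of the expansion, assuming Python-style formatting (hypothetical; the actual expansion happens in templates/matrix.yml):

# Hypothetical illustration of how one matrix stage expands its targets.
name_format = "Python {0}"
test_format = "units/{0}"
targets = ["2.7", "3.6", "3.11"]

for target in targets:
    print(name_format.format(target), "->", test_format.format(target))
# Python 2.7 -> units/2.7
# Python 3.6 -> units/3.6
# Python 3.11 -> units/3.11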
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,408 |
Add Fedora 38 to ansible-test
|
### Summary
Fedora 38 is [expected](https://fedorapeople.org/groups/schedule/f-38/f-38-key-tasks.html) to be released on April 18th. This is a remote VM and container addition.
Fedora 38 has been [released](https://www.redhat.com/en/blog/announcing-fedora-linux-38).
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80408
|
https://github.com/ansible/ansible/pull/81074
|
56b67cccc52312366b9ceed02a6906452864e04d
|
bc68ae8b977b21cf3b4636c252fe97d1bc8d917b
| 2023-04-05T21:22:03Z |
python
| 2023-06-20T18:24:08Z |
changelogs/fragments/ansible-test-added-fedora-38.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,408 |
Add Fedora 38 to ansible-test
|
### Summary
Fedora 38 is [expected](https://fedorapeople.org/groups/schedule/f-38/f-38-key-tasks.html) to be released on April 18th. This is a remote VM and container addition.
Fedora 38 has been [released](https://www.redhat.com/en/blog/announcing-fedora-linux-38).
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80408
|
https://github.com/ansible/ansible/pull/81074
|
56b67cccc52312366b9ceed02a6906452864e04d
|
bc68ae8b977b21cf3b4636c252fe97d1bc8d917b
| 2023-04-05T21:22:03Z |
python
| 2023-06-20T18:24:08Z |
test/lib/ansible_test/_data/completion/docker.txt
|
base image=quay.io/ansible/base-test-container:4.1.0 python=3.11,2.7,3.6,3.7,3.8,3.9,3.10
default image=quay.io/ansible/default-test-container:8.2.0 python=3.11,2.7,3.6,3.7,3.8,3.9,3.10 context=collection
default image=quay.io/ansible/ansible-core-test-container:8.2.0 python=3.11,2.7,3.6,3.7,3.8,3.9,3.10 context=ansible-core
alpine3 image=quay.io/ansible/alpine3-test-container:5.0.0 python=3.10 cgroup=none audit=none
centos7 image=quay.io/ansible/centos7-test-container:5.0.0 python=2.7 cgroup=v1-only
fedora37 image=quay.io/ansible/fedora37-test-container:5.0.0 python=3.11
opensuse15 image=quay.io/ansible/opensuse15-test-container:6.0.0 python=3.6
ubuntu2004 image=quay.io/ansible/ubuntu2004-test-container:5.0.0 python=3.8
ubuntu2204 image=quay.io/ansible/ubuntu2204-test-container:5.0.0 python=3.10
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,408 |
Add Fedora 38 to ansible-test
|
### Summary
Fedora 38 is [expected](https://fedorapeople.org/groups/schedule/f-38/f-38-key-tasks.html) to be released on April 18th. This is a remote VM and container addition.
Fedora 38 has been [released](https://www.redhat.com/en/blog/announcing-fedora-linux-38).
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80408
|
https://github.com/ansible/ansible/pull/81074
|
56b67cccc52312366b9ceed02a6906452864e04d
|
bc68ae8b977b21cf3b4636c252fe97d1bc8d917b
| 2023-04-05T21:22:03Z |
python
| 2023-06-20T18:24:08Z |
test/lib/ansible_test/_data/completion/remote.txt
|
alpine/3.17 python=3.10 become=doas_sudo provider=aws arch=x86_64
alpine become=doas_sudo provider=aws arch=x86_64
fedora/37 python=3.11 become=sudo provider=aws arch=x86_64
fedora become=sudo provider=aws arch=x86_64
freebsd/12.4 python=3.9 python_dir=/usr/local/bin become=su_sudo provider=aws arch=x86_64
freebsd/13.2 python=3.9,3.11 python_dir=/usr/local/bin become=su_sudo provider=aws arch=x86_64
freebsd python_dir=/usr/local/bin become=su_sudo provider=aws arch=x86_64
macos/13.2 python=3.11 python_dir=/usr/local/bin become=sudo provider=parallels arch=x86_64
macos python_dir=/usr/local/bin become=sudo provider=parallels arch=x86_64
rhel/7.9 python=2.7 become=sudo provider=aws arch=x86_64
rhel/8.7 python=3.6,3.8,3.9 become=sudo provider=aws arch=x86_64
rhel/8.8 python=3.6,3.11 become=sudo provider=aws arch=x86_64
rhel/9.1 python=3.9 become=sudo provider=aws arch=x86_64
rhel/9.2 python=3.9,3.11 become=sudo provider=aws arch=x86_64
rhel become=sudo provider=aws arch=x86_64
ubuntu/20.04 python=3.8,3.9 become=sudo provider=aws arch=x86_64
ubuntu/22.04 python=3.10 become=sudo provider=aws arch=x86_64
ubuntu become=sudo provider=aws arch=x86_64
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,853 |
Remove Ubuntu 20.04 VM from ansible-test
|
### Summary
Remove Ubuntu 20.04 VM support from ansible-test.
It is only used for controller-side testing and does not provide Python 3.10.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80853
|
https://github.com/ansible/ansible/pull/81070
|
bc68ae8b977b21cf3b4636c252fe97d1bc8d917b
|
c69951daca81930da175e308432105db052104d5
| 2023-05-19T20:58:00Z |
python
| 2023-06-20T18:25:12Z |
.azure-pipelines/azure-pipelines.yml
|
trigger:
batch: true
branches:
include:
- devel
- stable-*
pr:
autoCancel: true
branches:
include:
- devel
- stable-*
schedules:
- cron: 0 7 * * *
displayName: Nightly
always: true
branches:
include:
- devel
- stable-*
variables:
- name: checkoutPath
value: ansible
- name: coverageBranches
value: devel
- name: entryPoint
value: .azure-pipelines/commands/entry-point.sh
- name: fetchDepth
value: 500
- name: defaultContainer
value: quay.io/ansible/azure-pipelines-test-container:4.0.1
pool: Standard
stages:
- stage: Sanity
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Test {0}
testFormat: sanity/{0}
targets:
- test: 1
- test: 2
- test: 3
- stage: Units
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: units/{0}
targets:
- test: 2.7
- test: 3.6
- test: 3.7
- test: 3.8
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Windows
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Server {0}
testFormat: windows/{0}/1
targets:
- test: 2016
- test: 2019
- test: 2022
- stage: Remote
dependsOn: []
jobs:
- template: templates/matrix.yml # context/target
parameters:
targets:
- name: macOS 13.2
test: macos/13.2
- name: RHEL 7.9
test: rhel/7.9
- name: RHEL 8.8 py36
test: rhel/[email protected]
- name: RHEL 8.8 py311
test: rhel/[email protected]
- name: RHEL 9.2 py39
test: rhel/[email protected]
- name: RHEL 9.2 py311
test: rhel/[email protected]
- name: FreeBSD 12.4
test: freebsd/12.4
- name: FreeBSD 13.2
test: freebsd/13.2
groups:
- 1
- 2
- template: templates/matrix.yml # context/controller
parameters:
targets:
- name: macOS 13.2
test: macos/13.2
- name: RHEL 8.8
test: rhel/8.8
- name: RHEL 9.2
test: rhel/9.2
- name: FreeBSD 13.2
test: freebsd/13.2
groups:
- 3
- 4
- 5
- template: templates/matrix.yml # context/controller (ansible-test container management)
parameters:
targets:
- name: Alpine 3.17
test: alpine/3.17
- name: Fedora 37
test: fedora/37
- name: Fedora 38
test: fedora/38
- name: RHEL 8.8
test: rhel/8.8
- name: RHEL 9.2
test: rhel/9.2
- name: Ubuntu 20.04
test: ubuntu/20.04
- name: Ubuntu 22.04
test: ubuntu/22.04
groups:
- 6
- stage: Docker
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: linux/{0}
targets:
- name: Alpine 3
test: alpine3
- name: CentOS 7
test: centos7
- name: Fedora 37
test: fedora37
- name: Fedora 38
test: fedora38
- name: openSUSE 15
test: opensuse15
- name: Ubuntu 20.04
test: ubuntu2004
- name: Ubuntu 22.04
test: ubuntu2204
groups:
- 1
- 2
- template: templates/matrix.yml
parameters:
testFormat: linux/{0}
targets:
- name: Alpine 3
test: alpine3
- name: Fedora 37
test: fedora37
- name: Fedora 38
test: fedora38
- name: Ubuntu 22.04
test: ubuntu2204
groups:
- 3
- 4
- 5
- stage: Galaxy
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: galaxy/{0}/1
targets:
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Generic
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Python {0}
testFormat: generic/{0}/1
targets:
- test: 3.9
- test: '3.10'
- test: 3.11
- stage: Incidental_Windows
displayName: Incidental Windows
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
nameFormat: Server {0}
testFormat: i/windows/{0}
targets:
- test: 2016
- test: 2019
- test: 2022
- stage: Incidental
dependsOn: []
jobs:
- template: templates/matrix.yml
parameters:
testFormat: i/{0}/1
targets:
- name: IOS Python
test: ios/csr1000v/
- name: VyOS Python
test: vyos/1.1.8/
- stage: Summary
condition: succeededOrFailed()
dependsOn:
- Sanity
- Units
- Windows
- Remote
- Docker
- Galaxy
- Generic
- Incidental_Windows
- Incidental
jobs:
- template: templates/coverage.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 80,853 |
Remove Ubuntu 20.04 VM from ansible-test
|
### Summary
Remove Ubuntu 20.04 VM support from ansible-test.
It is only used for controller-side testing and does not provide Python 3.10.
### Issue Type
Feature Idea
### Component Name
`ansible-test`
|
https://github.com/ansible/ansible/issues/80853
|
https://github.com/ansible/ansible/pull/81070
|
bc68ae8b977b21cf3b4636c252fe97d1bc8d917b
|
c69951daca81930da175e308432105db052104d5
| 2023-05-19T20:58:00Z |
python
| 2023-06-20T18:25:12Z |
changelogs/fragments/ansible-test-remove-ubuntu-2004.yml
|