status (stringclasses, 1 value) | repo_name (stringclasses, 31 values) | repo_url (stringclasses, 31 values) | issue_id (int64, 1-104k) | title (stringlengths, 4-369) | body (stringlengths, 0-254k, nullable) | issue_url (stringlengths, 37-56) | pull_url (stringlengths, 37-54) | before_fix_sha (stringlengths, 40) | after_fix_sha (stringlengths, 40) | report_datetime (timestamp[us, tz=UTC]) | language (stringclasses, 5 values) | commit_datetime (timestamp[us, tz=UTC]) | updated_file (stringlengths, 4-188) | file_content (stringlengths, 0-5.12M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,975 |
Hashes are merged instead of replace
|
##### SUMMARY
Inventory merges hashes instead of replacing them. If the same hash variable is defined in two different inventory files, the values are merged.
hash_behaviour is not configured, so the default ("replace") should apply.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Inventory plugin
##### ANSIBLE VERSION
```
ansible 2.10.4
config file = None
configured module search path = ['/<snipped>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = <snipped>/lib64/python3.9/site-packages/ansible
executable location = <snipped>/bin/ansible
python version = 3.9.0 (default, Oct 6 2020, 00:00:00) [GCC 10.2.1 20200826 (Red Hat 10.2.1-3)]
```
Also affects:
```
ansible 2.10.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/<snipped>.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /<snipped>/lib64/python3.6/site-packages/ansible
executable location = <snipped>/bin/ansible
python version = 3.6.8 (default, Aug 7 2019, 17:28:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
```
```
##### OS / ENVIRONMENT
Tested on the latest Fedora 33 and CentOS 7
##### STEPS TO REPRODUCE
Create the following two inventory files:
Inventory file1:
```yaml
all:
hosts:
localhost:
vars:
test_hash:
key1: value1
key2: value2
```
Inventory file2:
```yaml
all:
hosts:
localhost:
vars:
test_hash:
key1: other_value1
```
Then execute the following command:
```bash
$ ansible-inventory -i ./test_inventory1.yml -i ./test_inventory2.yml --list
{
"_meta": {
"hostvars": {
"localhost": {
"test_hash": {
"key1": "other_value1",
"key2": "value2"
}
}
}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {
"hosts": [
"localhost"
]
}
}
```
##### EXPECTED RESULTS
_"key2": "value2"_ shouldn't be there because the second test_hash variable should replace the first one.
This was the default behavior before this bug.
```bash
$ ansible-inventory -i ./test_inventory1.yml -i ./test_inventory2.yml --list
{
"_meta": {
"hostvars": {
"localhost": {
"test_hash": {
"key1": "other_value1",
}
}
}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {
"hosts": [
"localhost"
]
}
}
```
##### ACTUAL RESULTS
As you can see, the hashes are merged instead of replaced.
```bash
$ ansible-inventory -i ./test_inventory1.yml -i ./test_inventory2.yml --list
{
"_meta": {
"hostvars": {
"localhost": {
"test_hash": {
"key1": "other_value1",
"key2": "value2"
}
}
}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {
"hosts": [
"localhost"
]
}
}
```
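
A plausible mechanism for this output (a simplified model, not Ansible's actual implementation): with the default hash_behaviour=replace, combining two variable sets amounts to a shallow dict update, so applying the combination to the nested hash itself merges its sub-keys, while applying it at the top level of the vars dict replaces the whole hash:

```python
def shallow_combine(a, b):
    """Simplified model of replace-style variable combination."""
    result = a.copy()
    result.update(b)
    return result

file1 = {"test_hash": {"key1": "value1", "key2": "value2"}}
file2 = {"test_hash": {"key1": "other_value1"}}

# Combining the nested hashes themselves merges their sub-keys (the reported bug):
print(shallow_combine(file1["test_hash"], file2["test_hash"]))
# {'key1': 'other_value1', 'key2': 'value2'}

# Combining at the top level replaces the whole hash (the expected behavior):
print(shallow_combine(file1, file2))
# {'test_hash': {'key1': 'other_value1'}}
```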
|
https://github.com/ansible/ansible/issues/72975
|
https://github.com/ansible/ansible/pull/72979
|
6487a239c0a085041a6c421bced5c354e4a94290
|
5e03e322de5b43b69c8aad5c0cb92e82ce0f3d17
| 2020-12-15T11:49:49Z |
python
| 2020-12-16T16:23:23Z |
lib/ansible/inventory/group.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from itertools import chain
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.common._collections_compat import Mapping, MutableMapping
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars
display = Display()
def to_safe_group_name(name, replacer="_", force=False, silent=False):
# Converts 'bad' characters in a string to underscores (or the provided replacer) so it can be used as an Ansible host or group name
warn = ''
if name: # when deserializing we might not have name yet
invalid_chars = C.INVALID_VARIABLE_NAMES.findall(name)
if invalid_chars:
msg = 'invalid character(s) "%s" in group name (%s)' % (to_text(set(invalid_chars)), to_text(name))
if C.TRANSFORM_INVALID_GROUP_CHARS not in ('never', 'ignore') or force:
name = C.INVALID_VARIABLE_NAMES.sub(replacer, name)
if not (silent or C.TRANSFORM_INVALID_GROUP_CHARS == 'silently'):
display.vvvv('Replacing ' + msg)
warn = 'Invalid characters were found in group names and automatically replaced, use -vvvv to see details'
else:
if C.TRANSFORM_INVALID_GROUP_CHARS == 'never':
display.vvvv('Not replacing %s' % msg)
warn = 'Invalid characters were found in group names but not replaced, use -vvvv to see details'
if warn:
display.warning(warn)
return name
class Group:
''' a group of ansible hosts '''
# __slots__ = [ 'name', 'hosts', 'vars', 'child_groups', 'parent_groups', 'depth', '_hosts_cache' ]
def __init__(self, name=None):
self.depth = 0
self.name = to_safe_group_name(name)
self.hosts = []
self._hosts = None
self.vars = {}
self.child_groups = []
self.parent_groups = []
self._hosts_cache = None
self.priority = 1
def __repr__(self):
return self.get_name()
def __str__(self):
return self.get_name()
def __getstate__(self):
return self.serialize()
def __setstate__(self, data):
return self.deserialize(data)
def serialize(self):
parent_groups = []
for parent in self.parent_groups:
parent_groups.append(parent.serialize())
self._hosts = None
result = dict(
name=self.name,
vars=self.vars.copy(),
parent_groups=parent_groups,
depth=self.depth,
hosts=self.hosts,
)
return result
def deserialize(self, data):
self.__init__()
self.name = data.get('name')
self.vars = data.get('vars', dict())
self.depth = data.get('depth', 0)
self.hosts = data.get('hosts', [])
self._hosts = None
parent_groups = data.get('parent_groups', [])
for parent_data in parent_groups:
g = Group()
g.deserialize(parent_data)
self.parent_groups.append(g)
def _walk_relationship(self, rel, include_self=False, preserve_ordering=False):
'''
Given `rel`, an iterable property of Group
constituting a directed acyclic graph among all groups,
returns a set of all groups in the full tree.
A B C
| / | /
| / | /
D -> E
| / vertical connections
| / are directed upward
F
Called on F, returns set of (A, B, C, D, E)
'''
seen = set([])
unprocessed = set(getattr(self, rel))
if include_self:
unprocessed.add(self)
if preserve_ordering:
ordered = [self] if include_self else []
ordered.extend(getattr(self, rel))
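# Breadth-first walk; 'seen' guards against revisiting groups that are
# reachable through more than one path in the graph.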
while unprocessed:
seen.update(unprocessed)
new_unprocessed = set([])
for new_item in chain.from_iterable(getattr(g, rel) for g in unprocessed):
new_unprocessed.add(new_item)
if preserve_ordering:
if new_item not in seen:
ordered.append(new_item)
new_unprocessed.difference_update(seen)
unprocessed = new_unprocessed
if preserve_ordering:
return ordered
return seen
def get_ancestors(self):
return self._walk_relationship('parent_groups')
def get_descendants(self, **kwargs):
return self._walk_relationship('child_groups', **kwargs)
@property
def host_names(self):
if self._hosts is None:
self._hosts = set(self.hosts)
return self._hosts
def get_name(self):
return self.name
def add_child_group(self, group):
added = False
if self == group:
raise Exception("can't add group to itself")
# don't add if it's already there
if group not in self.child_groups:
# prepare list of group's new ancestors this edge creates
start_ancestors = group.get_ancestors()
new_ancestors = self.get_ancestors()
if group in new_ancestors:
raise AnsibleError("Adding group '%s' as child to '%s' creates a recursive dependency loop." % (to_native(group.name), to_native(self.name)))
new_ancestors.add(self)
new_ancestors.difference_update(start_ancestors)
added = True
self.child_groups.append(group)
# update the depth of the child
group.depth = max([self.depth + 1, group.depth])
# update the depth of the grandchildren
group._check_children_depth()
# now add self to child's parent_groups list, but only if there
# isn't already a group with the same name
if self.name not in [g.name for g in group.parent_groups]:
group.parent_groups.append(self)
for h in group.get_hosts():
h.populate_ancestors(additions=new_ancestors)
self.clear_hosts_cache()
return added
def _check_children_depth(self):
depth = self.depth
start_depth = self.depth # self.depth could change over loop
seen = set([])
unprocessed = set(self.child_groups)
while unprocessed:
seen.update(unprocessed)
depth += 1
to_process = unprocessed.copy()
unprocessed = set([])
for g in to_process:
if g.depth < depth:
g.depth = depth
unprocessed.update(g.child_groups)
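# In a DAG the walk can descend at most len(seen) levels beyond the start;
# exceeding that bound means the child graph contains a cycle.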
if depth - start_depth > len(seen):
raise AnsibleError("The group named '%s' has a recursive dependency loop." % to_native(self.name))
def add_host(self, host):
added = False
if host.name not in self.host_names:
self.hosts.append(host)
self._hosts.add(host.name)
host.add_group(self)
self.clear_hosts_cache()
added = True
return added
def remove_host(self, host):
removed = False
if host.name in self.host_names:
self.hosts.remove(host)
self._hosts.remove(host.name)
host.remove_group(self)
self.clear_hosts_cache()
removed = True
return removed
def set_variable(self, key, value):
if key == 'ansible_group_priority':
self.set_priority(int(value))
else:
if key in self.vars and isinstance(self.vars[key], MutableMapping) and isinstance(value, Mapping):
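# NB: with the default hash_behaviour=replace, combine_vars() amounts to a
# shallow dict update, so combining into the nested hash merges its sub-keys
# instead of replacing the hash (the behavior reported in ansible/ansible#72975).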
self.vars[key] = combine_vars(self.vars[key], value)
else:
self.vars[key] = value
def clear_hosts_cache(self):
self._hosts_cache = None
for g in self.get_ancestors():
g._hosts_cache = None
def get_hosts(self):
if self._hosts_cache is None:
self._hosts_cache = self._get_hosts()
return self._hosts_cache
def _get_hosts(self):
hosts = []
seen = {}
for kid in self.get_descendants(include_self=True, preserve_ordering=True):
kid_hosts = kid.hosts
for kk in kid_hosts:
if kk not in seen:
seen[kk] = 1
if self.name == 'all' and kk.implicit:
continue
hosts.append(kk)
return hosts
def get_vars(self):
return self.vars.copy()
def set_priority(self, priority):
try:
self.priority = int(priority)
except TypeError:
# FIXME: warn about invalid priority
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,975 |
Hashes are merged instead of replace
|
##### SUMMARY
Inventory merges hashes instead of replacing them. If the same hash variable is defined in two different inventory files, the values are merged.
hash_behaviour is not configured, so the default ("replace") should apply.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Inventory plugin
##### ANSIBLE VERSION
```
ansible 2.10.4
config file = None
configured module search path = ['/<snipped>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = <snipped>/lib64/python3.9/site-packages/ansible
executable location = <snipped>/bin/ansible
python version = 3.9.0 (default, Oct 6 2020, 00:00:00) [GCC 10.2.1 20200826 (Red Hat 10.2.1-3)]
```
Also affects:
```
ansible 2.10.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/<snipped>.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /<snipped>/lib64/python3.6/site-packages/ansible
executable location = <snipped>/bin/ansible
python version = 3.6.8 (default, Aug 7 2019, 17:28:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
```
```
##### OS / ENVIRONMENT
Tested on the latest Fedora 33 and CentOS 7
##### STEPS TO REPRODUCE
Create the following two inventory files:
Inventory file1:
```yaml
all:
hosts:
localhost:
vars:
test_hash:
key1: value1
key2: value2
```
Inventory file2:
```yaml
all:
hosts:
localhost:
vars:
test_hash:
key1: other_value1
```
Then execute the following command:
```bash
$ ansible-inventory -i ./test_inventory1.yml -i ./test_inventory2.yml --list
{
"_meta": {
"hostvars": {
"localhost": {
"test_hash": {
"key1": "other_value1",
"key2": "value2"
}
}
}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {
"hosts": [
"localhost"
]
}
}
```
##### EXPECTED RESULTS
_"key2": "value2"_ shouldn't be there because the second test_hash variable should replace the first one.
This was the default behavior before this bug.
```bash
$ ansible-inventory -i ./test_inventory1.yml -i ./test_inventory2.yml --list
{
"_meta": {
"hostvars": {
"localhost": {
"test_hash": {
"key1": "other_value1",
}
}
}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {
"hosts": [
"localhost"
]
}
}
```
##### ACTUAL RESULTS
As you can see, the hashes are merged instead of replaced.
```bash
$ ansible-inventory -i ./test_inventory1.yml -i ./test_inventory2.yml --list
{
"_meta": {
"hostvars": {
"localhost": {
"test_hash": {
"key1": "other_value1",
"key2": "value2"
}
}
}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {
"hosts": [
"localhost"
]
}
}
```
|
https://github.com/ansible/ansible/issues/72975
|
https://github.com/ansible/ansible/pull/72979
|
6487a239c0a085041a6c421bced5c354e4a94290
|
5e03e322de5b43b69c8aad5c0cb92e82ce0f3d17
| 2020-12-15T11:49:49Z |
python
| 2020-12-16T16:23:23Z |
lib/ansible/inventory/host.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.inventory.group import Group
from ansible.module_utils.common._collections_compat import Mapping, MutableMapping
from ansible.utils.vars import combine_vars, get_unique_id
__all__ = ['Host']
class Host:
''' a single ansible host '''
# __slots__ = [ 'name', 'vars', 'groups' ]
def __getstate__(self):
return self.serialize()
def __setstate__(self, data):
return self.deserialize(data)
def __eq__(self, other):
if not isinstance(other, Host):
return False
return self._uuid == other._uuid
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(self.name)
def __str__(self):
return self.get_name()
def __repr__(self):
return self.get_name()
def serialize(self):
groups = []
for group in self.groups:
groups.append(group.serialize())
return dict(
name=self.name,
vars=self.vars.copy(),
address=self.address,
uuid=self._uuid,
groups=groups,
implicit=self.implicit,
)
def deserialize(self, data):
self.__init__(gen_uuid=False)
self.name = data.get('name')
self.vars = data.get('vars', dict())
self.address = data.get('address', '')
self._uuid = data.get('uuid', None)
self.implicit = data.get('implicit', False)
groups = data.get('groups', [])
for group_data in groups:
g = Group()
g.deserialize(group_data)
self.groups.append(g)
def __init__(self, name=None, port=None, gen_uuid=True):
self.vars = {}
self.groups = []
self._uuid = None
self.name = name
self.address = name
if port:
self.set_variable('ansible_port', int(port))
if gen_uuid:
self._uuid = get_unique_id()
self.implicit = False
def get_name(self):
return self.name
def populate_ancestors(self, additions=None):
# populate ancestors
if additions is None:
for group in self.groups:
self.add_group(group)
else:
for group in additions:
if group not in self.groups:
self.groups.append(group)
def add_group(self, group):
added = False
# populate ancestors first
for oldg in group.get_ancestors():
if oldg not in self.groups:
self.groups.append(oldg)
# actually add group
if group not in self.groups:
self.groups.append(group)
added = True
return added
def remove_group(self, group):
removed = False
if group in self.groups:
self.groups.remove(group)
removed = True
# remove exclusive ancestors, except 'all'!
for oldg in group.get_ancestors():
if oldg.name != 'all':
for childg in self.groups:
if oldg in childg.get_ancestors():
break
else:
self.remove_group(oldg)
return removed
def set_variable(self, key, value):
if key in self.vars and isinstance(self.vars[key], MutableMapping) and isinstance(value, Mapping):
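# As in Group.set_variable, combining into the nested hash merges its
# sub-keys even under the default hash_behaviour=replace (see ansible/ansible#72975).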
self.vars[key] = combine_vars(self.vars[key], value)
else:
self.vars[key] = value
def get_groups(self):
return self.groups
def get_magic_vars(self):
results = {}
results['inventory_hostname'] = self.name
results['inventory_hostname_short'] = self.name.split('.')[0]
results['group_names'] = sorted([g.name for g in self.get_groups() if g.name != 'all'])
return results
def get_vars(self):
return combine_vars(self.vars, self.get_magic_vars())
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,745 |
Wrong deprecation for passing extra variables on 'import_playbook'
|
##### SUMMARY
PR https://github.com/ansible/ansible/pull/64156 deprecated passing extra variables on `import_playbook`, wrongly stating that this had never worked; it does work, as the example below shows.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- `import_playbook`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/goetz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.6 (default, Sep 30 2020, 04:00:38) [GCC 10.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
[no output]
```
##### OS / ENVIRONMENT
- GNU/Linux
##### STEPS TO REPRODUCE
`playbooks/site.yaml`:
```yaml
---
- hosts: "{{ target }}"
roles:
- role: my-role
```
`playbooks/dev.yaml`:
```yaml
---
- import_playbook: site.yml target='dev'
```
```sh
ansible-playbook playbooks/dev.yml
```
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Playbook runs without warnings or errors.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The playbook shows a warning that this behavior will become an error in a future Ansible release.
```sh
[WARNING]: Additional parameters in import_playbook statements are not supported. This will be an error in version 2.14
```
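
Mechanically, the free-form line is split into a playbook filename plus trailing key=value parameters, which become vars for the imported plays (see `_preprocess_import` in `lib/ansible/playbook/playbook_include.py`, included later in this dump). A short sketch of that parsing, assuming an installed ansible-core; it mirrors the code path rather than reproducing it:

```python
from ansible.parsing.splitter import split_args, parse_kv

# The free-form import line from the reproducer above.
line = "site.yml target='dev'"

items = split_args(line)            # ['site.yml', "target='dev'"]
playbook = items[0].strip()         # 'site.yml'
params = parse_kv(" ".join(items[1:]))

print(playbook, params)             # site.yml {'target': 'dev'}
```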
|
https://github.com/ansible/ansible/issues/72745
|
https://github.com/ansible/ansible/pull/72987
|
dbc2c996ab361151fce8d1244f67413eb27aa50c
|
8e022ef00a6476f9540c5775d7007ce1bca91f9d
| 2020-11-27T22:24:01Z |
python
| 2020-12-17T19:14:58Z |
changelogs/fragments/72745-import_playbook-deprecation-extra-params.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,745 |
Wrong deprecation for passing extra variables on 'import_playbook'
|
##### SUMMARY
PR https://github.com/ansible/ansible/pull/64156 deprecated passing extra variables on `import_playbook`, wrongly stating that this had never worked; it does work, as the example below shows.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- `import_playbook`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/goetz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.6 (default, Sep 30 2020, 04:00:38) [GCC 10.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
[no output]
```
##### OS / ENVIRONMENT
- GNU/Linux
##### STEPS TO REPRODUCE
`playbooks/site.yaml`:
```yaml
---
- hosts: "{{ target }}"
roles:
- role: my-role
```
`playbooks/dev.yaml`:
```yaml
---
- import_playbook: site.yml target='dev'
```
```sh
ansible-playbook playbooks/dev.yml
```
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Playbook runs without warnings or errors.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The playbook shows a warning that this behavior will become an error in a future Ansible release.
```sh
[WARNING]: Additional parameters in import_playbook statements are not supported. This will be an error in version 2.14
```
|
https://github.com/ansible/ansible/issues/72745
|
https://github.com/ansible/ansible/pull/72987
|
dbc2c996ab361151fce8d1244f67413eb27aa50c
|
8e022ef00a6476f9540c5775d7007ce1bca91f9d
| 2020-11-27T22:24:01Z |
python
| 2020-12-17T19:14:58Z |
lib/ansible/modules/import_playbook.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
author: Ansible Core Team (@ansible)
module: import_playbook
short_description: Import a playbook
description:
- Includes a file with a list of plays to be executed.
- Files with a list of plays can only be included at the top level.
- You cannot use this action inside a play.
version_added: "2.4"
options:
free-form:
description:
- The name of the imported playbook is specified directly without any other option.
notes:
- This is a core feature of Ansible, rather than a module, and cannot be overridden like a module.
seealso:
- module: ansible.builtin.import_role
- module: ansible.builtin.import_tasks
- module: ansible.builtin.include_role
- module: ansible.builtin.include_tasks
- ref: playbooks_reuse_includes
description: More information related to including and importing playbooks, roles and tasks.
'''
EXAMPLES = r'''
- hosts: localhost
tasks:
- debug:
msg: play1
- name: Include a play after another play
import_playbook: otherplays.yaml
- name: This DOES NOT WORK
hosts: all
tasks:
- debug:
msg: task1
- name: This fails because I'm inside a play already
import_playbook: stuff.yaml
'''
RETURN = r'''
# This module does not return anything except plays to execute.
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,745 |
Wrong deprecation for passing extra variables on 'import_playbook'
|
##### SUMMARY
PR https://github.com/ansible/ansible/pull/64156 deprecated passing extra variables on `import_playbook`, wrongly stating that this had never worked; it does work, as the example below shows.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- `import_playbook`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/goetz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.6 (default, Sep 30 2020, 04:00:38) [GCC 10.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
[no output]
```
##### OS / ENVIRONMENT
- GNU/Linux
##### STEPS TO REPRODUCE
`playbooks/site.yaml`:
```yaml
---
- hosts: "{{ target }}"
roles:
- role: my-role
```
`playbooks/dev.yaml`:
```yaml
---
- import_playbook: site.yml target='dev'
```
```sh
ansible-playbook playbooks/dev.yml
```
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Playbook runs without warnings or errors.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The playbook shows a warning that this behavior will become an error in a future Ansible release.
```sh
[WARNING]: Additional parameters in import_playbook statements are not supported. This will be an error in version 2.14
```
|
https://github.com/ansible/ansible/issues/72745
|
https://github.com/ansible/ansible/pull/72987
|
dbc2c996ab361151fce8d1244f67413eb27aa50c
|
8e022ef00a6476f9540c5775d7007ce1bca91f9d
| 2020-11-27T22:24:01Z |
python
| 2020-12-17T19:14:58Z |
lib/ansible/playbook/playbook_include.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import ansible.constants as C
from ansible.errors import AnsibleParserError, AnsibleAssertionError
from ansible.module_utils._text import to_bytes
from ansible.module_utils.six import iteritems, string_types
from ansible.parsing.splitter import split_args, parse_kv
from ansible.parsing.yaml.objects import AnsibleBaseYAMLObject, AnsibleMapping
from ansible.playbook.attribute import FieldAttribute
from ansible.playbook.base import Base
from ansible.playbook.conditional import Conditional
from ansible.playbook.taggable import Taggable
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.collection_loader._collection_finder import _get_collection_name_from_path, _get_collection_playbook_path
from ansible.template import Templar
from ansible.utils.display import Display
display = Display()
class PlaybookInclude(Base, Conditional, Taggable):
_import_playbook = FieldAttribute(isa='string')
_vars = FieldAttribute(isa='dict', default=dict)
@staticmethod
def load(data, basedir, variable_manager=None, loader=None):
return PlaybookInclude().load_data(ds=data, basedir=basedir, variable_manager=variable_manager, loader=loader)
def load_data(self, ds, basedir, variable_manager=None, loader=None):
'''
Overrides the base load_data(), as we're actually going to return a new
Playbook() object rather than a PlaybookInclude object
'''
# import here to avoid a dependency loop
from ansible.playbook import Playbook
from ansible.playbook.play import Play
# first, we use the original parent method to correctly load the object
# via the load_data/preprocess_data system we normally use for other
# playbook objects
new_obj = super(PlaybookInclude, self).load_data(ds, variable_manager, loader)
all_vars = self.vars.copy()
if variable_manager:
all_vars.update(variable_manager.get_vars())
templar = Templar(loader=loader, variables=all_vars)
# then we use the object to load a Playbook
pb = Playbook(loader=loader)
file_name = templar.template(new_obj.import_playbook)
# check for FQCN
resource = _get_collection_playbook_path(file_name)
if resource is not None:
playbook = resource[1]
playbook_collection = resource[2]
else:
# not FQCN try path
playbook = file_name
if not os.path.isabs(playbook):
playbook = os.path.join(basedir, playbook)
# might still be collection playbook
playbook_collection = _get_collection_name_from_path(playbook)
if playbook_collection:
# it is a collection playbook, setup default collections
AnsibleCollectionConfig.default_collection = playbook_collection
else:
# it is NOT a collection playbook, set up adjacent paths
AnsibleCollectionConfig.playbook_paths.append(os.path.dirname(os.path.abspath(to_bytes(playbook, errors='surrogate_or_strict'))))
pb._load_playbook_data(file_name=playbook, variable_manager=variable_manager, vars=self.vars.copy())
# finally, update each loaded playbook entry with any variables specified
# on the included playbook and/or any tags which may have been set
for entry in pb._entries:
# conditional includes on a playbook need a marker to skip gathering
if new_obj.when and isinstance(entry, Play):
entry._included_conditional = new_obj.when[:]
temp_vars = entry.vars.copy()
temp_vars.update(new_obj.vars)
param_tags = temp_vars.pop('tags', None)
if param_tags is not None:
entry.tags.extend(param_tags.split(','))
entry.vars = temp_vars
entry.tags = list(set(entry.tags).union(new_obj.tags))
if entry._included_path is None:
entry._included_path = os.path.dirname(playbook)
# Check to see if we need to forward the conditionals on to the included
# plays. If so, we can take a shortcut here and simply prepend them to
# those attached to each block (if any)
if new_obj.when:
for task_block in (entry.pre_tasks + entry.roles + entry.tasks + entry.post_tasks):
task_block._attributes['when'] = new_obj.when[:] + task_block.when[:]
return pb
def preprocess_data(self, ds):
'''
Reorganizes the data for a PlaybookInclude data structure to line
up with what we expect the proper attributes to be
'''
if not isinstance(ds, dict):
raise AnsibleAssertionError('ds (%s) should be a dict but was a %s' % (ds, type(ds)))
# the new, cleaned datastructure, which will have legacy
# items reduced to a standard structure
new_ds = AnsibleMapping()
if isinstance(ds, AnsibleBaseYAMLObject):
new_ds.ansible_pos = ds.ansible_pos
for (k, v) in iteritems(ds):
if k in C._ACTION_ALL_IMPORT_PLAYBOOKS:
self._preprocess_import(ds, new_ds, k, v)
else:
# some basic error checking, to make sure vars are properly
# formatted and do not conflict with k=v parameters
if k == 'vars':
if 'vars' in new_ds:
raise AnsibleParserError("import_playbook parameters cannot be mixed with 'vars' entries for import statements", obj=ds)
elif not isinstance(v, dict):
raise AnsibleParserError("vars for import_playbook statements must be specified as a dictionary", obj=ds)
new_ds[k] = v
return super(PlaybookInclude, self).preprocess_data(new_ds)
def _preprocess_import(self, ds, new_ds, k, v):
'''
Splits the playbook import line up into filename and parameters
'''
if v is None:
raise AnsibleParserError("playbook import parameter is missing", obj=ds)
elif not isinstance(v, string_types):
raise AnsibleParserError("playbook import parameter must be a string indicating a file path, got %s instead" % type(v), obj=ds)
# The import_playbook line must include at least one item, which is the filename
# to import. Anything after that should be regarded as a parameter to the import
items = split_args(v)
if len(items) == 0:
raise AnsibleParserError("import_playbook statements must specify the file name to import", obj=ds)
else:
new_ds['import_playbook'] = items[0].strip()
if len(items) > 1:
display.warning('Additional parameters in import_playbook statements are not supported. This will be an error in version 2.14')
# rejoin the parameter portion of the arguments and
# then use parse_kv() to get a dict of params back
params = parse_kv(" ".join(items[1:]))
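# e.g. "site.yml target='dev'" produces params == {'target': 'dev'}; everything
# except a 'tags' entry becomes the vars for the imported plays below.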
if 'tags' in params:
new_ds['tags'] = params.pop('tags')
if 'vars' in new_ds:
raise AnsibleParserError("import_playbook parameters cannot be mixed with 'vars' entries for import statements", obj=ds)
new_ds['vars'] = params
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,745 |
Wrong deprecation for passing extra variables on 'import_playbook'
|
##### SUMMARY
PR https://github.com/ansible/ansible/pull/64156 deprecated passing extra variables on `import_playbook`, wrongly stating that this had never worked; it does work, as the example below shows.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- `import_playbook`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/goetz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.6 (default, Sep 30 2020, 04:00:38) [GCC 10.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
[no output]
```
##### OS / ENVIRONMENT
- GNU/Linux
##### STEPS TO REPRODUCE
`playbooks/site.yaml`:
```yaml
---
- hosts: "{{ target }}"
roles:
- role: my-role
```
`playbooks/dev.yaml`:
```yaml
---
- import_playbook: site.yml target='dev'
```
```sh
ansible-playbook playbooks/dev.yml
```
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Playbook runs without warnings or errors.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The playbook shows a warning that this behavior will become an error in a future Ansible release.
```sh
[WARNING]: Additional parameters in import_playbook statements are not supported. This will be an error in version 2.14
```
|
https://github.com/ansible/ansible/issues/72745
|
https://github.com/ansible/ansible/pull/72987
|
dbc2c996ab361151fce8d1244f67413eb27aa50c
|
8e022ef00a6476f9540c5775d7007ce1bca91f9d
| 2020-11-27T22:24:01Z |
python
| 2020-12-17T19:14:58Z |
test/integration/targets/include_import/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_ROLES_PATH=./roles
function gen_task_files() {
for i in $(printf "%03d " {1..39}); do
echo -e "- name: Hello Message\n debug:\n msg: Task file ${i}" > "tasks/hello/tasks-file-${i}.yml"
done
}
## Adhoc
ansible -m include_role -a name=role1 localhost
## Import (static)
# Playbook
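# The statement passing extra parameters must emit exactly one
# 'Additional parameters' warning (ansible/ansible#72745).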
test "$(ansible-playbook -i ../../inventory playbook/test_import_playbook.yml "$@" 2>&1 | grep -c '\[WARNING\]: Additional parameters in import_playbook')" = 1
ANSIBLE_STRATEGY='linear' ansible-playbook playbook/test_import_playbook_tags.yml -i inventory "$@" --tags canary1,canary22,validate --skip-tags skipme
# Tasks
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_import_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_import_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_import_tasks_tags.yml -i inventory "$@" --tags tasks1,canary1,validate
# Role
ANSIBLE_STRATEGY='linear' ansible-playbook role/test_import_role.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook role/test_import_role.yml -i inventory "$@"
## Include (dynamic)
# Tasks
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_tasks_tags.yml -i inventory "$@" --tags tasks1,canary1,validate
# Role
ANSIBLE_STRATEGY='linear' ansible-playbook role/test_include_role.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook role/test_include_role.yml -i inventory "$@"
# https://github.com/ansible/ansible/issues/68515
ansible-playbook -v role/test_include_role_vars_from.yml 2>&1 | tee test_include_role_vars_from.out
test "$(grep -E -c 'Expected a string for vars_from but got' test_include_role_vars_from.out)" = 1
## Max Recursion Depth
# https://github.com/ansible/ansible/issues/23609
ANSIBLE_STRATEGY='linear' ansible-playbook test_role_recursion.yml -i inventory "$@"
ANSIBLE_STRATEGY='linear' ansible-playbook test_role_recursion_fqcn.yml -i inventory "$@"
## Nested tasks
# https://github.com/ansible/ansible/issues/34782
ANSIBLE_STRATEGY='linear' ansible-playbook test_nested_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='linear' ansible-playbook test_nested_tasks_fqcn.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_nested_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_nested_tasks_fqcn.yml -i inventory "$@"
## Tons of top level include_tasks
# https://github.com/ansible/ansible/issues/36053
# Fixed by https://github.com/ansible/ansible/pull/36075
gen_task_files
ANSIBLE_STRATEGY='linear' ansible-playbook test_copious_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='linear' ansible-playbook test_copious_include_tasks_fqcn.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_copious_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_copious_include_tasks_fqcn.yml -i inventory "$@"
rm -f tasks/hello/*.yml
# Included tasks should inherit attrs from non-dynamic blocks in parent chain
# https://github.com/ansible/ansible/pull/38827
ANSIBLE_STRATEGY='linear' ansible-playbook test_grandparent_inheritance.yml -i inventory "$@"
ANSIBLE_STRATEGY='linear' ansible-playbook test_grandparent_inheritance_fqcn.yml -i inventory "$@"
# undefined_var
ANSIBLE_STRATEGY='linear' ansible-playbook undefined_var/playbook.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook undefined_var/playbook.yml -i inventory "$@"
# include_ + apply (explicit inheritance)
ANSIBLE_STRATEGY='linear' ansible-playbook apply/include_apply.yml -i inventory "$@" --tags foo
set +e
OUT=$(ANSIBLE_STRATEGY='linear' ansible-playbook apply/import_apply.yml -i inventory "$@" --tags foo 2>&1 | grep 'ERROR! Invalid options for import_tasks: apply')
set -e
if [[ -z "$OUT" ]]; then
echo "apply on import_tasks did not cause error"
exit 1
fi
ANSIBLE_STRATEGY='linear' ANSIBLE_PLAYBOOK_VARS_ROOT=all ansible-playbook apply/include_apply_65710.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ANSIBLE_PLAYBOOK_VARS_ROOT=all ansible-playbook apply/include_apply_65710.yml -i inventory "$@"
# Test that duplicate items in loop are not deduped
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_include_dupe_loop.yml -i inventory "$@" | tee test_include_dupe_loop.out
test "$(grep -c '"item=foo"' test_include_dupe_loop.out)" = 3
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_dupe_loop.yml -i inventory "$@" | tee test_include_dupe_loop.out
test "$(grep -c '"item=foo"' test_include_dupe_loop.out)" = 3
ansible-playbook public_exposure/playbook.yml -i inventory "$@"
ansible-playbook public_exposure/no_bleeding.yml -i inventory "$@"
ansible-playbook public_exposure/no_overwrite_roles.yml -i inventory "$@"
# https://github.com/ansible/ansible/pull/48068
ANSIBLE_HOST_PATTERN_MISMATCH=warning ansible-playbook run_once/playbook.yml "$@"
# https://github.com/ansible/ansible/issues/48936
ansible-playbook -v handler_addressing/playbook.yml 2>&1 | tee test_handler_addressing.out
test "$(grep -E -c 'include handler task|ERROR! The requested handler '"'"'do_import'"'"' was not found' test_handler_addressing.out)" = 2
# https://github.com/ansible/ansible/issues/49969
ansible-playbook -v parent_templating/playbook.yml 2>&1 | tee test_parent_templating.out
test "$(grep -E -c 'Templating the path of the parent include_tasks failed.' test_parent_templating.out)" = 0
# https://github.com/ansible/ansible/issues/54618
ansible-playbook test_loop_var_bleed.yaml "$@"
# https://github.com/ansible/ansible/issues/56580
ansible-playbook valid_include_keywords/playbook.yml "$@"
# https://github.com/ansible/ansible/issues/64902
ansible-playbook tasks/test_allow_single_role_dup.yml 2>&1 | tee test_allow_single_role_dup.out
test "$(grep -c 'ok=3' test_allow_single_role_dup.out)" = 1
# https://github.com/ansible/ansible/issues/66764
ANSIBLE_HOST_PATTERN_MISMATCH=error ansible-playbook empty_group_warning/playbook.yml
ansible-playbook test_include_loop.yml "$@"
ansible-playbook test_include_loop_fqcn.yml "$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,745 |
Wrong deprecation for passing extra variables on 'import_playbook'
|
##### SUMMARY
PR https://github.com/ansible/ansible/pull/64156 deprecated passing extra variables on `import_playbook`, wrongly stating that this had never worked; it does work, as the example below shows.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- `import_playbook`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/goetz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.6 (default, Sep 30 2020, 04:00:38) [GCC 10.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
[no output]
```
##### OS / ENVIRONMENT
- GNU/Linux
##### STEPS TO REPRODUCE
`playbooks/site.yaml`:
```yaml
---
- hosts: "{{ target }}"
roles:
- role: my-role
```
`playbooks/dev.yaml`:
```yaml
---
- import_playbook: site.yml target='dev'
```
```sh
ansible-playbook playbooks/dev.yml
```
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Playbook runs without warnings or errors.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The playbook shows a warning that this behavior will become an error in a future Ansible release.
```sh
[WARNING]: Additional parameters in import_playbook statements are not supported. This will be an error in version 2.14
```
|
https://github.com/ansible/ansible/issues/72745
|
https://github.com/ansible/ansible/pull/72987
|
dbc2c996ab361151fce8d1244f67413eb27aa50c
|
8e022ef00a6476f9540c5775d7007ce1bca91f9d
| 2020-11-27T22:24:01Z |
python
| 2020-12-17T19:14:58Z |
test/lib/ansible_test/_data/sanity/pylint/plugins/deprecated.py
|
# (c) 2018, Matt Martz <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# -*- coding: utf-8 -*-
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import datetime
import re
from distutils.version import LooseVersion
import astroid
from pylint.interfaces import IAstroidChecker
from pylint.checkers import BaseChecker
from pylint.checkers.utils import check_messages
from ansible.module_utils.six import string_types
from ansible.release import __version__ as ansible_version_raw
from ansible.utils.version import SemanticVersion
MSGS = {
'E9501': ("Deprecated version (%r) found in call to Display.deprecated "
"or AnsibleModule.deprecate",
"ansible-deprecated-version",
"Used when a call to Display.deprecated specifies a version "
"less than or equal to the current version of Ansible",
{'minversion': (2, 6)}),
'E9502': ("Display.deprecated call without a version or date",
"ansible-deprecated-no-version",
"Used when a call to Display.deprecated does not specify a "
"version or date",
{'minversion': (2, 6)}),
'E9503': ("Invalid deprecated version (%r) found in call to "
"Display.deprecated or AnsibleModule.deprecate",
"ansible-invalid-deprecated-version",
"Used when a call to Display.deprecated specifies an invalid "
"Ansible version number",
{'minversion': (2, 6)}),
'E9504': ("Deprecated version (%r) found in call to Display.deprecated "
"or AnsibleModule.deprecate",
"collection-deprecated-version",
"Used when a call to Display.deprecated specifies a collection "
"version less than or equal to the current version of this "
"collection",
{'minversion': (2, 6)}),
'E9505': ("Invalid deprecated version (%r) found in call to "
"Display.deprecated or AnsibleModule.deprecate",
"collection-invalid-deprecated-version",
"Used when a call to Display.deprecated specifies an invalid "
"collection version number",
{'minversion': (2, 6)}),
'E9506': ("No collection name found in call to Display.deprecated or "
"AnsibleModule.deprecate",
"ansible-deprecated-no-collection-name",
"The current collection name in format `namespace.name` must "
"be provided as collection_name when calling Display.deprecated "
"or AnsibleModule.deprecate (`ansible.builtin` for ansible-core)",
{'minversion': (2, 6)}),
'E9507': ("Wrong collection name (%r) found in call to "
"Display.deprecated or AnsibleModule.deprecate",
"wrong-collection-deprecated",
"The name of the current collection must be passed to the "
"Display.deprecated resp. AnsibleModule.deprecate calls "
"(`ansible.builtin` for ansible-core)",
{'minversion': (2, 6)}),
'E9508': ("Expired date (%r) found in call to Display.deprecated "
"or AnsibleModule.deprecate",
"ansible-deprecated-date",
"Used when a call to Display.deprecated specifies a date "
"before today",
{'minversion': (2, 6)}),
'E9509': ("Invalid deprecated date (%r) found in call to "
"Display.deprecated or AnsibleModule.deprecate",
"ansible-invalid-deprecated-date",
"Used when a call to Display.deprecated specifies an invalid "
"date. It must be a string in format `YYYY-MM-DD` (ISO 8601)",
{'minversion': (2, 6)}),
'E9510': ("Both version and date found in call to "
"Display.deprecated or AnsibleModule.deprecate",
"ansible-deprecated-both-version-and-date",
"Only one of version and date must be specified",
{'minversion': (2, 6)}),
'E9511': ("Removal version (%r) must be a major release, not a minor or "
"patch release (see the specification at https://semver.org/)",
"removal-version-must-be-major",
"Used when a call to Display.deprecated or "
"AnsibleModule.deprecate for a collection specifies a version "
"which is not of the form x.0.0",
{'minversion': (2, 6)}),
}
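# Keep only the X.Y.Z prefix so e.g. '2.12.0.dev0' compares as '2.12.0'.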
ANSIBLE_VERSION = LooseVersion('.'.join(ansible_version_raw.split('.')[:3]))
def _get_expr_name(node):
"""Funciton to get either ``attrname`` or ``name`` from ``node.func.expr``
Created specifically for the case of ``display.deprecated`` or ``self._display.deprecated``
"""
try:
return node.func.expr.attrname
except AttributeError:
# If this fails too, we'll let it raise, the caller should catch it
return node.func.expr.name
def parse_isodate(value):
msg = 'Expected ISO 8601 date string (YYYY-MM-DD)'
if not isinstance(value, string_types):
raise ValueError(msg)
# From Python 3.7 on, there is datetime.date.fromisoformat(). For older versions,
# we have to do things manually.
if not re.match('^[0-9]{4}-[0-9]{2}-[0-9]{2}$', value):
raise ValueError(msg)
try:
return datetime.datetime.strptime(value, '%Y-%m-%d').date()
except ValueError:
raise ValueError(msg)
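# Illustrative contract: parse_isodate('2020-12-31') returns datetime.date(2020, 12, 31);
# parse_isodate('31-12-2020') raises ValueError('Expected ISO 8601 date string (YYYY-MM-DD)').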
class AnsibleDeprecatedChecker(BaseChecker):
"""Checks for Display.deprecated calls to ensure that the ``version``
has not passed or met the time for removal
"""
__implements__ = (IAstroidChecker,)
name = 'deprecated'
msgs = MSGS
options = (
('collection-name', {
'default': None,
'type': 'string',
'metavar': '<name>',
'help': 'The collection\'s name used to check collection names in deprecations.',
}),
('collection-version', {
'default': None,
'type': 'string',
'metavar': '<version>',
'help': 'The collection\'s version number used to check deprecations.',
}),
)
def __init__(self, *args, **kwargs):
self.collection_version = None
self.collection_name = None
super(AnsibleDeprecatedChecker, self).__init__(*args, **kwargs)
def set_option(self, optname, value, action=None, optdict=None):
super(AnsibleDeprecatedChecker, self).set_option(optname, value, action, optdict)
if optname == 'collection-version' and value is not None:
self.collection_version = SemanticVersion(self.config.collection_version)
if optname == 'collection-name' and value is not None:
self.collection_name = self.config.collection_name
def _check_date(self, node, date):
if not isinstance(date, str):
self.add_message('invalid-date', node=node, args=(date,))
return
try:
date_parsed = parse_isodate(date)
except ValueError:
self.add_message('ansible-invalid-deprecated-date', node=node, args=(date,))
return
if date_parsed < datetime.date.today():
self.add_message('ansible-deprecated-date', node=node, args=(date,))
def _check_version(self, node, version, collection_name):
if not isinstance(version, (str, float)):
self.add_message('invalid-version', node=node, args=(version,))
return
version_no = str(version)
if collection_name == 'ansible.builtin':
# Ansible-base
try:
if not version_no:
raise ValueError('Version string should not be empty')
loose_version = LooseVersion(str(version_no))
if ANSIBLE_VERSION >= loose_version:
self.add_message('ansible-deprecated-version', node=node, args=(version,))
except ValueError:
self.add_message('ansible-invalid-deprecated-version', node=node, args=(version,))
elif collection_name:
# Collections
try:
if not version_no:
raise ValueError('Version string should not be empty')
semantic_version = SemanticVersion(version_no)
if collection_name == self.collection_name and self.collection_version is not None:
if self.collection_version >= semantic_version:
self.add_message('collection-deprecated-version', node=node, args=(version,))
if semantic_version.major != 0 and (semantic_version.minor != 0 or semantic_version.patch != 0):
self.add_message('removal-version-must-be-major', node=node, args=(version,))
except ValueError:
self.add_message('collection-invalid-deprecated-version', node=node, args=(version,))
@check_messages(*(MSGS.keys()))
def visit_call(self, node):
version = None
date = None
collection_name = None
try:
if (node.func.attrname == 'deprecated' and 'display' in _get_expr_name(node) or
node.func.attrname == 'deprecate' and _get_expr_name(node)):
if node.keywords:
for keyword in node.keywords:
if len(node.keywords) == 1 and keyword.arg is None:
# This is likely a **kwargs splat
return
if keyword.arg == 'version':
if isinstance(keyword.value.value, astroid.Name):
# This is likely a variable
return
version = keyword.value.value
if keyword.arg == 'date':
if isinstance(keyword.value.value, astroid.Name):
# This is likely a variable
return
date = keyword.value.value
if keyword.arg == 'collection_name':
if isinstance(keyword.value.value, astroid.Name):
# This is likely a variable
return
collection_name = keyword.value.value
if not version and not date:
try:
version = node.args[1].value
except IndexError:
self.add_message('ansible-deprecated-no-version', node=node)
return
if version and date:
self.add_message('ansible-deprecated-both-version-and-date', node=node)
if collection_name:
this_collection = collection_name == (self.collection_name or 'ansible.builtin')
if not this_collection:
self.add_message('wrong-collection-deprecated', node=node, args=(collection_name,))
else:
self.add_message('ansible-deprecated-no-collection-name', node=node)
if date:
self._check_date(node, date)
elif version:
self._check_version(node, version, collection_name)
except AttributeError:
# Not the type of node we are interested in
pass
def register(linter):
"""required method to auto register this checker """
linter.register_checker(AnsibleDeprecatedChecker(linter))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,006 |
include_role public=true defaults exposed to a play. Not among plays in a playbook.
|
##### SUMMARY
The description of the include_role parameter *public* says "... vars and defaults are exposed to the playbook." In fact, the variables are not exposed to other plays in the playbook.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
include_role
##### ANSIBLE VERSION
devel
##### CONFIGURATION
No changes. All defaults.
##### OS / ENVIRONMENT
> cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.1 LTS"
##### STEPS TO REPRODUCE
https://gist.github.com/vbotka/8a435e917bccc655e08cbd11b43eabd4
##### EXPECTED RESULTS
The documentation says "exposed to the playbook", which creates the expectation that the variables are available to other plays in the playbook.
```paste below
PLAY [two] ***********************************************************************************
TASK [role-two : debug] **********************************************************************
ok: [two] => {
"variable_one": "Role One Variable"
}
PLAY [one] ***********************************************************************************
TASK [role-two : debug] **********************************************************************
ok: [one] => {
"variable_one": "Role One Variable"
}
```
##### ACTUAL RESULTS
The variables are not available to other plays in the playbook.
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [two] ***********************************************************************************
TASK [role-two : debug] **********************************************************************
ok: [two] => {
"variable_one": "VARIABLE IS NOT DEFINED!"
}
PLAY [one] ***********************************************************************************
TASK [role-two : debug] **********************************************************************
ok: [one] => {
"variable_one": "VARIABLE IS NOT DEFINED!"
}
```
|
https://github.com/ansible/ansible/issues/73006
|
https://github.com/ansible/ansible/pull/73011
|
8e022ef00a6476f9540c5775d7007ce1bca91f9d
|
13bf04e95a829e40b3f078b6956989bb080ceb2a
| 2020-12-17T11:58:54Z |
python
| 2020-12-17T19:28:16Z |
lib/ansible/modules/import_role.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
author: Ansible Core Team (@ansible)
module: import_role
short_description: Import a role into a play
description:
- Much like the C(roles:) keyword, this task loads a role, but it allows you to control when the role tasks run in
between other tasks of the play.
- Most keywords, loops and conditionals will only be applied to the imported tasks, not to this statement itself. If
you want the opposite behavior, use M(ansible.builtin.include_role) instead.
- Does not work in handlers.
version_added: '2.4'
options:
name:
description:
- The name of the role to be executed.
type: str
required: true
tasks_from:
description:
- File to load from a role's C(tasks/) directory.
type: str
default: main
vars_from:
description:
- File to load from a role's C(vars/) directory.
type: str
default: main
defaults_from:
description:
- File to load from a role's C(defaults/) directory.
type: str
default: main
allow_duplicates:
description:
- Overrides the role's metadata setting to allow using a role more than once with the same parameters.
type: bool
default: yes
handlers_from:
description:
- File to load from a role's C(handlers/) directory.
type: str
default: main
version_added: '2.8'
notes:
- Handlers are made available to the whole play.
- Since Ansible 2.7 variables defined in C(vars) and C(defaults) for the role are exposed at playbook parsing time.
Due to this, these variables will be accessible to roles and tasks executed before the location of the
M(ansible.builtin.import_role) task.
- Unlike M(ansible.builtin.include_role) variable exposure is not configurable, and will always be exposed.
seealso:
- module: ansible.builtin.import_playbook
- module: ansible.builtin.import_tasks
- module: ansible.builtin.include_role
- module: ansible.builtin.include_tasks
- ref: playbooks_reuse_includes
description: More information related to including and importing playbooks, roles and tasks.
'''
EXAMPLES = r'''
- hosts: all
tasks:
- import_role:
name: myrole
- name: Run tasks/other.yaml instead of 'main'
import_role:
name: myrole
tasks_from: other
- name: Pass variables to role
import_role:
name: myrole
vars:
rolevar1: value from task
- name: Apply condition to each task in role
import_role:
name: myrole
when: not idontwanttorun
'''
RETURN = r'''
# This module does not return anything except tasks to execute.
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,006 |
include_role public=true defaults exposed to a play. Not among plays in a playbook.
|
##### SUMMARY
The description of the include_role parameter *public* says "... vars and defaults are exposed to the playbook." In fact, the variables are not exposed to other plays in the playbook.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
include_role
##### ANSIBLE VERSION
devel
##### CONFIGURATION
No changes. All defaults.
##### OS / ENVIRONMENT
> cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.1 LTS"
##### STEPS TO REPRODUCE
https://gist.github.com/vbotka/8a435e917bccc655e08cbd11b43eabd4
##### EXPECTED RESULTS
The documentation says "exposed to the playbook". This evokes the expectation that the variables are available to other plays in the playbook.
```paste below
PLAY [two] ***********************************************************************************
TASK [role-two : debug] **********************************************************************
ok: [two] => {
"variable_one": "Role One Variable"
}
PLAY [one] ***********************************************************************************
TASK [role-two : debug] **********************************************************************
ok: [one] => {
"variable_one": "Role One Variable"
}
```
##### ACTUAL RESULTS
The variables are not available to other plays in the playbook.
```paste below
PLAY [two] ***********************************************************************************
TASK [role-two : debug] **********************************************************************
ok: [two] => {
"variable_one": "VARIABLE IS NOT DEFINED!"
}
PLAY [one] ***********************************************************************************
TASK [role-two : debug] **********************************************************************
ok: [one] => {
"variable_one": "VARIABLE IS NOT DEFINED!"
}
```
|
https://github.com/ansible/ansible/issues/73006
|
https://github.com/ansible/ansible/pull/73011
|
8e022ef00a6476f9540c5775d7007ce1bca91f9d
|
13bf04e95a829e40b3f078b6956989bb080ceb2a
| 2020-12-17T11:58:54Z |
python
| 2020-12-17T19:28:16Z |
lib/ansible/modules/include_role.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
author: Ansible Core Team (@ansible)
module: include_role
short_description: Load and execute a role
description:
- Dynamically loads and executes a specified role as a task.
- May be used only where Ansible tasks are allowed - inside C(pre_tasks), C(tasks), or C(post_tasks) playbook objects, or as a task inside a role.
- Task-level keywords, loops, and conditionals apply only to the C(include_role) statement itself.
- To apply keywords to the tasks within the role, pass them using the C(apply) option or use M(ansible.builtin.import_role) instead.
- Ignores some keywords, like C(until) and C(retries).
- This module is also supported for Windows targets.
- Does not work in handlers.
version_added: "2.2"
options:
apply:
description:
- Accepts a hash of task keywords (e.g. C(tags), C(become)) that will be applied to all tasks within the included role.
version_added: '2.7'
name:
description:
- The name of the role to be executed.
type: str
required: True
tasks_from:
description:
- File to load from a role's C(tasks/) directory.
type: str
default: main
vars_from:
description:
- File to load from a role's C(vars/) directory.
type: str
default: main
defaults_from:
description:
- File to load from a role's C(defaults/) directory.
type: str
default: main
allow_duplicates:
description:
- Overrides the role's metadata setting to allow using a role more than once with the same parameters.
type: bool
default: yes
public:
description:
- This option dictates whether the role's C(vars) and C(defaults) are exposed to the playbook. If set to C(yes)
the variables will be available to tasks following the C(include_role) task. This functionality differs from
standard variable exposure for roles listed under the C(roles) header or C(import_role) as they are exposed at
playbook parsing time, and available to earlier roles and tasks as well.
type: bool
default: no
version_added: '2.7'
handlers_from:
description:
- File to load from a role's C(handlers/) directory.
type: str
default: main
version_added: '2.8'
notes:
- Handlers are made available to the whole play.
- Before Ansible 2.4, as with C(include), this task could be static or dynamic. If static, it implied that it would not
need templating, loops or conditionals, and the included tasks would show up in the C(--list) options. Ansible would try to
autodetect what was needed, but you could set C(static) to C(yes) or C(no) at the task level to control this.
- After Ansible 2.4, you can use M(ansible.builtin.import_role) for C(static) behaviour and this action for C(dynamic) behaviour.
seealso:
- module: ansible.builtin.import_playbook
- module: ansible.builtin.import_role
- module: ansible.builtin.import_tasks
- module: ansible.builtin.include_tasks
- ref: playbooks_reuse_includes
description: More information related to including and importing playbooks, roles and tasks.
'''
EXAMPLES = r'''
- include_role:
name: myrole
- name: Run tasks/other.yaml instead of 'main'
include_role:
name: myrole
tasks_from: other
- name: Pass variables to role
include_role:
name: myrole
vars:
rolevar1: value from task
- name: Use role in loop
include_role:
name: '{{ roleinputvar }}'
loop:
- '{{ roleinput1 }}'
- '{{ roleinput2 }}'
loop_control:
loop_var: roleinputvar
- name: Conditional role
include_role:
name: myrole
when: not idontwanttorun
- name: Apply tags to tasks within included file
include_role:
name: install
apply:
tags:
- install
tags:
- always
'''
RETURN = r'''
# This module does not return anything except tasks to execute.
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,156 |
Ansible on Windows 10 WSL running Ubuntu 20
|
##### SUMMARY
When running Ubuntu under Windows 10 WSL you may encounter odd timeout issues. This is due to a bug in the Ubuntu/WSL release where /usr/bin/sleep isn't returning correctly. This is obscure enough that calling it out in your install documentation may save you many issues being reported. This is for Ubuntu 20.04.x
There are a couple of workarounds, but looking through the issues trying to solve this I realized many actually go back to this issue.
The workaround that worked best for me is:
```
mv /usr/bin/sleep /usr/bin/sleep.dist
ln -s /bin/true /usr/bin/sleep
```
If running Windows 10 later than build 2004 you may be able to switch to WSL 2 to solve this:
```
wsl --set-default-version 2
```
I have seen some posts replacing /usr/bin/sleep with a python script but that didn't work for me in all instances. Something like this:
```
#!/usr/bin/env python3
import sys
import time
time.sleep(int(sys.argv[1]))
```
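To check whether a given environment is affected, timing a sleep call from Python is a quick test (a rough sketch; the 2x threshold is an arbitrary assumption):
```
#!/usr/bin/env python3
# Rough check for the WSL sleep hang; if this hangs, you are definitely affected.
import subprocess
import time

start = time.monotonic()
subprocess.run(["/usr/bin/sleep", "1"])  # should return after roughly one second
elapsed = time.monotonic() - start
print("sleep 1 took %.1fs" % elapsed)
if elapsed > 2:
    print("likely affected by the WSL sleep bug")
```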
##### ANSIBLE VERSION
2.9.6 (actually all but this is the version I am using)
##### ISSUE TYPE
- Documentation Report
##### OS / ENVIRONMENT
windows 10 with Ubuntu running in WSL
##### COMPONENT NAME
ansible-playbook
|
https://github.com/ansible/ansible/issues/72156
|
https://github.com/ansible/ansible/pull/72839
|
c1dadfdadc68a6c9f0c3f9f8fe22a34b5290a45c
|
8fdb5ace01c096d5c9a8522168e479ef1b11ebcb
| 2020-10-08T18:54:42Z |
python
| 2020-12-18T16:49:56Z |
docs/docsite/rst/user_guide/windows_faq.rst
|
.. _windows_faq:
Windows Frequently Asked Questions
==================================
Here are some commonly asked questions in regards to Ansible and Windows and
their answers.
.. note:: This document covers questions about managing Microsoft Windows servers with Ansible.
For questions about Ansible Core, please see the
:ref:`general FAQ page <ansible_faq>`.
Does Ansible work with Windows XP or Server 2003?
``````````````````````````````````````````````````
Ansible does not work with Windows XP or Server 2003 hosts. Ansible does work with these Windows operating system versions:
* Windows Server 2008 :sup:`1`
* Windows Server 2008 R2 :sup:`1`
* Windows Server 2012
* Windows Server 2012 R2
* Windows Server 2016
* Windows Server 2019
* Windows 7 :sup:`1`
* Windows 8.1
* Windows 10
1 - See the :ref:`Server 2008 FAQ <windows_faq_server2008>` entry for more details.
Ansible also has minimum PowerShell version requirements - please see
:ref:`windows_setup` for the latest information.
.. _windows_faq_server2008:
Are Server 2008, 2008 R2 and Windows 7 supported?
`````````````````````````````````````````````````
Microsoft ended Extended Support for these versions of Windows on January 14th, 2020, and Ansible deprecated official support in the 2.10 release. No new feature development will occur targeting these operating systems, and automated testing has ceased. However, existing modules and features will likely continue to work, and simple pull requests to resolve issues with these Windows versions may be accepted.
Can I manage Windows Nano Server with Ansible?
``````````````````````````````````````````````
Ansible does not currently work with Windows Nano Server, since it does
not have access to the full .NET Framework that is used by the majority of the
modules and internal components.
Can Ansible run on Windows?
```````````````````````````
No, Ansible can only manage Windows hosts. Ansible cannot run on a Windows host
natively, though it can run under the Windows Subsystem for Linux (WSL).
.. note:: The Windows Subsystem for Linux is not supported by Ansible and
should not be used for production systems.
To install Ansible on WSL, the following commands
can be run in the bash terminal:
.. code-block:: shell
sudo apt-get update
sudo apt-get install python-pip git libffi-dev libssl-dev -y
pip install --user ansible pywinrm
To run Ansible from source instead of a release on the WSL, simply uninstall the pip
installed version and then clone the git repo.
.. code-block:: shell
pip uninstall ansible -y
git clone https://github.com/ansible/ansible.git
source ansible/hacking/env-setup
# To enable Ansible on login, run the following
echo ". ~/ansible/hacking/env-setup -q' >> ~/.bashrc
Can I use SSH keys to authenticate to Windows hosts?
````````````````````````````````````````````````````
You cannot use SSH keys with the WinRM or PSRP connection plugins.
These connection plugins use X509 certificates for authentication instead
of the SSH key pairs that SSH uses.
The way X509 certificates are generated and mapped to a user is different
from the SSH implementation; consult the :ref:`windows_winrm` documentation for
more information.
Ansible 2.8 has added an experimental option to use the SSH connection plugin,
which uses SSH keys for authentication, for Windows servers. See :ref:`this question <windows_faq_ssh>`
for more information.
.. _windows_faq_winrm:
Why can I run a command locally that does not work under Ansible?
`````````````````````````````````````````````````````````````````
Ansible executes commands through WinRM. These processes are different from
running a command locally in these ways:
* Unless using an authentication option like CredSSP or Kerberos with
credential delegation, the WinRM process does not have the ability to
delegate the user's credentials to a network resource, causing ``Access is
Denied`` errors.
* All processes run under WinRM are in a non-interactive session. Applications
that require an interactive session will not work.
* When running through WinRM, Windows restricts access to internal Windows
APIs like the Windows Update API and DPAPI, which some installers and
programs rely on.
Some ways to bypass these restrictions are to:
* Use ``become``, which runs a command as it would when run locally. This will
bypass most WinRM restrictions, as Windows is unaware the process is running
under WinRM when ``become`` is used. See the :ref:`become` documentation for more
information.
* Use a scheduled task, which can be created with ``win_scheduled_task``. Like
``become``, it will bypass all WinRM restrictions, but it can only be used to run
commands, not modules.
* Use ``win_psexec`` to run a command on the host. PSExec does not use WinRM
and so will bypass any of the restrictions.
* To access network resources without any of these workarounds, you can use
CredSSP or Kerberos with credential delegation enabled.
See :ref:`become` for more info on how to use become. The limitations section at
:ref:`windows_winrm` has more details around WinRM limitations.
This program won't install on Windows with Ansible
``````````````````````````````````````````````````
See :ref:`this question <windows_faq_winrm>` for more information about WinRM limitations.
What Windows modules are available?
```````````````````````````````````
Most of the Ansible modules in Ansible Core are written for a combination of
Linux/Unix machines and arbitrary web services. These modules are written in
Python and most of them do not work on Windows.
Because of this, there are dedicated Windows modules that are written in
PowerShell and are meant to be run on Windows hosts. A list of these modules
can be found :ref:`here <windows_modules>`.
In addition, the following Ansible Core modules/action-plugins work with Windows:
* add_host
* assert
* async_status
* debug
* fail
* fetch
* group_by
* include
* include_role
* include_vars
* meta
* pause
* raw
* script
* set_fact
* set_stats
* setup
* slurp
* template (also: win_template)
* wait_for_connection
Can I run Python modules on Windows hosts?
``````````````````````````````````````````
No, the WinRM connection protocol is set to use PowerShell modules, so Python
modules will not work. A way to bypass this issue is to use
``delegate_to: localhost`` to run a Python module on the Ansible controller.
This is useful if during a playbook, an external service needs to be contacted
and there is no equivalent Windows module available.
.. _windows_faq_ssh:
Can I connect to Windows hosts over SSH?
````````````````````````````````````````
Ansible 2.8 has added an experimental option to use the SSH connection plugin
to manage Windows hosts. To connect to Windows hosts over SSH, you must install and configure the `Win32-OpenSSH <https://github.com/PowerShell/Win32-OpenSSH>`_
fork that is in development with Microsoft on
the Windows host(s). While most of the basics should work with SSH,
``Win32-OpenSSH`` is rapidly changing, with new features added and bugs
fixed in every release. It is highly recommended that you `install <https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH>`_ the latest release
of ``Win32-OpenSSH`` from the GitHub Releases page when using it with Ansible
on Windows hosts.
To use SSH as the connection to a Windows host, set the following variables in
the inventory::
ansible_connection=ssh
# Set either cmd or powershell, not both
ansible_shell_type=cmd
# ansible_shell_type=powershell
The value for ``ansible_shell_type`` should either be ``cmd`` or ``powershell``.
Use ``cmd`` if the ``DefaultShell`` has not been configured on the SSH service
and ``powershell`` if that has been set as the ``DefaultShell``.
Why is connecting to a Windows host via SSH failing?
````````````````````````````````````````````````````
Unless you are using ``Win32-OpenSSH`` as described above, you must connect to
Windows hosts using :ref:`windows_winrm`. If your Ansible output indicates that
SSH was used, either you did not set the connection vars properly or the host is not inheriting them correctly.
Make sure ``ansible_connection: winrm`` is set in the inventory for the Windows
host(s).
Why are my credentials being rejected?
``````````````````````````````````````
This can be due to a myriad of reasons unrelated to incorrect credentials.
See HTTP 401/Credentials Rejected at :ref:`windows_setup` for a more detailed
guide of what this could mean.
Why am I getting an error SSL CERTIFICATE_VERIFY_FAILED?
````````````````````````````````````````````````````````
When the Ansible controller is running on Python 2.7.9+ or an older version of Python that
has backported SSLContext (like Python 2.7.5 on RHEL 7), the controller will attempt to
validate the certificate WinRM is using for an HTTPS connection. If the
certificate cannot be validated (such as in the case of a self-signed cert), it will
fail the verification process.
To ignore certificate validation, add
``ansible_winrm_server_cert_validation: ignore`` to inventory for the Windows
host.
.. seealso::
:ref:`windows`
The Windows documentation index
:ref:`about_playbooks`
An introduction to playbooks
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,981 |
get_url does not respect unsafe_writes
|
##### SUMMARY
`get_url` inherits from `add_file_common_args`, which includes `unsafe_writes`, however our call to `atomic_move` does not pass this argument along.
https://github.com/ansible/ansible/blob/5226ac5778d3b57296b925de5d4ad0b485bb11cd/lib/ansible/modules/get_url.py#L463
https://github.com/ansible/ansible/blob/5226ac5778d3b57296b925de5d4ad0b485bb11cd/lib/ansible/modules/get_url.py#L633
```diff
diff --git a/lib/ansible/modules/get_url.py b/lib/ansible/modules/get_url.py
index 6ac370c2ef..c29d69e5c9 100644
--- a/lib/ansible/modules/get_url.py
+++ b/lib/ansible/modules/get_url.py
@@ -630,7 +630,7 @@ def main():
if backup:
if os.path.exists(dest):
backup_file = module.backup_local(dest)
- module.atomic_move(tmpsrc, dest)
+ module.atomic_move(tmpsrc, dest, unsafe_writes=module.params['unsafe_writes'])
except Exception as e:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/get_url.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9
2.10
2.11
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
##### STEPS TO REPRODUCE
```yaml
```
##### EXPECTED RESULTS
##### ACTUAL RESULTS
```paste below
```
|
https://github.com/ansible/ansible/issues/72981
|
https://github.com/ansible/ansible/pull/70722
|
202689b1c0560b68a93e93d0a250ea186a8e3e1a
|
932ba3616067007fd5e449611a34e7e3837fc8ae
| 2020-12-15T17:51:02Z |
python
| 2020-12-21T16:20:52Z |
lib/ansible/module_utils/basic.py
|
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013
# Copyright (c), Toshio Kuratomi <[email protected]> 2016
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
FILE_ATTRIBUTES = {
'A': 'noatime',
'a': 'append',
'c': 'compressed',
'C': 'nocow',
'd': 'nodump',
'D': 'dirsync',
'e': 'extents',
'E': 'encrypted',
'h': 'blocksize',
'i': 'immutable',
'I': 'indexed',
'j': 'journalled',
'N': 'inline',
's': 'zero',
'S': 'synchronous',
't': 'notail',
'T': 'blockroot',
'u': 'undelete',
'X': 'compressedraw',
'Z': 'compresseddirty',
}
# Ansible modules can be written in any language.
# The functions available here can be used to do many common tasks,
# to simplify development of Python modules.
import __main__
import atexit
import errno
import datetime
import grp
import fcntl
import locale
import os
import pwd
import platform
import re
import select
import shlex
import shutil
import signal
import stat
import subprocess
import sys
import tempfile
import time
import traceback
import types
from collections import deque
from itertools import chain, repeat
try:
import syslog
HAS_SYSLOG = True
except ImportError:
HAS_SYSLOG = False
try:
from systemd import journal
# Double check that journal has method sendv() (some packages don't)
has_journal = hasattr(journal, 'sendv')
except ImportError:
has_journal = False
HAVE_SELINUX = False
try:
import selinux
HAVE_SELINUX = True
except ImportError:
pass
# Python2 & 3 way to get NoneType
NoneType = type(None)
from ansible.module_utils.compat import selectors
from ._text import to_native, to_bytes, to_text
from ansible.module_utils.common.text.converters import (
jsonify,
container_to_bytes as json_dict_unicode_to_bytes,
container_to_text as json_dict_bytes_to_unicode,
)
from ansible.module_utils.common.text.formatters import (
lenient_lowercase,
bytes_to_human,
human_to_bytes,
SIZE_RANGES,
)
try:
from ansible.module_utils.common._json_compat import json
except ImportError as e:
print('\n{{"msg": "Error: ansible requires the stdlib json: {0}", "failed": true}}'.format(to_native(e)))
sys.exit(1)
AVAILABLE_HASH_ALGORITHMS = dict()
try:
import hashlib
# python 2.7.9+ and 2.7.0+
for attribute in ('available_algorithms', 'algorithms'):
algorithms = getattr(hashlib, attribute, None)
if algorithms:
break
if algorithms is None:
# python 2.5+
algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512')
for algorithm in algorithms:
AVAILABLE_HASH_ALGORITHMS[algorithm] = getattr(hashlib, algorithm)
# we may have been able to import md5 but it could still not be available
try:
hashlib.md5()
except ValueError:
AVAILABLE_HASH_ALGORITHMS.pop('md5', None)
except Exception:
import sha
AVAILABLE_HASH_ALGORITHMS = {'sha1': sha.sha}
try:
import md5
AVAILABLE_HASH_ALGORITHMS['md5'] = md5.md5
except Exception:
pass
from ansible.module_utils.common._collections_compat import (
KeysView,
Mapping, MutableMapping,
Sequence, MutableSequence,
Set, MutableSet,
)
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.file import (
_PERM_BITS as PERM_BITS,
_EXEC_PERM_BITS as EXEC_PERM_BITS,
_DEFAULT_PERM as DEFAULT_PERM,
is_executable,
format_attributes,
get_flags_from_attributes,
)
from ansible.module_utils.common.sys_info import (
get_distribution,
get_distribution_version,
get_platform_subclass,
)
from ansible.module_utils.pycompat24 import get_exception, literal_eval
from ansible.module_utils.common.parameters import (
get_unsupported_parameters,
get_type_validator,
handle_aliases,
list_deprecations,
list_no_log_values,
DEFAULT_TYPE_VALIDATORS,
PASS_VARS,
PASS_BOOLS,
)
from ansible.module_utils.six import (
PY2,
PY3,
b,
binary_type,
integer_types,
iteritems,
string_types,
text_type,
)
from ansible.module_utils.six.moves import map, reduce, shlex_quote
from ansible.module_utils.common.validation import (
check_missing_parameters,
check_mutually_exclusive,
check_required_arguments,
check_required_by,
check_required_if,
check_required_one_of,
check_required_together,
count_terms,
check_type_bool,
check_type_bits,
check_type_bytes,
check_type_float,
check_type_int,
check_type_jsonarg,
check_type_list,
check_type_dict,
check_type_path,
check_type_raw,
check_type_str,
safe_eval,
)
from ansible.module_utils.common._utils import get_all_subclasses as _get_all_subclasses
from ansible.module_utils.parsing.convert_bool import BOOLEANS, BOOLEANS_FALSE, BOOLEANS_TRUE, boolean
from ansible.module_utils.common.warnings import (
deprecate,
get_deprecation_messages,
get_warning_messages,
warn,
)
# Note: When getting Sequence from collections, it matches with strings. If
# this matters, make sure to check for strings before checking for sequencetype
SEQUENCETYPE = frozenset, KeysView, Sequence
PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wrd|wd)?)(?:[-_\s].+)?$', re.I)
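# A quick illustration of what PASSWORD_MATCH catches (a sketch; the sample
# strings are arbitrary, not part of the module):
#
#   PASSWORD_MATCH.match('db_password')   # matches: 'pass' delimited by '_'
#   PASSWORD_MATCH.match('ssh pass')      # matches: delimited by whitespace
#   PASSWORD_MATCH.match('passage')       # no match: 'pass' is not delimited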
imap = map
try:
# Python 2
unicode
except NameError:
# Python 3
unicode = text_type
try:
# Python 2
basestring
except NameError:
# Python 3
basestring = string_types
_literal_eval = literal_eval
# End of deprecated names
# Internal global holding passed in params. This is consulted in case
# multiple AnsibleModules are created. Otherwise each AnsibleModule would
# attempt to read from stdin. Other code should not use this directly as it
# is an internal implementation detail
_ANSIBLE_ARGS = None
FILE_COMMON_ARGUMENTS = dict(
# Common arguments for setting metadata (mode, ownership, permissions in general) on
# created files (these are used by set_fs_attributes_if_different and included in
# load_file_common_arguments)
mode=dict(type='raw'),
owner=dict(type='str'),
group=dict(type='str'),
seuser=dict(type='str'),
serole=dict(type='str'),
selevel=dict(type='str'),
setype=dict(type='str'),
attributes=dict(type='str', aliases=['attr']),
unsafe_writes=dict(type='bool', default=False), # should be available to any module using atomic_move
)
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
# Used for parsing symbolic file perms
MODE_OPERATOR_RE = re.compile(r'[+=-]')
USERS_RE = re.compile(r'[^ugo]')
PERMS_RE = re.compile(r'[^rwxXstugo]')
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
_PY3_MIN = sys.version_info[:2] >= (3, 5)
_PY2_MIN = (2, 6) <= sys.version_info[:2] < (3,)
_PY_MIN = _PY3_MIN or _PY2_MIN
if not _PY_MIN:
print(
'\n{"failed": true, '
'"msg": "Ansible requires a minimum of Python2 version 2.6 or Python3 version 3.5. Current version: %s"}' % ''.join(sys.version.splitlines())
)
sys.exit(1)
#
# Deprecated functions
#
def get_platform():
'''
**Deprecated** Use :py:func:`platform.system` directly.
:returns: Name of the platform the module is running on in a native string
Returns a native string that labels the platform ("Linux", "Solaris", etc). Currently, this is
the result of calling :py:func:`platform.system`.
'''
return platform.system()
# End deprecated functions
#
# Compat shims
#
def load_platform_subclass(cls, *args, **kwargs):
"""**Deprecated**: Use ansible.module_utils.common.sys_info.get_platform_subclass instead"""
platform_cls = get_platform_subclass(cls)
return super(cls, platform_cls).__new__(platform_cls)
def get_all_subclasses(cls):
"""**Deprecated**: Use ansible.module_utils.common._utils.get_all_subclasses instead"""
return list(_get_all_subclasses(cls))
# End compat shims
def _remove_values_conditions(value, no_log_strings, deferred_removals):
"""
Helper function for :meth:`remove_values`.
:arg value: The value to check for strings that need to be stripped
:arg no_log_strings: set of strings which must be stripped out of any values
:arg deferred_removals: List which holds information about nested
containers that have to be iterated for removals. It is passed into
this function so that more entries can be added to it if value is
a container type. The format of each entry is a 2-tuple where the first
element is the ``value`` parameter and the second value is a new
container to copy the elements of ``value`` into once iterated.
:returns: if ``value`` is a scalar, returns ``value`` with two exceptions:
1. :class:`~datetime.datetime` objects which are changed into a string representation.
2. objects which are in no_log_strings are replaced with a placeholder
so that no sensitive data is leaked.
If ``value`` is a container type, returns a new empty container.
``deferred_removals`` is added to as a side-effect of this function.
.. warning:: It is up to the caller to make sure the order in which value
is passed in is correct. For instance, higher level containers need
to be passed in before lower level containers. For example, given
``{'level1': {'level2': {'level3': [True]}}}`` first pass in the
dictionary for ``level1``, then the dict for ``level2``, and finally
the list for ``level3``.
"""
if isinstance(value, (text_type, binary_type)):
# Need native str type
native_str_value = value
if isinstance(value, text_type):
value_is_text = True
if PY2:
native_str_value = to_bytes(value, errors='surrogate_or_strict')
elif isinstance(value, binary_type):
value_is_text = False
if PY3:
native_str_value = to_text(value, errors='surrogate_or_strict')
if native_str_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
native_str_value = native_str_value.replace(omit_me, '*' * 8)
if value_is_text and isinstance(native_str_value, binary_type):
value = to_text(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
elif not value_is_text and isinstance(native_str_value, text_type):
value = to_bytes(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
else:
value = native_str_value
elif isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
stringy_value = to_native(value, encoding='utf-8', errors='surrogate_or_strict')
if stringy_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
if omit_me in stringy_value:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
elif isinstance(value, (datetime.datetime, datetime.date)):
value = value.isoformat()
else:
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
return value
def remove_values(value, no_log_strings):
""" Remove strings in no_log_strings from value. If value is a container
type, then recursively remove them from its contents as well.
Use of deferred_removals exists, rather than a pure recursive solution,
because of the potential to hit the maximum recursion depth when dealing with
large amounts of data (see issue #24560).
"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _remove_values_conditions(value, no_log_strings, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
new_elem = _remove_values_conditions(old_elem, no_log_strings, deferred_removals)
new_data[old_key] = new_elem
else:
for elem in old_data:
new_elem = _remove_values_conditions(elem, no_log_strings, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from output')
return new_value
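# A minimal usage sketch for remove_values (illustrative values only, not
# part of the module):
#
#   result = {'msg': 'connected with secret123', 'attempts': 1}
#   remove_values(result, {'secret123'})
#   # -> {'msg': 'connected with ********', 'attempts': 1}
#
# Exact scalar matches are replaced with
# 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', while substring matches inside
# longer strings become eight asterisks.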
def _sanitize_keys_conditions(value, no_log_strings, ignore_keys, deferred_removals):
""" Helper method to sanitize_keys() to build deferred_removals and avoid deep recursion. """
if isinstance(value, (text_type, binary_type)):
return value
if isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
return value
if isinstance(value, (datetime.datetime, datetime.date)):
return value
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
def sanitize_keys(obj, no_log_strings, ignore_keys=frozenset()):
""" Sanitize the keys in a container object by removing no_log values from key names.
This is a companion function to the `remove_values()` function. Similar to that function,
we make use of deferred_removals to avoid hitting maximum recursion depth in cases of
large data structures.
:param obj: The container object to sanitize. Non-container objects are returned unmodified.
:param no_log_strings: A set of string values we do not want logged.
:param ignore_keys: A set of string values of keys to not sanitize.
:returns: An object with sanitized keys.
"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _sanitize_keys_conditions(obj, no_log_strings, ignore_keys, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
if old_key in ignore_keys or old_key.startswith('_ansible'):
new_data[old_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals)
else:
# Sanitize the old key. We take advantage of the sanitizing code in
# _remove_values_conditions() rather than recreating it here.
new_key = _remove_values_conditions(old_key, no_log_strings, None)
new_data[new_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals)
else:
for elem in old_data:
new_elem = _sanitize_keys_conditions(elem, no_log_strings, ignore_keys, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from keys')
return new_value
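# A minimal usage sketch for sanitize_keys (illustrative values only). Note
# that only the keys are sanitized here; values are left to remove_values():
#
#   sanitize_keys({'secret123_token': 'abc'}, {'secret123'})
#   # -> {'********_token': 'abc'}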
def heuristic_log_sanitize(data, no_log_values=None):
''' Remove strings that look like passwords from log messages '''
# Currently filters:
# user:pass@foo/whatever and http://username:pass@wherever/foo
# This code has false positives and consumes parts of logs that are
# not passwds
# begin: start of a passwd containing string
# end: end of a passwd containing string
# sep: char between user and passwd
# prev_begin: where in the overall string to start a search for
# a passwd
# sep_search_end: where in the string to end a search for the sep
data = to_native(data)
output = []
begin = len(data)
prev_begin = begin
sep = 1
while sep:
# Find the potential end of a passwd
try:
end = data.rindex('@', 0, begin)
except ValueError:
# No passwd in the rest of the data
output.insert(0, data[0:begin])
break
# Search for the beginning of a passwd
sep = None
sep_search_end = end
while not sep:
# URL-style username+password
try:
begin = data.rindex('://', 0, sep_search_end)
except ValueError:
# No url style in the data, check for ssh style in the
# rest of the string
begin = 0
# Search for separator
try:
sep = data.index(':', begin + 3, end)
except ValueError:
# No separator; choices:
if begin == 0:
# Searched the whole string so there's no password
# here. Return the remaining data
output.insert(0, data[0:begin])
break
# Search for a different beginning of the password field.
sep_search_end = begin
continue
if sep:
# Password was found; remove it.
output.insert(0, data[end:prev_begin])
output.insert(0, '********')
output.insert(0, data[begin:sep + 1])
prev_begin = begin
output = ''.join(output)
if no_log_values:
output = remove_values(output, no_log_values)
return output
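# A minimal usage sketch (illustrative URL, not part of the module):
#
#   heuristic_log_sanitize('fetched https://user:[email protected]/repo')
#   # -> 'fetched https://user:********@example.com/repo'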
def _load_params():
''' read the modules parameters and store them globally.
This function may be needed for certain very dynamic custom modules which
want to process the parameters that are being handed the module. Since
this is so closely tied to the implementation of modules we cannot
guarantee API stability for it (it may change between versions) however we
will try not to break it gratuitously. It is certainly more future-proof
to call this function and consume its outputs than to implement the logic
inside it as a copy in your own code.
'''
global _ANSIBLE_ARGS
if _ANSIBLE_ARGS is not None:
buffer = _ANSIBLE_ARGS
else:
# debug overrides to read args from file or cmdline
# Avoid tracebacks when locale is non-utf8
# We control the args and we pass them as utf8
if len(sys.argv) > 1:
if os.path.isfile(sys.argv[1]):
fd = open(sys.argv[1], 'rb')
buffer = fd.read()
fd.close()
else:
buffer = sys.argv[1]
if PY3:
buffer = buffer.encode('utf-8', errors='surrogateescape')
# default case, read from stdin
else:
if PY2:
buffer = sys.stdin.read()
else:
buffer = sys.stdin.buffer.read()
_ANSIBLE_ARGS = buffer
try:
params = json.loads(buffer.decode('utf-8'))
except ValueError:
# This helper used too early for fail_json to work.
print('\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}')
sys.exit(1)
if PY2:
params = json_dict_unicode_to_bytes(params)
try:
return params['ANSIBLE_MODULE_ARGS']
except KeyError:
# This helper does not have access to fail_json so we have to print
# json output on our own.
print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS in json data from stdin. Unable to figure out what parameters were passed", '
'"failed": true}')
sys.exit(1)
def env_fallback(*args, **kwargs):
''' Load value from environment '''
for arg in args:
if arg in os.environ:
return os.environ[arg]
raise AnsibleFallbackNotFound
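# A minimal usage sketch for env_fallback inside an argument_spec (the
# option and environment variable names are hypothetical):
#
#   argument_spec = dict(
#       api_key=dict(type='str', no_log=True,
#                    fallback=(env_fallback, ['MYSERVICE_API_KEY'])),
#   )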
def missing_required_lib(library, reason=None, url=None):
hostname = platform.node()
msg = "Failed to import the required Python library (%s) on %s's Python %s." % (library, hostname, sys.executable)
if reason:
msg += " This is required %s." % reason
if url:
msg += " See %s for more info." % url
msg += (" Please read the module documentation and install it in the appropriate location."
" If the required library is installed, but Ansible is using the wrong Python interpreter,"
" please consult the documentation on ansible_python_interpreter")
return msg
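# A minimal usage sketch for missing_required_lib ('requests' is just an
# example library; assumes an instantiated AnsibleModule as `module`):
#
#   try:
#       import requests
#       HAS_REQUESTS = True
#   except ImportError:
#       HAS_REQUESTS = False
#
#   if not HAS_REQUESTS:
#       module.fail_json(msg=missing_required_lib('requests'))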
class AnsibleFallbackNotFound(Exception):
pass
class AnsibleModule(object):
def __init__(self, argument_spec, bypass_checks=False, no_log=False,
mutually_exclusive=None, required_together=None,
required_one_of=None, add_file_common_args=False,
supports_check_mode=False, required_if=None, required_by=None):
'''
Common code for quickly building an ansible module in Python
(although you can write modules with anything that can return JSON).
See :ref:`developing_modules_general` for a general introduction
and :ref:`developing_program_flow_modules` for more detailed explanation.
'''
self._name = os.path.basename(__file__) # initialize name until we can parse from options
self.argument_spec = argument_spec
self.supports_check_mode = supports_check_mode
self.check_mode = False
self.bypass_checks = bypass_checks
self.no_log = no_log
self.mutually_exclusive = mutually_exclusive
self.required_together = required_together
self.required_one_of = required_one_of
self.required_if = required_if
self.required_by = required_by
self.cleanup_files = []
self._debug = False
self._diff = False
self._socket_path = None
self._shell = None
self._syslog_facility = 'LOG_USER'
self._verbosity = 0
# May be used to set modifications to the environment for any
# run_command invocation
self.run_command_environ_update = {}
self._clean = {}
self._string_conversion_action = ''
self.aliases = {}
self._legal_inputs = []
self._options_context = list()
self._tmpdir = None
if add_file_common_args:
for k, v in FILE_COMMON_ARGUMENTS.items():
if k not in self.argument_spec:
self.argument_spec[k] = v
self._load_params()
self._set_fallbacks()
# append to legal_inputs and then possibly check against them
try:
self.aliases = self._handle_aliases()
except (ValueError, TypeError) as e:
# Use exceptions here because it isn't safe to call fail_json until no_log is processed
print('\n{"failed": true, "msg": "Module alias error: %s"}' % to_native(e))
sys.exit(1)
# Save parameter values that should never be logged
self.no_log_values = set()
self._handle_no_log_values()
# check the locale as set by the current environment, and reset to
# a known valid (LANG=C) if it's an invalid/unavailable locale
self._check_locale()
self._set_internal_properties()
self._check_arguments()
# check exclusive early
if not bypass_checks:
self._check_mutually_exclusive(mutually_exclusive)
self._set_defaults(pre=True)
# This is for backwards compatibility only.
self._CHECK_ARGUMENT_TYPES_DISPATCHER = DEFAULT_TYPE_VALIDATORS
if not bypass_checks:
self._check_required_arguments()
self._check_argument_types()
self._check_argument_values()
self._check_required_together(required_together)
self._check_required_one_of(required_one_of)
self._check_required_if(required_if)
self._check_required_by(required_by)
self._set_defaults(pre=False)
# deal with options sub-spec
self._handle_options()
if not self.no_log:
self._log_invocation()
# finally, make sure we're in a sane working dir
self._set_cwd()
@property
def tmpdir(self):
# if _ansible_tmpdir was not set and we have a remote_tmp,
# the module needs to create it and clean it up once finished.
# otherwise we create our own module tmp dir from the system defaults
if self._tmpdir is None:
basedir = None
if self._remote_tmp is not None:
basedir = os.path.expanduser(os.path.expandvars(self._remote_tmp))
if basedir is not None and not os.path.exists(basedir):
try:
os.makedirs(basedir, mode=0o700)
except (OSError, IOError) as e:
self.warn("Unable to use %s as temporary directory, "
"failing back to system: %s" % (basedir, to_native(e)))
basedir = None
else:
self.warn("Module remote_tmp %s did not exist and was "
"created with a mode of 0700, this may cause"
" issues when running as another user. To "
"avoid this, create the remote_tmp dir with "
"the correct permissions manually" % basedir)
basefile = "ansible-moduletmp-%s-" % time.time()
try:
tmpdir = tempfile.mkdtemp(prefix=basefile, dir=basedir)
except (OSError, IOError) as e:
self.fail_json(
msg="Failed to create remote module tmp path at dir %s "
"with prefix %s: %s" % (basedir, basefile, to_native(e))
)
if not self._keep_remote_files:
atexit.register(shutil.rmtree, tmpdir)
self._tmpdir = tmpdir
return self._tmpdir
def warn(self, warning):
warn(warning)
self.log('[WARNING] %s' % warning)
def deprecate(self, msg, version=None, date=None, collection_name=None):
if version is not None and date is not None:
raise AssertionError("implementation error -- version and date must not both be set")
deprecate(msg, version=version, date=date, collection_name=collection_name)
# For compatibility, we accept that neither version nor date is set,
# and treat that the same as if version had been set
if date is not None:
self.log('[DEPRECATION WARNING] %s %s' % (msg, date))
else:
self.log('[DEPRECATION WARNING] %s %s' % (msg, version))
def load_file_common_arguments(self, params, path=None):
'''
Many modules deal with files; this encapsulates common
options that the file module accepts so that they are directly
available to all modules, which can then share code.
Allows overriding the path/dest module argument by providing path.
'''
if path is None:
path = params.get('path', params.get('dest', None))
if path is None:
return {}
else:
path = os.path.expanduser(os.path.expandvars(path))
b_path = to_bytes(path, errors='surrogate_or_strict')
# if the path is a symlink, and we're following links, get
# the target of the link instead for testing
if params.get('follow', False) and os.path.islink(b_path):
b_path = os.path.realpath(b_path)
path = to_native(b_path)
mode = params.get('mode', None)
owner = params.get('owner', None)
group = params.get('group', None)
# selinux related options
seuser = params.get('seuser', None)
serole = params.get('serole', None)
setype = params.get('setype', None)
selevel = params.get('selevel', None)
secontext = [seuser, serole, setype]
if self.selinux_mls_enabled():
secontext.append(selevel)
default_secontext = self.selinux_default_context(path)
for i in range(len(default_secontext)):
if secontext[i] == '_default':
secontext[i] = default_secontext[i]
attributes = params.get('attributes', None)
return dict(
path=path, mode=mode, owner=owner, group=group,
seuser=seuser, serole=serole, setype=setype,
selevel=selevel, secontext=secontext, attributes=attributes,
)
# Detect whether using selinux that is MLS-aware.
# While this means you can set the level/range with
# selinux.lsetfilecon(), it may or may not mean that you
# will get the selevel as part of the context returned
# by selinux.lgetfilecon().
def selinux_mls_enabled(self):
if not HAVE_SELINUX:
return False
if selinux.is_selinux_mls_enabled() == 1:
return True
else:
return False
def selinux_enabled(self):
if not HAVE_SELINUX:
seenabled = self.get_bin_path('selinuxenabled')
if seenabled is not None:
(rc, out, err) = self.run_command(seenabled)
if rc == 0:
self.fail_json(msg="Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!")
return False
if selinux.is_selinux_enabled() == 1:
return True
else:
return False
# Determine whether we need a placeholder for selevel/mls
def selinux_initial_context(self):
context = [None, None, None]
if self.selinux_mls_enabled():
context.append(None)
return context
# If selinux fails to find a default, return an array of None
def selinux_default_context(self, path, mode=0):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.matchpathcon(to_native(path, errors='surrogate_or_strict'), mode)
except OSError:
return context
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def selinux_context(self, path):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict'))
except OSError as e:
if e.errno == errno.ENOENT:
self.fail_json(path=path, msg='path %s does not exist' % path)
else:
self.fail_json(path=path, msg='failed to retrieve selinux context')
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def user_and_group(self, path, expand=True):
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
st = os.lstat(b_path)
uid = st.st_uid
gid = st.st_gid
return (uid, gid)
def find_mount_point(self, path):
path_is_bytes = False
if isinstance(path, binary_type):
path_is_bytes = True
b_path = os.path.realpath(to_bytes(os.path.expanduser(os.path.expandvars(path)), errors='surrogate_or_strict'))
while not os.path.ismount(b_path):
b_path = os.path.dirname(b_path)
if path_is_bytes:
return b_path
return to_text(b_path, errors='surrogate_or_strict')
def is_special_selinux_path(self, path):
"""
Returns a tuple containing (True, selinux_context) if the given path is on an
NFS or other 'special' fs mount point, otherwise the return will be (False, None).
"""
try:
f = open('/proc/mounts', 'r')
mount_data = f.readlines()
f.close()
except Exception:
return (False, None)
path_mount_point = self.find_mount_point(path)
for line in mount_data:
(device, mount_point, fstype, options, rest) = line.split(' ', 4)
if to_bytes(path_mount_point) == to_bytes(mount_point):
for fs in self._selinux_special_fs:
if fs in fstype:
special_context = self.selinux_context(path_mount_point)
return (True, special_context)
return (False, None)
def set_default_selinux_context(self, path, changed):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
context = self.selinux_default_context(path)
return self.set_context_if_different(path, context, False)
def set_context_if_different(self, path, context, changed, diff=None):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
if self.check_file_absent_if_check_mode(path):
return True
cur_context = self.selinux_context(path)
new_context = list(cur_context)
# Iterate over the current context instead of the
# argument context, which may have selevel.
(is_special_se, sp_context) = self.is_special_selinux_path(path)
if is_special_se:
new_context = sp_context
else:
for i in range(len(cur_context)):
if len(context) > i:
if context[i] is not None and context[i] != cur_context[i]:
new_context[i] = context[i]
elif context[i] is None:
new_context[i] = cur_context[i]
if cur_context != new_context:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['secontext'] = cur_context
if 'after' not in diff:
diff['after'] = {}
diff['after']['secontext'] = new_context
try:
if self.check_mode:
return True
rc = selinux.lsetfilecon(to_native(path), ':'.join(new_context))
except OSError as e:
self.fail_json(path=path, msg='invalid selinux context: %s' % to_native(e),
new_context=new_context, cur_context=cur_context, input_was=context)
if rc != 0:
self.fail_json(path=path, msg='set selinux context failed')
changed = True
return changed
def set_owner_if_different(self, path, owner, changed, diff=None, expand=True):
if owner is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
uid = int(owner)
except ValueError:
try:
uid = pwd.getpwnam(owner).pw_uid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner)
if orig_uid != uid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['owner'] = orig_uid
if 'after' not in diff:
diff['after'] = {}
diff['after']['owner'] = uid
if self.check_mode:
return True
try:
os.lchown(b_path, uid, -1)
except (IOError, OSError) as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: %s' % (to_text(e)))
changed = True
return changed
def set_group_if_different(self, path, group, changed, diff=None, expand=True):
if group is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
gid = int(group)
except ValueError:
try:
gid = grp.getgrnam(group).gr_gid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group)
if orig_gid != gid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['group'] = orig_gid
if 'after' not in diff:
diff['after'] = {}
diff['after']['group'] = gid
if self.check_mode:
return True
try:
os.lchown(b_path, -1, gid)
except OSError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed')
changed = True
return changed
def set_mode_if_different(self, path, mode, changed, diff=None, expand=True):
if mode is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
path_stat = os.lstat(b_path)
if self.check_file_absent_if_check_mode(b_path):
return True
if not isinstance(mode, int):
try:
mode = int(mode, 8)
except Exception:
try:
mode = self._symbolic_mode_to_octal(path_stat, mode)
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path,
msg="mode must be in octal or symbolic form",
details=to_native(e))
if mode != stat.S_IMODE(mode):
# prevent mode from having extra info or being an invalid long number
path = to_text(b_path)
self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode)
prev_mode = stat.S_IMODE(path_stat.st_mode)
if prev_mode != mode:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['mode'] = '0%03o' % prev_mode
if 'after' not in diff:
diff['after'] = {}
diff['after']['mode'] = '0%03o' % mode
if self.check_mode:
return True
# FIXME: comparison against string above will cause this to be executed
# every time
try:
if hasattr(os, 'lchmod'):
os.lchmod(b_path, mode)
else:
if not os.path.islink(b_path):
os.chmod(b_path, mode)
else:
# Attempt to set the perms of the symlink but be
# careful not to change the perms of the underlying
# file while trying
underlying_stat = os.stat(b_path)
os.chmod(b_path, mode)
new_underlying_stat = os.stat(b_path)
if underlying_stat.st_mode != new_underlying_stat.st_mode:
os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))
except OSError as e:
if os.path.islink(b_path) and e.errno in (
errno.EACCES, # can't access symlink in sticky directory (stat)
errno.EPERM, # can't set mode on symbolic links (chmod)
errno.EROFS, # can't set mode on read-only filesystem
):
pass
elif e.errno in (errno.ENOENT, errno.ELOOP): # Can't set mode on broken symbolic links
pass
else:
raise
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chmod failed', details=to_native(e),
exception=traceback.format_exc())
path_stat = os.lstat(b_path)
new_mode = stat.S_IMODE(path_stat.st_mode)
if new_mode != prev_mode:
changed = True
return changed
def set_attributes_if_different(self, path, attributes, changed, diff=None, expand=True):
if attributes is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
existing = self.get_file_attributes(b_path, include_version=False)
attr_mod = '='
if attributes.startswith(('-', '+')):
attr_mod = attributes[0]
attributes = attributes[1:]
if existing.get('attr_flags', '') != attributes or attr_mod == '-':
attrcmd = self.get_bin_path('chattr')
if attrcmd:
attrcmd = [attrcmd, '%s%s' % (attr_mod, attributes), b_path]
changed = True
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['attributes'] = existing.get('attr_flags')
if 'after' not in diff:
diff['after'] = {}
diff['after']['attributes'] = '%s%s' % (attr_mod, attributes)
if not self.check_mode:
try:
rc, out, err = self.run_command(attrcmd)
if rc != 0 or err:
raise Exception("Error while setting attributes: %s" % (out + err))
except Exception as e:
self.fail_json(path=to_text(b_path), msg='chattr failed',
details=to_native(e), exception=traceback.format_exc())
return changed
def get_file_attributes(self, path, include_version=True):
output = {}
attrcmd = self.get_bin_path('lsattr', False)
if attrcmd:
flags = '-vd' if include_version else '-d'
attrcmd = [attrcmd, flags, path]
try:
rc, out, err = self.run_command(attrcmd)
if rc == 0:
res = out.split()
attr_flags_idx = 0
if include_version:
attr_flags_idx = 1
output['version'] = res[0].strip()
output['attr_flags'] = res[attr_flags_idx].replace('-', '').strip()
output['attributes'] = format_attributes(output['attr_flags'])
except Exception:
pass
return output
@classmethod
def _symbolic_mode_to_octal(cls, path_stat, symbolic_mode):
"""
This enables symbolic chmod string parsing as stated in the chmod man-page
This includes things like: "u=rw-x+X,g=r-x+X,o=r-x+X"
"""
new_mode = stat.S_IMODE(path_stat.st_mode)
# Now parse all symbolic modes
for mode in symbolic_mode.split(','):
# Per single mode. This always contains a '+', '-' or '='
# Split it on that
permlist = MODE_OPERATOR_RE.split(mode)
# And find all the operators
opers = MODE_OPERATOR_RE.findall(mode)
# The user(s) this mode applies to are the first element in the
# 'permlist' list. Take that and remove it from the list.
# An empty user or 'a' means 'all'.
users = permlist.pop(0)
use_umask = (users == '')
if users == 'a' or users == '':
users = 'ugo'
# Check if there are illegal characters in the user list
# They can end up in 'users' because they are not split
if USERS_RE.match(users):
raise ValueError("bad symbolic permission for mode: %s" % mode)
# Now we have two lists of equal length: one contains the requested
# permissions and one with the corresponding operators.
for idx, perms in enumerate(permlist):
# Check if there are illegal characters in the permissions
if PERMS_RE.match(perms):
raise ValueError("bad symbolic permission for mode: %s" % mode)
for user in users:
mode_to_apply = cls._get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask)
new_mode = cls._apply_operation_to_mode(user, opers[idx], mode_to_apply, new_mode)
return new_mode
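# A minimal usage sketch (hypothetical path; works on any existing file, and
# relies on the `os` and `stat` imports at the top of this module):
#
#   st = os.lstat('/etc/hosts')
#   AnsibleModule._symbolic_mode_to_octal(st, 'u=rw,g=r,o=r')  # -> 0o644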
@staticmethod
def _apply_operation_to_mode(user, operator, mode_to_apply, current_mode):
if operator == '=':
if user == 'u':
mask = stat.S_IRWXU | stat.S_ISUID
elif user == 'g':
mask = stat.S_IRWXG | stat.S_ISGID
elif user == 'o':
mask = stat.S_IRWXO | stat.S_ISVTX
# mask out u, g, or o permissions from current_mode and apply new permissions
inverse_mask = mask ^ PERM_BITS
new_mode = (current_mode & inverse_mask) | mode_to_apply
elif operator == '+':
new_mode = current_mode | mode_to_apply
elif operator == '-':
new_mode = current_mode - (current_mode & mode_to_apply)
return new_mode
@staticmethod
def _get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask):
prev_mode = stat.S_IMODE(path_stat.st_mode)
is_directory = stat.S_ISDIR(path_stat.st_mode)
has_x_permissions = (prev_mode & EXEC_PERM_BITS) > 0
apply_X_permission = is_directory or has_x_permissions
# Get the umask, if the 'user' part is empty, the effect is as if (a) were
# given, but bits that are set in the umask are not affected.
# We also need the "reversed umask" for masking
umask = os.umask(0)
os.umask(umask)
rev_umask = umask ^ PERM_BITS
# Permission bits constants documented at:
# http://docs.python.org/2/library/stat.html#stat.S_ISUID
if apply_X_permission:
X_perms = {
'u': {'X': stat.S_IXUSR},
'g': {'X': stat.S_IXGRP},
'o': {'X': stat.S_IXOTH},
}
else:
X_perms = {
'u': {'X': 0},
'g': {'X': 0},
'o': {'X': 0},
}
user_perms_to_modes = {
'u': {
'r': rev_umask & stat.S_IRUSR if use_umask else stat.S_IRUSR,
'w': rev_umask & stat.S_IWUSR if use_umask else stat.S_IWUSR,
'x': rev_umask & stat.S_IXUSR if use_umask else stat.S_IXUSR,
's': stat.S_ISUID,
't': 0,
'u': prev_mode & stat.S_IRWXU,
'g': (prev_mode & stat.S_IRWXG) << 3,
'o': (prev_mode & stat.S_IRWXO) << 6},
'g': {
'r': rev_umask & stat.S_IRGRP if use_umask else stat.S_IRGRP,
'w': rev_umask & stat.S_IWGRP if use_umask else stat.S_IWGRP,
'x': rev_umask & stat.S_IXGRP if use_umask else stat.S_IXGRP,
's': stat.S_ISGID,
't': 0,
'u': (prev_mode & stat.S_IRWXU) >> 3,
'g': prev_mode & stat.S_IRWXG,
'o': (prev_mode & stat.S_IRWXO) << 3},
'o': {
'r': rev_umask & stat.S_IROTH if use_umask else stat.S_IROTH,
'w': rev_umask & stat.S_IWOTH if use_umask else stat.S_IWOTH,
'x': rev_umask & stat.S_IXOTH if use_umask else stat.S_IXOTH,
's': 0,
't': stat.S_ISVTX,
'u': (prev_mode & stat.S_IRWXU) >> 6,
'g': (prev_mode & stat.S_IRWXG) >> 3,
'o': prev_mode & stat.S_IRWXO},
}
# Insert X_perms into user_perms_to_modes
for key, value in X_perms.items():
user_perms_to_modes[key].update(value)
def or_reduce(mode, perm):
return mode | user_perms_to_modes[user][perm]
return reduce(or_reduce, perms, 0)
def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True):
# set modes owners and context as needed
changed = self.set_context_if_different(
file_args['path'], file_args['secontext'], changed, diff
)
changed = self.set_owner_if_different(
file_args['path'], file_args['owner'], changed, diff, expand
)
changed = self.set_group_if_different(
file_args['path'], file_args['group'], changed, diff, expand
)
changed = self.set_mode_if_different(
file_args['path'], file_args['mode'], changed, diff, expand
)
changed = self.set_attributes_if_different(
file_args['path'], file_args['attributes'], changed, diff, expand
)
return changed
def check_file_absent_if_check_mode(self, file_path):
return self.check_mode and not os.path.exists(file_path)
def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def set_file_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def add_path_info(self, kwargs):
'''
for results that are files, supplement the info about the file
in the return path with stats about the file path.
'''
path = kwargs.get('path', kwargs.get('dest', None))
if path is None:
return kwargs
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.exists(b_path):
(uid, gid) = self.user_and_group(path)
kwargs['uid'] = uid
kwargs['gid'] = gid
try:
user = pwd.getpwuid(uid)[0]
except KeyError:
user = str(uid)
try:
group = grp.getgrgid(gid)[0]
except KeyError:
group = str(gid)
kwargs['owner'] = user
kwargs['group'] = group
st = os.lstat(b_path)
kwargs['mode'] = '0%03o' % stat.S_IMODE(st[stat.ST_MODE])
# secontext not yet supported
if os.path.islink(b_path):
kwargs['state'] = 'link'
elif os.path.isdir(b_path):
kwargs['state'] = 'directory'
elif os.stat(b_path).st_nlink > 1:
kwargs['state'] = 'hard'
else:
kwargs['state'] = 'file'
if HAVE_SELINUX and self.selinux_enabled():
kwargs['secontext'] = ':'.join(self.selinux_context(path))
kwargs['size'] = st[stat.ST_SIZE]
return kwargs
def _check_locale(self):
'''
Uses the locale module to test the currently set locale
(per the LANG and LC_CTYPE environment settings)
'''
try:
# setting the locale to '' uses the default locale
# as it would be returned by locale.getdefaultlocale()
locale.setlocale(locale.LC_ALL, '')
except locale.Error:
# fallback to the 'C' locale, which may cause unicode
# issues but is preferable to simply failing because
# of an unknown locale
locale.setlocale(locale.LC_ALL, 'C')
os.environ['LANG'] = 'C'
os.environ['LC_ALL'] = 'C'
os.environ['LC_MESSAGES'] = 'C'
except Exception as e:
self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" %
to_native(e), exception=traceback.format_exc())
def _handle_aliases(self, spec=None, param=None, option_prefix=''):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
# this uses exceptions as it happens before we can safely call fail_json
alias_warnings = []
alias_results, self._legal_inputs = handle_aliases(spec, param, alias_warnings=alias_warnings)
for option, alias in alias_warnings:
warn('Both option %s and its alias %s are set.' % (option_prefix + option, option_prefix + alias))
deprecated_aliases = []
for i in spec.keys():
if 'deprecated_aliases' in spec[i].keys():
for alias in spec[i]['deprecated_aliases']:
deprecated_aliases.append(alias)
for deprecation in deprecated_aliases:
if deprecation['name'] in param.keys():
deprecate("Alias '%s' is deprecated. See the module docs for more information" % deprecation['name'],
version=deprecation.get('version'), date=deprecation.get('date'),
collection_name=deprecation.get('collection_name'))
return alias_results
def _handle_no_log_values(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
self.no_log_values.update(list_no_log_values(spec, param))
except TypeError as te:
self.fail_json(msg="Failure when processing no_log parameters. Module invocation will be hidden. "
"%s" % to_native(te), invocation={'module_args': 'HIDDEN DUE TO FAILURE'})
for message in list_deprecations(spec, param):
deprecate(message['msg'], version=message.get('version'), date=message.get('date'),
collection_name=message.get('collection_name'))
def _set_internal_properties(self, argument_spec=None, module_parameters=None):
if argument_spec is None:
argument_spec = self.argument_spec
if module_parameters is None:
module_parameters = self.params
for k in PASS_VARS:
# handle setting internal properties from internal ansible vars
param_key = '_ansible_%s' % k
if param_key in module_parameters:
if k in PASS_BOOLS:
setattr(self, PASS_VARS[k][0], self.boolean(module_parameters[param_key]))
else:
setattr(self, PASS_VARS[k][0], module_parameters[param_key])
# clean up internal top level params:
if param_key in self.params:
del self.params[param_key]
else:
# use defaults if not already set
if not hasattr(self, PASS_VARS[k][0]):
setattr(self, PASS_VARS[k][0], PASS_VARS[k][1])
def _check_arguments(self, spec=None, param=None, legal_inputs=None):
unsupported_parameters = set()
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
if legal_inputs is None:
legal_inputs = self._legal_inputs
unsupported_parameters = get_unsupported_parameters(spec, param, legal_inputs)
if unsupported_parameters:
msg = "Unsupported parameters for (%s) module: %s" % (self._name, ', '.join(sorted(list(unsupported_parameters))))
if self._options_context:
msg += " found in %s." % " -> ".join(self._options_context)
supported_parameters = list()
for key in sorted(spec.keys()):
if 'aliases' in spec[key] and spec[key]['aliases']:
supported_parameters.append("%s (%s)" % (key, ', '.join(sorted(spec[key]['aliases']))))
else:
supported_parameters.append(key)
msg += " Supported parameters include: %s" % (', '.join(supported_parameters))
self.fail_json(msg=msg)
if self.check_mode and not self.supports_check_mode:
self.exit_json(skipped=True, msg="remote module (%s) does not support check mode" % self._name)
def _count_terms(self, check, param=None):
if param is None:
param = self.params
return count_terms(check, param)
def _check_mutually_exclusive(self, spec, param=None):
if param is None:
param = self.params
try:
check_mutually_exclusive(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_one_of(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_one_of(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_together(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_together(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_by(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_by(spec, param)
except TypeError as e:
self.fail_json(msg=to_native(e))
def _check_required_arguments(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
check_required_arguments(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_if(self, spec, param=None):
''' ensure that parameters which conditionally required are present '''
if spec is None:
return
if param is None:
param = self.params
try:
check_required_if(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_argument_values(self, spec=None, param=None):
''' ensure all arguments have the requested values, and there are no stray arguments '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
choices = v.get('choices', None)
if choices is None:
continue
if isinstance(choices, SEQUENCETYPE) and not isinstance(choices, (binary_type, text_type)):
if k in param:
# Allow one or more when type='list' param with choices
if isinstance(param[k], list):
diff_list = ", ".join([item for item in param[k] if item not in choices])
if diff_list:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one or more of: %s. Got no match for: %s" % (k, choices_str, diff_list)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
elif param[k] not in choices:
# PyYaml converts certain strings to bools. If we can unambiguously convert back, do so before checking
# the value. If we can't figure this out, module author is responsible.
lowered_choices = None
if param[k] == 'False':
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_FALSE.intersection(choices)
if len(overlap) == 1:
# Extract from a set
(param[k],) = overlap
if param[k] == 'True':
if lowered_choices is None:
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_TRUE.intersection(choices)
if len(overlap) == 1:
(param[k],) = overlap
if param[k] not in choices:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one of: %s, got: %s" % (k, choices_str, param[k])
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
else:
msg = "internal error: choices for argument %s are not iterable: %s" % (k, choices)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def safe_eval(self, value, locals=None, include_exceptions=False):
return safe_eval(value, locals, include_exceptions)
def _check_type_str(self, value, param=None, prefix=''):
opts = {
'error': False,
'warn': False,
'ignore': True
}
# Ignore, warn, or error when converting to a string.
allow_conversion = opts.get(self._string_conversion_action, True)
try:
return check_type_str(value, allow_conversion)
except TypeError:
common_msg = 'quote the entire value to ensure it does not change.'
from_msg = '{0!r}'.format(value)
to_msg = '{0!r}'.format(to_text(value))
if param is not None:
if prefix:
param = '{0}{1}'.format(prefix, param)
from_msg = '{0}: {1!r}'.format(param, value)
to_msg = '{0}: {1!r}'.format(param, to_text(value))
if self._string_conversion_action == 'error':
msg = common_msg.capitalize()
raise TypeError(to_native(msg))
elif self._string_conversion_action == 'warn':
msg = ('The value "{0}" (type {1.__class__.__name__}) was converted to "{2}" (type string). '
'If this does not look like what you expect, {3}').format(from_msg, value, to_msg, common_msg)
self.warn(to_native(msg))
return to_native(value, errors='surrogate_or_strict')
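    # Illustrative behaviour (sketch): YAML parses an unquoted 1.10 as the
    # float 1.1, so with _string_conversion_action='warn' a str option given
    # 1.10 becomes '1.1' plus a warning suggesting the value be quoted; with
    # 'error' the conversion is refused and the module fails instead.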
def _check_type_list(self, value):
return check_type_list(value)
def _check_type_dict(self, value):
return check_type_dict(value)
def _check_type_bool(self, value):
return check_type_bool(value)
def _check_type_int(self, value):
return check_type_int(value)
def _check_type_float(self, value):
return check_type_float(value)
def _check_type_path(self, value):
return check_type_path(value)
def _check_type_jsonarg(self, value):
return check_type_jsonarg(value)
def _check_type_raw(self, value):
return check_type_raw(value)
def _check_type_bytes(self, value):
return check_type_bytes(value)
def _check_type_bits(self, value):
return check_type_bits(value)
def _handle_options(self, argument_spec=None, params=None, prefix=''):
''' deal with options to create sub spec '''
if argument_spec is None:
argument_spec = self.argument_spec
if params is None:
params = self.params
for (k, v) in argument_spec.items():
wanted = v.get('type', None)
if wanted == 'dict' or (wanted == 'list' and v.get('elements', '') == 'dict'):
spec = v.get('options', None)
if v.get('apply_defaults', False):
if spec is not None:
if params.get(k) is None:
params[k] = {}
else:
continue
elif spec is None or k not in params or params[k] is None:
continue
self._options_context.append(k)
if isinstance(params[k], dict):
elements = [params[k]]
else:
elements = params[k]
for idx, param in enumerate(elements):
if not isinstance(param, dict):
self.fail_json(msg="value of %s must be of type dict or list of dict" % k)
new_prefix = prefix + k
if wanted == 'list':
new_prefix += '[%d]' % idx
new_prefix += '.'
self._set_fallbacks(spec, param)
options_aliases = self._handle_aliases(spec, param, option_prefix=new_prefix)
options_legal_inputs = list(spec.keys()) + list(options_aliases.keys())
self._set_internal_properties(spec, param)
self._check_arguments(spec, param, options_legal_inputs)
# check exclusive early
if not self.bypass_checks:
self._check_mutually_exclusive(v.get('mutually_exclusive', None), param)
self._set_defaults(pre=True, spec=spec, param=param)
if not self.bypass_checks:
self._check_required_arguments(spec, param)
self._check_argument_types(spec, param, new_prefix)
self._check_argument_values(spec, param)
self._check_required_together(v.get('required_together', None), param)
self._check_required_one_of(v.get('required_one_of', None), param)
self._check_required_if(v.get('required_if', None), param)
self._check_required_by(v.get('required_by', None), param)
self._set_defaults(pre=False, spec=spec, param=param)
# handle multi level options (sub argspec)
self._handle_options(spec, param, new_prefix)
self._options_context.pop()
def _get_wanted_type(self, wanted, k):
# Use the private method for 'str' type to handle the string conversion warning.
if wanted == 'str':
type_checker, wanted = self._check_type_str, 'str'
else:
type_checker, wanted = get_type_validator(wanted)
if type_checker is None:
self.fail_json(msg="implementation error: unknown type %s requested for %s" % (wanted, k))
return type_checker, wanted
def _handle_elements(self, wanted, param, values):
type_checker, wanted_name = self._get_wanted_type(wanted, param)
validated_params = []
# Get param name for strings so we can later display this value in a useful error message if needed
# Only pass 'kwargs' to our checkers and ignore custom callable checkers
kwargs = {}
if wanted_name == 'str' and isinstance(wanted, string_types):
if isinstance(param, string_types):
kwargs['param'] = param
elif isinstance(param, dict):
kwargs['param'] = list(param.keys())[0]
for value in values:
try:
validated_params.append(type_checker(value, **kwargs))
except (TypeError, ValueError) as e:
msg = "Elements value for option %s" % param
if self._options_context:
msg += " found in '%s'" % " -> ".join(self._options_context)
msg += " is of type %s and we were unable to convert to %s: %s" % (type(value), wanted_name, to_native(e))
self.fail_json(msg=msg)
return validated_params
def _check_argument_types(self, spec=None, param=None, prefix=''):
''' ensure all arguments have the requested type '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
wanted = v.get('type', None)
if k not in param:
continue
value = param[k]
if value is None:
continue
type_checker, wanted_name = self._get_wanted_type(wanted, k)
# Get param name for strings so we can later display this value in a useful error message if needed
# Only pass 'kwargs' to our checkers and ignore custom callable checkers
kwargs = {}
            if wanted_name == 'str' and isinstance(wanted, string_types):
kwargs['param'] = list(param.keys())[0]
# Get the name of the parent key if this is a nested option
if prefix:
kwargs['prefix'] = prefix
try:
param[k] = type_checker(value, **kwargs)
wanted_elements = v.get('elements', None)
if wanted_elements:
if wanted != 'list' or not isinstance(param[k], list):
msg = "Invalid type %s for option '%s'" % (wanted_name, param)
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += ", elements value check is supported only with 'list' type"
self.fail_json(msg=msg)
param[k] = self._handle_elements(wanted_elements, k, param[k])
except (TypeError, ValueError) as e:
msg = "argument %s is of type %s" % (k, type(value))
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += " and we were unable to convert to %s: %s" % (wanted_name, to_native(e))
self.fail_json(msg=msg)
def _set_defaults(self, pre=True, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
default = v.get('default', None)
if pre is True:
# this prevents setting defaults on required items
if default is not None and k not in param:
param[k] = default
else:
# make sure things without a default still get set None
if k not in param:
param[k] = default
def _set_fallbacks(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
fallback = v.get('fallback', (None,))
fallback_strategy = fallback[0]
fallback_args = []
fallback_kwargs = {}
if k not in param and fallback_strategy is not None:
for item in fallback[1:]:
if isinstance(item, dict):
fallback_kwargs = item
else:
fallback_args = item
try:
param[k] = fallback_strategy(*fallback_args, **fallback_kwargs)
except AnsibleFallbackNotFound:
continue
def _load_params(self):
''' read the input and set the params attribute.
This method is for backwards compatibility. The guts of the function
were moved out in 2.1 so that custom modules could read the parameters.
'''
# debug overrides to read args from file or cmdline
self.params = _load_params()
def _log_to_syslog(self, msg):
if HAS_SYSLOG:
try:
module = 'ansible-%s' % self._name
facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER)
syslog.openlog(str(module), 0, facility)
syslog.syslog(syslog.LOG_INFO, msg)
except TypeError as e:
self.fail_json(
msg='Failed to log to syslog (%s). To proceed anyway, '
'disable syslog logging by setting no_target_syslog '
'to True in your Ansible config.' % to_native(e),
exception=traceback.format_exc(),
msg_to_log=msg,
)
def debug(self, msg):
if self._debug:
self.log('[debug] %s' % msg)
def log(self, msg, log_args=None):
if not self.no_log:
if log_args is None:
log_args = dict()
module = 'ansible-%s' % self._name
if isinstance(module, binary_type):
module = module.decode('utf-8', 'replace')
# 6655 - allow for accented characters
if not isinstance(msg, (binary_type, text_type)):
raise TypeError("msg should be a string (got %s)" % type(msg))
# We want journal to always take text type
# syslog takes bytes on py2, text type on py3
if isinstance(msg, binary_type):
journal_msg = remove_values(msg.decode('utf-8', 'replace'), self.no_log_values)
else:
# TODO: surrogateescape is a danger here on Py3
journal_msg = remove_values(msg, self.no_log_values)
if PY3:
syslog_msg = journal_msg
else:
syslog_msg = journal_msg.encode('utf-8', 'replace')
if has_journal:
journal_args = [("MODULE", os.path.basename(__file__))]
for arg in log_args:
journal_args.append((arg.upper(), str(log_args[arg])))
try:
if HAS_SYSLOG:
# If syslog_facility specified, it needs to convert
# from the facility name to the facility code, and
# set it as SYSLOG_FACILITY argument of journal.send()
facility = getattr(syslog,
self._syslog_facility,
syslog.LOG_USER) >> 3
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
SYSLOG_FACILITY=facility,
**dict(journal_args))
else:
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
**dict(journal_args))
except IOError:
# fall back to syslog since logging to journal failed
self._log_to_syslog(syslog_msg)
else:
self._log_to_syslog(syslog_msg)
def _log_invocation(self):
''' log that ansible ran the module '''
# TODO: generalize a separate log function and make log_invocation use it
# Sanitize possible password argument when logging.
log_args = dict()
for param in self.params:
canon = self.aliases.get(param, param)
arg_opts = self.argument_spec.get(canon, {})
no_log = arg_opts.get('no_log', None)
# try to proactively capture password/passphrase fields
if no_log is None and PASSWORD_MATCH.search(param):
log_args[param] = 'NOT_LOGGING_PASSWORD'
self.warn('Module did not set no_log for %s' % param)
elif self.boolean(no_log):
log_args[param] = 'NOT_LOGGING_PARAMETER'
else:
param_val = self.params[param]
if not isinstance(param_val, (text_type, binary_type)):
param_val = str(param_val)
elif isinstance(param_val, text_type):
param_val = param_val.encode('utf-8')
log_args[param] = heuristic_log_sanitize(param_val, self.no_log_values)
msg = ['%s=%s' % (to_native(arg), to_native(val)) for arg, val in log_args.items()]
if msg:
msg = 'Invoked with %s' % ' '.join(msg)
else:
msg = 'Invoked'
self.log(msg, log_args=log_args)
def _set_cwd(self):
try:
cwd = os.getcwd()
if not os.access(cwd, os.F_OK | os.R_OK):
raise Exception()
return cwd
except Exception:
# we don't have access to the cwd, probably because of sudo.
# Try and move to a neutral location to prevent errors
for cwd in [self.tmpdir, os.path.expandvars('$HOME'), tempfile.gettempdir()]:
try:
if os.access(cwd, os.F_OK | os.R_OK):
os.chdir(cwd)
return cwd
except Exception:
pass
# we won't error here, as it may *not* be a problem,
# and we don't want to break modules unnecessarily
return None
def get_bin_path(self, arg, required=False, opt_dirs=None):
'''
Find system executable in PATH.
:param arg: The executable to find.
:param required: if executable is not found and required is ``True``, fail_json
:param opt_dirs: optional list of directories to search in addition to ``PATH``
:returns: if found return full path; otherwise return None
'''
bin_path = None
try:
bin_path = get_bin_path(arg=arg, opt_dirs=opt_dirs)
except ValueError as e:
if required:
self.fail_json(msg=to_text(e))
else:
return bin_path
return bin_path
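    # Illustrative usage (sketch, paths hypothetical):
    #   chattr = module.get_bin_path('chattr', required=False, opt_dirs=['/usr/local/sbin'])
    #   # -> e.g. '/usr/bin/chattr' when found, or None when absent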
def boolean(self, arg):
'''Convert the argument to a boolean'''
if arg is None:
return arg
try:
return boolean(arg)
except TypeError as e:
self.fail_json(msg=to_native(e))
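    # Illustrative conversions (sketch): boolean('yes') -> True,
    # boolean('off') -> False, boolean(None) -> None; values that are not
    # boolean-like fail the module via fail_json.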
def jsonify(self, data):
try:
return jsonify(data)
except UnicodeError as e:
self.fail_json(msg=to_text(e))
def from_json(self, data):
return json.loads(data)
def add_cleanup_file(self, path):
if path not in self.cleanup_files:
self.cleanup_files.append(path)
def do_cleanup_files(self):
for path in self.cleanup_files:
self.cleanup(path)
def _return_formatted(self, kwargs):
self.add_path_info(kwargs)
if 'invocation' not in kwargs:
kwargs['invocation'] = {'module_args': self.params}
if 'warnings' in kwargs:
if isinstance(kwargs['warnings'], list):
for w in kwargs['warnings']:
self.warn(w)
else:
self.warn(kwargs['warnings'])
warnings = get_warning_messages()
if warnings:
kwargs['warnings'] = warnings
if 'deprecations' in kwargs:
if isinstance(kwargs['deprecations'], list):
for d in kwargs['deprecations']:
if isinstance(d, SEQUENCETYPE) and len(d) == 2:
self.deprecate(d[0], version=d[1])
elif isinstance(d, Mapping):
self.deprecate(d['msg'], version=d.get('version'), date=d.get('date'),
collection_name=d.get('collection_name'))
else:
self.deprecate(d) # pylint: disable=ansible-deprecated-no-version
else:
self.deprecate(kwargs['deprecations']) # pylint: disable=ansible-deprecated-no-version
deprecations = get_deprecation_messages()
if deprecations:
kwargs['deprecations'] = deprecations
kwargs = remove_values(kwargs, self.no_log_values)
print('\n%s' % self.jsonify(kwargs))
def exit_json(self, **kwargs):
''' return from the module, without error '''
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(0)
def fail_json(self, msg, **kwargs):
''' return from the module, with an error message '''
kwargs['failed'] = True
kwargs['msg'] = msg
# Add traceback if debug or high verbosity and it is missing
# NOTE: Badly named as exception, it really always has been a traceback
if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3):
if PY2:
# On Python 2 this is the last (stack frame) exception and as such may be unrelated to the failure
kwargs['exception'] = 'WARNING: The below traceback may *not* be related to the actual failure.\n' +\
''.join(traceback.format_tb(sys.exc_info()[2]))
else:
kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2]))
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(1)
def fail_on_missing_params(self, required_params=None):
if not required_params:
return
try:
check_missing_parameters(self.params, required_params)
except TypeError as e:
self.fail_json(msg=to_native(e))
def digest_from_file(self, filename, algorithm):
''' Return hex digest of local file for a digest_method specified by name, or None if file is not present. '''
b_filename = to_bytes(filename, errors='surrogate_or_strict')
if not os.path.exists(b_filename):
return None
if os.path.isdir(b_filename):
self.fail_json(msg="attempted to take checksum of directory: %s" % filename)
# preserve old behaviour where the third parameter was a hash algorithm object
if hasattr(algorithm, 'hexdigest'):
digest_method = algorithm
else:
try:
digest_method = AVAILABLE_HASH_ALGORITHMS[algorithm]()
except KeyError:
self.fail_json(msg="Could not hash file '%s' with algorithm '%s'. Available algorithms: %s" %
(filename, algorithm, ', '.join(AVAILABLE_HASH_ALGORITHMS)))
blocksize = 64 * 1024
infile = open(os.path.realpath(b_filename), 'rb')
block = infile.read(blocksize)
while block:
digest_method.update(block)
block = infile.read(blocksize)
infile.close()
return digest_method.hexdigest()
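    # Illustrative usage (sketch): both call styles below work, since an
    # already-constructed hash object is accepted as well as an algorithm name:
    #   module.digest_from_file('/etc/hosts', 'sha256')
    #   module.digest_from_file('/etc/hosts', hashlib.sha256())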
def md5(self, filename):
''' Return MD5 hex digest of local file using digest_from_file().
Do not use this function unless you have no other choice for:
1) Optional backwards compatibility
2) Compatibility with a third party protocol
This function will not work on systems complying with FIPS-140-2.
Most uses of this function can use the module.sha1 function instead.
'''
if 'md5' not in AVAILABLE_HASH_ALGORITHMS:
raise ValueError('MD5 not available. Possibly running in FIPS mode')
return self.digest_from_file(filename, 'md5')
def sha1(self, filename):
''' Return SHA1 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha1')
def sha256(self, filename):
''' Return SHA-256 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha256')
def backup_local(self, fn):
        '''make a date-marked backup of the specified file; return the backup file name
        (an empty string if the source does not exist) or fail_json on error'''
backupdest = ''
if os.path.exists(fn):
# backups named basename.PID.YYYY-MM-DD@HH:MM:SS~
ext = time.strftime("%Y-%m-%d@%H:%M:%S~", time.localtime(time.time()))
backupdest = '%s.%s.%s' % (fn, os.getpid(), ext)
try:
self.preserved_copy(fn, backupdest)
except (shutil.Error, IOError) as e:
self.fail_json(msg='Could not make backup of %s to %s: %s' % (fn, backupdest, to_native(e)))
return backupdest
def cleanup(self, tmpfile):
if os.path.exists(tmpfile):
try:
os.unlink(tmpfile)
except OSError as e:
sys.stderr.write("could not cleanup %s: %s" % (tmpfile, to_native(e)))
def preserved_copy(self, src, dest):
"""Copy a file with preserved ownership, permissions and context"""
# shutil.copy2(src, dst)
# Similar to shutil.copy(), but metadata is copied as well - in fact,
# this is just shutil.copy() followed by copystat(). This is similar
# to the Unix command cp -p.
#
# shutil.copystat(src, dst)
# Copy the permission bits, last access time, last modification time,
# and flags from src to dst. The file contents, owner, and group are
# unaffected. src and dst are path names given as strings.
shutil.copy2(src, dest)
# Set the context
if self.selinux_enabled():
context = self.selinux_context(src)
self.set_context_if_different(dest, context, False)
# chown it
try:
dest_stat = os.stat(src)
tmp_stat = os.stat(dest)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(dest, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
# Set the attributes
current_attribs = self.get_file_attributes(src, include_version=False)
current_attribs = current_attribs.get('attr_flags', '')
self.set_attributes_if_different(dest, current_attribs, True)
def atomic_move(self, src, dest, unsafe_writes=False):
        '''atomically move src to dest, copying attributes from dest when it exists.
        os.rename is used first, as it is an atomic operation; the rest of the function
        works around limitations and corner cases, and preserves the selinux context when possible'''
context = None
dest_stat = None
b_src = to_bytes(src, errors='surrogate_or_strict')
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
try:
dest_stat = os.stat(b_dest)
# copy mode and ownership
os.chmod(b_src, dest_stat.st_mode & PERM_BITS)
os.chown(b_src, dest_stat.st_uid, dest_stat.st_gid)
# try to copy flags if possible
if hasattr(os, 'chflags') and hasattr(dest_stat, 'st_flags'):
try:
os.chflags(b_src, dest_stat.st_flags)
except OSError as e:
for err in 'EOPNOTSUPP', 'ENOTSUP':
if hasattr(errno, err) and e.errno == getattr(errno, err):
break
else:
raise
except OSError as e:
if e.errno != errno.EPERM:
raise
if self.selinux_enabled():
context = self.selinux_context(dest)
else:
if self.selinux_enabled():
context = self.selinux_default_context(dest)
creating = not os.path.exists(b_dest)
try:
# Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
os.rename(b_src, b_dest)
except (IOError, OSError) as e:
if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY, errno.EBUSY]:
                # only try workarounds for errno 1 (not permitted), 13 (permission denied), 16 (device busy),
                # 18 (cross device) and 26 (text file busy), which happen on vagrant synced folders and other 'exotic' non posix file systems
self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
else:
# Use bytes here. In the shippable CI, this fails with
# a UnicodeError with surrogateescape'd strings for an unknown
# reason (doesn't happen in a local Ubuntu16.04 VM)
b_dest_dir = os.path.dirname(b_dest)
b_suffix = os.path.basename(b_dest)
error_msg = None
tmp_dest_name = None
try:
tmp_dest_fd, tmp_dest_name = tempfile.mkstemp(prefix=b'.ansible_tmp',
dir=b_dest_dir, suffix=b_suffix)
except (OSError, IOError) as e:
error_msg = 'The destination directory (%s) is not writable by the current user. Error was: %s' % (os.path.dirname(dest), to_native(e))
except TypeError:
# We expect that this is happening because python3.4.x and
# below can't handle byte strings in mkstemp(). Traceback
# would end in something like:
# file = _os.path.join(dir, pre + name + suf)
# TypeError: can't concat bytes to str
                error_msg = ('Failed creating tmp file for atomic move. This usually happens when using Python3 versions older than 3.5. '
                             'Please use Python2.x or Python3.5 or greater.')
finally:
if error_msg:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg=error_msg, exception=traceback.format_exc())
if tmp_dest_name:
b_tmp_dest_name = to_bytes(tmp_dest_name, errors='surrogate_or_strict')
try:
try:
# close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host)
os.close(tmp_dest_fd)
# leaves tmp file behind when sudo and not root
try:
shutil.move(b_src, b_tmp_dest_name)
except OSError:
# cleanup will happen by 'rm' of tmpdir
# copy2 will preserve some metadata
shutil.copy2(b_src, b_tmp_dest_name)
if self.selinux_enabled():
self.set_context_if_different(
b_tmp_dest_name, context, False)
try:
tmp_stat = os.stat(b_tmp_dest_name)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(b_tmp_dest_name, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
try:
os.rename(b_tmp_dest_name, b_dest)
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes and e.errno == errno.EBUSY:
self._unsafe_writes(b_tmp_dest_name, b_dest)
else:
                                self.fail_json(msg='Unable to move %s to %s, failed final rename from %s: %s' %
(src, dest, b_tmp_dest_name, to_native(e)),
exception=traceback.format_exc())
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Failed to replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
finally:
self.cleanup(b_tmp_dest_name)
if creating:
# make sure the file has the correct permissions
# based on the current value of umask
umask = os.umask(0)
os.umask(umask)
os.chmod(b_dest, DEFAULT_PERM & ~umask)
try:
os.chown(b_dest, os.geteuid(), os.getegid())
except OSError:
# We're okay with trying our best here. If the user is not
# root (or old Unices) they won't be able to chown.
pass
if self.selinux_enabled():
# rename might not preserve context
self.set_context_if_different(dest, context, False)
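    # Illustrative usage (sketch): modules typically render or download into a
    # temporary file and then move it into place. Passing unsafe_writes=True
    # permits a plain, non-atomic copy when the rename workarounds above fail
    # (e.g. EBUSY on some vboxfs/fuse mounts):
    #   module.atomic_move('/tmp/.ansible_tmp_xyz', '/etc/foo.conf',
    #                      unsafe_writes=module.params.get('unsafe_writes', False))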
def _unsafe_writes(self, src, dest):
# sadly there are some situations where we cannot ensure atomicity, but only if
# the user insists and we get the appropriate error we update the file unsafely
try:
out_dest = in_src = None
try:
out_dest = open(dest, 'wb')
in_src = open(src, 'rb')
shutil.copyfileobj(in_src, out_dest)
finally: # assuring closed files in 2.4 compatible way
if out_dest:
out_dest.close()
if in_src:
in_src.close()
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Could not write data to file (%s) from (%s): %s' % (dest, src, to_native(e)),
exception=traceback.format_exc())
def _clean_args(self, args):
if not self._clean:
# create a printable version of the command for use in reporting later,
# which strips out things like passwords from the args list
to_clean_args = args
if PY2:
if isinstance(args, text_type):
to_clean_args = to_bytes(args)
else:
if isinstance(args, binary_type):
to_clean_args = to_text(args)
if isinstance(args, (text_type, binary_type)):
to_clean_args = shlex.split(to_clean_args)
clean_args = []
is_passwd = False
for arg in (to_native(a) for a in to_clean_args):
if is_passwd:
is_passwd = False
clean_args.append('********')
continue
if PASSWD_ARG_RE.match(arg):
sep_idx = arg.find('=')
if sep_idx > -1:
clean_args.append('%s=********' % arg[:sep_idx])
continue
else:
is_passwd = True
arg = heuristic_log_sanitize(arg, self.no_log_values)
clean_args.append(arg)
self._clean = ' '.join(shlex_quote(arg) for arg in clean_args)
return self._clean
def _restore_signal_handlers(self):
# Reset SIGPIPE to SIG_DFL, otherwise in Python2.7 it gets ignored in subprocesses.
if PY2 and sys.platform != 'win32':
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None,
use_unsafe_shell=False, prompt_regex=None, environ_update=None, umask=None, encoding='utf-8', errors='surrogate_or_strict',
expand_user_and_vars=True, pass_fds=None, before_communicate_callback=None, ignore_invalid_cwd=True):
'''
Execute a command, returns rc, stdout, and stderr.
:arg args: is the command to run
* If args is a list, the command will be run with shell=False.
            * If args is a string and use_unsafe_shell=False it will be split into a list and run with shell=False
* If args is a string and use_unsafe_shell=True it runs with shell=True.
:kw check_rc: Whether to call fail_json in case of non zero RC.
Default False
:kw close_fds: See documentation for subprocess.Popen(). Default True
:kw executable: See documentation for subprocess.Popen(). Default None
:kw data: If given, information to write to the stdin of the command
:kw binary_data: If False, append a newline to the data. Default False
:kw path_prefix: If given, additional path to find the command in.
This adds to the PATH environment variable so helper commands in
the same directory can also be found
:kw cwd: If given, working directory to run the command inside
:kw use_unsafe_shell: See `args` parameter. Default False
:kw prompt_regex: Regex string (not a compiled regex) which can be
used to detect prompts in the stdout which would otherwise cause
the execution to hang (especially if no input data is specified)
:kw environ_update: dictionary to *update* os.environ with
:kw umask: Umask to be used when running the command. Default None
:kw encoding: Since we return native strings, on python3 we need to
know the encoding to use to transform from bytes to text. If you
want to always get bytes back, use encoding=None. The default is
"utf-8". This does not affect transformation of strings given as
args.
:kw errors: Since we return native strings, on python3 we need to
transform stdout and stderr from bytes to text. If the bytes are
undecodable in the ``encoding`` specified, then use this error
handler to deal with them. The default is ``surrogate_or_strict``
which means that the bytes will be decoded using the
surrogateescape error handler if available (available on all
python3 versions we support) otherwise a UnicodeError traceback
will be raised. This does not affect transformations of strings
given as args.
:kw expand_user_and_vars: When ``use_unsafe_shell=False`` this argument
dictates whether ``~`` is expanded in paths and environment variables
are expanded before running the command. When ``True`` a string such as
``$SHELL`` will be expanded regardless of escaping. When ``False`` and
``use_unsafe_shell=False`` no path or variable expansion will be done.
:kw pass_fds: When running on Python 3 this argument
dictates which file descriptors should be passed
to an underlying ``Popen`` constructor. On Python 2, this will
set ``close_fds`` to False.
:kw before_communicate_callback: This function will be called
after ``Popen`` object will be created
but before communicating to the process.
(``Popen`` object will be passed to callback as a first argument)
:kw ignore_invalid_cwd: This flag indicates whether an invalid ``cwd``
(non-existent or not a directory) should be ignored or should raise
an exception.
:returns: A 3-tuple of return code (integer), stdout (native string),
and stderr (native string). On python2, stdout and stderr are both
byte strings. On python3, stdout and stderr are text strings converted
according to the encoding and errors parameters. If you want byte
strings on python3, use encoding=None to turn decoding to text off.
'''
# used by clean args later on
self._clean = None
if not isinstance(args, (list, binary_type, text_type)):
msg = "Argument 'args' to run_command must be list or string"
self.fail_json(rc=257, cmd=args, msg=msg)
shell = False
if use_unsafe_shell:
# stringify args for unsafe/direct shell usage
if isinstance(args, list):
args = b" ".join([to_bytes(shlex_quote(x), errors='surrogate_or_strict') for x in args])
else:
args = to_bytes(args, errors='surrogate_or_strict')
# not set explicitly, check if set by controller
if executable:
executable = to_bytes(executable, errors='surrogate_or_strict')
args = [executable, b'-c', args]
elif self._shell not in (None, '/bin/sh'):
args = [to_bytes(self._shell, errors='surrogate_or_strict'), b'-c', args]
else:
shell = True
else:
# ensure args are a list
if isinstance(args, (binary_type, text_type)):
# On python2.6 and below, shlex has problems with text type
# On python3, shlex needs a text type.
if PY2:
args = to_bytes(args, errors='surrogate_or_strict')
elif PY3:
args = to_text(args, errors='surrogateescape')
args = shlex.split(args)
# expand ``~`` in paths, and all environment vars
if expand_user_and_vars:
args = [to_bytes(os.path.expanduser(os.path.expandvars(x)), errors='surrogate_or_strict') for x in args if x is not None]
else:
args = [to_bytes(x, errors='surrogate_or_strict') for x in args if x is not None]
prompt_re = None
if prompt_regex:
if isinstance(prompt_regex, text_type):
if PY3:
prompt_regex = to_bytes(prompt_regex, errors='surrogateescape')
elif PY2:
prompt_regex = to_bytes(prompt_regex, errors='surrogate_or_strict')
try:
prompt_re = re.compile(prompt_regex, re.MULTILINE)
except re.error:
self.fail_json(msg="invalid prompt regular expression given to run_command")
rc = 0
msg = None
st_in = None
# Manipulate the environ we'll send to the new process
old_env_vals = {}
# We can set this from both an attribute and per call
for key, val in self.run_command_environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if environ_update:
for key, val in environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if path_prefix:
path = os.environ.get('PATH', '')
old_env_vals['PATH'] = path
if path:
os.environ['PATH'] = "%s:%s" % (path_prefix, path)
else:
os.environ['PATH'] = path_prefix
# If using test-module.py and explode, the remote lib path will resemble:
# /tmp/test_module_scratch/debug_dir/ansible/module_utils/basic.py
# If using ansible or ansible-playbook with a remote system:
# /tmp/ansible_vmweLQ/ansible_modlib.zip/ansible/module_utils/basic.py
# Clean out python paths set by ansiballz
if 'PYTHONPATH' in os.environ:
pypaths = os.environ['PYTHONPATH'].split(':')
pypaths = [x for x in pypaths
if not x.endswith('/ansible_modlib.zip') and
not x.endswith('/debug_dir')]
os.environ['PYTHONPATH'] = ':'.join(pypaths)
if not os.environ['PYTHONPATH']:
del os.environ['PYTHONPATH']
if data:
st_in = subprocess.PIPE
kwargs = dict(
executable=executable,
shell=shell,
close_fds=close_fds,
stdin=st_in,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=self._restore_signal_handlers,
)
if PY3 and pass_fds:
kwargs["pass_fds"] = pass_fds
elif PY2 and pass_fds:
kwargs['close_fds'] = False
# store the pwd
prev_dir = os.getcwd()
# make sure we're in the right working directory
if cwd:
if os.path.isdir(cwd):
cwd = to_bytes(os.path.abspath(os.path.expanduser(cwd)), errors='surrogate_or_strict')
kwargs['cwd'] = cwd
try:
os.chdir(cwd)
except (OSError, IOError) as e:
self.fail_json(rc=e.errno, msg="Could not chdir to %s, %s" % (cwd, to_native(e)),
exception=traceback.format_exc())
elif not ignore_invalid_cwd:
self.fail_json(msg="Provided cwd is not a valid directory: %s" % cwd)
old_umask = None
if umask:
old_umask = os.umask(umask)
try:
if self._debug:
self.log('Executing: ' + self._clean_args(args))
cmd = subprocess.Popen(args, **kwargs)
if before_communicate_callback:
before_communicate_callback(cmd)
# the communication logic here is essentially taken from that
# of the _communicate() function in ssh.py
stdout = b''
stderr = b''
try:
selector = selectors.DefaultSelector()
except (IOError, OSError):
# Failed to detect default selector for the given platform
# Select PollSelector which is supported by major platforms
selector = selectors.PollSelector()
selector.register(cmd.stdout, selectors.EVENT_READ)
selector.register(cmd.stderr, selectors.EVENT_READ)
if os.name == 'posix':
fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
if data:
if not binary_data:
data += '\n'
if isinstance(data, text_type):
data = to_bytes(data)
cmd.stdin.write(data)
cmd.stdin.close()
while True:
events = selector.select(1)
for key, event in events:
b_chunk = key.fileobj.read()
if b_chunk == b(''):
selector.unregister(key.fileobj)
if key.fileobj == cmd.stdout:
stdout += b_chunk
elif key.fileobj == cmd.stderr:
stderr += b_chunk
# if we're checking for prompts, do it now
if prompt_re:
if prompt_re.search(stdout) and not data:
if encoding:
stdout = to_native(stdout, encoding=encoding, errors=errors)
return (257, stdout, "A prompt was encountered while running a command, but no input data was specified")
# only break out if no pipes are left to read or
# the pipes are completely read and
# the process is terminated
if (not events or not selector.get_map()) and cmd.poll() is not None:
break
# No pipes are left to read but process is not yet terminated
# Only then it is safe to wait for the process to be finished
# NOTE: Actually cmd.poll() is always None here if no selectors are left
elif not selector.get_map() and cmd.poll() is None:
cmd.wait()
# The process is terminated. Since no pipes to read from are
# left, there is no need to call select() again.
break
cmd.stdout.close()
cmd.stderr.close()
selector.close()
rc = cmd.returncode
except (OSError, IOError) as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(e)))
self.fail_json(rc=e.errno, msg=to_native(e), cmd=self._clean_args(args))
except Exception as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(traceback.format_exc())))
self.fail_json(rc=257, msg=to_native(e), exception=traceback.format_exc(), cmd=self._clean_args(args))
# Restore env settings
for key, val in old_env_vals.items():
if val is None:
del os.environ[key]
else:
os.environ[key] = val
if old_umask:
os.umask(old_umask)
if rc != 0 and check_rc:
msg = heuristic_log_sanitize(stderr.rstrip(), self.no_log_values)
self.fail_json(cmd=self._clean_args(args), rc=rc, stdout=stdout, stderr=stderr, msg=msg)
# reset the pwd
os.chdir(prev_dir)
if encoding is not None:
return (rc, to_native(stdout, encoding=encoding, errors=errors),
to_native(stderr, encoding=encoding, errors=errors))
return (rc, stdout, stderr)
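    # Illustrative usage (sketch, hypothetical command): passing a list keeps
    # shell=False, while shell pipelines require use_unsafe_shell=True:
    #   rc, out, err = module.run_command(['/usr/bin/lsattr', '-d', '/etc'])
    #   if rc != 0:
    #       module.fail_json(msg='lsattr failed: %s' % err)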
def append_to_file(self, filename, str):
filename = os.path.expandvars(os.path.expanduser(filename))
fh = open(filename, 'a')
fh.write(str)
fh.close()
def bytes_to_human(self, size):
return bytes_to_human(size)
# for backwards compatibility
pretty_bytes = bytes_to_human
def human_to_bytes(self, number, isbits=False):
return human_to_bytes(number, isbits)
#
# Backwards compat
#
# In 2.0, moved from inside the module to the toplevel
is_executable = is_executable
@staticmethod
def get_buffer_size(fd):
try:
            # 1032 == F_GETPIPE_SZ (Linux fcntl constant)
buffer_size = fcntl.fcntl(fd, 1032)
except Exception:
try:
# not as exact as above, but should be good enough for most platforms that fail the previous call
buffer_size = select.PIPE_BUF
except Exception:
buffer_size = 9000 # use sane default JIC
return buffer_size
def get_module_path():
return os.path.dirname(os.path.realpath(__file__))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,981 |
get_url does not respect unsafe_writes
|
##### SUMMARY
`get_url` inherits from `add_file_common_args`, which includes `unsafe_writes`; however, our call to `atomic_move` does not pass this argument along.
https://github.com/ansible/ansible/blob/5226ac5778d3b57296b925de5d4ad0b485bb11cd/lib/ansible/modules/get_url.py#L463
https://github.com/ansible/ansible/blob/5226ac5778d3b57296b925de5d4ad0b485bb11cd/lib/ansible/modules/get_url.py#L633
```diff
diff --git a/lib/ansible/modules/get_url.py b/lib/ansible/modules/get_url.py
index 6ac370c2ef..c29d69e5c9 100644
--- a/lib/ansible/modules/get_url.py
+++ b/lib/ansible/modules/get_url.py
@@ -630,7 +630,7 @@ def main():
if backup:
if os.path.exists(dest):
backup_file = module.backup_local(dest)
- module.atomic_move(tmpsrc, dest)
+ module.atomic_move(tmpsrc, dest, unsafe_writes=module.params['unsafe_writes'])
except Exception as e:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
```
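For illustration, a task like the following (hypothetical URL and destination) should fall back to unsafe writes when the atomic rename fails, but currently does not:
```yaml
- name: Download onto a filesystem where atomic rename fails (e.g. vboxfs)
  get_url:
    url: http://example.com/path/file.conf
    dest: /vagrant/foo.conf
    unsafe_writes: yes
```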
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/get_url.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9
2.10
2.11
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
##### STEPS TO REPRODUCE
```yaml
```
##### EXPECTED RESULTS
##### ACTUAL RESULTS
```paste below
```
|
https://github.com/ansible/ansible/issues/72981
|
https://github.com/ansible/ansible/pull/70722
|
202689b1c0560b68a93e93d0a250ea186a8e3e1a
|
932ba3616067007fd5e449611a34e7e3837fc8ae
| 2020-12-15T17:51:02Z |
python
| 2020-12-21T16:20:52Z |
lib/ansible/modules/get_url.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Jan-Piet Mens <jpmens () gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: get_url
short_description: Downloads files from HTTP, HTTPS, or FTP to node
description:
- Downloads files from HTTP, HTTPS, or FTP to the remote server. The remote
server I(must) have direct access to the remote resource.
- By default, if an environment variable C(<protocol>_proxy) is set on
the target host, requests will be sent through that proxy. This
behaviour can be overridden by setting a variable for this task
(see `setting the environment
<https://docs.ansible.com/playbooks_environment.html>`_),
or by using the use_proxy option.
- HTTP redirects can redirect from HTTP to HTTPS so you should be sure that
your proxy environment for both protocols is correct.
- From Ansible 2.4 when run with C(--check), it will do a HEAD request to validate the URL but
will not download the entire file or verify it against hashes.
- For Windows targets, use the M(ansible.windows.win_get_url) module instead.
version_added: '0.6'
options:
url:
description:
- HTTP, HTTPS, or FTP URL in the form (http|https|ftp)://[user[:pass]]@host.domain[:port]/path
type: str
required: true
dest:
description:
- Absolute path of where to download the file to.
- If C(dest) is a directory, either the server provided filename or, if
none provided, the base name of the URL on the remote server will be
used. If a directory, C(force) has no effect.
- If C(dest) is a directory, the file will always be downloaded
      (regardless of the C(force) option), but replaced only if the contents changed.
type: path
required: true
tmp_dest:
description:
- Absolute path of where temporary file is downloaded to.
    - When run on Ansible 2.5 or greater, path defaults to Ansible's remote_tmp setting
- When run on Ansible prior to 2.5, it defaults to C(TMPDIR), C(TEMP) or C(TMP) env variables or a platform specific value.
- U(https://docs.python.org/2/library/tempfile.html#tempfile.tempdir)
type: path
version_added: '2.1'
force:
description:
- If C(yes) and C(dest) is not a directory, will download the file every
time and replace the file if the contents change. If C(no), the file
will only be downloaded if the destination does not exist. Generally
should be C(yes) only for small local files.
- Prior to 0.6, this module behaved as if C(yes) was the default.
- Alias C(thirsty) has been deprecated and will be removed in 2.13.
type: bool
default: no
aliases: [ thirsty ]
version_added: '0.7'
backup:
description:
- Create a backup file including the timestamp information so you can get
the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
version_added: '2.1'
sha256sum:
description:
- If a SHA-256 checksum is passed to this parameter, the digest of the
destination file will be calculated after it is downloaded to ensure
its integrity and verify that the transfer completed successfully.
This option is deprecated and will be removed in version 2.14. Use
option C(checksum) instead.
default: ''
type: str
version_added: "1.3"
checksum:
description:
- 'If a checksum is passed to this parameter, the digest of the
destination file will be calculated after it is downloaded to ensure
its integrity and verify that the transfer completed successfully.
Format: <algorithm>:<checksum|url>, e.g. checksum="sha256:D98291AC[...]B6DC7B97",
checksum="sha256:http://example.com/path/sha256sum.txt"'
- If you worry about portability, only the sha1 algorithm is available
on all platforms and python versions.
- The third party hashlib library can be installed for access to additional algorithms.
    - Additionally, if a checksum is passed to this parameter, and the file exists under
the C(dest) location, the I(destination_checksum) would be calculated, and if
checksum equals I(destination_checksum), the file download would be skipped
(unless C(force) is true). If the checksum does not equal I(destination_checksum),
the destination file is deleted.
type: str
default: ''
version_added: "2.0"
use_proxy:
description:
- if C(no), it will not use a proxy, even if one is defined in
an environment variable on the target hosts.
type: bool
default: yes
validate_certs:
description:
- If C(no), SSL certificates will not be validated.
- This should only be used on personally controlled sites using self-signed certificates.
type: bool
default: yes
timeout:
description:
- Timeout in seconds for URL request.
type: int
default: 10
version_added: '1.8'
headers:
description:
- Add custom HTTP headers to a request in hash/dict format.
- The hash/dict format was added in Ansible 2.6.
- Previous versions used a C("key:value,key:value") string format.
- The C("key:value,key:value") string format is deprecated and has been removed in version 2.10.
type: dict
version_added: '2.0'
url_username:
description:
- The username for use in HTTP basic authentication.
- This parameter can be used without C(url_password) for sites that allow empty passwords.
- Since version 2.8 you can also use the C(username) alias for this option.
type: str
aliases: ['username']
version_added: '1.6'
url_password:
description:
- The password for use in HTTP basic authentication.
- If the C(url_username) parameter is not specified, the C(url_password) parameter will not be used.
- Since version 2.8 you can also use the 'password' alias for this option.
type: str
aliases: ['password']
version_added: '1.6'
force_basic_auth:
description:
- Force the sending of the Basic authentication header upon initial request.
    - httplib2, the library used by the uri module, only sends authentication information when a webservice
      responds to an initial request with a 401 status. Since some basic auth services do not properly
send a 401, logins will fail.
type: bool
default: no
version_added: '2.0'
client_cert:
description:
- PEM formatted certificate chain file to be used for SSL client authentication.
- This file can also include the key as well, and if the key is included, C(client_key) is not required.
type: path
version_added: '2.4'
client_key:
description:
- PEM formatted file that contains your private key to be used for SSL client authentication.
- If C(client_cert) contains both the certificate and key, this option is not required.
type: path
version_added: '2.4'
http_agent:
description:
- Header to identify as, generally appears in web server logs.
type: str
default: ansible-httpget
use_gssapi:
description:
- Use GSSAPI to perform the authentication; typically this is used for Kerberos or Kerberos through Negotiate
authentication.
- Requires the Python library L(gssapi,https://github.com/pythongssapi/python-gssapi) to be installed.
- Credentials for GSSAPI can be specified with I(url_username)/I(url_password) or with the GSSAPI env var
C(KRB5CCNAME) that specifies a custom Kerberos credential cache.
- NTLM authentication is C(not) supported even if the GSSAPI mech for NTLM has been installed.
type: bool
default: no
version_added: '2.11'
# informational: requirements for nodes
extends_documentation_fragment:
- files
notes:
- For Windows targets, use the M(ansible.windows.win_get_url) module instead.
seealso:
- module: ansible.builtin.uri
- module: ansible.windows.win_get_url
author:
- Jan-Piet Mens (@jpmens)
'''
EXAMPLES = r'''
- name: Download foo.conf
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
mode: '0440'
- name: Download file and force basic auth
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
force_basic_auth: yes
- name: Download file with custom HTTP headers
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
headers:
key1: one
key2: two
- name: Download file with check (sha256)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c
- name: Download file with check (md5)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: md5:66dffb5228a211e61d6d7ef4a86f5758
- name: Download file with checksum url (sha256)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: sha256:http://example.com/path/sha256sum.txt
- name: Download file from a file path
get_url:
url: file:///tmp/afile.txt
dest: /tmp/afilecopy.txt
- name: Fetch file that requires authentication.
username/password only available since 2.8; in older versions you need to use url_username/url_password
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
username: bar
password: '{{ mysecret }}'
'''
RETURN = r'''
backup_file:
description: name of backup file created after download
returned: changed and if backup=yes
type: str
sample: /path/to/file.txt.2015-02-12@22:09~
checksum_dest:
description: sha1 checksum of the file after copy
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
checksum_src:
description: sha1 checksum of the file
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
dest:
description: destination file/path
returned: success
type: str
sample: /path/to/file.txt
elapsed:
description: The number of seconds that elapsed while performing the download
returned: always
type: int
sample: 23
gid:
description: group id of the file
returned: success
type: int
sample: 100
group:
description: group of the file
returned: success
type: str
sample: "httpd"
md5sum:
description: md5 checksum of the file after download
returned: when supported
type: str
sample: "2a5aeecc61dc98c4d780b14b330e3282"
mode:
description: permissions of the target
returned: success
type: str
sample: "0644"
msg:
description: the HTTP message from the request
returned: always
type: str
sample: OK (unknown bytes)
owner:
description: owner of the file
returned: success
type: str
sample: httpd
secontext:
description: the SELinux security context of the file
returned: success
type: str
sample: unconfined_u:object_r:user_tmp_t:s0
size:
description: size of the target
returned: success
type: int
sample: 1220
src:
description: source file used after download
returned: always
type: str
sample: /tmp/tmpAdFLdV
state:
description: state of the target
returned: success
type: str
sample: file
status_code:
description: the HTTP status code from the request
returned: always
type: int
sample: 200
uid:
description: owner id of the file, after execution
returned: success
type: int
sample: 100
url:
description: the actual URL used for the request
returned: always
type: str
sample: https://www.ansible.com/
'''
import datetime
import os
import re
import shutil
import tempfile
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves.urllib.parse import urlsplit
from ansible.module_utils._text import to_native
from ansible.module_utils.urls import fetch_url, url_argument_spec
# ==============================================================
# url handling
def url_filename(url):
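"""Derive a local filename from the URL path, defaulting to index.html when the path has no basename."""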
fn = os.path.basename(urlsplit(url)[2])
if fn == '':
return 'index.html'
return fn
def url_get(module, url, dest, use_proxy, last_mod_time, force, timeout=10, headers=None, tmp_dest=''):
"""
Download data from the url and store in a temporary file.
Return (tempfile, info about the request)
"""
if module.check_mode:
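# a HEAD request is enough in check mode; nothing is actually downloaded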
method = 'HEAD'
else:
method = 'GET'
start = datetime.datetime.utcnow()
rsp, info = fetch_url(module, url, use_proxy=use_proxy, force=force, last_mod_time=last_mod_time, timeout=timeout, headers=headers, method=method)
elapsed = (datetime.datetime.utcnow() - start).seconds
if info['status'] == 304:
module.exit_json(url=url, dest=dest, changed=False, msg=info.get('msg', ''), status_code=info['status'], elapsed=elapsed)
# Exceptions in fetch_url may result in a status -1; this ensures a proper error reaches the user in all cases
if info['status'] == -1:
module.fail_json(msg=info['msg'], url=url, dest=dest, elapsed=elapsed)
if info['status'] != 200 and not url.startswith('file:/') and not (url.startswith('ftp:/') and info.get('msg', '').startswith('OK')):
module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], url=url, dest=dest, elapsed=elapsed)
# create a temporary file and copy content to do checksum-based replacement
if tmp_dest:
# tmp_dest should be an existing dir
tmp_dest_is_dir = os.path.isdir(tmp_dest)
if not tmp_dest_is_dir:
if os.path.exists(tmp_dest):
module.fail_json(msg="%s is a file but should be a directory." % tmp_dest, elapsed=elapsed)
else:
module.fail_json(msg="%s directory does not exist." % tmp_dest, elapsed=elapsed)
else:
tmp_dest = module.tmpdir
fd, tempname = tempfile.mkstemp(dir=tmp_dest)
f = os.fdopen(fd, 'wb')
try:
shutil.copyfileobj(rsp, f)
except Exception as e:
os.remove(tempname)
module.fail_json(msg="failed to create temporary content file: %s" % to_native(e), elapsed=elapsed, exception=traceback.format_exc())
f.close()
rsp.close()
return tempname, info
def extract_filename_from_headers(headers):
"""
Extracts a filename from the given dict of HTTP headers.
Looks for the content-disposition header and applies a regex.
Returns the filename if successful, else None."""
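# Example: a header value of 'attachment; filename="foo.conf"' yields 'foo.conf'.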
cont_disp_regex = 'attachment; ?filename="?([^"]+)'
res = None
if 'content-disposition' in headers:
cont_disp = headers['content-disposition']
match = re.match(cont_disp_regex, cont_disp)
if match:
res = match.group(1)
# Try preventing any funny business.
res = os.path.basename(res)
return res
def is_url(checksum):
"""
Returns True if checksum value has supported URL scheme, else False."""
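# e.g. 'sha256:deadbeef' -> False, while 'https://example.com/sha256sum.txt' -> True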
supported_schemes = ('http', 'https', 'ftp', 'file')
return urlsplit(checksum).scheme in supported_schemes
# ==============================================================
# main
def main():
argument_spec = url_argument_spec()
# setup aliases
argument_spec['url_username']['aliases'] = ['username']
argument_spec['url_password']['aliases'] = ['password']
argument_spec.update(
url=dict(type='str', required=True),
dest=dict(type='path', required=True),
backup=dict(type='bool', default=False),
sha256sum=dict(type='str', default=''),
checksum=dict(type='str', default=''),
timeout=dict(type='int', default=10),
headers=dict(type='dict'),
tmp_dest=dict(type='path'),
)
module = AnsibleModule(
# not checking these arguments here because they are daisy-chained to the file module
argument_spec=argument_spec,
add_file_common_args=True,
supports_check_mode=True,
mutually_exclusive=[['checksum', 'sha256sum']],
)
if module.params.get('thirsty'):
module.deprecate('The alias "thirsty" has been deprecated and will be removed, use "force" instead',
version='2.13', collection_name='ansible.builtin')
if module.params.get('sha256sum'):
module.deprecate('The parameter "sha256sum" has been deprecated and will be removed, use "checksum" instead',
version='2.14', collection_name='ansible.builtin')
url = module.params['url']
dest = module.params['dest']
backup = module.params['backup']
force = module.params['force']
sha256sum = module.params['sha256sum']
checksum = module.params['checksum']
use_proxy = module.params['use_proxy']
timeout = module.params['timeout']
headers = module.params['headers']
tmp_dest = module.params['tmp_dest']
result = dict(
changed=False,
checksum_dest=None,
checksum_src=None,
dest=dest,
elapsed=0,
url=url,
)
dest_is_dir = os.path.isdir(dest)
last_mod_time = None
# workaround for usage of deprecated sha256sum parameter
if sha256sum:
checksum = 'sha256:%s' % (sha256sum)
# checksum specified, parse for algorithm and checksum
if checksum:
try:
algorithm, checksum = checksum.split(':', 1)
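# e.g. 'sha256:deadbeef' splits into algorithm 'sha256' and checksum 'deadbeef'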
except ValueError:
module.fail_json(msg="The checksum parameter has to be in format <algorithm>:<checksum>", **result)
if is_url(checksum):
checksum_url = checksum
# download checksum file to checksum_tmpsrc
checksum_tmpsrc, checksum_info = url_get(module, checksum_url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest)
with open(checksum_tmpsrc) as f:
lines = [line.rstrip('\n') for line in f]
os.remove(checksum_tmpsrc)
checksum_map = []
for line in lines:
parts = line.split(None, 1)
if len(parts) == 2:
checksum_map.append((parts[0], parts[1]))
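# checksum_map now holds (hash, filename) pairs parsed from sha256sum-style
# lines such as 'b5bb9d80...  ./file.conf'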
filename = url_filename(url)
# Look through each line in the checksum file for a hash corresponding to
# the filename in the url, returning the first hash that is found.
for cksum in (s for (s, f) in checksum_map if f.strip('./') == filename):
checksum = cksum
break
else:
checksum = None
if checksum is None:
module.fail_json(msg="Unable to find a checksum for file '%s' in '%s'" % (filename, checksum_url))
# Remove any non-alphanumeric characters, including the infamous
# Unicode zero-width space
checksum = re.sub(r'\W+', '', checksum).lower()
# Ensure the checksum portion is a hexdigest
try:
int(checksum, 16)
except ValueError:
module.fail_json(msg='The checksum format is invalid', **result)
if not dest_is_dir and os.path.exists(dest):
checksum_mismatch = False
# If the download is not forced and there is a checksum, allow
# checksum match to skip the download.
if not force and checksum != '':
destination_checksum = module.digest_from_file(dest, algorithm)
if checksum != destination_checksum:
checksum_mismatch = True
# Not forcing redownload, unless checksum does not match
if not force and checksum and not checksum_mismatch:
# allow file attribute changes
file_args = module.load_file_common_arguments(module.params, path=dest)
result['changed'] = module.set_fs_attributes_if_different(file_args, False)
if result['changed']:
module.exit_json(msg="file already exists but file attributes changed", **result)
module.exit_json(msg="file already exists", **result)
# If the file already exists, prepare the last modified time for the
# request.
mtime = os.path.getmtime(dest)
last_mod_time = datetime.datetime.utcfromtimestamp(mtime)
# If the checksum does not match we have to force the download
# because last_mod_time may be newer than on remote
if checksum_mismatch:
force = True
# download to tmpsrc
start = datetime.datetime.utcnow()
tmpsrc, info = url_get(module, url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest)
result['elapsed'] = (datetime.datetime.utcnow() - start).seconds
result['src'] = tmpsrc
# Now that the request has completed, we can finally generate the final
# destination file name from the info dict.
if dest_is_dir:
filename = extract_filename_from_headers(info)
if not filename:
# Fall back to extracting the filename from the URL.
# Pluck the URL from the info, since a redirect could have changed
# it.
filename = url_filename(info['url'])
dest = os.path.join(dest, filename)
result['dest'] = dest
# raise an error if there is no tmpsrc file
if not os.path.exists(tmpsrc):
os.remove(tmpsrc)
module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], **result)
if not os.access(tmpsrc, os.R_OK):
os.remove(tmpsrc)
module.fail_json(msg="Source %s is not readable" % (tmpsrc), **result)
result['checksum_src'] = module.sha1(tmpsrc)
# check whether the dest file already exists
if os.path.exists(dest):
# raise an error if copy has no permission on dest
if not os.access(dest, os.W_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not writable" % (dest), **result)
if not os.access(dest, os.R_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not readable" % (dest), **result)
result['checksum_dest'] = module.sha1(dest)
else:
if not os.path.exists(os.path.dirname(dest)):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s does not exist" % (os.path.dirname(dest)), **result)
if not os.access(os.path.dirname(dest), os.W_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not writable" % (os.path.dirname(dest)), **result)
if module.check_mode:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
result['changed'] = ('checksum_dest' not in result or
result['checksum_src'] != result['checksum_dest'])
module.exit_json(msg=info.get('msg', ''), **result)
backup_file = None
if result['checksum_src'] != result['checksum_dest']:
try:
if backup:
if os.path.exists(dest):
backup_file = module.backup_local(dest)
module.atomic_move(tmpsrc, dest)
except Exception as e:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
module.fail_json(msg="failed to copy %s to %s: %s" % (tmpsrc, dest, to_native(e)),
exception=traceback.format_exc(), **result)
result['changed'] = True
else:
result['changed'] = False
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
if checksum != '':
destination_checksum = module.digest_from_file(dest, algorithm)
if checksum != destination_checksum:
os.remove(dest)
module.fail_json(msg="The checksum for %s did not match %s; it was %s." % (dest, checksum, destination_checksum), **result)
# allow file attribute changes
file_args = module.load_file_common_arguments(module.params, path=dest)
result['changed'] = module.set_fs_attributes_if_different(file_args, result['changed'])
# Backwards compat only. We'll return None on FIPS enabled systems
try:
result['md5sum'] = module.md5(dest)
except ValueError:
result['md5sum'] = None
if backup_file:
result['backup_file'] = backup_file
# Mission complete
module.exit_json(msg=info.get('msg', ''), status_code=info.get('status', ''), **result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,981 |
get_url does not respect unsafe_writes
|
##### SUMMARY
`get_url` inherits from `add_file_common_args`, which includes `unsafe_writes`; however, our call to `atomic_move` does not pass this argument along.
https://github.com/ansible/ansible/blob/5226ac5778d3b57296b925de5d4ad0b485bb11cd/lib/ansible/modules/get_url.py#L463
https://github.com/ansible/ansible/blob/5226ac5778d3b57296b925de5d4ad0b485bb11cd/lib/ansible/modules/get_url.py#L633
```diff
diff --git a/lib/ansible/modules/get_url.py b/lib/ansible/modules/get_url.py
index 6ac370c2ef..c29d69e5c9 100644
--- a/lib/ansible/modules/get_url.py
+++ b/lib/ansible/modules/get_url.py
@@ -630,7 +630,7 @@ def main():
if backup:
if os.path.exists(dest):
backup_file = module.backup_local(dest)
- module.atomic_move(tmpsrc, dest)
+ module.atomic_move(tmpsrc, dest, unsafe_writes=module.params['unsafe_writes'])
except Exception as e:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/get_url.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9
2.10
2.11
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/72981
|
https://github.com/ansible/ansible/pull/70722
|
202689b1c0560b68a93e93d0a250ea186a8e3e1a
|
932ba3616067007fd5e449611a34e7e3837fc8ae
| 2020-12-15T17:51:02Z |
python
| 2020-12-21T16:20:52Z |
test/integration/targets/unsafe_writes/aliases
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,981 |
get_url does not respect unsafe_writes
|
##### SUMMARY
`get_url` inherits from `add_file_common_args`, which includes `unsafe_writes`; however, our call to `atomic_move` does not pass this argument along.
https://github.com/ansible/ansible/blob/5226ac5778d3b57296b925de5d4ad0b485bb11cd/lib/ansible/modules/get_url.py#L463
https://github.com/ansible/ansible/blob/5226ac5778d3b57296b925de5d4ad0b485bb11cd/lib/ansible/modules/get_url.py#L633
```diff
diff --git a/lib/ansible/modules/get_url.py b/lib/ansible/modules/get_url.py
index 6ac370c2ef..c29d69e5c9 100644
--- a/lib/ansible/modules/get_url.py
+++ b/lib/ansible/modules/get_url.py
@@ -630,7 +630,7 @@ def main():
if backup:
if os.path.exists(dest):
backup_file = module.backup_local(dest)
- module.atomic_move(tmpsrc, dest)
+ module.atomic_move(tmpsrc, dest, unsafe_writes=module.params['unsafe_writes'])
except Exception as e:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/get_url.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9
2.10
2.11
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/72981
|
https://github.com/ansible/ansible/pull/70722
|
202689b1c0560b68a93e93d0a250ea186a8e3e1a
|
932ba3616067007fd5e449611a34e7e3837fc8ae
| 2020-12-15T17:51:02Z |
python
| 2020-12-21T16:20:52Z |
test/integration/targets/unsafe_writes/basic.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,981 |
get_url does not respect unsafe_writes
|
##### SUMMARY
`get_url` inherits from `add_file_common_args`, which includes `unsafe_writes`; however, our call to `atomic_move` does not pass this argument along.
https://github.com/ansible/ansible/blob/5226ac5778d3b57296b925de5d4ad0b485bb11cd/lib/ansible/modules/get_url.py#L463
https://github.com/ansible/ansible/blob/5226ac5778d3b57296b925de5d4ad0b485bb11cd/lib/ansible/modules/get_url.py#L633
```diff
diff --git a/lib/ansible/modules/get_url.py b/lib/ansible/modules/get_url.py
index 6ac370c2ef..c29d69e5c9 100644
--- a/lib/ansible/modules/get_url.py
+++ b/lib/ansible/modules/get_url.py
@@ -630,7 +630,7 @@ def main():
if backup:
if os.path.exists(dest):
backup_file = module.backup_local(dest)
- module.atomic_move(tmpsrc, dest)
+ module.atomic_move(tmpsrc, dest, unsafe_writes=module.params['unsafe_writes'])
except Exception as e:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/get_url.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9
2.10
2.11
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/72981
|
https://github.com/ansible/ansible/pull/70722
|
202689b1c0560b68a93e93d0a250ea186a8e3e1a
|
932ba3616067007fd5e449611a34e7e3837fc8ae
| 2020-12-15T17:51:02Z |
python
| 2020-12-21T16:20:52Z |
test/integration/targets/unsafe_writes/runme.sh
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,981 |
get_url does not respect unsafe_writes
|
##### SUMMARY
`get_url` inherits from `add_file_common_args`, which includes `unsafe_writes`; however, our call to `atomic_move` does not pass this argument along.
https://github.com/ansible/ansible/blob/5226ac5778d3b57296b925de5d4ad0b485bb11cd/lib/ansible/modules/get_url.py#L463
https://github.com/ansible/ansible/blob/5226ac5778d3b57296b925de5d4ad0b485bb11cd/lib/ansible/modules/get_url.py#L633
```diff
diff --git a/lib/ansible/modules/get_url.py b/lib/ansible/modules/get_url.py
index 6ac370c2ef..c29d69e5c9 100644
--- a/lib/ansible/modules/get_url.py
+++ b/lib/ansible/modules/get_url.py
@@ -630,7 +630,7 @@ def main():
if backup:
if os.path.exists(dest):
backup_file = module.backup_local(dest)
- module.atomic_move(tmpsrc, dest)
+ module.atomic_move(tmpsrc, dest, unsafe_writes=module.params['unsafe_writes'])
except Exception as e:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/get_url.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9
2.10
2.11
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/72981
|
https://github.com/ansible/ansible/pull/70722
|
202689b1c0560b68a93e93d0a250ea186a8e3e1a
|
932ba3616067007fd5e449611a34e7e3837fc8ae
| 2020-12-15T17:51:02Z |
python
| 2020-12-21T16:20:52Z |
test/units/module_utils/basic/test_atomic_move.py
|
# -*- coding: utf-8 -*-
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2016 Toshio Kuratomi <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import os
import errno
import json
from itertools import product
import pytest
from ansible.module_utils import basic
@pytest.fixture
def atomic_am(am, mocker):
am.selinux_enabled = mocker.MagicMock()
am.selinux_context = mocker.MagicMock()
am.selinux_default_context = mocker.MagicMock()
am.set_context_if_different = mocker.MagicMock()
yield am
@pytest.fixture
def atomic_mocks(mocker, monkeypatch):
environ = dict()
mocks = {
'chmod': mocker.patch('os.chmod'),
'chown': mocker.patch('os.chown'),
'close': mocker.patch('os.close'),
'environ': mocker.patch('os.environ', environ),
'getlogin': mocker.patch('os.getlogin'),
'getuid': mocker.patch('os.getuid'),
'path_exists': mocker.patch('os.path.exists'),
'rename': mocker.patch('os.rename'),
'stat': mocker.patch('os.stat'),
'umask': mocker.patch('os.umask'),
'getpwuid': mocker.patch('pwd.getpwuid'),
'copy2': mocker.patch('shutil.copy2'),
'copyfileobj': mocker.patch('shutil.copyfileobj'),
'move': mocker.patch('shutil.move'),
'mkstemp': mocker.patch('tempfile.mkstemp'),
}
mocks['getlogin'].return_value = 'root'
mocks['getuid'].return_value = 0
mocks['getpwuid'].return_value = ('root', '', 0, 0, '', '', '')
mocks['umask'].side_effect = [18, 0]
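# 18 == 0o022, a typical default umask; the assertions below therefore
# expect DEFAULT_PERM & ~18, i.e. 0o666 & ~0o022 == 0o644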
mocks['rename'].return_value = None
# normalize OS specific features
monkeypatch.delattr(os, 'chflags', raising=False)
yield mocks
@pytest.fixture
def fake_stat(mocker):
stat1 = mocker.MagicMock()
stat1.st_mode = 0o0644
stat1.st_uid = 0
stat1.st_gid = 0
stat1.st_flags = 0
yield stat1
@pytest.mark.parametrize('stdin, selinux', product([{}], (True, False)), indirect=['stdin'])
def test_new_file(atomic_am, atomic_mocks, mocker, selinux):
# test destination does not exist, login name = 'root', no environment, os.rename() succeeds
mock_context = atomic_am.selinux_default_context.return_value
atomic_mocks['path_exists'].return_value = False
atomic_am.selinux_enabled.return_value = selinux
atomic_am.atomic_move('/path/to/src', '/path/to/dest')
atomic_mocks['rename'].assert_called_with(b'/path/to/src', b'/path/to/dest')
assert atomic_mocks['chmod'].call_args_list == [mocker.call(b'/path/to/dest', basic.DEFAULT_PERM & ~18)]
if selinux:
assert atomic_am.selinux_default_context.call_args_list == [mocker.call('/path/to/dest')]
assert atomic_am.set_context_if_different.call_args_list == [mocker.call('/path/to/dest', mock_context, False)]
else:
assert not atomic_am.selinux_default_context.called
assert not atomic_am.set_context_if_different.called
@pytest.mark.parametrize('stdin, selinux', product([{}], (True, False)), indirect=['stdin'])
def test_existing_file(atomic_am, atomic_mocks, fake_stat, mocker, selinux):
# Test destination already present
mock_context = atomic_am.selinux_context.return_value
atomic_mocks['stat'].return_value = fake_stat
atomic_mocks['path_exists'].return_value = True
atomic_am.selinux_enabled.return_value = selinux
atomic_am.atomic_move('/path/to/src', '/path/to/dest')
atomic_mocks['rename'].assert_called_with(b'/path/to/src', b'/path/to/dest')
assert atomic_mocks['chmod'].call_args_list == [mocker.call(b'/path/to/src', basic.DEFAULT_PERM & ~18)]
if selinux:
assert atomic_am.set_context_if_different.call_args_list == [mocker.call('/path/to/dest', mock_context, False)]
assert atomic_am.selinux_context.call_args_list == [mocker.call('/path/to/dest')]
else:
assert not atomic_am.selinux_default_context.called
assert not atomic_am.set_context_if_different.called
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_no_tty_fallback(atomic_am, atomic_mocks, fake_stat, mocker):
"""Raise OSError when using getlogin() to simulate no tty cornercase"""
mock_context = atomic_am.selinux_context.return_value
atomic_mocks['stat'].return_value = fake_stat
atomic_mocks['path_exists'].return_value = True
atomic_am.selinux_enabled.return_value = True
atomic_mocks['getlogin'].side_effect = OSError()
atomic_mocks['environ']['LOGNAME'] = 'root'
atomic_am.atomic_move('/path/to/src', '/path/to/dest')
atomic_mocks['rename'].assert_called_with(b'/path/to/src', b'/path/to/dest')
assert atomic_mocks['chmod'].call_args_list == [mocker.call(b'/path/to/src', basic.DEFAULT_PERM & ~18)]
assert atomic_am.set_context_if_different.call_args_list == [mocker.call('/path/to/dest', mock_context, False)]
assert atomic_am.selinux_context.call_args_list == [mocker.call('/path/to/dest')]
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_existing_file_stat_failure(atomic_am, atomic_mocks, mocker):
"""Failure to stat an existing file in order to copy permissions propogates the error (unless EPERM)"""
atomic_mocks['stat'].side_effect = OSError()
atomic_mocks['path_exists'].return_value = True
with pytest.raises(OSError):
atomic_am.atomic_move('/path/to/src', '/path/to/dest')
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_existing_file_stat_perms_failure(atomic_am, atomic_mocks, mocker):
"""Failure to stat an existing file to copy the permissions due to permissions passes fine"""
# and now have os.stat return EPERM, which should not fail
mock_context = atomic_am.selinux_context.return_value
atomic_mocks['stat'].side_effect = OSError(errno.EPERM, 'testing os stat with EPERM')
atomic_mocks['path_exists'].return_value = True
atomic_am.selinux_enabled.return_value = True
atomic_am.atomic_move('/path/to/src', '/path/to/dest')
atomic_mocks['rename'].assert_called_with(b'/path/to/src', b'/path/to/dest')
# FIXME: Should atomic_move() set a default permission value when it cannot retrieve the
# existing file's permissions? (Right now it's up to the calling code.)
# assert atomic_mocks['chmod'].call_args_list == [mocker.call(b'/path/to/src', basic.DEFAULT_PERM & ~18)]
assert atomic_am.set_context_if_different.call_args_list == [mocker.call('/path/to/dest', mock_context, False)]
assert atomic_am.selinux_context.call_args_list == [mocker.call('/path/to/dest')]
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_rename_failure(atomic_am, atomic_mocks, mocker, capfd):
"""Test os.rename fails with EIO, causing it to bail out"""
atomic_mocks['path_exists'].side_effect = [False, False]
atomic_mocks['rename'].side_effect = OSError(errno.EIO, 'failing with EIO')
with pytest.raises(SystemExit):
atomic_am.atomic_move('/path/to/src', '/path/to/dest')
out, err = capfd.readouterr()
results = json.loads(out)
assert 'Could not replace file' in results['msg']
assert 'failing with EIO' in results['msg']
assert results['failed']
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_rename_perms_fail_temp_creation_fails(atomic_am, atomic_mocks, mocker, capfd):
"""Test os.rename fails with EPERM working but failure in mkstemp"""
atomic_mocks['path_exists'].return_value = False
atomic_mocks['close'].return_value = None
atomic_mocks['rename'].side_effect = [OSError(errno.EPERM, 'failing with EPERM'), None]
atomic_mocks['mkstemp'].return_value = None
atomic_mocks['mkstemp'].side_effect = OSError()
atomic_am.selinux_enabled.return_value = False
with pytest.raises(SystemExit):
atomic_am.atomic_move('/path/to/src', '/path/to/dest')
out, err = capfd.readouterr()
results = json.loads(out)
assert 'is not writable by the current user' in results['msg']
assert results['failed']
@pytest.mark.parametrize('stdin, selinux', product([{}], (True, False)), indirect=['stdin'])
def test_rename_perms_fail_temp_succeeds(atomic_am, atomic_mocks, fake_stat, mocker, selinux):
"""Test os.rename raising an error but fallback to using mkstemp works"""
mock_context = atomic_am.selinux_default_context.return_value
atomic_mocks['path_exists'].return_value = False
atomic_mocks['rename'].side_effect = [OSError(errno.EPERM, 'failing with EPERM'), None]
atomic_mocks['stat'].return_value = fake_stat
atomic_mocks['stat'].side_effect = None
atomic_mocks['mkstemp'].return_value = (None, '/path/to/tempfile')
atomic_mocks['mkstemp'].side_effect = None
atomic_am.selinux_enabled.return_value = selinux
atomic_am.atomic_move('/path/to/src', '/path/to/dest')
assert atomic_mocks['rename'].call_args_list == [mocker.call(b'/path/to/src', b'/path/to/dest'),
mocker.call(b'/path/to/tempfile', b'/path/to/dest')]
assert atomic_mocks['chmod'].call_args_list == [mocker.call(b'/path/to/dest', basic.DEFAULT_PERM & ~18)]
if selinux:
assert atomic_am.selinux_default_context.call_args_list == [mocker.call('/path/to/dest')]
assert atomic_am.set_context_if_different.call_args_list == [mocker.call(b'/path/to/tempfile', mock_context, False),
mocker.call('/path/to/dest', mock_context, False)]
else:
assert not atomic_am.selinux_default_context.called
assert not atomic_am.set_context_if_different.called
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,535 |
Copy fails on Proxmox VE's CFS filesystem and unsafe_writes does not work
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I cannot seem to copy a file into the `/etc/pve/` directory of a Proxmox VE server. This is a FUSE filesystem implemented in `/usr/bin/pmxcfs`, the Proxmox Cluster File System aka CFS.
CFS has a peculiarity in that it does not allow the `chmod` system call, returning Errno 1: Operation not permitted.
The problem is that I cannot find a way to prevent `copy` from issuing a chmod on the target file (even if I set `mode: preserve`), nor can I make it use a direct write (even if I set `unsafe_writes: yes`); instead it always terminates the task with an "Operation not permitted" error.
I think if `unsafe_writes` had a "force" value, this would be a non-issue. Maybe https://github.com/ansible/ansible/issues/24449 could be reopened?
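For illustration, a minimal sketch of the failure mode, assuming a Proxmox node with the CFS mount at `/etc/pve`: Python's `shutil.copy2()` unconditionally calls `os.chmod()` on the destination, and it is that chmod which CFS refuses even for root (see the traceback in the actual results below).
```python
import shutil

# Hypothetical reproduction on a Proxmox node (paths assumed): copy2() first
# copies the file data, then copies metadata via copystat() -> os.chmod(),
# and the chmod is what CFS rejects.
shutil.copy2('/tmp/test', '/etc/pve/local/test')
# -> OSError: [Errno 1] Operation not permitted
```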
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`copy`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tobia/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
(none)
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS is Proxmox PVE 6.1, based on Debian 10 Buster.
The package pve-cluster, which contains the CFS filesystem implementation, is at version 6.1-8, but Ansible has always had this issue with all versions of CFS.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Try to create a file in the CFS filesystem:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: HOSTNAME_REDACTED
become: true
tasks:
- name: Test file in CFS filesystem
copy:
dest: /etc/pve/local/test
content: Hello.
unsafe_writes: yes
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
With `unsafe_writes` enabled, I would expect the test file to be created, because both Bash and Vim can do it with no trouble:
```
# echo World > /etc/pve/local/test
```
and
```
# vim /etc/pve/local/test
```
both work.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
For some reason, `copy` does not even try to perform a direct write. It always goes to the `shutil.copy2()` route, which will invariably fail on the `os.chmod()` call.
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Test file in CFS filesystem] ******************************************************************************************************************************************
task path: /home/tobia/proj/ansible/test.yaml:4
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<IP_REDACTED> (0, b'/home/tobia\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510 `" && echo ansible-tmp-1594285570.2494516-168830778907510="` echo /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510 `" ) && sleep 0'"'"''
<IP_REDACTED> (0, b'ansible-tmp-1594285570.2494516-168830778907510=/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /usr/lib/python3/dist-packages/ansible/modules/files/stat.py
<IP_REDACTED> PUT /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpxsjb4aw6 TO /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py
<IP_REDACTED> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b '[IP_REDACTED]'
<IP_REDACTED> (0, b'sftp> put /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpxsjb4aw6 /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/tobia size 0\r\ndebug3: Looking up /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpxsjb4aw6\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:10345\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 10345 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'chmod u+x /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b -tt IP_REDACTED '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-bcosprpmvlvgctstkbcyztnsdploapgs ; /usr/bin/python /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<IP_REDACTED> (0, b'\r\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "path": "/etc/pve/local/test", "get_md5": false, "get_mime": true, "get_attributes": true}}, "stat": {"exists": false}, "changed": false}\r\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to IP_REDACTED closed.\r\n')
<IP_REDACTED> PUT /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpw3tfcrhr TO /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source
<IP_REDACTED> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b '[IP_REDACTED]'
<IP_REDACTED> (0, b'sftp> put /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpw3tfcrhr /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/tobia size 0\r\ndebug3: Looking up /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpw3tfcrhr\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:6\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 6 bytes at 0\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'chmod u+x /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /usr/lib/python3/dist-packages/ansible/modules/files/copy.py
<IP_REDACTED> PUT /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpn5cste59 TO /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py
<IP_REDACTED> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b '[IP_REDACTED]'
<IP_REDACTED> (0, b'sftp> put /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpn5cste59 /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/tobia size 0\r\ndebug3: Looking up /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpn5cste59\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:14834\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 14834 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'chmod u+x /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b -tt IP_REDACTED '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-nfbvibtaerdhzqnygptsdnqyifxvplid ; /usr/bin/python /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<IP_REDACTED> (1, b'\r\n{"msg": "Failed to replace file: /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source to /etc/pve/local/test: [Errno 1] Operation not permitted: \'/etc/pve/local/.ansible_tmp7FFX7ytest\'", "failed": true, "exception": "Traceback (most recent call last):\\n File \\"/tmp/ansible_copy_payload_xkp36J/ansible_copy_payload.zip/ansible/module_utils/basic.py\\", line 2299, in atomic_move\\n shutil.copy2(b_src, b_tmp_dest_name)\\n File \\"/usr/lib/python2.7/shutil.py\\", line 154, in copy2\\n copystat(src, dst)\\n File \\"/usr/lib/python2.7/shutil.py\\", line 120, in copystat\\n os.chmod(dst, mode)\\nOSError: [Errno 1] Operation not permitted: \'/etc/pve/local/.ansible_tmp7FFX7ytest\'\\n", "invocation": {"module_args": {"directory_mode": null, "force": true, "remote_src": null, "_original_basename": "tmpw3tfcrhr", "owner": null, "follow": false, "local_follow": null, "group": null, "unsafe_writes": true, "setype": null, "content": null, "serole": null, "dest": "/etc/pve/local/test", "selevel": null, "regexp": null, "validate": null, "src": "/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source", "checksum": "9b56d519ccd9e1e5b2a725e186184cdc68de0731", "seuser": null, "delimiter": null, "mode": null, "attributes": null, "backup": false}}}\r\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to IP_REDACTED closed.\r\n')
<IP_REDACTED> Failed to connect to the host via ssh: OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /home/tobia/.ssh/config
debug1: /home/tobia/.ssh/config line 1: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolve_canonicalize: hostname IP_REDACTED is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 979558
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to IP_REDACTED closed.
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'rm -f -r /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ > /dev/null 2>&1 && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_copy_payload_xkp36J/ansible_copy_payload.zip/ansible/module_utils/basic.py", line 2299, in atomic_move
shutil.copy2(b_src, b_tmp_dest_name)
File "/usr/lib/python2.7/shutil.py", line 154, in copy2
copystat(src, dst)
File "/usr/lib/python2.7/shutil.py", line 120, in copystat
os.chmod(dst, mode)
OSError: [Errno 1] Operation not permitted: '/etc/pve/local/.ansible_tmp7FFX7ytest'
fatal: [HOSTNAME_REDACTED]: FAILED! => {
"changed": false,
"checksum": "9b56d519ccd9e1e5b2a725e186184cdc68de0731",
"diff": [],
"invocation": {
"module_args": {
"_original_basename": "tmpw3tfcrhr",
"attributes": null,
"backup": false,
"checksum": "9b56d519ccd9e1e5b2a725e186184cdc68de0731",
"content": null,
"delimiter": null,
"dest": "/etc/pve/local/test",
"directory_mode": null,
"follow": false,
"force": true,
"group": null,
"local_follow": null,
"mode": null,
"owner": null,
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source",
"unsafe_writes": true,
"validate": null
}
},
"msg": "Failed to replace file: /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source to /etc/pve/local/test: [Errno 1] Operation not permitted: '/etc/pve/local/.ansible_tmp7FFX7ytest'"
}
```
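For context, the traceback shows `shutil.copy2` failing inside `atomic_move` when it tries to `copystat()`/`os.chmod()` the temporary file; `/etc/pve` is a FUSE mount that forbids changing modes. A minimal sketch, assuming a hypothetical `copy_with_fallback` helper (not the actual patch), of an `unsafe_writes`-aware fallback around that step:

```python
# Hypothetical sketch, not the real fix: skip metadata copying when the
# filesystem refuses it and unsafe_writes is enabled.
import shutil

def copy_with_fallback(b_src, b_tmp_dest_name, unsafe_writes=False):
    try:
        shutil.copy2(b_src, b_tmp_dest_name)  # copies contents and metadata (mode, times)
    except OSError:
        if not unsafe_writes:
            raise
        # e.g. /etc/pve rejects chmod(); fall back to copying the bytes only
        shutil.copyfile(b_src, b_tmp_dest_name)
```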
|
https://github.com/ansible/ansible/issues/70535
|
https://github.com/ansible/ansible/pull/70722
|
202689b1c0560b68a93e93d0a250ea186a8e3e1a
|
932ba3616067007fd5e449611a34e7e3837fc8ae
| 2020-07-09T09:10:49Z |
python
| 2020-12-21T16:20:52Z |
lib/ansible/module_utils/basic.py
|
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013
# Copyright (c), Toshio Kuratomi <[email protected]> 2016
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
FILE_ATTRIBUTES = {
'A': 'noatime',
'a': 'append',
'c': 'compressed',
'C': 'nocow',
'd': 'nodump',
'D': 'dirsync',
'e': 'extents',
'E': 'encrypted',
'h': 'blocksize',
'i': 'immutable',
'I': 'indexed',
'j': 'journalled',
'N': 'inline',
's': 'zero',
'S': 'synchronous',
't': 'notail',
'T': 'blockroot',
'u': 'undelete',
'X': 'compressedraw',
'Z': 'compresseddirty',
}
# Ansible modules can be written in any language.
# The functions available here can be used to do many common tasks,
# to simplify development of Python modules.
import __main__
import atexit
import errno
import datetime
import grp
import fcntl
import locale
import os
import pwd
import platform
import re
import select
import shlex
import shutil
import signal
import stat
import subprocess
import sys
import tempfile
import time
import traceback
import types
from collections import deque
from itertools import chain, repeat
try:
import syslog
HAS_SYSLOG = True
except ImportError:
HAS_SYSLOG = False
try:
from systemd import journal
# Make sure that systemd.journal has the sendv() method (some packages don't)
has_journal = hasattr(journal, 'sendv')
except ImportError:
has_journal = False
HAVE_SELINUX = False
try:
import selinux
HAVE_SELINUX = True
except ImportError:
pass
# Python2 & 3 way to get NoneType
NoneType = type(None)
from ansible.module_utils.compat import selectors
from ._text import to_native, to_bytes, to_text
from ansible.module_utils.common.text.converters import (
jsonify,
container_to_bytes as json_dict_unicode_to_bytes,
container_to_text as json_dict_bytes_to_unicode,
)
from ansible.module_utils.common.text.formatters import (
lenient_lowercase,
bytes_to_human,
human_to_bytes,
SIZE_RANGES,
)
try:
from ansible.module_utils.common._json_compat import json
except ImportError as e:
print('\n{{"msg": "Error: ansible requires the stdlib json: {0}", "failed": true}}'.format(to_native(e)))
sys.exit(1)
AVAILABLE_HASH_ALGORITHMS = dict()
try:
import hashlib
# python 2.7.9+ and 2.7.0+
for attribute in ('available_algorithms', 'algorithms'):
algorithms = getattr(hashlib, attribute, None)
if algorithms:
break
if algorithms is None:
# python 2.5+
algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512')
for algorithm in algorithms:
AVAILABLE_HASH_ALGORITHMS[algorithm] = getattr(hashlib, algorithm)
# we may have been able to import md5 but it could still not be available
try:
hashlib.md5()
except ValueError:
AVAILABLE_HASH_ALGORITHMS.pop('md5', None)
except Exception:
import sha
AVAILABLE_HASH_ALGORITHMS = {'sha1': sha.sha}
try:
import md5
AVAILABLE_HASH_ALGORITHMS['md5'] = md5.md5
except Exception:
pass
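# Illustrative sketch (comments only, not upstream code): the table built above
# maps algorithm names to constructors, so a checksum can be computed as:
#   h = AVAILABLE_HASH_ALGORITHMS['sha1']()
#   h.update(b'some bytes')
#   h.hexdigest()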
from ansible.module_utils.common._collections_compat import (
KeysView,
Mapping, MutableMapping,
Sequence, MutableSequence,
Set, MutableSet,
)
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.file import (
_PERM_BITS as PERM_BITS,
_EXEC_PERM_BITS as EXEC_PERM_BITS,
_DEFAULT_PERM as DEFAULT_PERM,
is_executable,
format_attributes,
get_flags_from_attributes,
)
from ansible.module_utils.common.sys_info import (
get_distribution,
get_distribution_version,
get_platform_subclass,
)
from ansible.module_utils.pycompat24 import get_exception, literal_eval
from ansible.module_utils.common.parameters import (
get_unsupported_parameters,
get_type_validator,
handle_aliases,
list_deprecations,
list_no_log_values,
DEFAULT_TYPE_VALIDATORS,
PASS_VARS,
PASS_BOOLS,
)
from ansible.module_utils.six import (
PY2,
PY3,
b,
binary_type,
integer_types,
iteritems,
string_types,
text_type,
)
from ansible.module_utils.six.moves import map, reduce, shlex_quote
from ansible.module_utils.common.validation import (
check_missing_parameters,
check_mutually_exclusive,
check_required_arguments,
check_required_by,
check_required_if,
check_required_one_of,
check_required_together,
count_terms,
check_type_bool,
check_type_bits,
check_type_bytes,
check_type_float,
check_type_int,
check_type_jsonarg,
check_type_list,
check_type_dict,
check_type_path,
check_type_raw,
check_type_str,
safe_eval,
)
from ansible.module_utils.common._utils import get_all_subclasses as _get_all_subclasses
from ansible.module_utils.parsing.convert_bool import BOOLEANS, BOOLEANS_FALSE, BOOLEANS_TRUE, boolean
from ansible.module_utils.common.warnings import (
deprecate,
get_deprecation_messages,
get_warning_messages,
warn,
)
# Note: When getting Sequence from collections, it matches with strings. If
# this matters, make sure to check for strings before checking for sequencetype
SEQUENCETYPE = frozenset, KeysView, Sequence
PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wrd|wd)?)(?:[-_\s].+)?$', re.I)
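# Illustrative matches (sketch): PASSWORD_MATCH matches 'password', 'db_passwd'
# and 'login-passphrase', but not 'compass'.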
imap = map
try:
# Python 2
unicode
except NameError:
# Python 3
unicode = text_type
try:
# Python 2
basestring
except NameError:
# Python 3
basestring = string_types
_literal_eval = literal_eval
# End of deprecated names
# Internal global holding passed in params. This is consulted in case
# multiple AnsibleModules are created. Otherwise each AnsibleModule would
# attempt to read from stdin. Other code should not use this directly as it
# is an internal implementation detail
_ANSIBLE_ARGS = None
FILE_COMMON_ARGUMENTS = dict(
# These are arguments for setting file metadata (mode, ownership, permissions in general)
# on created files (they are used by set_fs_attributes_if_different and included in
# load_file_common_arguments)
mode=dict(type='raw'),
owner=dict(type='str'),
group=dict(type='str'),
seuser=dict(type='str'),
serole=dict(type='str'),
selevel=dict(type='str'),
setype=dict(type='str'),
attributes=dict(type='str', aliases=['attr']),
unsafe_writes=dict(type='bool', default=False), # should be available to any module using atomic_move
)
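# Usage sketch: modules opt in with AnsibleModule(..., add_file_common_args=True),
# which merges the keys above into their own argument_spec (see __init__ below).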
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
# Used for parsing symbolic file perms
MODE_OPERATOR_RE = re.compile(r'[+=-]')
USERS_RE = re.compile(r'[^ugo]')
PERMS_RE = re.compile(r'[^rwxXstugo]')
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
_PY3_MIN = sys.version_info[:2] >= (3, 5)
_PY2_MIN = (2, 6) <= sys.version_info[:2] < (3,)
_PY_MIN = _PY3_MIN or _PY2_MIN
if not _PY_MIN:
print(
'\n{"failed": true, '
'"msg": "Ansible requires a minimum of Python2 version 2.6 or Python3 version 3.5. Current version: %s"}' % ''.join(sys.version.splitlines())
)
sys.exit(1)
#
# Deprecated functions
#
def get_platform():
'''
**Deprecated** Use :py:func:`platform.system` directly.
:returns: Name of the platform the module is running on in a native string
Returns a native string that labels the platform ("Linux", "Solaris", etc). Currently, this is
the result of calling :py:func:`platform.system`.
'''
return platform.system()
# End deprecated functions
#
# Compat shims
#
def load_platform_subclass(cls, *args, **kwargs):
"""**Deprecated**: Use ansible.module_utils.common.sys_info.get_platform_subclass instead"""
platform_cls = get_platform_subclass(cls)
return super(cls, platform_cls).__new__(platform_cls)
def get_all_subclasses(cls):
"""**Deprecated**: Use ansible.module_utils.common._utils.get_all_subclasses instead"""
return list(_get_all_subclasses(cls))
# End compat shims
def _remove_values_conditions(value, no_log_strings, deferred_removals):
"""
Helper function for :meth:`remove_values`.
:arg value: The value to check for strings that need to be stripped
:arg no_log_strings: set of strings which must be stripped out of any values
:arg deferred_removals: List which holds information about nested
containers that have to be iterated for removals. It is passed into
this function so that more entries can be added to it if value is
a container type. The format of each entry is a 2-tuple where the first
element is the ``value`` parameter and the second value is a new
container to copy the elements of ``value`` into once iterated.
:returns: if ``value`` is a scalar, returns ``value`` with two exceptions:
1. :class:`~datetime.datetime` objects which are changed into a string representation.
2. objects which are in no_log_strings are replaced with a placeholder
so that no sensitive data is leaked.
If ``value`` is a container type, returns a new empty container.
``deferred_removals`` is added to as a side-effect of this function.
.. warning:: It is up to the caller to make sure the order in which value
is passed in is correct. For instance, higher level containers need
to be passed in before lower level containers. For example, given
``{'level1': {'level2': {'level3': [True]}}}`` first pass in the
dictionary for ``level1``, then the dict for ``level2``, and finally
the list for ``level3``.
"""
if isinstance(value, (text_type, binary_type)):
# Need native str type
native_str_value = value
if isinstance(value, text_type):
value_is_text = True
if PY2:
native_str_value = to_bytes(value, errors='surrogate_or_strict')
elif isinstance(value, binary_type):
value_is_text = False
if PY3:
native_str_value = to_text(value, errors='surrogate_or_strict')
if native_str_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
native_str_value = native_str_value.replace(omit_me, '*' * 8)
if value_is_text and isinstance(native_str_value, binary_type):
value = to_text(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
elif not value_is_text and isinstance(native_str_value, text_type):
value = to_bytes(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
else:
value = native_str_value
elif isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
stringy_value = to_native(value, encoding='utf-8', errors='surrogate_or_strict')
if stringy_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
if omit_me in stringy_value:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
elif isinstance(value, (datetime.datetime, datetime.date)):
value = value.isoformat()
else:
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
return value
def remove_values(value, no_log_strings):
""" Remove strings in no_log_strings from value. If value is a container
type, then remove a lot more.
Use of deferred_removals exists, rather than a pure recursive solution,
because of the potential to hit the maximum recursion depth when dealing with
large amounts of data (see issue #24560).
"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _remove_values_conditions(value, no_log_strings, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
new_elem = _remove_values_conditions(old_elem, no_log_strings, deferred_removals)
new_data[old_key] = new_elem
else:
for elem in old_data:
new_elem = _remove_values_conditions(elem, no_log_strings, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from output')
return new_value
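# Illustrative sketch (comments only):
#   remove_values({'user': 'bob', 'token': 'hunter2'}, {'hunter2'})
#   -> {'user': 'bob', 'token': 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'}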
def _sanitize_keys_conditions(value, no_log_strings, ignore_keys, deferred_removals):
""" Helper method to sanitize_keys() to build deferred_removals and avoid deep recursion. """
if isinstance(value, (text_type, binary_type)):
return value
if isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
return value
if isinstance(value, (datetime.datetime, datetime.date)):
return value
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
def sanitize_keys(obj, no_log_strings, ignore_keys=frozenset()):
""" Sanitize the keys in a container object by removing no_log values from key names.
This is a companion function to the `remove_values()` function. Similar to that function,
we make use of deferred_removals to avoid hitting maximum recursion depth in cases of
large data structures.
:param obj: The container object to sanitize. Non-container objects are returned unmodified.
:param no_log_strings: A set of string values we do not want logged.
:param ignore_keys: A set of string values of keys to not sanitize.
:returns: An object with sanitized keys.
"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _sanitize_keys_conditions(obj, no_log_strings, ignore_keys, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
if old_key in ignore_keys or old_key.startswith('_ansible'):
new_data[old_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals)
else:
# Sanitize the old key. We take advantage of the sanitizing code in
# _remove_values_conditions() rather than recreating it here.
new_key = _remove_values_conditions(old_key, no_log_strings, None)
new_data[new_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals)
else:
for elem in old_data:
new_elem = _sanitize_keys_conditions(elem, no_log_strings, ignore_keys, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from keys')
return new_value
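# Illustrative sketch (comments only): unlike remove_values(), this scrubs key
# names rather than values:
#   sanitize_keys({'hunter2-key': 'v'}, {'hunter2'}) -> {'********-key': 'v'}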
def heuristic_log_sanitize(data, no_log_values=None):
''' Remove strings that look like passwords from log messages '''
# Currently filters:
# user:pass@foo/whatever and http://username:pass@wherever/foo
# This code has false positives and consumes parts of logs that are
# not passwds
# begin: start of a passwd containing string
# end: end of a passwd containing string
# sep: char between user and passwd
# prev_begin: where in the overall string to start a search for
# a passwd
# sep_search_end: where in the string to end a search for the sep
data = to_native(data)
output = []
begin = len(data)
prev_begin = begin
sep = 1
while sep:
# Find the potential end of a passwd
try:
end = data.rindex('@', 0, begin)
except ValueError:
# No passwd in the rest of the data
output.insert(0, data[0:begin])
break
# Search for the beginning of a passwd
sep = None
sep_search_end = end
while not sep:
# URL-style username+password
try:
begin = data.rindex('://', 0, sep_search_end)
except ValueError:
# No url style in the data, check for ssh style in the
# rest of the string
begin = 0
# Search for separator
try:
sep = data.index(':', begin + 3, end)
except ValueError:
# No separator; choices:
if begin == 0:
# Searched the whole string so there's no password
# here. Return the remaining data
output.insert(0, data[0:begin])
break
# Search for a different beginning of the password field.
sep_search_end = begin
continue
if sep:
# Password was found; remove it.
output.insert(0, data[end:prev_begin])
output.insert(0, '********')
output.insert(0, data[begin:sep + 1])
prev_begin = begin
output = ''.join(output)
if no_log_values:
output = remove_values(output, no_log_values)
return output
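# Illustrative sketch (comments only):
#   heuristic_log_sanitize('http://user:[email protected]/path')
#   -> 'http://user:********@example.com/path'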
def _load_params():
''' read the modules parameters and store them globally.
This function may be needed for certain very dynamic custom modules which
want to process the parameters that are being handed to the module. Since
this is so closely tied to the implementation of modules we cannot
guarantee API stability for it (it may change between versions) however we
will try not to break it gratuitously. It is certainly more future-proof
to call this function and consume its outputs than to implement the logic
inside it as a copy in your own code.
'''
global _ANSIBLE_ARGS
if _ANSIBLE_ARGS is not None:
buffer = _ANSIBLE_ARGS
else:
# debug overrides to read args from file or cmdline
# Avoid tracebacks when locale is non-utf8
# We control the args and we pass them as utf8
if len(sys.argv) > 1:
if os.path.isfile(sys.argv[1]):
fd = open(sys.argv[1], 'rb')
buffer = fd.read()
fd.close()
else:
buffer = sys.argv[1]
if PY3:
buffer = buffer.encode('utf-8', errors='surrogateescape')
# default case, read from stdin
else:
if PY2:
buffer = sys.stdin.read()
else:
buffer = sys.stdin.buffer.read()
_ANSIBLE_ARGS = buffer
try:
params = json.loads(buffer.decode('utf-8'))
except ValueError:
# This helper is used too early for fail_json to work.
print('\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}')
sys.exit(1)
if PY2:
params = json_dict_unicode_to_bytes(params)
try:
return params['ANSIBLE_MODULE_ARGS']
except KeyError:
# This helper does not have access to fail_json so we have to print
# json output on our own.
print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS in json data from stdin. Unable to figure out what parameters were passed", '
'"failed": true}')
sys.exit(1)
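# Debugging sketch (local convention, not an API guarantee): a module can be
# run by hand with its arguments in a file, e.g.
#   python my_module.py /tmp/args.json
# where /tmp/args.json contains {"ANSIBLE_MODULE_ARGS": {"name": "value"}}.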
def env_fallback(*args, **kwargs):
''' Load value from environment '''
for arg in args:
if arg in os.environ:
return os.environ[arg]
raise AnsibleFallbackNotFound
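# Usage sketch inside an argument_spec (conventional pattern, for illustration):
#   api_key=dict(type='str', no_log=True, fallback=(env_fallback, ['API_KEY']))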
def missing_required_lib(library, reason=None, url=None):
hostname = platform.node()
msg = "Failed to import the required Python library (%s) on %s's Python %s." % (library, hostname, sys.executable)
if reason:
msg += " This is required %s." % reason
if url:
msg += " See %s for more info." % url
msg += (" Please read the module documentation and install it in the appropriate location."
" If the required library is installed, but Ansible is using the wrong Python interpreter,"
" please consult the documentation on ansible_python_interpreter")
return msg
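# Usage sketch (conventional pattern):
#   try:
#       import lxml
#   except ImportError:
#       module.fail_json(msg=missing_required_lib('lxml'))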
class AnsibleFallbackNotFound(Exception):
pass
class AnsibleModule(object):
def __init__(self, argument_spec, bypass_checks=False, no_log=False,
mutually_exclusive=None, required_together=None,
required_one_of=None, add_file_common_args=False,
supports_check_mode=False, required_if=None, required_by=None):
'''
Common code for quickly building an ansible module in Python
(although you can write modules with anything that can return JSON).
See :ref:`developing_modules_general` for a general introduction
and :ref:`developing_program_flow_modules` for more detailed explanation.
'''
self._name = os.path.basename(__file__) # initialize name until we can parse from options
self.argument_spec = argument_spec
self.supports_check_mode = supports_check_mode
self.check_mode = False
self.bypass_checks = bypass_checks
self.no_log = no_log
self.mutually_exclusive = mutually_exclusive
self.required_together = required_together
self.required_one_of = required_one_of
self.required_if = required_if
self.required_by = required_by
self.cleanup_files = []
self._debug = False
self._diff = False
self._socket_path = None
self._shell = None
self._syslog_facility = 'LOG_USER'
self._verbosity = 0
# May be used to set modifications to the environment for any
# run_command invocation
self.run_command_environ_update = {}
self._clean = {}
self._string_conversion_action = ''
self.aliases = {}
self._legal_inputs = []
self._options_context = list()
self._tmpdir = None
if add_file_common_args:
for k, v in FILE_COMMON_ARGUMENTS.items():
if k not in self.argument_spec:
self.argument_spec[k] = v
self._load_params()
self._set_fallbacks()
# append to legal_inputs and then possibly check against them
try:
self.aliases = self._handle_aliases()
except (ValueError, TypeError) as e:
# Use exceptions here because it isn't safe to call fail_json until no_log is processed
print('\n{"failed": true, "msg": "Module alias error: %s"}' % to_native(e))
sys.exit(1)
# Save parameter values that should never be logged
self.no_log_values = set()
self._handle_no_log_values()
# check the locale as set by the current environment, and reset to
# a known valid (LANG=C) if it's an invalid/unavailable locale
self._check_locale()
self._set_internal_properties()
self._check_arguments()
# check exclusive early
if not bypass_checks:
self._check_mutually_exclusive(mutually_exclusive)
self._set_defaults(pre=True)
# This is for backwards compatibility only.
self._CHECK_ARGUMENT_TYPES_DISPATCHER = DEFAULT_TYPE_VALIDATORS
if not bypass_checks:
self._check_required_arguments()
self._check_argument_types()
self._check_argument_values()
self._check_required_together(required_together)
self._check_required_one_of(required_one_of)
self._check_required_if(required_if)
self._check_required_by(required_by)
self._set_defaults(pre=False)
# deal with options sub-spec
self._handle_options()
if not self.no_log:
self._log_invocation()
# finally, make sure we're in a sane working dir
self._set_cwd()
@property
def tmpdir(self):
# if _ansible_tmpdir was not set and we have a remote_tmp,
# the module needs to create it and clean it up once finished.
# otherwise we create our own module tmp dir from the system defaults
if self._tmpdir is None:
basedir = None
if self._remote_tmp is not None:
basedir = os.path.expanduser(os.path.expandvars(self._remote_tmp))
if basedir is not None and not os.path.exists(basedir):
try:
os.makedirs(basedir, mode=0o700)
except (OSError, IOError) as e:
self.warn("Unable to use %s as temporary directory, "
"failing back to system: %s" % (basedir, to_native(e)))
basedir = None
else:
self.warn("Module remote_tmp %s did not exist and was "
"created with a mode of 0700, this may cause"
" issues when running as another user. To "
"avoid this, create the remote_tmp dir with "
"the correct permissions manually" % basedir)
basefile = "ansible-moduletmp-%s-" % time.time()
try:
tmpdir = tempfile.mkdtemp(prefix=basefile, dir=basedir)
except (OSError, IOError) as e:
self.fail_json(
msg="Failed to create remote module tmp path at dir %s "
"with prefix %s: %s" % (basedir, basefile, to_native(e))
)
if not self._keep_remote_files:
atexit.register(shutil.rmtree, tmpdir)
self._tmpdir = tmpdir
return self._tmpdir
def warn(self, warning):
warn(warning)
self.log('[WARNING] %s' % warning)
def deprecate(self, msg, version=None, date=None, collection_name=None):
if version is not None and date is not None:
raise AssertionError("implementation error -- version and date must not both be set")
deprecate(msg, version=version, date=date, collection_name=collection_name)
# For compatibility, we accept that neither version nor date is set,
# and treat that the same as if version had been set
if date is not None:
self.log('[DEPRECATION WARNING] %s %s' % (msg, date))
else:
self.log('[DEPRECATION WARNING] %s %s' % (msg, version))
def load_file_common_arguments(self, params, path=None):
'''
many modules deal with files, this encapsulates common
options that the file module accepts such that it is directly
available to all modules and they can share code.
Allows overriding the path/dest module argument by providing path.
'''
if path is None:
path = params.get('path', params.get('dest', None))
if path is None:
return {}
else:
path = os.path.expanduser(os.path.expandvars(path))
b_path = to_bytes(path, errors='surrogate_or_strict')
# if the path is a symlink, and we're following links, get
# the target of the link instead for testing
if params.get('follow', False) and os.path.islink(b_path):
b_path = os.path.realpath(b_path)
path = to_native(b_path)
mode = params.get('mode', None)
owner = params.get('owner', None)
group = params.get('group', None)
# selinux related options
seuser = params.get('seuser', None)
serole = params.get('serole', None)
setype = params.get('setype', None)
selevel = params.get('selevel', None)
secontext = [seuser, serole, setype]
if self.selinux_mls_enabled():
secontext.append(selevel)
default_secontext = self.selinux_default_context(path)
for i in range(len(default_secontext)):
if secontext[i] == '_default':
secontext[i] = default_secontext[i]
attributes = params.get('attributes', None)
return dict(
path=path, mode=mode, owner=owner, group=group,
seuser=seuser, serole=serole, setype=setype,
selevel=selevel, secontext=secontext, attributes=attributes,
)
# Detect whether using selinux that is MLS-aware.
# While this means you can set the level/range with
# selinux.lsetfilecon(), it may or may not mean that you
# will get the selevel as part of the context returned
# by selinux.lgetfilecon().
def selinux_mls_enabled(self):
if not HAVE_SELINUX:
return False
if selinux.is_selinux_mls_enabled() == 1:
return True
else:
return False
def selinux_enabled(self):
if not HAVE_SELINUX:
seenabled = self.get_bin_path('selinuxenabled')
if seenabled is not None:
(rc, out, err) = self.run_command(seenabled)
if rc == 0:
self.fail_json(msg="Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!")
return False
if selinux.is_selinux_enabled() == 1:
return True
else:
return False
# Determine whether we need a placeholder for selevel/mls
def selinux_initial_context(self):
context = [None, None, None]
if self.selinux_mls_enabled():
context.append(None)
return context
# If selinux fails to find a default, return an array of None
def selinux_default_context(self, path, mode=0):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.matchpathcon(to_native(path, errors='surrogate_or_strict'), mode)
except OSError:
return context
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def selinux_context(self, path):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict'))
except OSError as e:
if e.errno == errno.ENOENT:
self.fail_json(path=path, msg='path %s does not exist' % path)
else:
self.fail_json(path=path, msg='failed to retrieve selinux context')
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def user_and_group(self, path, expand=True):
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
st = os.lstat(b_path)
uid = st.st_uid
gid = st.st_gid
return (uid, gid)
def find_mount_point(self, path):
path_is_bytes = False
if isinstance(path, binary_type):
path_is_bytes = True
b_path = os.path.realpath(to_bytes(os.path.expanduser(os.path.expandvars(path)), errors='surrogate_or_strict'))
while not os.path.ismount(b_path):
b_path = os.path.dirname(b_path)
if path_is_bytes:
return b_path
return to_text(b_path, errors='surrogate_or_strict')
def is_special_selinux_path(self, path):
"""
Returns a tuple containing (True, selinux_context) if the given path is on a
NFS or other 'special' fs mount point, otherwise the return will be (False, None).
"""
try:
f = open('/proc/mounts', 'r')
mount_data = f.readlines()
f.close()
except Exception:
return (False, None)
path_mount_point = self.find_mount_point(path)
for line in mount_data:
(device, mount_point, fstype, options, rest) = line.split(' ', 4)
if to_bytes(path_mount_point) == to_bytes(mount_point):
for fs in self._selinux_special_fs:
if fs in fstype:
special_context = self.selinux_context(path_mount_point)
return (True, special_context)
return (False, None)
def set_default_selinux_context(self, path, changed):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
context = self.selinux_default_context(path)
return self.set_context_if_different(path, context, False)
def set_context_if_different(self, path, context, changed, diff=None):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
if self.check_file_absent_if_check_mode(path):
return True
cur_context = self.selinux_context(path)
new_context = list(cur_context)
# Iterate over the current context instead of the
# argument context, which may have selevel.
(is_special_se, sp_context) = self.is_special_selinux_path(path)
if is_special_se:
new_context = sp_context
else:
for i in range(len(cur_context)):
if len(context) > i:
if context[i] is not None and context[i] != cur_context[i]:
new_context[i] = context[i]
elif context[i] is None:
new_context[i] = cur_context[i]
if cur_context != new_context:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['secontext'] = cur_context
if 'after' not in diff:
diff['after'] = {}
diff['after']['secontext'] = new_context
try:
if self.check_mode:
return True
rc = selinux.lsetfilecon(to_native(path), ':'.join(new_context))
except OSError as e:
self.fail_json(path=path, msg='invalid selinux context: %s' % to_native(e),
new_context=new_context, cur_context=cur_context, input_was=context)
if rc != 0:
self.fail_json(path=path, msg='set selinux context failed')
changed = True
return changed
def set_owner_if_different(self, path, owner, changed, diff=None, expand=True):
if owner is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
uid = int(owner)
except ValueError:
try:
uid = pwd.getpwnam(owner).pw_uid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner)
if orig_uid != uid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['owner'] = orig_uid
if 'after' not in diff:
diff['after'] = {}
diff['after']['owner'] = uid
if self.check_mode:
return True
try:
os.lchown(b_path, uid, -1)
except (IOError, OSError) as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: %s' % (to_text(e)))
changed = True
return changed
def set_group_if_different(self, path, group, changed, diff=None, expand=True):
if group is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
gid = int(group)
except ValueError:
try:
gid = grp.getgrnam(group).gr_gid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group)
if orig_gid != gid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['group'] = orig_gid
if 'after' not in diff:
diff['after'] = {}
diff['after']['group'] = gid
if self.check_mode:
return True
try:
os.lchown(b_path, -1, gid)
except OSError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed')
changed = True
return changed
def set_mode_if_different(self, path, mode, changed, diff=None, expand=True):
if mode is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
path_stat = os.lstat(b_path)
if self.check_file_absent_if_check_mode(b_path):
return True
if not isinstance(mode, int):
try:
mode = int(mode, 8)
except Exception:
try:
mode = self._symbolic_mode_to_octal(path_stat, mode)
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path,
msg="mode must be in octal or symbolic form",
details=to_native(e))
if mode != stat.S_IMODE(mode):
# prevent mode from having extra info or being an invalid long number
path = to_text(b_path)
self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode)
prev_mode = stat.S_IMODE(path_stat.st_mode)
if prev_mode != mode:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['mode'] = '0%03o' % prev_mode
if 'after' not in diff:
diff['after'] = {}
diff['after']['mode'] = '0%03o' % mode
if self.check_mode:
return True
# FIXME: comparison against string above will cause this to be executed
# every time
try:
if hasattr(os, 'lchmod'):
os.lchmod(b_path, mode)
else:
if not os.path.islink(b_path):
os.chmod(b_path, mode)
else:
# Attempt to set the perms of the symlink but be
# careful not to change the perms of the underlying
# file while trying
underlying_stat = os.stat(b_path)
os.chmod(b_path, mode)
new_underlying_stat = os.stat(b_path)
if underlying_stat.st_mode != new_underlying_stat.st_mode:
os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))
except OSError as e:
if os.path.islink(b_path) and e.errno in (
errno.EACCES, # can't access symlink in sticky directory (stat)
errno.EPERM, # can't set mode on symbolic links (chmod)
errno.EROFS, # can't set mode on read-only filesystem
):
pass
elif e.errno in (errno.ENOENT, errno.ELOOP): # Can't set mode on broken symbolic links
pass
else:
raise
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chmod failed', details=to_native(e),
exception=traceback.format_exc())
path_stat = os.lstat(b_path)
new_mode = stat.S_IMODE(path_stat.st_mode)
if new_mode != prev_mode:
changed = True
return changed
def set_attributes_if_different(self, path, attributes, changed, diff=None, expand=True):
if attributes is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
existing = self.get_file_attributes(b_path, include_version=False)
attr_mod = '='
if attributes.startswith(('-', '+')):
attr_mod = attributes[0]
attributes = attributes[1:]
if existing.get('attr_flags', '') != attributes or attr_mod == '-':
attrcmd = self.get_bin_path('chattr')
if attrcmd:
attrcmd = [attrcmd, '%s%s' % (attr_mod, attributes), b_path]
changed = True
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['attributes'] = existing.get('attr_flags')
if 'after' not in diff:
diff['after'] = {}
diff['after']['attributes'] = '%s%s' % (attr_mod, attributes)
if not self.check_mode:
try:
rc, out, err = self.run_command(attrcmd)
if rc != 0 or err:
raise Exception("Error while setting attributes: %s" % (out + err))
except Exception as e:
self.fail_json(path=to_text(b_path), msg='chattr failed',
details=to_native(e), exception=traceback.format_exc())
return changed
def get_file_attributes(self, path, include_version=True):
output = {}
attrcmd = self.get_bin_path('lsattr', False)
if attrcmd:
flags = '-vd' if include_version else '-d'
attrcmd = [attrcmd, flags, path]
try:
rc, out, err = self.run_command(attrcmd)
if rc == 0:
res = out.split()
attr_flags_idx = 0
if include_version:
attr_flags_idx = 1
output['version'] = res[0].strip()
output['attr_flags'] = res[attr_flags_idx].replace('-', '').strip()
output['attributes'] = format_attributes(output['attr_flags'])
except Exception:
pass
return output
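# Output sketch (assuming 'lsattr -vd f' prints '17 ----i--------e-- f'): roughly
# {'version': '17', 'attr_flags': 'ie', 'attributes': ['immutable', 'extents']}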
@classmethod
def _symbolic_mode_to_octal(cls, path_stat, symbolic_mode):
"""
This enables symbolic chmod string parsing as stated in the chmod man-page
This includes things like: "u=rw-x+X,g=r-x+X,o=r-x+X"
"""
new_mode = stat.S_IMODE(path_stat.st_mode)
# Now parse all symbolic modes
for mode in symbolic_mode.split(','):
# Per single mode. This always contains a '+', '-' or '='
# Split it on that
permlist = MODE_OPERATOR_RE.split(mode)
# And find all the operators
opers = MODE_OPERATOR_RE.findall(mode)
# The user(s) the permissions apply to is the first element in the
# 'permlist' list. Take that and remove it from the list.
# An empty user or 'a' means 'all'.
users = permlist.pop(0)
use_umask = (users == '')
if users == 'a' or users == '':
users = 'ugo'
# Check if there are illegal characters in the user list
# They can end up in 'users' because they are not split
if USERS_RE.match(users):
raise ValueError("bad symbolic permission for mode: %s" % mode)
# Now we have two lists of equal length: one with the requested
# permissions and one with the corresponding operators.
for idx, perms in enumerate(permlist):
# Check if there are illegal characters in the permissions
if PERMS_RE.match(perms):
raise ValueError("bad symbolic permission for mode: %s" % mode)
for user in users:
mode_to_apply = cls._get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask)
new_mode = cls._apply_operation_to_mode(user, opers[idx], mode_to_apply, new_mode)
return new_mode
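# Worked sketch: for a file whose current mode is 0o644, symbolic mode 'u+x'
# yields 0o644 | S_IXUSR == 0o744, while 'go-r' yields 0o600.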
@staticmethod
def _apply_operation_to_mode(user, operator, mode_to_apply, current_mode):
if operator == '=':
if user == 'u':
mask = stat.S_IRWXU | stat.S_ISUID
elif user == 'g':
mask = stat.S_IRWXG | stat.S_ISGID
elif user == 'o':
mask = stat.S_IRWXO | stat.S_ISVTX
# mask out u, g, or o permissions from current_mode and apply new permissions
inverse_mask = mask ^ PERM_BITS
new_mode = (current_mode & inverse_mask) | mode_to_apply
elif operator == '+':
new_mode = current_mode | mode_to_apply
elif operator == '-':
new_mode = current_mode - (current_mode & mode_to_apply)
return new_mode
@staticmethod
def _get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask):
prev_mode = stat.S_IMODE(path_stat.st_mode)
is_directory = stat.S_ISDIR(path_stat.st_mode)
has_x_permissions = (prev_mode & EXEC_PERM_BITS) > 0
apply_X_permission = is_directory or has_x_permissions
# Get the umask, if the 'user' part is empty, the effect is as if (a) were
# given, but bits that are set in the umask are not affected.
# We also need the "reversed umask" for masking
umask = os.umask(0)
os.umask(umask)
rev_umask = umask ^ PERM_BITS
# Permission bits constants documented at:
# http://docs.python.org/2/library/stat.html#stat.S_ISUID
if apply_X_permission:
X_perms = {
'u': {'X': stat.S_IXUSR},
'g': {'X': stat.S_IXGRP},
'o': {'X': stat.S_IXOTH},
}
else:
X_perms = {
'u': {'X': 0},
'g': {'X': 0},
'o': {'X': 0},
}
user_perms_to_modes = {
'u': {
'r': rev_umask & stat.S_IRUSR if use_umask else stat.S_IRUSR,
'w': rev_umask & stat.S_IWUSR if use_umask else stat.S_IWUSR,
'x': rev_umask & stat.S_IXUSR if use_umask else stat.S_IXUSR,
's': stat.S_ISUID,
't': 0,
'u': prev_mode & stat.S_IRWXU,
'g': (prev_mode & stat.S_IRWXG) << 3,
'o': (prev_mode & stat.S_IRWXO) << 6},
'g': {
'r': rev_umask & stat.S_IRGRP if use_umask else stat.S_IRGRP,
'w': rev_umask & stat.S_IWGRP if use_umask else stat.S_IWGRP,
'x': rev_umask & stat.S_IXGRP if use_umask else stat.S_IXGRP,
's': stat.S_ISGID,
't': 0,
'u': (prev_mode & stat.S_IRWXU) >> 3,
'g': prev_mode & stat.S_IRWXG,
'o': (prev_mode & stat.S_IRWXO) << 3},
'o': {
'r': rev_umask & stat.S_IROTH if use_umask else stat.S_IROTH,
'w': rev_umask & stat.S_IWOTH if use_umask else stat.S_IWOTH,
'x': rev_umask & stat.S_IXOTH if use_umask else stat.S_IXOTH,
's': 0,
't': stat.S_ISVTX,
'u': (prev_mode & stat.S_IRWXU) >> 6,
'g': (prev_mode & stat.S_IRWXG) >> 3,
'o': prev_mode & stat.S_IRWXO},
}
# Insert X_perms into user_perms_to_modes
for key, value in X_perms.items():
user_perms_to_modes[key].update(value)
def or_reduce(mode, perm):
return mode | user_perms_to_modes[user][perm]
return reduce(or_reduce, perms, 0)
def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True):
# set modes owners and context as needed
changed = self.set_context_if_different(
file_args['path'], file_args['secontext'], changed, diff
)
changed = self.set_owner_if_different(
file_args['path'], file_args['owner'], changed, diff, expand
)
changed = self.set_group_if_different(
file_args['path'], file_args['group'], changed, diff, expand
)
changed = self.set_mode_if_different(
file_args['path'], file_args['mode'], changed, diff, expand
)
changed = self.set_attributes_if_different(
file_args['path'], file_args['attributes'], changed, diff, expand
)
return changed
def check_file_absent_if_check_mode(self, file_path):
return self.check_mode and not os.path.exists(file_path)
def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def set_file_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def add_path_info(self, kwargs):
'''
for results that are files, supplement the info about the file
in the return path with stats about the file path.
'''
path = kwargs.get('path', kwargs.get('dest', None))
if path is None:
return kwargs
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.exists(b_path):
(uid, gid) = self.user_and_group(path)
kwargs['uid'] = uid
kwargs['gid'] = gid
try:
user = pwd.getpwuid(uid)[0]
except KeyError:
user = str(uid)
try:
group = grp.getgrgid(gid)[0]
except KeyError:
group = str(gid)
kwargs['owner'] = user
kwargs['group'] = group
st = os.lstat(b_path)
kwargs['mode'] = '0%03o' % stat.S_IMODE(st[stat.ST_MODE])
# secontext not yet supported
if os.path.islink(b_path):
kwargs['state'] = 'link'
elif os.path.isdir(b_path):
kwargs['state'] = 'directory'
elif os.stat(b_path).st_nlink > 1:
kwargs['state'] = 'hard'
else:
kwargs['state'] = 'file'
if HAVE_SELINUX and self.selinux_enabled():
kwargs['secontext'] = ':'.join(self.selinux_context(path))
kwargs['size'] = st[stat.ST_SIZE]
return kwargs
def _check_locale(self):
'''
Uses the locale module to test the currently set locale
(per the LANG and LC_CTYPE environment settings)
'''
try:
# setting the locale to '' uses the default locale
# as it would be returned by locale.getdefaultlocale()
locale.setlocale(locale.LC_ALL, '')
except locale.Error:
# fallback to the 'C' locale, which may cause unicode
# issues but is preferable to simply failing because
# of an unknown locale
locale.setlocale(locale.LC_ALL, 'C')
os.environ['LANG'] = 'C'
os.environ['LC_ALL'] = 'C'
os.environ['LC_MESSAGES'] = 'C'
except Exception as e:
self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" %
to_native(e), exception=traceback.format_exc())
def _handle_aliases(self, spec=None, param=None, option_prefix=''):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
# this uses exceptions as it happens before we can safely call fail_json
alias_warnings = []
alias_results, self._legal_inputs = handle_aliases(spec, param, alias_warnings=alias_warnings)
for option, alias in alias_warnings:
warn('Both option %s and its alias %s are set.' % (option_prefix + option, option_prefix + alias))
deprecated_aliases = []
for i in spec.keys():
if 'deprecated_aliases' in spec[i].keys():
for alias in spec[i]['deprecated_aliases']:
deprecated_aliases.append(alias)
for deprecation in deprecated_aliases:
if deprecation['name'] in param.keys():
deprecate("Alias '%s' is deprecated. See the module docs for more information" % deprecation['name'],
version=deprecation.get('version'), date=deprecation.get('date'),
collection_name=deprecation.get('collection_name'))
return alias_results
def _handle_no_log_values(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
self.no_log_values.update(list_no_log_values(spec, param))
except TypeError as te:
self.fail_json(msg="Failure when processing no_log parameters. Module invocation will be hidden. "
"%s" % to_native(te), invocation={'module_args': 'HIDDEN DUE TO FAILURE'})
for message in list_deprecations(spec, param):
deprecate(message['msg'], version=message.get('version'), date=message.get('date'),
collection_name=message.get('collection_name'))
def _set_internal_properties(self, argument_spec=None, module_parameters=None):
if argument_spec is None:
argument_spec = self.argument_spec
if module_parameters is None:
module_parameters = self.params
for k in PASS_VARS:
# handle setting internal properties from internal ansible vars
param_key = '_ansible_%s' % k
if param_key in module_parameters:
if k in PASS_BOOLS:
setattr(self, PASS_VARS[k][0], self.boolean(module_parameters[param_key]))
else:
setattr(self, PASS_VARS[k][0], module_parameters[param_key])
# clean up internal top level params:
if param_key in self.params:
del self.params[param_key]
else:
# use defaults if not already set
if not hasattr(self, PASS_VARS[k][0]):
setattr(self, PASS_VARS[k][0], PASS_VARS[k][1])
def _check_arguments(self, spec=None, param=None, legal_inputs=None):
unsupported_parameters = set()
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
if legal_inputs is None:
legal_inputs = self._legal_inputs
unsupported_parameters = get_unsupported_parameters(spec, param, legal_inputs)
if unsupported_parameters:
msg = "Unsupported parameters for (%s) module: %s" % (self._name, ', '.join(sorted(list(unsupported_parameters))))
if self._options_context:
msg += " found in %s." % " -> ".join(self._options_context)
supported_parameters = list()
for key in sorted(spec.keys()):
if 'aliases' in spec[key] and spec[key]['aliases']:
supported_parameters.append("%s (%s)" % (key, ', '.join(sorted(spec[key]['aliases']))))
else:
supported_parameters.append(key)
msg += " Supported parameters include: %s" % (', '.join(supported_parameters))
self.fail_json(msg=msg)
if self.check_mode and not self.supports_check_mode:
self.exit_json(skipped=True, msg="remote module (%s) does not support check mode" % self._name)
def _count_terms(self, check, param=None):
if param is None:
param = self.params
return count_terms(check, param)
def _check_mutually_exclusive(self, spec, param=None):
if param is None:
param = self.params
try:
check_mutually_exclusive(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_one_of(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_one_of(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_together(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_together(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_by(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_by(spec, param)
except TypeError as e:
self.fail_json(msg=to_native(e))
def _check_required_arguments(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
check_required_arguments(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_if(self, spec, param=None):
''' ensure that parameters which are conditionally required are present '''
if spec is None:
return
if param is None:
param = self.params
try:
check_required_if(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_argument_values(self, spec=None, param=None):
''' ensure all arguments have the requested values, and there are no stray arguments '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
choices = v.get('choices', None)
if choices is None:
continue
if isinstance(choices, SEQUENCETYPE) and not isinstance(choices, (binary_type, text_type)):
if k in param:
# Allow one or more when type='list' param with choices
if isinstance(param[k], list):
diff_list = ", ".join([item for item in param[k] if item not in choices])
if diff_list:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one or more of: %s. Got no match for: %s" % (k, choices_str, diff_list)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
elif param[k] not in choices:
# PyYaml converts certain strings to bools. If we can unambiguously convert back, do so before checking
# the value. If we can't figure this out, module author is responsible.
lowered_choices = None
if param[k] == 'False':
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_FALSE.intersection(choices)
if len(overlap) == 1:
# Extract from a set
(param[k],) = overlap
if param[k] == 'True':
if lowered_choices is None:
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_TRUE.intersection(choices)
if len(overlap) == 1:
(param[k],) = overlap
if param[k] not in choices:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one of: %s, got: %s" % (k, choices_str, param[k])
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
else:
msg = "internal error: choices for argument %s are not iterable: %s" % (k, choices)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def safe_eval(self, value, locals=None, include_exceptions=False):
return safe_eval(value, locals, include_exceptions)
def _check_type_str(self, value, param=None, prefix=''):
opts = {
'error': False,
'warn': False,
'ignore': True
}
# Ignore, warn, or error when converting to a string.
allow_conversion = opts.get(self._string_conversion_action, True)
try:
return check_type_str(value, allow_conversion)
except TypeError:
common_msg = 'quote the entire value to ensure it does not change.'
from_msg = '{0!r}'.format(value)
to_msg = '{0!r}'.format(to_text(value))
if param is not None:
if prefix:
param = '{0}{1}'.format(prefix, param)
from_msg = '{0}: {1!r}'.format(param, value)
to_msg = '{0}: {1!r}'.format(param, to_text(value))
if self._string_conversion_action == 'error':
msg = common_msg.capitalize()
raise TypeError(to_native(msg))
elif self._string_conversion_action == 'warn':
msg = ('The value "{0}" (type {1.__class__.__name__}) was converted to "{2}" (type string). '
'If this does not look like what you expect, {3}').format(from_msg, value, to_msg, common_msg)
self.warn(to_native(msg))
return to_native(value, errors='surrogate_or_strict')
def _check_type_list(self, value):
return check_type_list(value)
def _check_type_dict(self, value):
return check_type_dict(value)
def _check_type_bool(self, value):
return check_type_bool(value)
def _check_type_int(self, value):
return check_type_int(value)
def _check_type_float(self, value):
return check_type_float(value)
def _check_type_path(self, value):
return check_type_path(value)
def _check_type_jsonarg(self, value):
return check_type_jsonarg(value)
def _check_type_raw(self, value):
return check_type_raw(value)
def _check_type_bytes(self, value):
return check_type_bytes(value)
def _check_type_bits(self, value):
return check_type_bits(value)
def _handle_options(self, argument_spec=None, params=None, prefix=''):
''' deal with options to create sub spec '''
if argument_spec is None:
argument_spec = self.argument_spec
if params is None:
params = self.params
for (k, v) in argument_spec.items():
wanted = v.get('type', None)
if wanted == 'dict' or (wanted == 'list' and v.get('elements', '') == 'dict'):
spec = v.get('options', None)
if v.get('apply_defaults', False):
if spec is not None:
if params.get(k) is None:
params[k] = {}
else:
continue
elif spec is None or k not in params or params[k] is None:
continue
self._options_context.append(k)
if isinstance(params[k], dict):
elements = [params[k]]
else:
elements = params[k]
for idx, param in enumerate(elements):
if not isinstance(param, dict):
self.fail_json(msg="value of %s must be of type dict or list of dict" % k)
new_prefix = prefix + k
if wanted == 'list':
new_prefix += '[%d]' % idx
new_prefix += '.'
self._set_fallbacks(spec, param)
options_aliases = self._handle_aliases(spec, param, option_prefix=new_prefix)
options_legal_inputs = list(spec.keys()) + list(options_aliases.keys())
self._set_internal_properties(spec, param)
self._check_arguments(spec, param, options_legal_inputs)
# check exclusive early
if not self.bypass_checks:
self._check_mutually_exclusive(v.get('mutually_exclusive', None), param)
self._set_defaults(pre=True, spec=spec, param=param)
if not self.bypass_checks:
self._check_required_arguments(spec, param)
self._check_argument_types(spec, param, new_prefix)
self._check_argument_values(spec, param)
self._check_required_together(v.get('required_together', None), param)
self._check_required_one_of(v.get('required_one_of', None), param)
self._check_required_if(v.get('required_if', None), param)
self._check_required_by(v.get('required_by', None), param)
self._set_defaults(pre=False, spec=spec, param=param)
# handle multi level options (sub argspec)
self._handle_options(spec, param, new_prefix)
self._options_context.pop()
def _get_wanted_type(self, wanted, k):
# Use the private method for 'str' type to handle the string conversion warning.
if wanted == 'str':
type_checker, wanted = self._check_type_str, 'str'
else:
type_checker, wanted = get_type_validator(wanted)
if type_checker is None:
self.fail_json(msg="implementation error: unknown type %s requested for %s" % (wanted, k))
return type_checker, wanted
def _handle_elements(self, wanted, param, values):
type_checker, wanted_name = self._get_wanted_type(wanted, param)
validated_params = []
# Get param name for strings so we can later display this value in a useful error message if needed
# Only pass 'kwargs' to our checkers and ignore custom callable checkers
kwargs = {}
if wanted_name == 'str' and isinstance(wanted, string_types):
if isinstance(param, string_types):
kwargs['param'] = param
elif isinstance(param, dict):
kwargs['param'] = list(param.keys())[0]
for value in values:
try:
validated_params.append(type_checker(value, **kwargs))
except (TypeError, ValueError) as e:
msg = "Elements value for option %s" % param
if self._options_context:
msg += " found in '%s'" % " -> ".join(self._options_context)
msg += " is of type %s and we were unable to convert to %s: %s" % (type(value), wanted_name, to_native(e))
self.fail_json(msg=msg)
return validated_params
def _check_argument_types(self, spec=None, param=None, prefix=''):
''' ensure all arguments have the requested type '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
wanted = v.get('type', None)
if k not in param:
continue
value = param[k]
if value is None:
continue
type_checker, wanted_name = self._get_wanted_type(wanted, k)
# Get param name for strings so we can later display this value in a useful error message if needed
# Only pass 'kwargs' to our checkers and ignore custom callable checkers
kwargs = {}
            if wanted_name == 'str' and isinstance(wanted, string_types):
kwargs['param'] = list(param.keys())[0]
# Get the name of the parent key if this is a nested option
if prefix:
kwargs['prefix'] = prefix
try:
param[k] = type_checker(value, **kwargs)
wanted_elements = v.get('elements', None)
if wanted_elements:
if wanted != 'list' or not isinstance(param[k], list):
msg = "Invalid type %s for option '%s'" % (wanted_name, param)
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += ", elements value check is supported only with 'list' type"
self.fail_json(msg=msg)
param[k] = self._handle_elements(wanted_elements, k, param[k])
except (TypeError, ValueError) as e:
msg = "argument %s is of type %s" % (k, type(value))
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += " and we were unable to convert to %s: %s" % (wanted_name, to_native(e))
self.fail_json(msg=msg)
def _set_defaults(self, pre=True, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
default = v.get('default', None)
if pre is True:
# this prevents setting defaults on required items
if default is not None and k not in param:
param[k] = default
else:
# make sure things without a default still get set None
if k not in param:
param[k] = default
def _set_fallbacks(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
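            # A 'fallback' spec is a tuple of (strategy, [args], {kwargs});
            # the strategy callable (e.g. env_fallback) is only consulted when
            # the parameter was not supplied directly.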
fallback = v.get('fallback', (None,))
fallback_strategy = fallback[0]
fallback_args = []
fallback_kwargs = {}
if k not in param and fallback_strategy is not None:
for item in fallback[1:]:
if isinstance(item, dict):
fallback_kwargs = item
else:
fallback_args = item
try:
param[k] = fallback_strategy(*fallback_args, **fallback_kwargs)
except AnsibleFallbackNotFound:
continue
def _load_params(self):
''' read the input and set the params attribute.
This method is for backwards compatibility. The guts of the function
were moved out in 2.1 so that custom modules could read the parameters.
'''
# debug overrides to read args from file or cmdline
self.params = _load_params()
def _log_to_syslog(self, msg):
if HAS_SYSLOG:
try:
module = 'ansible-%s' % self._name
facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER)
syslog.openlog(str(module), 0, facility)
syslog.syslog(syslog.LOG_INFO, msg)
except TypeError as e:
self.fail_json(
msg='Failed to log to syslog (%s). To proceed anyway, '
'disable syslog logging by setting no_target_syslog '
'to True in your Ansible config.' % to_native(e),
exception=traceback.format_exc(),
msg_to_log=msg,
)
def debug(self, msg):
if self._debug:
self.log('[debug] %s' % msg)
def log(self, msg, log_args=None):
if not self.no_log:
if log_args is None:
log_args = dict()
module = 'ansible-%s' % self._name
if isinstance(module, binary_type):
module = module.decode('utf-8', 'replace')
# 6655 - allow for accented characters
if not isinstance(msg, (binary_type, text_type)):
raise TypeError("msg should be a string (got %s)" % type(msg))
# We want journal to always take text type
# syslog takes bytes on py2, text type on py3
if isinstance(msg, binary_type):
journal_msg = remove_values(msg.decode('utf-8', 'replace'), self.no_log_values)
else:
# TODO: surrogateescape is a danger here on Py3
journal_msg = remove_values(msg, self.no_log_values)
if PY3:
syslog_msg = journal_msg
else:
syslog_msg = journal_msg.encode('utf-8', 'replace')
if has_journal:
journal_args = [("MODULE", os.path.basename(__file__))]
for arg in log_args:
journal_args.append((arg.upper(), str(log_args[arg])))
try:
if HAS_SYSLOG:
# If syslog_facility specified, it needs to convert
# from the facility name to the facility code, and
# set it as SYSLOG_FACILITY argument of journal.send()
facility = getattr(syslog,
self._syslog_facility,
syslog.LOG_USER) >> 3
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
SYSLOG_FACILITY=facility,
**dict(journal_args))
else:
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
**dict(journal_args))
except IOError:
# fall back to syslog since logging to journal failed
self._log_to_syslog(syslog_msg)
else:
self._log_to_syslog(syslog_msg)
def _log_invocation(self):
''' log that ansible ran the module '''
# TODO: generalize a separate log function and make log_invocation use it
# Sanitize possible password argument when logging.
log_args = dict()
for param in self.params:
canon = self.aliases.get(param, param)
arg_opts = self.argument_spec.get(canon, {})
no_log = arg_opts.get('no_log', None)
# try to proactively capture password/passphrase fields
if no_log is None and PASSWORD_MATCH.search(param):
log_args[param] = 'NOT_LOGGING_PASSWORD'
self.warn('Module did not set no_log for %s' % param)
elif self.boolean(no_log):
log_args[param] = 'NOT_LOGGING_PARAMETER'
else:
param_val = self.params[param]
if not isinstance(param_val, (text_type, binary_type)):
param_val = str(param_val)
elif isinstance(param_val, text_type):
param_val = param_val.encode('utf-8')
log_args[param] = heuristic_log_sanitize(param_val, self.no_log_values)
msg = ['%s=%s' % (to_native(arg), to_native(val)) for arg, val in log_args.items()]
if msg:
msg = 'Invoked with %s' % ' '.join(msg)
else:
msg = 'Invoked'
self.log(msg, log_args=log_args)
def _set_cwd(self):
try:
cwd = os.getcwd()
if not os.access(cwd, os.F_OK | os.R_OK):
raise Exception()
return cwd
except Exception:
# we don't have access to the cwd, probably because of sudo.
# Try and move to a neutral location to prevent errors
for cwd in [self.tmpdir, os.path.expandvars('$HOME'), tempfile.gettempdir()]:
try:
if os.access(cwd, os.F_OK | os.R_OK):
os.chdir(cwd)
return cwd
except Exception:
pass
# we won't error here, as it may *not* be a problem,
# and we don't want to break modules unnecessarily
return None
def get_bin_path(self, arg, required=False, opt_dirs=None):
'''
Find system executable in PATH.
:param arg: The executable to find.
:param required: if executable is not found and required is ``True``, fail_json
:param opt_dirs: optional list of directories to search in addition to ``PATH``
:returns: if found return full path; otherwise return None
'''
bin_path = None
try:
bin_path = get_bin_path(arg=arg, opt_dirs=opt_dirs)
except ValueError as e:
if required:
self.fail_json(msg=to_text(e))
else:
return bin_path
return bin_path
def boolean(self, arg):
'''Convert the argument to a boolean'''
if arg is None:
return arg
try:
return boolean(arg)
except TypeError as e:
self.fail_json(msg=to_native(e))
def jsonify(self, data):
try:
return jsonify(data)
except UnicodeError as e:
self.fail_json(msg=to_text(e))
def from_json(self, data):
return json.loads(data)
def add_cleanup_file(self, path):
if path not in self.cleanup_files:
self.cleanup_files.append(path)
def do_cleanup_files(self):
for path in self.cleanup_files:
self.cleanup(path)
def _return_formatted(self, kwargs):
self.add_path_info(kwargs)
if 'invocation' not in kwargs:
kwargs['invocation'] = {'module_args': self.params}
if 'warnings' in kwargs:
if isinstance(kwargs['warnings'], list):
for w in kwargs['warnings']:
self.warn(w)
else:
self.warn(kwargs['warnings'])
warnings = get_warning_messages()
if warnings:
kwargs['warnings'] = warnings
if 'deprecations' in kwargs:
if isinstance(kwargs['deprecations'], list):
for d in kwargs['deprecations']:
if isinstance(d, SEQUENCETYPE) and len(d) == 2:
self.deprecate(d[0], version=d[1])
elif isinstance(d, Mapping):
self.deprecate(d['msg'], version=d.get('version'), date=d.get('date'),
collection_name=d.get('collection_name'))
else:
self.deprecate(d) # pylint: disable=ansible-deprecated-no-version
else:
self.deprecate(kwargs['deprecations']) # pylint: disable=ansible-deprecated-no-version
deprecations = get_deprecation_messages()
if deprecations:
kwargs['deprecations'] = deprecations
kwargs = remove_values(kwargs, self.no_log_values)
print('\n%s' % self.jsonify(kwargs))
def exit_json(self, **kwargs):
''' return from the module, without error '''
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(0)
def fail_json(self, msg, **kwargs):
''' return from the module, with an error message '''
kwargs['failed'] = True
kwargs['msg'] = msg
# Add traceback if debug or high verbosity and it is missing
# NOTE: Badly named as exception, it really always has been a traceback
if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3):
if PY2:
# On Python 2 this is the last (stack frame) exception and as such may be unrelated to the failure
kwargs['exception'] = 'WARNING: The below traceback may *not* be related to the actual failure.\n' +\
''.join(traceback.format_tb(sys.exc_info()[2]))
else:
kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2]))
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(1)
def fail_on_missing_params(self, required_params=None):
if not required_params:
return
try:
check_missing_parameters(self.params, required_params)
except TypeError as e:
self.fail_json(msg=to_native(e))
def digest_from_file(self, filename, algorithm):
''' Return hex digest of local file for a digest_method specified by name, or None if file is not present. '''
b_filename = to_bytes(filename, errors='surrogate_or_strict')
if not os.path.exists(b_filename):
return None
if os.path.isdir(b_filename):
self.fail_json(msg="attempted to take checksum of directory: %s" % filename)
# preserve old behaviour where the third parameter was a hash algorithm object
if hasattr(algorithm, 'hexdigest'):
digest_method = algorithm
else:
try:
digest_method = AVAILABLE_HASH_ALGORITHMS[algorithm]()
except KeyError:
self.fail_json(msg="Could not hash file '%s' with algorithm '%s'. Available algorithms: %s" %
(filename, algorithm, ', '.join(AVAILABLE_HASH_ALGORITHMS)))
blocksize = 64 * 1024
infile = open(os.path.realpath(b_filename), 'rb')
block = infile.read(blocksize)
while block:
digest_method.update(block)
block = infile.read(blocksize)
infile.close()
return digest_method.hexdigest()
def md5(self, filename):
''' Return MD5 hex digest of local file using digest_from_file().
Do not use this function unless you have no other choice for:
1) Optional backwards compatibility
2) Compatibility with a third party protocol
This function will not work on systems complying with FIPS-140-2.
Most uses of this function can use the module.sha1 function instead.
'''
if 'md5' not in AVAILABLE_HASH_ALGORITHMS:
raise ValueError('MD5 not available. Possibly running in FIPS mode')
return self.digest_from_file(filename, 'md5')
def sha1(self, filename):
''' Return SHA1 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha1')
def sha256(self, filename):
''' Return SHA-256 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha256')
def backup_local(self, fn):
        '''make a date-marked backup of the specified file; return the backup path ('' if the file does not exist), failing via fail_json on error'''
backupdest = ''
if os.path.exists(fn):
# backups named basename.PID.YYYY-MM-DD@HH:MM:SS~
ext = time.strftime("%Y-%m-%d@%H:%M:%S~", time.localtime(time.time()))
backupdest = '%s.%s.%s' % (fn, os.getpid(), ext)
try:
self.preserved_copy(fn, backupdest)
except (shutil.Error, IOError) as e:
self.fail_json(msg='Could not make backup of %s to %s: %s' % (fn, backupdest, to_native(e)))
return backupdest
def cleanup(self, tmpfile):
if os.path.exists(tmpfile):
try:
os.unlink(tmpfile)
except OSError as e:
sys.stderr.write("could not cleanup %s: %s" % (tmpfile, to_native(e)))
def preserved_copy(self, src, dest):
"""Copy a file with preserved ownership, permissions and context"""
# shutil.copy2(src, dst)
# Similar to shutil.copy(), but metadata is copied as well - in fact,
# this is just shutil.copy() followed by copystat(). This is similar
# to the Unix command cp -p.
#
# shutil.copystat(src, dst)
# Copy the permission bits, last access time, last modification time,
# and flags from src to dst. The file contents, owner, and group are
# unaffected. src and dst are path names given as strings.
shutil.copy2(src, dest)
# Set the context
if self.selinux_enabled():
context = self.selinux_context(src)
self.set_context_if_different(dest, context, False)
# chown it
try:
dest_stat = os.stat(src)
tmp_stat = os.stat(dest)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(dest, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
# Set the attributes
current_attribs = self.get_file_attributes(src, include_version=False)
current_attribs = current_attribs.get('attr_flags', '')
self.set_attributes_if_different(dest, current_attribs, True)
def atomic_move(self, src, dest, unsafe_writes=False):
        '''atomically move src to dest, copying attributes from dest; calls fail_json on failure.
        os.rename is used where possible since it is an atomic operation; the rest of the function
        works around limitations and corner cases and preserves the selinux context where possible.'''
context = None
dest_stat = None
b_src = to_bytes(src, errors='surrogate_or_strict')
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
try:
dest_stat = os.stat(b_dest)
# copy mode and ownership
os.chmod(b_src, dest_stat.st_mode & PERM_BITS)
os.chown(b_src, dest_stat.st_uid, dest_stat.st_gid)
# try to copy flags if possible
if hasattr(os, 'chflags') and hasattr(dest_stat, 'st_flags'):
try:
os.chflags(b_src, dest_stat.st_flags)
except OSError as e:
for err in 'EOPNOTSUPP', 'ENOTSUP':
if hasattr(errno, err) and e.errno == getattr(errno, err):
break
else:
raise
except OSError as e:
if e.errno != errno.EPERM:
raise
if self.selinux_enabled():
context = self.selinux_context(dest)
else:
if self.selinux_enabled():
context = self.selinux_default_context(dest)
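        # Remember whether the destination is being created so that, on success,
        # its permissions can be set from the current umask further below.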
creating = not os.path.exists(b_dest)
try:
# Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
os.rename(b_src, b_dest)
except (IOError, OSError) as e:
if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY, errno.EBUSY]:
                # only try workarounds for errno 18 (cross device), 1 (not permitted), 13 (permission denied),
                # 16 (device busy) and 26 (text file busy) which happens on vagrant synced folders and other 'exotic' non posix file systems
self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
else:
# Use bytes here. In the shippable CI, this fails with
# a UnicodeError with surrogateescape'd strings for an unknown
# reason (doesn't happen in a local Ubuntu16.04 VM)
b_dest_dir = os.path.dirname(b_dest)
b_suffix = os.path.basename(b_dest)
error_msg = None
tmp_dest_name = None
try:
tmp_dest_fd, tmp_dest_name = tempfile.mkstemp(prefix=b'.ansible_tmp',
dir=b_dest_dir, suffix=b_suffix)
except (OSError, IOError) as e:
error_msg = 'The destination directory (%s) is not writable by the current user. Error was: %s' % (os.path.dirname(dest), to_native(e))
except TypeError:
# We expect that this is happening because python3.4.x and
# below can't handle byte strings in mkstemp(). Traceback
# would end in something like:
# file = _os.path.join(dir, pre + name + suf)
# TypeError: can't concat bytes to str
error_msg = ('Failed creating tmp file for atomic move. This usually happens when using Python3 less than Python3.5. '
'Please use Python2.x or Python3.5 or greater.')
finally:
if error_msg:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg=error_msg, exception=traceback.format_exc())
if tmp_dest_name:
b_tmp_dest_name = to_bytes(tmp_dest_name, errors='surrogate_or_strict')
try:
try:
# close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host)
os.close(tmp_dest_fd)
# leaves tmp file behind when sudo and not root
try:
shutil.move(b_src, b_tmp_dest_name)
except OSError:
# cleanup will happen by 'rm' of tmpdir
# copy2 will preserve some metadata
shutil.copy2(b_src, b_tmp_dest_name)
if self.selinux_enabled():
self.set_context_if_different(
b_tmp_dest_name, context, False)
try:
tmp_stat = os.stat(b_tmp_dest_name)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(b_tmp_dest_name, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
try:
os.rename(b_tmp_dest_name, b_dest)
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes and e.errno == errno.EBUSY:
self._unsafe_writes(b_tmp_dest_name, b_dest)
else:
                                    self.fail_json(msg='Unable to make %s into %s, failed final rename from %s: %s' %
(src, dest, b_tmp_dest_name, to_native(e)),
exception=traceback.format_exc())
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Failed to replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
finally:
self.cleanup(b_tmp_dest_name)
if creating:
# make sure the file has the correct permissions
# based on the current value of umask
umask = os.umask(0)
os.umask(umask)
os.chmod(b_dest, DEFAULT_PERM & ~umask)
try:
os.chown(b_dest, os.geteuid(), os.getegid())
except OSError:
# We're okay with trying our best here. If the user is not
# root (or old Unices) they won't be able to chown.
pass
if self.selinux_enabled():
# rename might not preserve context
self.set_context_if_different(dest, context, False)
def _unsafe_writes(self, src, dest):
# sadly there are some situations where we cannot ensure atomicity, but only if
# the user insists and we get the appropriate error we update the file unsafely
try:
out_dest = in_src = None
try:
out_dest = open(dest, 'wb')
in_src = open(src, 'rb')
shutil.copyfileobj(in_src, out_dest)
finally: # assuring closed files in 2.4 compatible way
if out_dest:
out_dest.close()
if in_src:
in_src.close()
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Could not write data to file (%s) from (%s): %s' % (dest, src, to_native(e)),
exception=traceback.format_exc())
def _clean_args(self, args):
if not self._clean:
# create a printable version of the command for use in reporting later,
# which strips out things like passwords from the args list
to_clean_args = args
if PY2:
if isinstance(args, text_type):
to_clean_args = to_bytes(args)
else:
if isinstance(args, binary_type):
to_clean_args = to_text(args)
if isinstance(args, (text_type, binary_type)):
to_clean_args = shlex.split(to_clean_args)
clean_args = []
is_passwd = False
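            # PASSWD_ARG_RE flags password-style options; the value after '='
            # (or the following argument) is masked with '********'.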
for arg in (to_native(a) for a in to_clean_args):
if is_passwd:
is_passwd = False
clean_args.append('********')
continue
if PASSWD_ARG_RE.match(arg):
sep_idx = arg.find('=')
if sep_idx > -1:
clean_args.append('%s=********' % arg[:sep_idx])
continue
else:
is_passwd = True
arg = heuristic_log_sanitize(arg, self.no_log_values)
clean_args.append(arg)
self._clean = ' '.join(shlex_quote(arg) for arg in clean_args)
return self._clean
def _restore_signal_handlers(self):
# Reset SIGPIPE to SIG_DFL, otherwise in Python2.7 it gets ignored in subprocesses.
if PY2 and sys.platform != 'win32':
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None,
use_unsafe_shell=False, prompt_regex=None, environ_update=None, umask=None, encoding='utf-8', errors='surrogate_or_strict',
expand_user_and_vars=True, pass_fds=None, before_communicate_callback=None, ignore_invalid_cwd=True):
'''
Execute a command, returns rc, stdout, and stderr.
:arg args: is the command to run
* If args is a list, the command will be run with shell=False.
* If args is a string and use_unsafe_shell=False it will split args to a list and run with shell=False
* If args is a string and use_unsafe_shell=True it runs with shell=True.
:kw check_rc: Whether to call fail_json in case of non zero RC.
Default False
:kw close_fds: See documentation for subprocess.Popen(). Default True
:kw executable: See documentation for subprocess.Popen(). Default None
:kw data: If given, information to write to the stdin of the command
:kw binary_data: If False, append a newline to the data. Default False
:kw path_prefix: If given, additional path to find the command in.
This adds to the PATH environment variable so helper commands in
the same directory can also be found
:kw cwd: If given, working directory to run the command inside
:kw use_unsafe_shell: See `args` parameter. Default False
:kw prompt_regex: Regex string (not a compiled regex) which can be
used to detect prompts in the stdout which would otherwise cause
the execution to hang (especially if no input data is specified)
:kw environ_update: dictionary to *update* os.environ with
:kw umask: Umask to be used when running the command. Default None
:kw encoding: Since we return native strings, on python3 we need to
know the encoding to use to transform from bytes to text. If you
want to always get bytes back, use encoding=None. The default is
"utf-8". This does not affect transformation of strings given as
args.
:kw errors: Since we return native strings, on python3 we need to
transform stdout and stderr from bytes to text. If the bytes are
undecodable in the ``encoding`` specified, then use this error
handler to deal with them. The default is ``surrogate_or_strict``
which means that the bytes will be decoded using the
surrogateescape error handler if available (available on all
python3 versions we support) otherwise a UnicodeError traceback
will be raised. This does not affect transformations of strings
given as args.
:kw expand_user_and_vars: When ``use_unsafe_shell=False`` this argument
dictates whether ``~`` is expanded in paths and environment variables
are expanded before running the command. When ``True`` a string such as
``$SHELL`` will be expanded regardless of escaping. When ``False`` and
``use_unsafe_shell=False`` no path or variable expansion will be done.
:kw pass_fds: When running on Python 3 this argument
dictates which file descriptors should be passed
to an underlying ``Popen`` constructor. On Python 2, this will
set ``close_fds`` to False.
        :kw before_communicate_callback: This function will be called
            after the ``Popen`` object is created but before communicating
            with the process (the ``Popen`` object is passed to the callback
            as its first argument)
:kw ignore_invalid_cwd: This flag indicates whether an invalid ``cwd``
(non-existent or not a directory) should be ignored or should raise
an exception.
:returns: A 3-tuple of return code (integer), stdout (native string),
and stderr (native string). On python2, stdout and stderr are both
byte strings. On python3, stdout and stderr are text strings converted
according to the encoding and errors parameters. If you want byte
strings on python3, use encoding=None to turn decoding to text off.
'''
# used by clean args later on
self._clean = None
if not isinstance(args, (list, binary_type, text_type)):
msg = "Argument 'args' to run_command must be list or string"
self.fail_json(rc=257, cmd=args, msg=msg)
shell = False
if use_unsafe_shell:
# stringify args for unsafe/direct shell usage
if isinstance(args, list):
args = b" ".join([to_bytes(shlex_quote(x), errors='surrogate_or_strict') for x in args])
else:
args = to_bytes(args, errors='surrogate_or_strict')
# not set explicitly, check if set by controller
if executable:
executable = to_bytes(executable, errors='surrogate_or_strict')
args = [executable, b'-c', args]
elif self._shell not in (None, '/bin/sh'):
args = [to_bytes(self._shell, errors='surrogate_or_strict'), b'-c', args]
else:
shell = True
else:
# ensure args are a list
if isinstance(args, (binary_type, text_type)):
# On python2.6 and below, shlex has problems with text type
# On python3, shlex needs a text type.
if PY2:
args = to_bytes(args, errors='surrogate_or_strict')
elif PY3:
args = to_text(args, errors='surrogateescape')
args = shlex.split(args)
# expand ``~`` in paths, and all environment vars
if expand_user_and_vars:
args = [to_bytes(os.path.expanduser(os.path.expandvars(x)), errors='surrogate_or_strict') for x in args if x is not None]
else:
args = [to_bytes(x, errors='surrogate_or_strict') for x in args if x is not None]
prompt_re = None
if prompt_regex:
if isinstance(prompt_regex, text_type):
if PY3:
prompt_regex = to_bytes(prompt_regex, errors='surrogateescape')
elif PY2:
prompt_regex = to_bytes(prompt_regex, errors='surrogate_or_strict')
try:
prompt_re = re.compile(prompt_regex, re.MULTILINE)
except re.error:
self.fail_json(msg="invalid prompt regular expression given to run_command")
rc = 0
msg = None
st_in = None
# Manipulate the environ we'll send to the new process
old_env_vals = {}
# We can set this from both an attribute and per call
for key, val in self.run_command_environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if environ_update:
for key, val in environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if path_prefix:
path = os.environ.get('PATH', '')
old_env_vals['PATH'] = path
if path:
os.environ['PATH'] = "%s:%s" % (path_prefix, path)
else:
os.environ['PATH'] = path_prefix
# If using test-module.py and explode, the remote lib path will resemble:
# /tmp/test_module_scratch/debug_dir/ansible/module_utils/basic.py
# If using ansible or ansible-playbook with a remote system:
# /tmp/ansible_vmweLQ/ansible_modlib.zip/ansible/module_utils/basic.py
# Clean out python paths set by ansiballz
if 'PYTHONPATH' in os.environ:
pypaths = os.environ['PYTHONPATH'].split(':')
pypaths = [x for x in pypaths
if not x.endswith('/ansible_modlib.zip') and
not x.endswith('/debug_dir')]
os.environ['PYTHONPATH'] = ':'.join(pypaths)
if not os.environ['PYTHONPATH']:
del os.environ['PYTHONPATH']
if data:
st_in = subprocess.PIPE
kwargs = dict(
executable=executable,
shell=shell,
close_fds=close_fds,
stdin=st_in,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=self._restore_signal_handlers,
)
if PY3 and pass_fds:
kwargs["pass_fds"] = pass_fds
elif PY2 and pass_fds:
kwargs['close_fds'] = False
# store the pwd
prev_dir = os.getcwd()
# make sure we're in the right working directory
if cwd:
if os.path.isdir(cwd):
cwd = to_bytes(os.path.abspath(os.path.expanduser(cwd)), errors='surrogate_or_strict')
kwargs['cwd'] = cwd
try:
os.chdir(cwd)
except (OSError, IOError) as e:
self.fail_json(rc=e.errno, msg="Could not chdir to %s, %s" % (cwd, to_native(e)),
exception=traceback.format_exc())
elif not ignore_invalid_cwd:
self.fail_json(msg="Provided cwd is not a valid directory: %s" % cwd)
old_umask = None
if umask:
old_umask = os.umask(umask)
try:
if self._debug:
self.log('Executing: ' + self._clean_args(args))
cmd = subprocess.Popen(args, **kwargs)
if before_communicate_callback:
before_communicate_callback(cmd)
# the communication logic here is essentially taken from that
# of the _communicate() function in ssh.py
stdout = b''
stderr = b''
try:
selector = selectors.DefaultSelector()
except (IOError, OSError):
# Failed to detect default selector for the given platform
# Select PollSelector which is supported by major platforms
selector = selectors.PollSelector()
selector.register(cmd.stdout, selectors.EVENT_READ)
selector.register(cmd.stderr, selectors.EVENT_READ)
if os.name == 'posix':
fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
if data:
if not binary_data:
data += '\n'
if isinstance(data, text_type):
data = to_bytes(data)
cmd.stdin.write(data)
cmd.stdin.close()
while True:
events = selector.select(1)
for key, event in events:
b_chunk = key.fileobj.read()
if b_chunk == b(''):
selector.unregister(key.fileobj)
if key.fileobj == cmd.stdout:
stdout += b_chunk
elif key.fileobj == cmd.stderr:
stderr += b_chunk
# if we're checking for prompts, do it now
if prompt_re:
if prompt_re.search(stdout) and not data:
if encoding:
stdout = to_native(stdout, encoding=encoding, errors=errors)
return (257, stdout, "A prompt was encountered while running a command, but no input data was specified")
# only break out if no pipes are left to read or
# the pipes are completely read and
# the process is terminated
if (not events or not selector.get_map()) and cmd.poll() is not None:
break
# No pipes are left to read but process is not yet terminated
# Only then it is safe to wait for the process to be finished
# NOTE: Actually cmd.poll() is always None here if no selectors are left
elif not selector.get_map() and cmd.poll() is None:
cmd.wait()
# The process is terminated. Since no pipes to read from are
# left, there is no need to call select() again.
break
cmd.stdout.close()
cmd.stderr.close()
selector.close()
rc = cmd.returncode
except (OSError, IOError) as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(e)))
self.fail_json(rc=e.errno, msg=to_native(e), cmd=self._clean_args(args))
except Exception as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(traceback.format_exc())))
self.fail_json(rc=257, msg=to_native(e), exception=traceback.format_exc(), cmd=self._clean_args(args))
# Restore env settings
for key, val in old_env_vals.items():
if val is None:
del os.environ[key]
else:
os.environ[key] = val
if old_umask:
os.umask(old_umask)
if rc != 0 and check_rc:
msg = heuristic_log_sanitize(stderr.rstrip(), self.no_log_values)
self.fail_json(cmd=self._clean_args(args), rc=rc, stdout=stdout, stderr=stderr, msg=msg)
# reset the pwd
os.chdir(prev_dir)
if encoding is not None:
return (rc, to_native(stdout, encoding=encoding, errors=errors),
to_native(stderr, encoding=encoding, errors=errors))
return (rc, stdout, stderr)
def append_to_file(self, filename, str):
filename = os.path.expandvars(os.path.expanduser(filename))
fh = open(filename, 'a')
fh.write(str)
fh.close()
def bytes_to_human(self, size):
return bytes_to_human(size)
# for backwards compatibility
pretty_bytes = bytes_to_human
def human_to_bytes(self, number, isbits=False):
return human_to_bytes(number, isbits)
#
# Backwards compat
#
# In 2.0, moved from inside the module to the toplevel
is_executable = is_executable
@staticmethod
def get_buffer_size(fd):
try:
            # 1032 == F_GETPIPE_SZ (Linux)
buffer_size = fcntl.fcntl(fd, 1032)
except Exception:
try:
# not as exact as above, but should be good enough for most platforms that fail the previous call
buffer_size = select.PIPE_BUF
except Exception:
                buffer_size = 9000  # use a sane default just in case
return buffer_size
def get_module_path():
return os.path.dirname(os.path.realpath(__file__))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70535 |
Copy fails on Proxmox VE's CFS filesystem and unsafe_writes does not work
|
##### SUMMARY
I cannot seem to copy a file into the `/etc/pve/` directory of a Proxmox VE server. This is a FUSE filesystem implemented by `/usr/bin/pmxcfs`, the Proxmox Cluster File System (CFS).
CFS has a peculiarity in that it does not allow the `chmod` system call, returning Errno 1: Operation not permitted.
The problem is that I cannot find a way to prevent `copy` from issuing a chmod on the target file (even if I set `mode: preserve`), nor make it use a direct write (even if I set `unsafe_writes: yes`); instead, it always terminates the task with an "Operation not permitted" error.
I think if `unsafe_writes` had a "force" value, this would be a non-issue. Maybe https://github.com/ansible/ansible/issues/24449 could be reopened?
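For reference, the failure can be reproduced outside of Ansible with plain Python. This is only a minimal sketch, run as root on a PVE node; the paths are illustrative:

```python
# Minimal repro of the chmod problem underlying this report (assumed paths).
import shutil

src = '/tmp/test-src'
dst = '/etc/pve/local/test'  # CFS mount; chmod() returns EPERM here

with open(src, 'w') as f:
    f.write('Hello.\n')

try:
    # copy2() is copy() followed by copystat(); copystat() calls os.chmod()
    # on dst, which CFS rejects even for root.
    shutil.copy2(src, dst)
except OSError as e:
    print('copy2 failed:', e)

# A plain write succeeds, which is what a forced unsafe write would do:
with open(dst, 'w') as f:
    f.write('Hello.\n')
```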
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`copy`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tobia/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
(none)
##### OS / ENVIRONMENT
Target OS is Proxmox PVE 6.1, based on Debian 10 Buster.
The package pve-cluster which contains the CFS filesystem implementation is at version 6.1-8, but Ansible has always had this issue with all versions of CFS.
##### STEPS TO REPRODUCE
Try to create a file in the CFS filesystem:
```yaml
- hosts: HOSTNAME_REDACTED
become: true
tasks:
- name: Test file in CFS filesystem
copy:
dest: /etc/pve/local/test
content: Hello.
unsafe_writes: yes
```
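The same failure can be triggered without a playbook; an equivalent ad-hoc invocation would be (hostname is the placeholder from above):

```
ansible HOSTNAME_REDACTED -b -m copy -a 'dest=/etc/pve/local/test content=Hello. unsafe_writes=yes'
```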
##### EXPECTED RESULTS
With `unsafe_writes` enabled, I would expect the test file to be created, because both Bash and Vim can do it with no trouble:
```
# echo World > /etc/pve/local/test
```
and
```
# vim /etc/pve/local/test
```
both work.
##### ACTUAL RESULTS
For some reason, `copy` does not even try to perform a direct write. It always goes to the `shutil.copy2()` route, which will invariably fail on the `os.chmod()` call.
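Tracing through `atomic_move()` in `module_utils/basic.py` (quoted above in this dump), it looks like the `_unsafe_writes()` fallback is only reached when `mkstemp()` in the destination directory fails, or when the final `os.rename()` raises `EBUSY`. On CFS, `mkstemp()` succeeds and the EPERM comes from `copystat()`/`os.chmod()` inside `shutil.copy2()`, so `unsafe_writes: yes` never takes effect. A minimal sketch of that control flow (paraphrased, not the verbatim module code):

```python
# Paraphrased control flow of AnsibleModule.atomic_move() for this failure.
import os
import shutil
import tempfile

def atomic_move_sketch(src, dest):
    try:
        os.rename(src, dest)                     # EXDEV: /tmp -> FUSE mount
    except OSError:
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest))
        os.close(fd)                             # mkstemp works on CFS, so the
        try:                                     # unsafe_writes branch is skipped
            shutil.copy2(src, tmp)               # copystat -> chmod -> EPERM
            os.rename(tmp, dest)                 # unsafe_writes only on EBUSY here
        except OSError as e:
            print('Failed to replace file:', e)  # what the module reports
        finally:
            if os.path.exists(tmp):
                os.unlink(tmp)
```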
```paste below
TASK [Test file in CFS filesystem] ******************************************************************************************************************************************
task path: /home/tobia/proj/ansible/test.yaml:4
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<IP_REDACTED> (0, b'/home/tobia\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510 `" && echo ansible-tmp-1594285570.2494516-168830778907510="` echo /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510 `" ) && sleep 0'"'"''
<IP_REDACTED> (0, b'ansible-tmp-1594285570.2494516-168830778907510=/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /usr/lib/python3/dist-packages/ansible/modules/files/stat.py
<IP_REDACTED> PUT /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpxsjb4aw6 TO /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py
<IP_REDACTED> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b '[IP_REDACTED]'
<IP_REDACTED> (0, b'sftp> put /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpxsjb4aw6 /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/tobia size 0\r\ndebug3: Looking up /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpxsjb4aw6\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:10345\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 10345 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'chmod u+x /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b -tt IP_REDACTED '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-bcosprpmvlvgctstkbcyztnsdploapgs ; /usr/bin/python /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<IP_REDACTED> (0, b'\r\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "path": "/etc/pve/local/test", "get_md5": false, "get_mime": true, "get_attributes": true}}, "stat": {"exists": false}, "changed": false}\r\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to IP_REDACTED closed.\r\n')
<IP_REDACTED> PUT /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpw3tfcrhr TO /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source
<IP_REDACTED> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b '[IP_REDACTED]'
<IP_REDACTED> (0, b'sftp> put /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpw3tfcrhr /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/tobia size 0\r\ndebug3: Looking up /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpw3tfcrhr\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:6\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 6 bytes at 0\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'chmod u+x /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /usr/lib/python3/dist-packages/ansible/modules/files/copy.py
<IP_REDACTED> PUT /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpn5cste59 TO /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py
<IP_REDACTED> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b '[IP_REDACTED]'
<IP_REDACTED> (0, b'sftp> put /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpn5cste59 /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/tobia size 0\r\ndebug3: Looking up /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpn5cste59\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:14834\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 14834 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'chmod u+x /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b -tt IP_REDACTED '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-nfbvibtaerdhzqnygptsdnqyifxvplid ; /usr/bin/python /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<IP_REDACTED> (1, b'\r\n{"msg": "Failed to replace file: /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source to /etc/pve/local/test: [Errno 1] Operation not permitted: \'/etc/pve/local/.ansible_tmp7FFX7ytest\'", "failed": true, "exception": "Traceback (most recent call last):\\n File \\"/tmp/ansible_copy_payload_xkp36J/ansible_copy_payload.zip/ansible/module_utils/basic.py\\", line 2299, in atomic_move\\n shutil.copy2(b_src, b_tmp_dest_name)\\n File \\"/usr/lib/python2.7/shutil.py\\", line 154, in copy2\\n copystat(src, dst)\\n File \\"/usr/lib/python2.7/shutil.py\\", line 120, in copystat\\n os.chmod(dst, mode)\\nOSError: [Errno 1] Operation not permitted: \'/etc/pve/local/.ansible_tmp7FFX7ytest\'\\n", "invocation": {"module_args": {"directory_mode": null, "force": true, "remote_src": null, "_original_basename": "tmpw3tfcrhr", "owner": null, "follow": false, "local_follow": null, "group": null, "unsafe_writes": true, "setype": null, "content": null, "serole": null, "dest": "/etc/pve/local/test", "selevel": null, "regexp": null, "validate": null, "src": "/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source", "checksum": "9b56d519ccd9e1e5b2a725e186184cdc68de0731", "seuser": null, "delimiter": null, "mode": null, "attributes": null, "backup": false}}}\r\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to IP_REDACTED closed.\r\n')
<IP_REDACTED> Failed to connect to the host via ssh: OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /home/tobia/.ssh/config
debug1: /home/tobia/.ssh/config line 1: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolve_canonicalize: hostname IP_REDACTED is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 979558
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to IP_REDACTED closed.
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'rm -f -r /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ > /dev/null 2>&1 && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_copy_payload_xkp36J/ansible_copy_payload.zip/ansible/module_utils/basic.py", line 2299, in atomic_move
shutil.copy2(b_src, b_tmp_dest_name)
File "/usr/lib/python2.7/shutil.py", line 154, in copy2
copystat(src, dst)
File "/usr/lib/python2.7/shutil.py", line 120, in copystat
os.chmod(dst, mode)
OSError: [Errno 1] Operation not permitted: '/etc/pve/local/.ansible_tmp7FFX7ytest'
fatal: [HOSTNAME_REDACTED]: FAILED! => {
"changed": false,
"checksum": "9b56d519ccd9e1e5b2a725e186184cdc68de0731",
"diff": [],
"invocation": {
"module_args": {
"_original_basename": "tmpw3tfcrhr",
"attributes": null,
"backup": false,
"checksum": "9b56d519ccd9e1e5b2a725e186184cdc68de0731",
"content": null,
"delimiter": null,
"dest": "/etc/pve/local/test",
"directory_mode": null,
"follow": false,
"force": true,
"group": null,
"local_follow": null,
"mode": null,
"owner": null,
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source",
"unsafe_writes": true,
"validate": null
}
},
"msg": "Failed to replace file: /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source to /etc/pve/local/test: [Errno 1] Operation not permitted: '/etc/pve/local/.ansible_tmp7FFX7ytest'"
}
```
|
https://github.com/ansible/ansible/issues/70535
|
https://github.com/ansible/ansible/pull/70722
|
202689b1c0560b68a93e93d0a250ea186a8e3e1a
|
932ba3616067007fd5e449611a34e7e3837fc8ae
| 2020-07-09T09:10:49Z |
python
| 2020-12-21T16:20:52Z |
lib/ansible/modules/get_url.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Jan-Piet Mens <jpmens () gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: get_url
short_description: Downloads files from HTTP, HTTPS, or FTP to node
description:
- Downloads files from HTTP, HTTPS, or FTP to the remote server. The remote
server I(must) have direct access to the remote resource.
- By default, if an environment variable C(<protocol>_proxy) is set on
the target host, requests will be sent through that proxy. This
behaviour can be overridden by setting a variable for this task
(see `setting the environment
<https://docs.ansible.com/playbooks_environment.html>`_),
or by using the use_proxy option.
- HTTP redirects can redirect from HTTP to HTTPS so you should be sure that
your proxy environment for both protocols is correct.
- From Ansible 2.4, when run with C(--check), it will do a HEAD request to validate the URL but
will not download the entire file or verify it against hashes.
- For Windows targets, use the M(ansible.windows.win_get_url) module instead.
version_added: '0.6'
options:
url:
description:
- HTTP, HTTPS, or FTP URL in the form (http|https|ftp)://[user[:pass]]@host.domain[:port]/path
type: str
required: true
dest:
description:
- Absolute path of where to download the file to.
- If C(dest) is a directory, either the server-provided filename or, if
none provided, the base name of the URL on the remote server will be
used. If a directory, C(force) has no effect.
- If C(dest) is a directory, the file will always be downloaded
(regardless of the C(force) option), but replaced only if the contents changed.
type: path
required: true
tmp_dest:
description:
- Absolute path of where temporary file is downloaded to.
- When run on Ansible 2.5 or greater, path defaults to Ansible's C(remote_tmp) setting.
- When run on Ansible prior to 2.5, it defaults to C(TMPDIR), C(TEMP) or C(TMP) env variables or a platform-specific value.
- U(https://docs.python.org/2/library/tempfile.html#tempfile.tempdir)
type: path
version_added: '2.1'
force:
description:
- If C(yes) and C(dest) is not a directory, will download the file every
time and replace the file if the contents change. If C(no), the file
will only be downloaded if the destination does not exist. Generally
should be C(yes) only for small local files.
- Prior to 0.6, this module behaved as if C(yes) was the default.
- Alias C(thirsty) has been deprecated and will be removed in 2.13.
type: bool
default: no
aliases: [ thirsty ]
version_added: '0.7'
backup:
description:
- Create a backup file including the timestamp information so you can get
the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
version_added: '2.1'
sha256sum:
description:
- If a SHA-256 checksum is passed to this parameter, the digest of the
destination file will be calculated after it is downloaded to ensure
its integrity and verify that the transfer completed successfully.
This option is deprecated and will be removed in version 2.14. Use
option C(checksum) instead.
default: ''
type: str
version_added: "1.3"
checksum:
description:
- 'If a checksum is passed to this parameter, the digest of the
destination file will be calculated after it is downloaded to ensure
its integrity and verify that the transfer completed successfully.
Format: <algorithm>:<checksum|url>, e.g. checksum="sha256:D98291AC[...]B6DC7B97",
checksum="sha256:http://example.com/path/sha256sum.txt"'
- If you worry about portability, only the sha1 algorithm is available
on all platforms and python versions.
- The third party hashlib library can be installed for access to additional algorithms.
- Additionally, if a checksum is passed to this parameter, and the file exist under
the C(dest) location, the I(destination_checksum) would be calculated, and if
checksum equals I(destination_checksum), the file download would be skipped
(unless C(force) is true). If the checksum does not equal I(destination_checksum),
the destination file is deleted.
type: str
default: ''
version_added: "2.0"
use_proxy:
description:
- If C(no), it will not use a proxy, even if one is defined in
an environment variable on the target hosts.
type: bool
default: yes
validate_certs:
description:
- If C(no), SSL certificates will not be validated.
- This should only be used on personally controlled sites using self-signed certificates.
type: bool
default: yes
timeout:
description:
- Timeout in seconds for URL request.
type: int
default: 10
version_added: '1.8'
headers:
description:
- Add custom HTTP headers to a request in hash/dict format.
- The hash/dict format was added in Ansible 2.6.
- Previous versions used a C("key:value,key:value") string format.
- The C("key:value,key:value") string format is deprecated and has been removed in version 2.10.
type: dict
version_added: '2.0'
url_username:
description:
- The username for use in HTTP basic authentication.
- This parameter can be used without C(url_password) for sites that allow empty passwords.
- Since version 2.8 you can also use the C(username) alias for this option.
type: str
aliases: ['username']
version_added: '1.6'
url_password:
description:
- The password for use in HTTP basic authentication.
- If the C(url_username) parameter is not specified, the C(url_password) parameter will not be used.
- Since version 2.8 you can also use the C(password) alias for this option.
type: str
aliases: ['password']
version_added: '1.6'
force_basic_auth:
description:
- Force the sending of the Basic authentication header upon initial request.
- httplib2, the library used by the uri module, only sends authentication information when a webservice
responds to an initial request with a 401 status. Since some basic auth services do not properly
send a 401, logins will fail.
type: bool
default: no
version_added: '2.0'
client_cert:
description:
- PEM formatted certificate chain file to be used for SSL client authentication.
- This file can also include the key as well, and if the key is included, C(client_key) is not required.
type: path
version_added: '2.4'
client_key:
description:
- PEM formatted file that contains your private key to be used for SSL client authentication.
- If C(client_cert) contains both the certificate and key, this option is not required.
type: path
version_added: '2.4'
http_agent:
description:
- Header to identify as, generally appears in web server logs.
type: str
default: ansible-httpget
use_gssapi:
description:
- Use GSSAPI to perform the authentication, typically this is for Kerberos or Kerberos through Negotiate
authentication.
- Requires the Python library L(gssapi,https://github.com/pythongssapi/python-gssapi) to be installed.
- Credentials for GSSAPI can be specified with I(url_username)/I(url_password) or with the GSSAPI env var
C(KRB5CCNAME) that specifies a custom Kerberos credential cache.
- NTLM authentication is C(not) supported even if the GSSAPI mech for NTLM has been installed.
type: bool
default: no
version_added: '2.11'
# informational: requirements for nodes
extends_documentation_fragment:
- files
notes:
- For Windows targets, use the M(ansible.windows.win_get_url) module instead.
seealso:
- module: ansible.builtin.uri
- module: ansible.windows.win_get_url
author:
- Jan-Piet Mens (@jpmens)
'''
EXAMPLES = r'''
- name: Download foo.conf
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
mode: '0440'
- name: Download file and force basic auth
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
force_basic_auth: yes
- name: Download file with custom HTTP headers
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
headers:
key1: one
key2: two
- name: Download file with check (sha256)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c
- name: Download file with check (md5)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: md5:66dffb5228a211e61d6d7ef4a86f5758
- name: Download file with checksum url (sha256)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: sha256:http://example.com/path/sha256sum.txt
- name: Download file from a file path
get_url:
url: file:///tmp/afile.txt
dest: /tmp/afilecopy.txt
- name: < Fetch file that requires authentication.
username/password only available since 2.8, in older versions you need to use url_username/url_password
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
username: bar
password: '{{ mysecret }}'
'''
RETURN = r'''
backup_file:
description: name of backup file created after download
returned: changed and if backup=yes
type: str
sample: /path/to/file.txt.2015-02-12@22:09~
checksum_dest:
description: sha1 checksum of the file after copy
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
checksum_src:
description: sha1 checksum of the file
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
dest:
description: destination file/path
returned: success
type: str
sample: /path/to/file.txt
elapsed:
description: The number of seconds that elapsed while performing the download
returned: always
type: int
sample: 23
gid:
description: group id of the file
returned: success
type: int
sample: 100
group:
description: group of the file
returned: success
type: str
sample: "httpd"
md5sum:
description: md5 checksum of the file after download
returned: when supported
type: str
sample: "2a5aeecc61dc98c4d780b14b330e3282"
mode:
description: permissions of the target
returned: success
type: str
sample: "0644"
msg:
description: the HTTP message from the request
returned: always
type: str
sample: OK (unknown bytes)
owner:
description: owner of the file
returned: success
type: str
sample: httpd
secontext:
description: the SELinux security context of the file
returned: success
type: str
sample: unconfined_u:object_r:user_tmp_t:s0
size:
description: size of the target
returned: success
type: int
sample: 1220
src:
description: source file used after download
returned: always
type: str
sample: /tmp/tmpAdFLdV
state:
description: state of the target
returned: success
type: str
sample: file
status_code:
description: the HTTP status code from the request
returned: always
type: int
sample: 200
uid:
description: owner id of the file, after execution
returned: success
type: int
sample: 100
url:
description: the actual URL used for the request
returned: always
type: str
sample: https://www.ansible.com/
'''
import datetime
import os
import re
import shutil
import tempfile
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves.urllib.parse import urlsplit
from ansible.module_utils._text import to_native
from ansible.module_utils.urls import fetch_url, url_argument_spec
# ==============================================================
# url handling
def url_filename(url):
fn = os.path.basename(urlsplit(url)[2])
if fn == '':
return 'index.html'
return fn
def url_get(module, url, dest, use_proxy, last_mod_time, force, timeout=10, headers=None, tmp_dest=''):
"""
Download data from the url and store in a temporary file.
Return (tempfile, info about the request)
"""
if module.check_mode:
method = 'HEAD'
else:
method = 'GET'
start = datetime.datetime.utcnow()
rsp, info = fetch_url(module, url, use_proxy=use_proxy, force=force, last_mod_time=last_mod_time, timeout=timeout, headers=headers, method=method)
elapsed = (datetime.datetime.utcnow() - start).seconds
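# HTTP 304 Not Modified: the remote file is unchanged since last_mod_time, so there is nothing to download.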
if info['status'] == 304:
module.exit_json(url=url, dest=dest, changed=False, msg=info.get('msg', ''), status_code=info['status'], elapsed=elapsed)
# Exceptions in fetch_url may result in a status -1; this ensures a proper error to the user in all cases
if info['status'] == -1:
module.fail_json(msg=info['msg'], url=url, dest=dest, elapsed=elapsed)
if info['status'] != 200 and not url.startswith('file:/') and not (url.startswith('ftp:/') and info.get('msg', '').startswith('OK')):
module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], url=url, dest=dest, elapsed=elapsed)
# create a temporary file and copy content to do checksum-based replacement
if tmp_dest:
# tmp_dest should be an existing dir
tmp_dest_is_dir = os.path.isdir(tmp_dest)
if not tmp_dest_is_dir:
if os.path.exists(tmp_dest):
module.fail_json(msg="%s is a file but should be a directory." % tmp_dest, elapsed=elapsed)
else:
module.fail_json(msg="%s directory does not exist." % tmp_dest, elapsed=elapsed)
else:
tmp_dest = module.tmpdir
fd, tempname = tempfile.mkstemp(dir=tmp_dest)
f = os.fdopen(fd, 'wb')
try:
shutil.copyfileobj(rsp, f)
except Exception as e:
os.remove(tempname)
module.fail_json(msg="failed to create temporary content file: %s" % to_native(e), elapsed=elapsed, exception=traceback.format_exc())
f.close()
rsp.close()
return tempname, info
def extract_filename_from_headers(headers):
"""
Extracts a filename from the given dict of HTTP headers.
Looks for the content-disposition header and applies a regex.
Returns the filename if successful, else None."""
cont_disp_regex = 'attachment; ?filename="?([^"]+)'
res = None
if 'content-disposition' in headers:
cont_disp = headers['content-disposition']
match = re.match(cont_disp_regex, cont_disp)
if match:
res = match.group(1)
# Try preventing any funny business.
res = os.path.basename(res)
return res
def is_url(checksum):
"""
Returns True if checksum value has supported URL scheme, else False."""
supported_schemes = ('http', 'https', 'ftp', 'file')
return urlsplit(checksum).scheme in supported_schemes
# ==============================================================
# main
def main():
argument_spec = url_argument_spec()
# setup aliases
argument_spec['url_username']['aliases'] = ['username']
argument_spec['url_password']['aliases'] = ['password']
argument_spec.update(
url=dict(type='str', required=True),
dest=dict(type='path', required=True),
backup=dict(type='bool', default=False),
sha256sum=dict(type='str', default=''),
checksum=dict(type='str', default=''),
timeout=dict(type='int', default=10),
headers=dict(type='dict'),
tmp_dest=dict(type='path'),
)
module = AnsibleModule(
# not checking because of daisy chain to file module
argument_spec=argument_spec,
add_file_common_args=True,
supports_check_mode=True,
mutually_exclusive=[['checksum', 'sha256sum']],
)
if module.params.get('thirsty'):
module.deprecate('The alias "thirsty" has been deprecated and will be removed, use "force" instead',
version='2.13', collection_name='ansible.builtin')
if module.params.get('sha256sum'):
module.deprecate('The parameter "sha256sum" has been deprecated and will be removed, use "checksum" instead',
version='2.14', collection_name='ansible.builtin')
url = module.params['url']
dest = module.params['dest']
backup = module.params['backup']
force = module.params['force']
sha256sum = module.params['sha256sum']
checksum = module.params['checksum']
use_proxy = module.params['use_proxy']
timeout = module.params['timeout']
headers = module.params['headers']
tmp_dest = module.params['tmp_dest']
result = dict(
changed=False,
checksum_dest=None,
checksum_src=None,
dest=dest,
elapsed=0,
url=url,
)
dest_is_dir = os.path.isdir(dest)
last_mod_time = None
# workaround for usage of deprecated sha256sum parameter
if sha256sum:
checksum = 'sha256:%s' % (sha256sum)
# checksum specified, parse for algorithm and checksum
if checksum:
try:
algorithm, checksum = checksum.split(':', 1)
except ValueError:
module.fail_json(msg="The checksum parameter has to be in format <algorithm>:<checksum>", **result)
if is_url(checksum):
checksum_url = checksum
# download checksum file to checksum_tmpsrc
checksum_tmpsrc, checksum_info = url_get(module, checksum_url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest)
with open(checksum_tmpsrc) as f:
lines = [line.rstrip('\n') for line in f]
os.remove(checksum_tmpsrc)
checksum_map = []
for line in lines:
parts = line.split(None, 1)
if len(parts) == 2:
checksum_map.append((parts[0], parts[1]))
filename = url_filename(url)
# Look through each line in the checksum file for a hash corresponding to
# the filename in the url, returning the first hash that is found.
for cksum in (s for (s, f) in checksum_map if f.strip('./') == filename):
checksum = cksum
break
else:
checksum = None
if checksum is None:
module.fail_json(msg="Unable to find a checksum for file '%s' in '%s'" % (filename, checksum_url))
# Remove any non-alphanumeric characters, including the infamous
# Unicode zero-width space
checksum = re.sub(r'\W+', '', checksum).lower()
# Ensure the checksum portion is a hexdigest
try:
int(checksum, 16)
except ValueError:
module.fail_json(msg='The checksum format is invalid', **result)
if not dest_is_dir and os.path.exists(dest):
checksum_mismatch = False
# If the download is not forced and there is a checksum, allow
# checksum match to skip the download.
if not force and checksum != '':
destination_checksum = module.digest_from_file(dest, algorithm)
if checksum != destination_checksum:
checksum_mismatch = True
# Not forcing redownload, unless checksum does not match
if not force and checksum and not checksum_mismatch:
# allow file attribute changes
file_args = module.load_file_common_arguments(module.params, path=dest)
result['changed'] = module.set_fs_attributes_if_different(file_args, False)
if result['changed']:
module.exit_json(msg="file already exists but file attributes changed", **result)
module.exit_json(msg="file already exists", **result)
# If the file already exists, prepare the last modified time for the
# request.
mtime = os.path.getmtime(dest)
last_mod_time = datetime.datetime.utcfromtimestamp(mtime)
# If the checksum does not match we have to force the download
# because last_mod_time may be newer than on remote
if checksum_mismatch:
force = True
# download to tmpsrc
start = datetime.datetime.utcnow()
tmpsrc, info = url_get(module, url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest)
result['elapsed'] = (datetime.datetime.utcnow() - start).seconds
result['src'] = tmpsrc
# Now the request has completed, we can finally generate the final
# destination file name from the info dict.
if dest_is_dir:
filename = extract_filename_from_headers(info)
if not filename:
# Fall back to extracting the filename from the URL.
# Pluck the URL from the info, since a redirect could have changed
# it.
filename = url_filename(info['url'])
dest = os.path.join(dest, filename)
result['dest'] = dest
# raise an error if there is no tmpsrc file
if not os.path.exists(tmpsrc):
os.remove(tmpsrc)
module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], **result)
if not os.access(tmpsrc, os.R_OK):
os.remove(tmpsrc)
module.fail_json(msg="Source %s is not readable" % (tmpsrc), **result)
result['checksum_src'] = module.sha1(tmpsrc)
# check if there is no dest file
if os.path.exists(dest):
# raise an error if copy has no permission on dest
if not os.access(dest, os.W_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not writable" % (dest), **result)
if not os.access(dest, os.R_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not readable" % (dest), **result)
result['checksum_dest'] = module.sha1(dest)
else:
if not os.path.exists(os.path.dirname(dest)):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s does not exist" % (os.path.dirname(dest)), **result)
if not os.access(os.path.dirname(dest), os.W_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not writable" % (os.path.dirname(dest)), **result)
if module.check_mode:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
result['changed'] = ('checksum_dest' not in result or
result['checksum_src'] != result['checksum_dest'])
module.exit_json(msg=info.get('msg', ''), **result)
backup_file = None
if result['checksum_src'] != result['checksum_dest']:
try:
if backup:
if os.path.exists(dest):
backup_file = module.backup_local(dest)
module.atomic_move(tmpsrc, dest)
except Exception as e:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
module.fail_json(msg="failed to copy %s to %s: %s" % (tmpsrc, dest, to_native(e)),
exception=traceback.format_exc(), **result)
result['changed'] = True
else:
result['changed'] = False
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
if checksum != '':
destination_checksum = module.digest_from_file(dest, algorithm)
if checksum != destination_checksum:
os.remove(dest)
module.fail_json(msg="The checksum for %s did not match %s; it was %s." % (dest, checksum, destination_checksum), **result)
# allow file attribute changes
file_args = module.load_file_common_arguments(module.params, path=dest)
result['changed'] = module.set_fs_attributes_if_different(file_args, result['changed'])
# Backwards compat only. We'll return None on FIPS enabled systems
try:
result['md5sum'] = module.md5(dest)
except ValueError:
result['md5sum'] = None
if backup_file:
result['backup_file'] = backup_file
# Mission complete
module.exit_json(msg=info.get('msg', ''), status_code=info.get('status', ''), **result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,535 |
Copy fails on Proxmox VE's CFS filesystem and unsafe_writes does not work
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I cannot copy a file into the `/etc/pve/` directory of a Proxmox VE server. This is a FUSE filesystem implemented by `/usr/bin/pmxcfs`, the Proxmox Cluster File System (CFS).
CFS has the peculiarity that it does not allow the `chmod` system call, returning Errno 1: Operation not permitted.
The problem is that I cannot find a way to prevent `copy` from issuing a chmod on the target file (even with `mode: preserve`), nor to make it use a direct write (even with `unsafe_writes: yes`); instead, the task always fails with an "Operation not permitted" error.
I think if `unsafe_writes` had a "force" value, this would be a non-issue. Maybe https://github.com/ansible/ansible/issues/24449 could be reopened?
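To make the failure mode concrete outside of Ansible, here is a minimal standalone sketch (the paths are examples and this is not Ansible's actual code): a metadata-preserving copy fails on CFS because `shutil.copy2()` calls `os.chmod()` internally, while a plain byte-for-byte write succeeds, just as `echo` and `vim` do.
```python
# Minimal standalone sketch (NOT Ansible code; paths are examples only).
import shutil

SRC = "/tmp/source"            # hypothetical staged file
DEST = "/etc/pve/local/test"   # CFS path that rejects chmod(2)

try:
    # copy2() copies data *and* metadata; its copystat() step calls
    # os.chmod(), which CFS refuses with EPERM (Errno 1).
    shutil.copy2(SRC, DEST)
except OSError as e:
    print("metadata-preserving copy failed:", e)
    # A plain byte copy touches no metadata and succeeds on CFS;
    # this is what I would expect unsafe_writes to fall back to.
    with open(SRC, "rb") as src, open(DEST, "wb") as dst:
        shutil.copyfileobj(src, dst)
```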
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`copy`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tobia/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
(none)
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS is Proxmox PVE 6.1, based on Debian 10 Buster.
The package pve-cluster, which contains the CFS filesystem implementation, is at version 6.1-8, but Ansible has always had this issue with all versions of CFS.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Try to create a file in the CFS filesystem:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: HOSTNAME_REDACTED
become: true
tasks:
- name: Test file in CFS filesystem
copy:
dest: /etc/pve/local/test
content: Hello.
unsafe_writes: yes
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
With `unsafe_writes` enabled, I would expect the test file to be created, because both Bash and Vim can do it with no trouble:
```
# echo World > /etc/pve/local/test
```
and
```
# vim /etc/pve/local/test
```
both work.
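In the meantime, the only workaround I see is to bypass `copy` entirely and let the shell do the write, along the lines of the sketch below (task name and content are illustrative):
```yaml
- name: Workaround, write the file via the shell instead of copy
  shell: echo 'Hello.' > /etc/pve/local/test
```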
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
For some reason, `copy` never even attempts a direct write. It always goes down the `shutil.copy2()` route, which invariably fails on the `os.chmod()` call.
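For reference, here is a hedged sketch of the chain as I read it from the traceback below; the function name and control flow are simplified illustrations, not the real `module_utils/basic.py` code:
```python
# Simplified illustration of the failing path; not the real atomic_move().
import os
import shutil
import tempfile

def atomic_move_sketch(src, dest, unsafe_writes=False):
    # atomic_move stages a temp file next to dest, then renames it in place.
    fd, tmp = tempfile.mkstemp(prefix=".ansible_tmp", dir=os.path.dirname(dest))
    os.close(fd)
    try:
        shutil.copy2(src, tmp)  # copystat() calls os.chmod(): EPERM on CFS
        os.rename(tmp, dest)
    except OSError:
        if os.path.exists(tmp):
            os.unlink(tmp)
        if not unsafe_writes:
            raise
        # What I expected unsafe_writes to do: degrade to a direct write.
        with open(src, "rb") as s, open(dest, "wb") as d:
            shutil.copyfileobj(s, d)
```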
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Test file in CFS filesystem] ******************************************************************************************************************************************
task path: /home/tobia/proj/ansible/test.yaml:4
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<IP_REDACTED> (0, b'/home/tobia\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510 `" && echo ansible-tmp-1594285570.2494516-168830778907510="` echo /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510 `" ) && sleep 0'"'"''
<IP_REDACTED> (0, b'ansible-tmp-1594285570.2494516-168830778907510=/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /usr/lib/python3/dist-packages/ansible/modules/files/stat.py
<IP_REDACTED> PUT /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpxsjb4aw6 TO /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py
<IP_REDACTED> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b '[IP_REDACTED]'
<IP_REDACTED> (0, b'sftp> put /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpxsjb4aw6 /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/tobia size 0\r\ndebug3: Looking up /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpxsjb4aw6\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:10345\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 10345 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'chmod u+x /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b -tt IP_REDACTED '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-bcosprpmvlvgctstkbcyztnsdploapgs ; /usr/bin/python /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<IP_REDACTED> (0, b'\r\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "path": "/etc/pve/local/test", "get_md5": false, "get_mime": true, "get_attributes": true}}, "stat": {"exists": false}, "changed": false}\r\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to IP_REDACTED closed.\r\n')
<IP_REDACTED> PUT /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpw3tfcrhr TO /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source
<IP_REDACTED> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b '[IP_REDACTED]'
<IP_REDACTED> (0, b'sftp> put /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpw3tfcrhr /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/tobia size 0\r\ndebug3: Looking up /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpw3tfcrhr\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:6\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 6 bytes at 0\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'chmod u+x /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /usr/lib/python3/dist-packages/ansible/modules/files/copy.py
<IP_REDACTED> PUT /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpn5cste59 TO /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py
<IP_REDACTED> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b '[IP_REDACTED]'
<IP_REDACTED> (0, b'sftp> put /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpn5cste59 /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/tobia size 0\r\ndebug3: Looking up /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpn5cste59\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:14834\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 14834 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'chmod u+x /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b -tt IP_REDACTED '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-nfbvibtaerdhzqnygptsdnqyifxvplid ; /usr/bin/python /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<IP_REDACTED> (1, b'\r\n{"msg": "Failed to replace file: /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source to /etc/pve/local/test: [Errno 1] Operation not permitted: \'/etc/pve/local/.ansible_tmp7FFX7ytest\'", "failed": true, "exception": "Traceback (most recent call last):\\n File \\"/tmp/ansible_copy_payload_xkp36J/ansible_copy_payload.zip/ansible/module_utils/basic.py\\", line 2299, in atomic_move\\n shutil.copy2(b_src, b_tmp_dest_name)\\n File \\"/usr/lib/python2.7/shutil.py\\", line 154, in copy2\\n copystat(src, dst)\\n File \\"/usr/lib/python2.7/shutil.py\\", line 120, in copystat\\n os.chmod(dst, mode)\\nOSError: [Errno 1] Operation not permitted: \'/etc/pve/local/.ansible_tmp7FFX7ytest\'\\n", "invocation": {"module_args": {"directory_mode": null, "force": true, "remote_src": null, "_original_basename": "tmpw3tfcrhr", "owner": null, "follow": false, "local_follow": null, "group": null, "unsafe_writes": true, "setype": null, "content": null, "serole": null, "dest": "/etc/pve/local/test", "selevel": null, "regexp": null, "validate": null, "src": "/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source", "checksum": "9b56d519ccd9e1e5b2a725e186184cdc68de0731", "seuser": null, "delimiter": null, "mode": null, "attributes": null, "backup": false}}}\r\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to IP_REDACTED closed.\r\n')
<IP_REDACTED> Failed to connect to the host via ssh: OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /home/tobia/.ssh/config
debug1: /home/tobia/.ssh/config line 1: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolve_canonicalize: hostname IP_REDACTED is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 979558
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to IP_REDACTED closed.
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'rm -f -r /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ > /dev/null 2>&1 && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_copy_payload_xkp36J/ansible_copy_payload.zip/ansible/module_utils/basic.py", line 2299, in atomic_move
shutil.copy2(b_src, b_tmp_dest_name)
File "/usr/lib/python2.7/shutil.py", line 154, in copy2
copystat(src, dst)
File "/usr/lib/python2.7/shutil.py", line 120, in copystat
os.chmod(dst, mode)
OSError: [Errno 1] Operation not permitted: '/etc/pve/local/.ansible_tmp7FFX7ytest'
fatal: [HOSTNAME_REDACTED]: FAILED! => {
"changed": false,
"checksum": "9b56d519ccd9e1e5b2a725e186184cdc68de0731",
"diff": [],
"invocation": {
"module_args": {
"_original_basename": "tmpw3tfcrhr",
"attributes": null,
"backup": false,
"checksum": "9b56d519ccd9e1e5b2a725e186184cdc68de0731",
"content": null,
"delimiter": null,
"dest": "/etc/pve/local/test",
"directory_mode": null,
"follow": false,
"force": true,
"group": null,
"local_follow": null,
"mode": null,
"owner": null,
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source",
"unsafe_writes": true,
"validate": null
}
},
"msg": "Failed to replace file: /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source to /etc/pve/local/test: [Errno 1] Operation not permitted: '/etc/pve/local/.ansible_tmp7FFX7ytest'"
}
```
|
https://github.com/ansible/ansible/issues/70535
|
https://github.com/ansible/ansible/pull/70722
|
202689b1c0560b68a93e93d0a250ea186a8e3e1a
|
932ba3616067007fd5e449611a34e7e3837fc8ae
| 2020-07-09T09:10:49Z |
python
| 2020-12-21T16:20:52Z |
test/integration/targets/unsafe_writes/aliases
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,535 |
Copy fails on Proxmox VE's CFS filesystem and unsafe_writes does not work
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I cannot copy a file into the `/etc/pve/` directory of a Proxmox VE server. That directory is a FUSE filesystem implemented by `/usr/bin/pmxcfs`, the Proxmox Cluster File System (CFS).
CFS has the peculiarity that it does not allow the `chmod` system call, returning Errno 1: Operation not permitted.
The problem is that I cannot find a way to prevent `copy` from issuing a chmod on the target file (even if I set `mode: preserve`) or to make it use a direct write (even if I set `unsafe_writes: yes`); instead, it always terminates the task with an "Operation not permitted" error.
I think if `unsafe_writes` had a "force" value, this would be a non-issue. Maybe https://github.com/ansible/ansible/issues/24449 could be reopened?
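The rejection is easy to confirm from Python on the node itself; a minimal sketch (the path matches the reproduction below, the rest is illustrative):
```python
import os

# CFS-mounted path from the reproduction below; everything else is illustrative.
path = "/etc/pve/local/test"

try:
    os.chmod(path, 0o640)
except OSError as exc:
    # On CFS this is expected to fail with [Errno 1] Operation not permitted.
    print("chmod failed:", exc)
```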
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`copy`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tobia/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
(none)
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS is Proxmox VE 6.1, based on Debian 10 Buster.
The pve-cluster package, which contains the CFS filesystem implementation, is at version 6.1-8, but Ansible has always had this issue with every version of CFS.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Try to create a file in the CFS filesystem:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: HOSTNAME_REDACTED
become: true
tasks:
- name: Test file in CFS filesystem
copy:
dest: /etc/pve/local/test
content: Hello.
unsafe_writes: yes
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
With `unsafe_writes` enabled, I would expect the test file to be created, because both Bash and Vim can do it with no trouble:
```
# echo World > /etc/pve/local/test
```
and
```
# vim /etc/pve/local/test
```
both work.
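The same contrast is visible from Python. A minimal sketch, assuming illustrative paths: a plain data write succeeds like the shell redirect above, while the metadata-preserving copy fails:
```python
import shutil

src = "/tmp/source"          # illustrative source file
dst = "/etc/pve/local/test"  # CFS destination from this report

with open(src, "w") as f:
    f.write("Hello.\n")

# Plain data write: succeeds on CFS, just like `echo World > /etc/pve/local/test`.
with open(src) as fin, open(dst, "w") as fout:
    fout.write(fin.read())

# Metadata-preserving copy: copy2() ends in copystat() -> os.chmod(),
# which CFS rejects with [Errno 1] Operation not permitted.
shutil.copy2(src, dst)
```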
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
For some reason, `copy` does not even attempt a direct write. It always takes the `shutil.copy2()` route, which invariably fails on the `os.chmod()` call.
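For illustration only (this is not Ansible's actual `atomic_move()` code), a "forced" unsafe write could simply tolerate the EPERM from the metadata step:
```python
import errno
import shutil

def forced_copy(src, dst):
    """Copy data and metadata, but keep the file if the filesystem
    refuses the trailing chmod (as Proxmox CFS does)."""
    try:
        shutil.copy2(src, dst)        # copyfile() for data, copystat() for metadata
    except OSError as exc:
        if exc.errno != errno.EPERM:
            raise
        shutil.copyfile(src, dst)     # data only; never calls os.chmod()
```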
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Test file in CFS filesystem] ******************************************************************************************************************************************
task path: /home/tobia/proj/ansible/test.yaml:4
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<IP_REDACTED> (0, b'/home/tobia\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510 `" && echo ansible-tmp-1594285570.2494516-168830778907510="` echo /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510 `" ) && sleep 0'"'"''
<IP_REDACTED> (0, b'ansible-tmp-1594285570.2494516-168830778907510=/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /usr/lib/python3/dist-packages/ansible/modules/files/stat.py
<IP_REDACTED> PUT /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpxsjb4aw6 TO /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py
<IP_REDACTED> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b '[IP_REDACTED]'
<IP_REDACTED> (0, b'sftp> put /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpxsjb4aw6 /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/tobia size 0\r\ndebug3: Looking up /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpxsjb4aw6\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:10345\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 10345 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'chmod u+x /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b -tt IP_REDACTED '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-bcosprpmvlvgctstkbcyztnsdploapgs ; /usr/bin/python /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<IP_REDACTED> (0, b'\r\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "path": "/etc/pve/local/test", "get_md5": false, "get_mime": true, "get_attributes": true}}, "stat": {"exists": false}, "changed": false}\r\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to IP_REDACTED closed.\r\n')
<IP_REDACTED> PUT /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpw3tfcrhr TO /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source
<IP_REDACTED> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b '[IP_REDACTED]'
<IP_REDACTED> (0, b'sftp> put /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpw3tfcrhr /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/tobia size 0\r\ndebug3: Looking up /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpw3tfcrhr\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:6\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 6 bytes at 0\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'chmod u+x /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /usr/lib/python3/dist-packages/ansible/modules/files/copy.py
<IP_REDACTED> PUT /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpn5cste59 TO /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py
<IP_REDACTED> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b '[IP_REDACTED]'
<IP_REDACTED> (0, b'sftp> put /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpn5cste59 /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/tobia size 0\r\ndebug3: Looking up /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpn5cste59\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:14834\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 14834 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'chmod u+x /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b -tt IP_REDACTED '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-nfbvibtaerdhzqnygptsdnqyifxvplid ; /usr/bin/python /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<IP_REDACTED> (1, b'\r\n{"msg": "Failed to replace file: /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source to /etc/pve/local/test: [Errno 1] Operation not permitted: \'/etc/pve/local/.ansible_tmp7FFX7ytest\'", "failed": true, "exception": "Traceback (most recent call last):\\n File \\"/tmp/ansible_copy_payload_xkp36J/ansible_copy_payload.zip/ansible/module_utils/basic.py\\", line 2299, in atomic_move\\n shutil.copy2(b_src, b_tmp_dest_name)\\n File \\"/usr/lib/python2.7/shutil.py\\", line 154, in copy2\\n copystat(src, dst)\\n File \\"/usr/lib/python2.7/shutil.py\\", line 120, in copystat\\n os.chmod(dst, mode)\\nOSError: [Errno 1] Operation not permitted: \'/etc/pve/local/.ansible_tmp7FFX7ytest\'\\n", "invocation": {"module_args": {"directory_mode": null, "force": true, "remote_src": null, "_original_basename": "tmpw3tfcrhr", "owner": null, "follow": false, "local_follow": null, "group": null, "unsafe_writes": true, "setype": null, "content": null, "serole": null, "dest": "/etc/pve/local/test", "selevel": null, "regexp": null, "validate": null, "src": "/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source", "checksum": "9b56d519ccd9e1e5b2a725e186184cdc68de0731", "seuser": null, "delimiter": null, "mode": null, "attributes": null, "backup": false}}}\r\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to IP_REDACTED closed.\r\n')
<IP_REDACTED> Failed to connect to the host via ssh: OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /home/tobia/.ssh/config
debug1: /home/tobia/.ssh/config line 1: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolve_canonicalize: hostname IP_REDACTED is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 979558
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to IP_REDACTED closed.
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'rm -f -r /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ > /dev/null 2>&1 && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_copy_payload_xkp36J/ansible_copy_payload.zip/ansible/module_utils/basic.py", line 2299, in atomic_move
shutil.copy2(b_src, b_tmp_dest_name)
File "/usr/lib/python2.7/shutil.py", line 154, in copy2
copystat(src, dst)
File "/usr/lib/python2.7/shutil.py", line 120, in copystat
os.chmod(dst, mode)
OSError: [Errno 1] Operation not permitted: '/etc/pve/local/.ansible_tmp7FFX7ytest'
fatal: [HOSTNAME_REDACTED]: FAILED! => {
"changed": false,
"checksum": "9b56d519ccd9e1e5b2a725e186184cdc68de0731",
"diff": [],
"invocation": {
"module_args": {
"_original_basename": "tmpw3tfcrhr",
"attributes": null,
"backup": false,
"checksum": "9b56d519ccd9e1e5b2a725e186184cdc68de0731",
"content": null,
"delimiter": null,
"dest": "/etc/pve/local/test",
"directory_mode": null,
"follow": false,
"force": true,
"group": null,
"local_follow": null,
"mode": null,
"owner": null,
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source",
"unsafe_writes": true,
"validate": null
}
},
"msg": "Failed to replace file: /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source to /etc/pve/local/test: [Errno 1] Operation not permitted: '/etc/pve/local/.ansible_tmp7FFX7ytest'"
}
```
|
https://github.com/ansible/ansible/issues/70535
|
https://github.com/ansible/ansible/pull/70722
|
202689b1c0560b68a93e93d0a250ea186a8e3e1a
|
932ba3616067007fd5e449611a34e7e3837fc8ae
| 2020-07-09T09:10:49Z |
python
| 2020-12-21T16:20:52Z |
test/integration/targets/unsafe_writes/basic.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,535 |
Copy fails on Proxmox VE's CFS filesystem and unsafe_writes does not work
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I cannot copy a file into the `/etc/pve/` directory of a Proxmox VE server. That directory is a FUSE filesystem implemented by `/usr/bin/pmxcfs`, the Proxmox Cluster File System (CFS).
CFS has the peculiarity that it does not allow the `chmod` system call, returning Errno 1: Operation not permitted.
The problem is that I cannot find a way to prevent `copy` from issuing a chmod on the target file (even if I set `mode: preserve`) or to make it use a direct write (even if I set `unsafe_writes: yes`); instead, it always terminates the task with an "Operation not permitted" error.
I think if `unsafe_writes` had a "force" value, this would be a non-issue. Maybe https://github.com/ansible/ansible/issues/24449 could be reopened?
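The failing chain from the traceback further below can be reproduced in miniature; a sketch with placeholder paths:
```python
import shutil

src = "/tmp/source"                       # placeholder
dst = "/etc/pve/local/.ansible_tmp_test"  # placeholder tmp name on CFS

try:
    # copy2() = copyfile() for the data, then copystat() for the metadata;
    # copystat() calls os.chmod(), which CFS answers with EPERM.
    shutil.copy2(src, dst)
except OSError as exc:
    print(exc)  # [Errno 1] Operation not permitted
```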
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`copy`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tobia/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
(none)
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS is Proxmox VE 6.1, based on Debian 10 Buster.
The pve-cluster package, which contains the CFS filesystem implementation, is at version 6.1-8, but Ansible has always had this issue with every version of CFS.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Try to create a file in the CFS filesystem:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: HOSTNAME_REDACTED
become: true
tasks:
- name: Test file in CFS filesystem
copy:
dest: /etc/pve/local/test
content: Hello.
unsafe_writes: yes
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
With `unsafe_writes` enabled, I would expect the test file to be created, because both Bash and Vim can do it with no trouble:
```
# echo World > /etc/pve/local/test
```
and
```
# vim /etc/pve/local/test
```
both work.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
For some reason, `copy` does not even attempt a direct write. It always takes the `shutil.copy2()` route, which invariably fails on the `os.chmod()` call.
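One plausible building block for such a direct write is `shutil.copyfile()`, which transfers the bytes only and never touches permissions; a one-line sketch with illustrative paths:
```python
import shutil

# copyfile() transfers the bytes only; it never calls copystat()/os.chmod(),
# so it sidesteps the failure shown in the traceback below.
shutil.copyfile("/tmp/source", "/etc/pve/local/test")  # illustrative paths
```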
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Test file in CFS filesystem] ******************************************************************************************************************************************
task path: /home/tobia/proj/ansible/test.yaml:4
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<IP_REDACTED> (0, b'/home/tobia\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510 `" && echo ansible-tmp-1594285570.2494516-168830778907510="` echo /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510 `" ) && sleep 0'"'"''
<IP_REDACTED> (0, b'ansible-tmp-1594285570.2494516-168830778907510=/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /usr/lib/python3/dist-packages/ansible/modules/files/stat.py
<IP_REDACTED> PUT /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpxsjb4aw6 TO /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py
<IP_REDACTED> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b '[IP_REDACTED]'
<IP_REDACTED> (0, b'sftp> put /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpxsjb4aw6 /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/tobia size 0\r\ndebug3: Looking up /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpxsjb4aw6\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:10345\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 10345 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'chmod u+x /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b -tt IP_REDACTED '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-bcosprpmvlvgctstkbcyztnsdploapgs ; /usr/bin/python /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<IP_REDACTED> (0, b'\r\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "path": "/etc/pve/local/test", "get_md5": false, "get_mime": true, "get_attributes": true}}, "stat": {"exists": false}, "changed": false}\r\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to IP_REDACTED closed.\r\n')
<IP_REDACTED> PUT /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpw3tfcrhr TO /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source
<IP_REDACTED> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b '[IP_REDACTED]'
<IP_REDACTED> (0, b'sftp> put /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpw3tfcrhr /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/tobia size 0\r\ndebug3: Looking up /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpw3tfcrhr\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:6\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 6 bytes at 0\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'chmod u+x /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /usr/lib/python3/dist-packages/ansible/modules/files/copy.py
<IP_REDACTED> PUT /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpn5cste59 TO /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py
<IP_REDACTED> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b '[IP_REDACTED]'
<IP_REDACTED> (0, b'sftp> put /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpn5cste59 /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/tobia size 0\r\ndebug3: Looking up /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpn5cste59\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:14834\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 14834 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'chmod u+x /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b -tt IP_REDACTED '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-nfbvibtaerdhzqnygptsdnqyifxvplid ; /usr/bin/python /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<IP_REDACTED> (1, b'\r\n{"msg": "Failed to replace file: /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source to /etc/pve/local/test: [Errno 1] Operation not permitted: \'/etc/pve/local/.ansible_tmp7FFX7ytest\'", "failed": true, "exception": "Traceback (most recent call last):\\n File \\"/tmp/ansible_copy_payload_xkp36J/ansible_copy_payload.zip/ansible/module_utils/basic.py\\", line 2299, in atomic_move\\n shutil.copy2(b_src, b_tmp_dest_name)\\n File \\"/usr/lib/python2.7/shutil.py\\", line 154, in copy2\\n copystat(src, dst)\\n File \\"/usr/lib/python2.7/shutil.py\\", line 120, in copystat\\n os.chmod(dst, mode)\\nOSError: [Errno 1] Operation not permitted: \'/etc/pve/local/.ansible_tmp7FFX7ytest\'\\n", "invocation": {"module_args": {"directory_mode": null, "force": true, "remote_src": null, "_original_basename": "tmpw3tfcrhr", "owner": null, "follow": false, "local_follow": null, "group": null, "unsafe_writes": true, "setype": null, "content": null, "serole": null, "dest": "/etc/pve/local/test", "selevel": null, "regexp": null, "validate": null, "src": "/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source", "checksum": "9b56d519ccd9e1e5b2a725e186184cdc68de0731", "seuser": null, "delimiter": null, "mode": null, "attributes": null, "backup": false}}}\r\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to IP_REDACTED closed.\r\n')
<IP_REDACTED> Failed to connect to the host via ssh: OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /home/tobia/.ssh/config
debug1: /home/tobia/.ssh/config line 1: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolve_canonicalize: hostname IP_REDACTED is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 979558
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to IP_REDACTED closed.
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'rm -f -r /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ > /dev/null 2>&1 && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_copy_payload_xkp36J/ansible_copy_payload.zip/ansible/module_utils/basic.py", line 2299, in atomic_move
shutil.copy2(b_src, b_tmp_dest_name)
File "/usr/lib/python2.7/shutil.py", line 154, in copy2
copystat(src, dst)
File "/usr/lib/python2.7/shutil.py", line 120, in copystat
os.chmod(dst, mode)
OSError: [Errno 1] Operation not permitted: '/etc/pve/local/.ansible_tmp7FFX7ytest'
fatal: [HOSTNAME_REDACTED]: FAILED! => {
"changed": false,
"checksum": "9b56d519ccd9e1e5b2a725e186184cdc68de0731",
"diff": [],
"invocation": {
"module_args": {
"_original_basename": "tmpw3tfcrhr",
"attributes": null,
"backup": false,
"checksum": "9b56d519ccd9e1e5b2a725e186184cdc68de0731",
"content": null,
"delimiter": null,
"dest": "/etc/pve/local/test",
"directory_mode": null,
"follow": false,
"force": true,
"group": null,
"local_follow": null,
"mode": null,
"owner": null,
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source",
"unsafe_writes": true,
"validate": null
}
},
"msg": "Failed to replace file: /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source to /etc/pve/local/test: [Errno 1] Operation not permitted: '/etc/pve/local/.ansible_tmp7FFX7ytest'"
}
```
|
https://github.com/ansible/ansible/issues/70535
|
https://github.com/ansible/ansible/pull/70722
|
202689b1c0560b68a93e93d0a250ea186a8e3e1a
|
932ba3616067007fd5e449611a34e7e3837fc8ae
| 2020-07-09T09:10:49Z |
python
| 2020-12-21T16:20:52Z |
test/integration/targets/unsafe_writes/runme.sh
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,535 |
Copy fails on Proxmox VE's CFS filesystem and unsafe_writes does not work
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I cannot copy a file into the `/etc/pve/` directory of a Proxmox VE server. This is a FUSE filesystem implemented by `/usr/bin/pmxcfs`, the Proxmox Cluster File System (CFS).
CFS has a peculiarity in that it does not allow the `chmod` system call, returning Errno 1: Operation not permitted.
The problem is that I cannot find a way to prevent `copy` from issuing a chmod on the target file (even if I set `mode: preserve`), nor to make it use a direct write (even if I set `unsafe_writes: yes`); instead it always terminates the task with an "Operation not permitted" error.
I think if `unsafe_writes` had a "force" value, this would be a non-issue. Maybe https://github.com/ansible/ansible/issues/24449 could be reopened?
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`copy`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tobia/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
(none)
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS is Proxmox PVE 6.1, based on Debian 10 Buster.
The package pve-cluster which contains the CFS filesystem implementation is at version 6.1-8, but Ansible has always had this issue with all versions of CFS.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Try to create a file in the CFS filesystem:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: HOSTNAME_REDACTED
become: true
tasks:
- name: Test file in CFS filesystem
copy:
dest: /etc/pve/local/test
content: Hello.
unsafe_writes: yes
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
With `unsafe_writes` enabled, I would expect the test file to be created, because both Bash and Vim can do it with no trouble:
```
# echo World > /etc/pve/local/test
```
and
```
# vim /etc/pve/local/test
```
both work.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
For some reason, `copy` does not even try to perform a direct write. It always goes to the `shutil.copy2()` route, which will invariably fail on the `os.chmod()` call.
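For illustration, here is a minimal sketch (a hypothetical helper, not Ansible's actual `atomic_move()` code) of why the `shutil.copy2()` route fails on CFS and what a direct-write fallback would look like:
```python
import os
import shutil

def place_file(src, dest, unsafe_writes=False):
    """Hypothetical helper mirroring the failing code path."""
    tmp = dest + '.tmp'
    try:
        # copy2() is copyfile() + copystat(); copystat() calls os.chmod(),
        # which CFS rejects with EPERM, so this raises OSError under /etc/pve
        shutil.copy2(src, tmp)
        os.rename(tmp, dest)
    except OSError:
        if not unsafe_writes:
            raise
        # non-atomic direct write: no chmod involved, so it works on CFS,
        # just like the shell redirection shown above
        with open(src, 'rb') as s, open(dest, 'wb') as d:
            shutil.copyfileobj(s, d)
```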
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Test file in CFS filesystem] ******************************************************************************************************************************************
task path: /home/tobia/proj/ansible/test.yaml:4
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<IP_REDACTED> (0, b'/home/tobia\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510 `" && echo ansible-tmp-1594285570.2494516-168830778907510="` echo /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510 `" ) && sleep 0'"'"''
<IP_REDACTED> (0, b'ansible-tmp-1594285570.2494516-168830778907510=/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /usr/lib/python3/dist-packages/ansible/modules/files/stat.py
<IP_REDACTED> PUT /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpxsjb4aw6 TO /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py
<IP_REDACTED> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b '[IP_REDACTED]'
<IP_REDACTED> (0, b'sftp> put /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpxsjb4aw6 /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/tobia size 0\r\ndebug3: Looking up /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpxsjb4aw6\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:10345\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 10345 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'chmod u+x /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b -tt IP_REDACTED '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-bcosprpmvlvgctstkbcyztnsdploapgs ; /usr/bin/python /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_stat.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<IP_REDACTED> (0, b'\r\n{"invocation": {"module_args": {"checksum_algorithm": "sha1", "get_checksum": true, "follow": false, "path": "/etc/pve/local/test", "get_md5": false, "get_mime": true, "get_attributes": true}}, "stat": {"exists": false}, "changed": false}\r\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to IP_REDACTED closed.\r\n')
<IP_REDACTED> PUT /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpw3tfcrhr TO /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source
<IP_REDACTED> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b '[IP_REDACTED]'
<IP_REDACTED> (0, b'sftp> put /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpw3tfcrhr /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/tobia size 0\r\ndebug3: Looking up /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpw3tfcrhr\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:6\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 6 bytes at 0\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'chmod u+x /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
Using module file /usr/lib/python3/dist-packages/ansible/modules/files/copy.py
<IP_REDACTED> PUT /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpn5cste59 TO /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py
<IP_REDACTED> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b '[IP_REDACTED]'
<IP_REDACTED> (0, b'sftp> put /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpn5cste59 /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug2: Remote version: 3\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 2\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug2: Server supports extension "[email protected]" revision 1\r\ndebug3: Sent message fd 3 T:16 I:1\r\ndebug3: SSH_FXP_REALPATH . -> /home/tobia size 0\r\ndebug3: Looking up /home/tobia/.ansible/tmp/ansible-local-979549tzjkfng1/tmpn5cste59\r\ndebug3: Sent message fd 3 T:17 I:2\r\ndebug3: Received stat reply T:101 I:2\r\ndebug1: Couldn\'t stat remote file: No such file or directory\r\ndebug3: Sent message SSH2_FXP_OPEN I:3 P:/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py\r\ndebug3: Sent message SSH2_FXP_WRITE I:4 O:0 S:32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 4 32768 bytes at 0\r\ndebug3: Sent message SSH2_FXP_WRITE I:5 O:32768 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:6 O:65536 S:32768\r\ndebug3: Sent message SSH2_FXP_WRITE I:7 O:98304 S:14834\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 5 32768 bytes at 32768\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 6 32768 bytes at 65536\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: In write loop, ack for 7 14834 bytes at 98304\r\ndebug3: Sent message SSH2_FXP_CLOSE I:4\r\ndebug3: SSH2_FXP_STATUS 0\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'chmod u+x /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b -tt IP_REDACTED '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-nfbvibtaerdhzqnygptsdnqyifxvplid ; /usr/bin/python /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/AnsiballZ_copy.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<IP_REDACTED> (1, b'\r\n{"msg": "Failed to replace file: /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source to /etc/pve/local/test: [Errno 1] Operation not permitted: \'/etc/pve/local/.ansible_tmp7FFX7ytest\'", "failed": true, "exception": "Traceback (most recent call last):\\n File \\"/tmp/ansible_copy_payload_xkp36J/ansible_copy_payload.zip/ansible/module_utils/basic.py\\", line 2299, in atomic_move\\n shutil.copy2(b_src, b_tmp_dest_name)\\n File \\"/usr/lib/python2.7/shutil.py\\", line 154, in copy2\\n copystat(src, dst)\\n File \\"/usr/lib/python2.7/shutil.py\\", line 120, in copystat\\n os.chmod(dst, mode)\\nOSError: [Errno 1] Operation not permitted: \'/etc/pve/local/.ansible_tmp7FFX7ytest\'\\n", "invocation": {"module_args": {"directory_mode": null, "force": true, "remote_src": null, "_original_basename": "tmpw3tfcrhr", "owner": null, "follow": false, "local_follow": null, "group": null, "unsafe_writes": true, "setype": null, "content": null, "serole": null, "dest": "/etc/pve/local/test", "selevel": null, "regexp": null, "validate": null, "src": "/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source", "checksum": "9b56d519ccd9e1e5b2a725e186184cdc68de0731", "seuser": null, "delimiter": null, "mode": null, "attributes": null, "backup": false}}}\r\n', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\nShared connection to IP_REDACTED closed.\r\n')
<IP_REDACTED> Failed to connect to the host via ssh: OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /home/tobia/.ssh/config
debug1: /home/tobia/.ssh/config line 1: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolve_canonicalize: hostname IP_REDACTED is address
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 979558
debug3: mux_client_request_session: session request sent
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
Shared connection to IP_REDACTED closed.
<IP_REDACTED> ESTABLISH SSH CONNECTION FOR USER: None
<IP_REDACTED> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=222 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/tobia/.ansible/cp/0b6228f93b IP_REDACTED '/bin/sh -c '"'"'rm -f -r /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/ > /dev/null 2>&1 && sleep 0'"'"''
<IP_REDACTED> (0, b'', b'OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020\r\ndebug1: Reading configuration data /home/tobia/.ssh/config\r\ndebug1: /home/tobia/.ssh/config line 1: Applying options for *\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files\r\ndebug1: /etc/ssh/ssh_config line 21: Applying options for *\r\ndebug2: resolve_canonicalize: hostname IP_REDACTED is address\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 979558\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_copy_payload_xkp36J/ansible_copy_payload.zip/ansible/module_utils/basic.py", line 2299, in atomic_move
shutil.copy2(b_src, b_tmp_dest_name)
File "/usr/lib/python2.7/shutil.py", line 154, in copy2
copystat(src, dst)
File "/usr/lib/python2.7/shutil.py", line 120, in copystat
os.chmod(dst, mode)
OSError: [Errno 1] Operation not permitted: '/etc/pve/local/.ansible_tmp7FFX7ytest'
fatal: [HOSTNAME_REDACTED]: FAILED! => {
"changed": false,
"checksum": "9b56d519ccd9e1e5b2a725e186184cdc68de0731",
"diff": [],
"invocation": {
"module_args": {
"_original_basename": "tmpw3tfcrhr",
"attributes": null,
"backup": false,
"checksum": "9b56d519ccd9e1e5b2a725e186184cdc68de0731",
"content": null,
"delimiter": null,
"dest": "/etc/pve/local/test",
"directory_mode": null,
"follow": false,
"force": true,
"group": null,
"local_follow": null,
"mode": null,
"owner": null,
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": "/home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source",
"unsafe_writes": true,
"validate": null
}
},
"msg": "Failed to replace file: /home/tobia/.ansible/tmp/ansible-tmp-1594285570.2494516-168830778907510/source to /etc/pve/local/test: [Errno 1] Operation not permitted: '/etc/pve/local/.ansible_tmp7FFX7ytest'"
}
```
|
https://github.com/ansible/ansible/issues/70535
|
https://github.com/ansible/ansible/pull/70722
|
202689b1c0560b68a93e93d0a250ea186a8e3e1a
|
932ba3616067007fd5e449611a34e7e3837fc8ae
| 2020-07-09T09:10:49Z |
python
| 2020-12-21T16:20:52Z |
test/units/module_utils/basic/test_atomic_move.py
|
# -*- coding: utf-8 -*-
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2016 Toshio Kuratomi <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import os
import errno
import json
from itertools import product
import pytest
from ansible.module_utils import basic
@pytest.fixture
def atomic_am(am, mocker):
am.selinux_enabled = mocker.MagicMock()
am.selinux_context = mocker.MagicMock()
am.selinux_default_context = mocker.MagicMock()
am.set_context_if_different = mocker.MagicMock()
yield am
@pytest.fixture
def atomic_mocks(mocker, monkeypatch):
environ = dict()
mocks = {
'chmod': mocker.patch('os.chmod'),
'chown': mocker.patch('os.chown'),
'close': mocker.patch('os.close'),
'environ': mocker.patch('os.environ', environ),
'getlogin': mocker.patch('os.getlogin'),
'getuid': mocker.patch('os.getuid'),
'path_exists': mocker.patch('os.path.exists'),
'rename': mocker.patch('os.rename'),
'stat': mocker.patch('os.stat'),
'umask': mocker.patch('os.umask'),
'getpwuid': mocker.patch('pwd.getpwuid'),
'copy2': mocker.patch('shutil.copy2'),
'copyfileobj': mocker.patch('shutil.copyfileobj'),
'move': mocker.patch('shutil.move'),
'mkstemp': mocker.patch('tempfile.mkstemp'),
}
mocks['getlogin'].return_value = 'root'
mocks['getuid'].return_value = 0
mocks['getpwuid'].return_value = ('root', '', 0, 0, '', '', '')
mocks['umask'].side_effect = [18, 0]
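# assumption: 18 == 0o022, a typical umask; two values are queued because
# the code under test calls os.umask() twice (once to read it, once to restore it)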
mocks['rename'].return_value = None
# normalize OS specific features
monkeypatch.delattr(os, 'chflags', raising=False)
yield mocks
@pytest.fixture
def fake_stat(mocker):
stat1 = mocker.MagicMock()
stat1.st_mode = 0o0644
stat1.st_uid = 0
stat1.st_gid = 0
stat1.st_flags = 0
yield stat1
@pytest.mark.parametrize('stdin, selinux', product([{}], (True, False)), indirect=['stdin'])
def test_new_file(atomic_am, atomic_mocks, mocker, selinux):
# test destination does not exist, login name = 'root', no environment, os.rename() succeeds
mock_context = atomic_am.selinux_default_context.return_value
atomic_mocks['path_exists'].return_value = False
atomic_am.selinux_enabled.return_value = selinux
atomic_am.atomic_move('/path/to/src', '/path/to/dest')
atomic_mocks['rename'].assert_called_with(b'/path/to/src', b'/path/to/dest')
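# assumption: basic.DEFAULT_PERM is 0o666, so masking it with the mocked
# umask 18 (0o022) yields the expected 0o644 on the new file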
assert atomic_mocks['chmod'].call_args_list == [mocker.call(b'/path/to/dest', basic.DEFAULT_PERM & ~18)]
if selinux:
assert atomic_am.selinux_default_context.call_args_list == [mocker.call('/path/to/dest')]
assert atomic_am.set_context_if_different.call_args_list == [mocker.call('/path/to/dest', mock_context, False)]
else:
assert not atomic_am.selinux_default_context.called
assert not atomic_am.set_context_if_different.called
@pytest.mark.parametrize('stdin, selinux', product([{}], (True, False)), indirect=['stdin'])
def test_existing_file(atomic_am, atomic_mocks, fake_stat, mocker, selinux):
# Test destination already present
mock_context = atomic_am.selinux_context.return_value
atomic_mocks['stat'].return_value = fake_stat
atomic_mocks['path_exists'].return_value = True
atomic_am.selinux_enabled.return_value = selinux
atomic_am.atomic_move('/path/to/src', '/path/to/dest')
atomic_mocks['rename'].assert_called_with(b'/path/to/src', b'/path/to/dest')
assert atomic_mocks['chmod'].call_args_list == [mocker.call(b'/path/to/src', basic.DEFAULT_PERM & ~18)]
if selinux:
assert atomic_am.set_context_if_different.call_args_list == [mocker.call('/path/to/dest', mock_context, False)]
assert atomic_am.selinux_context.call_args_list == [mocker.call('/path/to/dest')]
else:
assert not atomic_am.selinux_default_context.called
assert not atomic_am.set_context_if_different.called
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_no_tty_fallback(atomic_am, atomic_mocks, fake_stat, mocker):
"""Raise OSError when using getlogin() to simulate no tty cornercase"""
mock_context = atomic_am.selinux_context.return_value
atomic_mocks['stat'].return_value = fake_stat
atomic_mocks['path_exists'].return_value = True
atomic_am.selinux_enabled.return_value = True
atomic_mocks['getlogin'].side_effect = OSError()
atomic_mocks['environ']['LOGNAME'] = 'root'
atomic_am.atomic_move('/path/to/src', '/path/to/dest')
atomic_mocks['rename'].assert_called_with(b'/path/to/src', b'/path/to/dest')
assert atomic_mocks['chmod'].call_args_list == [mocker.call(b'/path/to/src', basic.DEFAULT_PERM & ~18)]
assert atomic_am.set_context_if_different.call_args_list == [mocker.call('/path/to/dest', mock_context, False)]
assert atomic_am.selinux_context.call_args_list == [mocker.call('/path/to/dest')]
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_existing_file_stat_failure(atomic_am, atomic_mocks, mocker):
"""Failure to stat an existing file in order to copy permissions propogates the error (unless EPERM)"""
atomic_mocks['stat'].side_effect = OSError()
atomic_mocks['path_exists'].return_value = True
with pytest.raises(OSError):
atomic_am.atomic_move('/path/to/src', '/path/to/dest')
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_existing_file_stat_perms_failure(atomic_am, atomic_mocks, mocker):
"""Failure to stat an existing file to copy the permissions due to permissions passes fine"""
# and now have os.stat return EPERM, which should not fail
mock_context = atomic_am.selinux_context.return_value
atomic_mocks['stat'].side_effect = OSError(errno.EPERM, 'testing os stat with EPERM')
atomic_mocks['path_exists'].return_value = True
atomic_am.selinux_enabled.return_value = True
atomic_am.atomic_move('/path/to/src', '/path/to/dest')
atomic_mocks['rename'].assert_called_with(b'/path/to/src', b'/path/to/dest')
# FIXME: Should atomic_move() set a default permission value when it cannot retrieve the
# existing file's permissions? (Right now it's up to the calling code.)
# assert atomic_mocks['chmod'].call_args_list == [mocker.call(b'/path/to/src', basic.DEFAULT_PERM & ~18)]
assert atomic_am.set_context_if_different.call_args_list == [mocker.call('/path/to/dest', mock_context, False)]
assert atomic_am.selinux_context.call_args_list == [mocker.call('/path/to/dest')]
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_rename_failure(atomic_am, atomic_mocks, mocker, capfd):
"""Test os.rename fails with EIO, causing it to bail out"""
atomic_mocks['path_exists'].side_effect = [False, False]
atomic_mocks['rename'].side_effect = OSError(errno.EIO, 'failing with EIO')
with pytest.raises(SystemExit):
atomic_am.atomic_move('/path/to/src', '/path/to/dest')
out, err = capfd.readouterr()
results = json.loads(out)
assert 'Could not replace file' in results['msg']
assert 'failing with EIO' in results['msg']
assert results['failed']
@pytest.mark.parametrize('stdin', [{}], indirect=['stdin'])
def test_rename_perms_fail_temp_creation_fails(atomic_am, atomic_mocks, mocker, capfd):
"""Test os.rename fails with EPERM working but failure in mkstemp"""
atomic_mocks['path_exists'].return_value = False
atomic_mocks['close'].return_value = None
atomic_mocks['rename'].side_effect = [OSError(errno.EPERM, 'failing with EPERM'), None]
atomic_mocks['mkstemp'].return_value = None
atomic_mocks['mkstemp'].side_effect = OSError()
atomic_am.selinux_enabled.return_value = False
with pytest.raises(SystemExit):
atomic_am.atomic_move('/path/to/src', '/path/to/dest')
out, err = capfd.readouterr()
results = json.loads(out)
assert 'is not writable by the current user' in results['msg']
assert results['failed']
@pytest.mark.parametrize('stdin, selinux', product([{}], (True, False)), indirect=['stdin'])
def test_rename_perms_fail_temp_succeeds(atomic_am, atomic_mocks, fake_stat, mocker, selinux):
"""Test os.rename raising an error but fallback to using mkstemp works"""
mock_context = atomic_am.selinux_default_context.return_value
atomic_mocks['path_exists'].return_value = False
atomic_mocks['rename'].side_effect = [OSError(errno.EPERM, 'failing with EPERM'), None]
atomic_mocks['stat'].return_value = fake_stat
atomic_mocks['stat'].side_effect = None
atomic_mocks['mkstemp'].return_value = (None, '/path/to/tempfile')
atomic_mocks['mkstemp'].side_effect = None
atomic_am.selinux_enabled.return_value = selinux
atomic_am.atomic_move('/path/to/src', '/path/to/dest')
assert atomic_mocks['rename'].call_args_list == [mocker.call(b'/path/to/src', b'/path/to/dest'),
mocker.call(b'/path/to/tempfile', b'/path/to/dest')]
assert atomic_mocks['chmod'].call_args_list == [mocker.call(b'/path/to/dest', basic.DEFAULT_PERM & ~18)]
if selinux:
assert atomic_am.selinux_default_context.call_args_list == [mocker.call('/path/to/dest')]
assert atomic_am.set_context_if_different.call_args_list == [mocker.call(b'/path/to/tempfile', mock_context, False),
mocker.call('/path/to/dest', mock_context, False)]
else:
assert not atomic_am.selinux_default_context.called
assert not atomic_am.set_context_if_different.called
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,887 |
ansible-galaxy crashes with invalid token file
|
##### SUMMARY
I had a galaxy_token file which wasn't valid YAML, and thus `ansible-galaxy` was crashing when installing a collection. While it's fine that it's failing, it should provide some details about why it failed.
@jborean93
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.10
config file = /Users/kbreit/Documents/Programming/config_check/ansible.cfg
configured module search path = ['/Users/kbreit/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.5 (default, Jul 21 2020, 10:48:26) [Clang 11.0.3 (clang-1103.0.32.62)]
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
macOS 10.15.5
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Try to install a collection using `ansible-galaxy collection install` with a syntactically invalid YAML file at `~/.ansible/galaxy_token`.
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
config_check [kbreit/diff●●] % ansible-galaxy collection install cisco.nxos -vvv
ansible-galaxy 2.9.10
config file = /Users/kbreit/Documents/Programming/config_check/ansible.cfg
configured module search path = ['/Users/kbreit/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible-galaxy
python version = 3.8.5 (default, Jul 21 2020, 10:48:26) [Clang 11.0.3 (clang-1103.0.32.62)]
Using /Users/kbreit/Documents/Programming/config_check/ansible.cfg as config file
Process install dependency map
Opened /Users/kbreit/.ansible/galaxy_token
Processing requirement collection 'cisco.nxos'
ERROR! Unexpected Exception, this is probably a bug: 'str' object has no attribute 'get'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/bin/ansible-galaxy", line 123, in <module>
exit_code = cli.run()
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 375, in run
context.CLIARGS['func']()
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 836, in execute_install
install_collections(requirements, output_path, self.api_servers, (not ignore_certs), ignore_errors,
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/galaxy/collection.py", line 457, in install_collections
dependency_map = _build_dependency_map(collections, existing_collections, b_temp_path, apis,
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/galaxy/collection.py", line 821, in _build_dependency_map
_get_collection_info(dependency_map, existing_collections, name, version, source, b_temp_path, apis,
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/galaxy/collection.py", l 1 token: e2167b20b5f989bd113fedbefcc345261e9eee8b
ine 894, in _get_collection_info
collection_info = CollectionRequirement.from_name(collection, apis, requirement, force, parent=parent)
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/galaxy/collection.py", line 346, in from_name
resp = api.get_collection_versions(namespace, name)
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/galaxy/api.py", line 56, in wrapped
data = self._call_galaxy(n_url, method='GET', error_context_msg=error_context_msg)
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/galaxy/api.py", line 192, in _call_galaxy
self._add_auth_token(headers, url, required=auth_required)
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/galaxy/api.py", line 222, in _add_auth_token
headers.update(self.token.headers())
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/galaxy/token.py", line 148, in headers
token = self.get()
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/galaxy/token.py", line 140, in get
return self.config.get('token', None)
AttributeError: 'str' object has no attribute 'get'
```
|
https://github.com/ansible/ansible/issues/70887
|
https://github.com/ansible/ansible/pull/70911
|
932ba3616067007fd5e449611a34e7e3837fc8ae
|
aa56a2ff6a56697342908cc0cc85a537ecea4325
| 2020-07-24T21:45:10Z |
python
| 2020-12-21T19:53:00Z |
changelogs/fragments/70887_galaxy_token.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,887 |
ansible-galaxy crashes with invalid token file
|
##### SUMMARY
I had a galaxy_token file which wasn't valid YAML, and thus `ansible-galaxy` was crashing when installing a collection. While it's fine that it's failing, it should provide some details about why it failed.
@jborean93
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.10
config file = /Users/kbreit/Documents/Programming/config_check/ansible.cfg
configured module search path = ['/Users/kbreit/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.5 (default, Jul 21 2020, 10:48:26) [Clang 11.0.3 (clang-1103.0.32.62)]
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
macOS 10.15.5
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Try to install a collection using `ansible-galaxy collection install` with a syntactically invalid YAML file at `~/.ansible/galaxy_token`.
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
config_check [kbreit/diff●●] % ansible-galaxy collection install cisco.nxos -vvv
ansible-galaxy 2.9.10
config file = /Users/kbreit/Documents/Programming/config_check/ansible.cfg
configured module search path = ['/Users/kbreit/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible-galaxy
python version = 3.8.5 (default, Jul 21 2020, 10:48:26) [Clang 11.0.3 (clang-1103.0.32.62)]
Using /Users/kbreit/Documents/Programming/config_check/ansible.cfg as config file
Process install dependency map
Opened /Users/kbreit/.ansible/galaxy_token
Processing requirement collection 'cisco.nxos'
ERROR! Unexpected Exception, this is probably a bug: 'str' object has no attribute 'get'
the full traceback was:
Traceback (most recent call last):
File "/usr/local/bin/ansible-galaxy", line 123, in <module>
exit_code = cli.run()
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 375, in run
context.CLIARGS['func']()
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/cli/galaxy.py", line 836, in execute_install
install_collections(requirements, output_path, self.api_servers, (not ignore_certs), ignore_errors,
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/galaxy/collection.py", line 457, in install_collections
dependency_map = _build_dependency_map(collections, existing_collections, b_temp_path, apis,
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/galaxy/collection.py", line 821, in _build_dependency_map
_get_collection_info(dependency_map, existing_collections, name, version, source, b_temp_path, apis,
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/galaxy/collection.py", l 1 token: e2167b20b5f989bd113fedbefcc345261e9eee8b
ine 894, in _get_collection_info
collection_info = CollectionRequirement.from_name(collection, apis, requirement, force, parent=parent)
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/galaxy/collection.py", line 346, in from_name
resp = api.get_collection_versions(namespace, name)
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/galaxy/api.py", line 56, in wrapped
data = self._call_galaxy(n_url, method='GET', error_context_msg=error_context_msg)
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/galaxy/api.py", line 192, in _call_galaxy
self._add_auth_token(headers, url, required=auth_required)
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/galaxy/api.py", line 222, in _add_auth_token
headers.update(self.token.headers())
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/galaxy/token.py", line 148, in headers
token = self.get()
File "/usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible/galaxy/token.py", line 140, in get
return self.config.get('token', None)
AttributeError: 'str' object has no attribute 'get'
```
|
https://github.com/ansible/ansible/issues/70887
|
https://github.com/ansible/ansible/pull/70911
|
932ba3616067007fd5e449611a34e7e3837fc8ae
|
aa56a2ff6a56697342908cc0cc85a537ecea4325
| 2020-07-24T21:45:10Z |
python
| 2020-12-21T19:53:00Z |
lib/ansible/galaxy/token.py
|
########################################################################
#
# (C) 2015, Chris Houseknecht <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
########################################################################
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import base64
import os
import json
from stat import S_IRUSR, S_IWUSR
import yaml
from ansible import constants as C
from ansible.galaxy.user_agent import user_agent
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.urls import open_url
from ansible.utils.display import Display
display = Display()
class NoTokenSentinel(object):
""" Represents an ansible.cfg server with not token defined (will ignore cmdline and GALAXY_TOKEN_PATH. """
def __new__(cls, *args, **kwargs):
return cls
class KeycloakToken(object):
'''A token granted by a Keycloak server,
e.g. sso.redhat.com as used by cloud.redhat.com
(i.e. Automation Hub).'''
token_type = 'Bearer'
def __init__(self, access_token=None, auth_url=None, validate_certs=True):
self.access_token = access_token
self.auth_url = auth_url
self._token = None
self.validate_certs = validate_certs
def _form_payload(self):
return 'grant_type=refresh_token&client_id=cloud-services&refresh_token=%s' % self.access_token
def get(self):
if self._token:
return self._token
# - build a request to POST to auth_url
# - body is form encoded
# - 'refresh_token' is the offline token stored in ansible.cfg
# - 'grant_type' is 'refresh_token'
# - 'client_id' is 'cloud-services'
# - should probably be based on the contents of the
# offline_ticket's JWT payload 'aud' (audience)
# or 'azp' (Authorized party - the party to which the ID Token was issued)
payload = self._form_payload()
resp = open_url(to_native(self.auth_url),
data=payload,
validate_certs=self.validate_certs,
method='POST',
http_agent=user_agent())
# TODO: handle auth errors
data = json.loads(to_text(resp.read(), errors='surrogate_or_strict'))
# - extract 'access_token'
self._token = data.get('access_token')
return self._token
def headers(self):
headers = {}
headers['Authorization'] = '%s %s' % (self.token_type, self.get())
return headers
class GalaxyToken(object):
''' Class for storing and retrieving the local galaxy token '''
token_type = 'Token'
def __init__(self, token=None):
self.b_file = to_bytes(C.GALAXY_TOKEN_PATH, errors='surrogate_or_strict')
# Done so the config file is only opened when set/get/save is called
self._config = None
self._token = token
@property
def config(self):
if self._config is None:
self._config = self._read()
# Prioritise the token passed into the constructor
if self._token:
self._config['token'] = None if self._token is NoTokenSentinel else self._token
return self._config
def _read(self):
action = 'Opened'
if not os.path.isfile(self.b_file):
            # token file not found, create it and chmod u+rw
open(self.b_file, 'w').close()
os.chmod(self.b_file, S_IRUSR | S_IWUSR) # owner has +rw
action = 'Created'
with open(self.b_file, 'r') as f:
config = yaml.safe_load(f)
display.vvv('%s %s' % (action, to_text(self.b_file)))
return config or {}
def set(self, token):
self._token = token
self.save()
def get(self):
return self.config.get('token', None)
def save(self):
with open(self.b_file, 'w') as f:
yaml.safe_dump(self.config, f, default_flow_style=False)
def headers(self):
headers = {}
token = self.get()
if token:
headers['Authorization'] = '%s %s' % (self.token_type, self.get())
return headers
class BasicAuthToken(object):
token_type = 'Basic'
def __init__(self, username, password=None):
self.username = username
self.password = password
self._token = None
@staticmethod
def _encode_token(username, password):
token = "%s:%s" % (to_text(username, errors='surrogate_or_strict'),
to_text(password, errors='surrogate_or_strict', nonstring='passthru') or '')
b64_val = base64.b64encode(to_bytes(token, encoding='utf-8', errors='surrogate_or_strict'))
return to_text(b64_val)
def get(self):
if self._token:
return self._token
self._token = self._encode_token(self.username, self.password)
return self._token
def headers(self):
headers = {}
headers['Authorization'] = '%s %s' % (self.token_type, self.get())
return headers
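# A minimal usage sketch (assumed credentials, not part of the module):
# BasicAuthToken base64-encodes "username:password" per RFC 7617.
if __name__ == '__main__':
    demo = BasicAuthToken('bob', 'secret')  # hypothetical credentials
    # base64(b'bob:secret') == b'Ym9iOnNlY3JldA=='
    assert demo.headers() == {'Authorization': 'Basic Ym9iOnNlY3JldA=='}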
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,369 |
get_url: check mode not supported when using checksum URL
|
##### SUMMARY
<!--- Explain the problem briefly below -->
get_url does not support check mode (`--check`) when using the checksum format `<algorithm>:<url>`.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- get_url
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.4
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/nbe/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/nbe/.local/lib/python2.7/site-packages/ansible
executable location = /home/nbe/.local/bin/ansible
python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
- Ubuntu 18.04.3 LTS
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run playbook in check mode: `ansible-playbook --check site.yml`
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: localhost
connection: local
vars:
vault_version: '1.2.2'
tasks:
- name: Download zip archive using checksum
get_url:
url: 'https://releases.hashicorp.com/vault/{{ vault_version }}/vault_{{ vault_version }}_linux_amd64.zip'
dest: '/tmp/vault_{{ vault_version }}_linux_amd64.zip'
checksum: 'sha256:7725b35d9ca8be3668abe63481f0731ca4730509419b4eb29fa0b0baa4798458'
- name: Download zip archive using checksum url
get_url:
url: 'https://releases.hashicorp.com/vault/{{ vault_version }}/vault_{{ vault_version }}_linux_amd64.zip'
dest: '/tmp/vault_{{ vault_version }}_linux_amd64_2.zip'
checksum: 'sha256:https://releases.hashicorp.com/vault/{{ vault_version }}/vault_{{ vault_version }}_SHA256SUMS'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The second task `'Download zip archive using checksum url'` using the format `<algorithm>:<url>` must not fail and should show the same result as the first task (ok/changed) in check mode.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The second task `'Download zip archive using checksum url'` using the format `<algorithm>:<url>` fails in check mode:
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"attributes": null,
"backup": null,
"checksum": "sha256:https://releases.hashicorp.com/vault/1.2.2/vault_1.2.2_SHA256SUMS",
"client_cert": null,
"client_key": null,
"content": null,
"delimiter": null,
"dest": "/tmp/vault_1.2.2_linux_amd64.zip",
"directory_mode": null,
"follow": false,
"force": false,
"force_basic_auth": false,
"group": null,
"headers": null,
"http_agent": "ansible-httpget",
"mode": null,
"owner": null,
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"sha256sum": "",
"src": null,
"timeout": 10,
"tmp_dest": null,
"unsafe_writes": null,
"url": "https://releases.hashicorp.com/vault/1.2.2/vault_1.2.2_linux_amd64.zip",
"url_password": null,
"url_username": null,
"use_proxy": true,
"validate_certs": true
}
},
"msg": "Unable to find a checksum for file 'vault_1.2.2_linux_amd64.zip' in 'https://releases.hashicorp.com/vault/1.2.2/vault_1.2.2_SHA256SUMS'"
}
```
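The failure follows from the module switching to a HEAD request in check mode: the downloaded checksum file has an empty body, so no filename can match. A minimal sketch of the effect (simplified, assumed behavior):
```python
# Simplified sketch: an empty HEAD-response body yields no checksum entries.
body = ""                                   # checksum file fetched via HEAD
rows = (line.split(None, 1) for line in body.splitlines())
checksum_map = [tuple(p) for p in rows if len(p) == 2]
match = next((s for s, f in checksum_map if f == 'vault_1.2.2_linux_amd64.zip'), None)
assert match is None                        # -> "Unable to find a checksum for file ..."
```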
|
https://github.com/ansible/ansible/issues/61369
|
https://github.com/ansible/ansible/pull/66700
|
aa56a2ff6a56697342908cc0cc85a537ecea4325
|
42bc03f0f5740f2340fcdbe75557920552622ac3
| 2019-08-27T12:32:44Z |
python
| 2020-12-22T16:04:42Z |
changelogs/fragments/61369_get_url.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,369 |
get_url: check mode not supported when using checksum URL
|
##### SUMMARY
<!--- Explain the problem briefly below -->
get_url does not support check mode (`--check`) when using the checksum format `<algorithm>:<url>`.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- get_url
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.4
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/nbe/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/nbe/.local/lib/python2.7/site-packages/ansible
executable location = /home/nbe/.local/bin/ansible
python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
- Ubuntu 18.04.3 LTS
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run playbook in check mode: `ansible-playbook --check site.yml`
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: localhost
connection: local
vars:
vault_version: '1.2.2'
tasks:
- name: Download zip archive using checksum
get_url:
url: 'https://releases.hashicorp.com/vault/{{ vault_version }}/vault_{{ vault_version }}_linux_amd64.zip'
dest: '/tmp/vault_{{ vault_version }}_linux_amd64.zip'
checksum: 'sha256:7725b35d9ca8be3668abe63481f0731ca4730509419b4eb29fa0b0baa4798458'
- name: Download zip archive using checksum url
get_url:
url: 'https://releases.hashicorp.com/vault/{{ vault_version }}/vault_{{ vault_version }}_linux_amd64.zip'
dest: '/tmp/vault_{{ vault_version }}_linux_amd64_2.zip'
checksum: 'sha256:https://releases.hashicorp.com/vault/{{ vault_version }}/vault_{{ vault_version }}_SHA256SUMS'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The second task `'Download zip archive using checksum url'` using the format `<algorithm>:<url>` must not fail and should show the same result as the first task (ok/changed) in check mode.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The second task `'Download zip archive using checksum url'` using the format `<algorithm>:<url>` fails in check mode:
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"attributes": null,
"backup": null,
"checksum": "sha256:https://releases.hashicorp.com/vault/1.2.2/vault_1.2.2_SHA256SUMS",
"client_cert": null,
"client_key": null,
"content": null,
"delimiter": null,
"dest": "/tmp/vault_1.2.2_linux_amd64.zip",
"directory_mode": null,
"follow": false,
"force": false,
"force_basic_auth": false,
"group": null,
"headers": null,
"http_agent": "ansible-httpget",
"mode": null,
"owner": null,
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"sha256sum": "",
"src": null,
"timeout": 10,
"tmp_dest": null,
"unsafe_writes": null,
"url": "https://releases.hashicorp.com/vault/1.2.2/vault_1.2.2_linux_amd64.zip",
"url_password": null,
"url_username": null,
"use_proxy": true,
"validate_certs": true
}
},
"msg": "Unable to find a checksum for file 'vault_1.2.2_linux_amd64.zip' in 'https://releases.hashicorp.com/vault/1.2.2/vault_1.2.2_SHA256SUMS'"
}
```
|
https://github.com/ansible/ansible/issues/61369
|
https://github.com/ansible/ansible/pull/66700
|
aa56a2ff6a56697342908cc0cc85a537ecea4325
|
42bc03f0f5740f2340fcdbe75557920552622ac3
| 2019-08-27T12:32:44Z |
python
| 2020-12-22T16:04:42Z |
lib/ansible/modules/get_url.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Jan-Piet Mens <jpmens () gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: get_url
short_description: Downloads files from HTTP, HTTPS, or FTP to node
description:
- Downloads files from HTTP, HTTPS, or FTP to the remote server. The remote
server I(must) have direct access to the remote resource.
- By default, if an environment variable C(<protocol>_proxy) is set on
the target host, requests will be sent through that proxy. This
behaviour can be overridden by setting a variable for this task
(see `setting the environment
<https://docs.ansible.com/playbooks_environment.html>`_),
or by using the use_proxy option.
- HTTP redirects can redirect from HTTP to HTTPS so you should be sure that
your proxy environment for both protocols is correct.
- From Ansible 2.4 when run with C(--check), it will do a HEAD request to validate the URL but
will not download the entire file or verify it against hashes.
- For Windows targets, use the M(ansible.windows.win_get_url) module instead.
version_added: '0.6'
options:
url:
description:
- HTTP, HTTPS, or FTP URL in the form (http|https|ftp)://[user[:pass]]@host.domain[:port]/path
type: str
required: true
dest:
description:
- Absolute path of where to download the file to.
- If C(dest) is a directory, either the server provided filename or, if
none provided, the base name of the URL on the remote server will be
used. If a directory, C(force) has no effect.
- If C(dest) is a directory, the file will always be downloaded
      (regardless of the C(force) option), but replaced only if the contents changed.
type: path
required: true
tmp_dest:
description:
- Absolute path of where temporary file is downloaded to.
- When run on Ansible 2.5 or greater, path defaults to ansible's remote_tmp setting
- When run on Ansible prior to 2.5, it defaults to C(TMPDIR), C(TEMP) or C(TMP) env variables or a platform specific value.
- U(https://docs.python.org/2/library/tempfile.html#tempfile.tempdir)
type: path
version_added: '2.1'
force:
description:
- If C(yes) and C(dest) is not a directory, will download the file every
time and replace the file if the contents change. If C(no), the file
will only be downloaded if the destination does not exist. Generally
should be C(yes) only for small local files.
- Prior to 0.6, this module behaved as if C(yes) was the default.
- Alias C(thirsty) has been deprecated and will be removed in 2.13.
type: bool
default: no
aliases: [ thirsty ]
version_added: '0.7'
backup:
description:
- Create a backup file including the timestamp information so you can get
the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
version_added: '2.1'
sha256sum:
description:
- If a SHA-256 checksum is passed to this parameter, the digest of the
destination file will be calculated after it is downloaded to ensure
its integrity and verify that the transfer completed successfully.
This option is deprecated and will be removed in version 2.14. Use
option C(checksum) instead.
default: ''
type: str
version_added: "1.3"
checksum:
description:
- 'If a checksum is passed to this parameter, the digest of the
destination file will be calculated after it is downloaded to ensure
its integrity and verify that the transfer completed successfully.
Format: <algorithm>:<checksum|url>, e.g. checksum="sha256:D98291AC[...]B6DC7B97",
checksum="sha256:http://example.com/path/sha256sum.txt"'
- If you worry about portability, only the sha1 algorithm is available
on all platforms and python versions.
- The third party hashlib library can be installed for access to additional algorithms.
- Additionally, if a checksum is passed to this parameter, and the file exist under
the C(dest) location, the I(destination_checksum) would be calculated, and if
checksum equals I(destination_checksum), the file download would be skipped
(unless C(force) is true). If the checksum does not equal I(destination_checksum),
the destination file is deleted.
type: str
default: ''
version_added: "2.0"
use_proxy:
description:
- if C(no), it will not use a proxy, even if one is defined in
an environment variable on the target hosts.
type: bool
default: yes
validate_certs:
description:
- If C(no), SSL certificates will not be validated.
- This should only be used on personally controlled sites using self-signed certificates.
type: bool
default: yes
timeout:
description:
- Timeout in seconds for URL request.
type: int
default: 10
version_added: '1.8'
headers:
description:
- Add custom HTTP headers to a request in hash/dict format.
- The hash/dict format was added in Ansible 2.6.
- Previous versions used a C("key:value,key:value") string format.
- The C("key:value,key:value") string format is deprecated and has been removed in version 2.10.
type: dict
version_added: '2.0'
url_username:
description:
- The username for use in HTTP basic authentication.
- This parameter can be used without C(url_password) for sites that allow empty passwords.
- Since version 2.8 you can also use the C(username) alias for this option.
type: str
aliases: ['username']
version_added: '1.6'
url_password:
description:
- The password for use in HTTP basic authentication.
- If the C(url_username) parameter is not specified, the C(url_password) parameter will not be used.
- Since version 2.8 you can also use the 'password' alias for this option.
type: str
aliases: ['password']
version_added: '1.6'
force_basic_auth:
description:
- Force the sending of the Basic authentication header upon initial request.
- httplib2, the library used by the uri module only sends authentication information when a webservice
responds to an initial request with a 401 status. Since some basic auth services do not properly
send a 401, logins will fail.
type: bool
default: no
version_added: '2.0'
client_cert:
description:
- PEM formatted certificate chain file to be used for SSL client authentication.
- This file can also include the key as well, and if the key is included, C(client_key) is not required.
type: path
version_added: '2.4'
client_key:
description:
- PEM formatted file that contains your private key to be used for SSL client authentication.
- If C(client_cert) contains both the certificate and key, this option is not required.
type: path
version_added: '2.4'
http_agent:
description:
- Header to identify as, generally appears in web server logs.
type: str
default: ansible-httpget
use_gssapi:
description:
- Use GSSAPI to perform the authentication, typically this is for Kerberos or Kerberos through Negotiate
authentication.
- Requires the Python library L(gssapi,https://github.com/pythongssapi/python-gssapi) to be installed.
- Credentials for GSSAPI can be specified with I(url_username)/I(url_password) or with the GSSAPI env var
C(KRB5CCNAME) that specified a custom Kerberos credential cache.
- NTLM authentication is C(not) supported even if the GSSAPI mech for NTLM has been installed.
type: bool
default: no
version_added: '2.11'
# informational: requirements for nodes
extends_documentation_fragment:
- files
notes:
- For Windows targets, use the M(ansible.windows.win_get_url) module instead.
seealso:
- module: ansible.builtin.uri
- module: ansible.windows.win_get_url
author:
- Jan-Piet Mens (@jpmens)
'''
EXAMPLES = r'''
- name: Download foo.conf
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
mode: '0440'
- name: Download file and force basic auth
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
force_basic_auth: yes
- name: Download file with custom HTTP headers
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
headers:
key1: one
key2: two
- name: Download file with check (sha256)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c
- name: Download file with check (md5)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: md5:66dffb5228a211e61d6d7ef4a86f5758
- name: Download file with checksum url (sha256)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: sha256:http://example.com/path/sha256sum.txt
- name: Download file from a file path
get_url:
url: file:///tmp/afile.txt
dest: /tmp/afilecopy.txt
- name: Fetch file that requires authentication.
    username/password only available since 2.8, in older versions you need to use url_username/url_password
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
username: bar
password: '{{ mysecret }}'
'''
RETURN = r'''
backup_file:
description: name of backup file created after download
returned: changed and if backup=yes
type: str
sample: /path/to/file.txt.2015-02-12@22:09~
checksum_dest:
description: sha1 checksum of the file after copy
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
checksum_src:
description: sha1 checksum of the file
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
dest:
description: destination file/path
returned: success
type: str
sample: /path/to/file.txt
elapsed:
description: The number of seconds that elapsed while performing the download
returned: always
type: int
sample: 23
gid:
description: group id of the file
returned: success
type: int
sample: 100
group:
description: group of the file
returned: success
type: str
sample: "httpd"
md5sum:
description: md5 checksum of the file after download
returned: when supported
type: str
sample: "2a5aeecc61dc98c4d780b14b330e3282"
mode:
description: permissions of the target
returned: success
type: str
sample: "0644"
msg:
description: the HTTP message from the request
returned: always
type: str
sample: OK (unknown bytes)
owner:
description: owner of the file
returned: success
type: str
sample: httpd
secontext:
description: the SELinux security context of the file
returned: success
type: str
sample: unconfined_u:object_r:user_tmp_t:s0
size:
description: size of the target
returned: success
type: int
sample: 1220
src:
description: source file used after download
returned: always
type: str
sample: /tmp/tmpAdFLdV
state:
description: state of the target
returned: success
type: str
sample: file
status_code:
description: the HTTP status code from the request
returned: always
type: int
sample: 200
uid:
description: owner id of the file, after execution
returned: success
type: int
sample: 100
url:
description: the actual URL used for the request
returned: always
type: str
sample: https://www.ansible.com/
'''
import datetime
import os
import re
import shutil
import tempfile
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves.urllib.parse import urlsplit
from ansible.module_utils._text import to_native
from ansible.module_utils.urls import fetch_url, url_argument_spec
# ==============================================================
# url handling
def url_filename(url):
fn = os.path.basename(urlsplit(url)[2])
if fn == '':
return 'index.html'
return fn
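# Illustrative examples (not part of the module):
#   url_filename('http://example.com/')        -> 'index.html'
#   url_filename('http://example.com/a/b.zip') -> 'b.zip'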
def url_get(module, url, dest, use_proxy, last_mod_time, force, timeout=10, headers=None, tmp_dest=''):
"""
Download data from the url and store in a temporary file.
Return (tempfile, info about the request)
"""
if module.check_mode:
method = 'HEAD'
else:
method = 'GET'
start = datetime.datetime.utcnow()
rsp, info = fetch_url(module, url, use_proxy=use_proxy, force=force, last_mod_time=last_mod_time, timeout=timeout, headers=headers, method=method)
elapsed = (datetime.datetime.utcnow() - start).seconds
if info['status'] == 304:
module.exit_json(url=url, dest=dest, changed=False, msg=info.get('msg', ''), status_code=info['status'], elapsed=elapsed)
    # Exceptions in fetch_url may result in a status -1; this ensures a proper error to the user in all cases
if info['status'] == -1:
module.fail_json(msg=info['msg'], url=url, dest=dest, elapsed=elapsed)
if info['status'] != 200 and not url.startswith('file:/') and not (url.startswith('ftp:/') and info.get('msg', '').startswith('OK')):
module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], url=url, dest=dest, elapsed=elapsed)
# create a temporary file and copy content to do checksum-based replacement
if tmp_dest:
# tmp_dest should be an existing dir
tmp_dest_is_dir = os.path.isdir(tmp_dest)
if not tmp_dest_is_dir:
if os.path.exists(tmp_dest):
module.fail_json(msg="%s is a file but should be a directory." % tmp_dest, elapsed=elapsed)
else:
module.fail_json(msg="%s directory does not exist." % tmp_dest, elapsed=elapsed)
else:
tmp_dest = module.tmpdir
fd, tempname = tempfile.mkstemp(dir=tmp_dest)
f = os.fdopen(fd, 'wb')
try:
shutil.copyfileobj(rsp, f)
except Exception as e:
os.remove(tempname)
module.fail_json(msg="failed to create temporary content file: %s" % to_native(e), elapsed=elapsed, exception=traceback.format_exc())
f.close()
rsp.close()
return tempname, info
def extract_filename_from_headers(headers):
"""
Extracts a filename from the given dict of HTTP headers.
Looks for the content-disposition header and applies a regex.
Returns the filename if successful, else None."""
cont_disp_regex = 'attachment; ?filename="?([^"]+)'
res = None
if 'content-disposition' in headers:
cont_disp = headers['content-disposition']
match = re.match(cont_disp_regex, cont_disp)
if match:
res = match.group(1)
# Try preventing any funny business.
res = os.path.basename(res)
return res
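# Illustrative example (not part of the module):
#   extract_filename_from_headers({'content-disposition': 'attachment; filename="b.zip"'}) -> 'b.zip'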
def is_url(checksum):
"""
Returns True if checksum value has supported URL scheme, else False."""
supported_schemes = ('http', 'https', 'ftp', 'file')
return urlsplit(checksum).scheme in supported_schemes
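# Illustrative examples (not part of the module): the value tested here is the
# part after the first ':' of the checksum option, so
#   is_url('http://example.com/sha256sum.txt') -> True
#   is_url('d98291ac0cad8298a7a1e2b6dc7b97')   -> False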
# ==============================================================
# main
def main():
argument_spec = url_argument_spec()
# setup aliases
argument_spec['url_username']['aliases'] = ['username']
argument_spec['url_password']['aliases'] = ['password']
argument_spec.update(
url=dict(type='str', required=True),
dest=dict(type='path', required=True),
backup=dict(type='bool', default=False),
sha256sum=dict(type='str', default=''),
checksum=dict(type='str', default=''),
timeout=dict(type='int', default=10),
headers=dict(type='dict'),
tmp_dest=dict(type='path'),
)
module = AnsibleModule(
# not checking because of daisy chain to file module
argument_spec=argument_spec,
add_file_common_args=True,
supports_check_mode=True,
mutually_exclusive=[['checksum', 'sha256sum']],
)
if module.params.get('thirsty'):
module.deprecate('The alias "thirsty" has been deprecated and will be removed, use "force" instead',
version='2.13', collection_name='ansible.builtin')
if module.params.get('sha256sum'):
module.deprecate('The parameter "sha256sum" has been deprecated and will be removed, use "checksum" instead',
version='2.14', collection_name='ansible.builtin')
url = module.params['url']
dest = module.params['dest']
backup = module.params['backup']
force = module.params['force']
sha256sum = module.params['sha256sum']
checksum = module.params['checksum']
use_proxy = module.params['use_proxy']
timeout = module.params['timeout']
headers = module.params['headers']
tmp_dest = module.params['tmp_dest']
result = dict(
changed=False,
checksum_dest=None,
checksum_src=None,
dest=dest,
elapsed=0,
url=url,
)
dest_is_dir = os.path.isdir(dest)
last_mod_time = None
# workaround for usage of deprecated sha256sum parameter
if sha256sum:
checksum = 'sha256:%s' % (sha256sum)
# checksum specified, parse for algorithm and checksum
if checksum:
try:
algorithm, checksum = checksum.split(':', 1)
except ValueError:
module.fail_json(msg="The checksum parameter has to be in format <algorithm>:<checksum>", **result)
if is_url(checksum):
checksum_url = checksum
# download checksum file to checksum_tmpsrc
checksum_tmpsrc, checksum_info = url_get(module, checksum_url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest)
with open(checksum_tmpsrc) as f:
lines = [line.rstrip('\n') for line in f]
os.remove(checksum_tmpsrc)
checksum_map = []
for line in lines:
parts = line.split(None, 1)
if len(parts) == 2:
checksum_map.append((parts[0], parts[1]))
filename = url_filename(url)
# Look through each line in the checksum file for a hash corresponding to
# the filename in the url, returning the first hash that is found.
for cksum in (s for (s, f) in checksum_map if f.strip('./') == filename):
checksum = cksum
break
else:
checksum = None
if checksum is None:
module.fail_json(msg="Unable to find a checksum for file '%s' in '%s'" % (filename, checksum_url))
# Remove any non-alphanumeric characters, including the infamous
# Unicode zero-width space
checksum = re.sub(r'\W+', '', checksum).lower()
# Ensure the checksum portion is a hexdigest
try:
int(checksum, 16)
except ValueError:
module.fail_json(msg='The checksum format is invalid', **result)
if not dest_is_dir and os.path.exists(dest):
checksum_mismatch = False
# If the download is not forced and there is a checksum, allow
# checksum match to skip the download.
if not force and checksum != '':
destination_checksum = module.digest_from_file(dest, algorithm)
if checksum != destination_checksum:
checksum_mismatch = True
# Not forcing redownload, unless checksum does not match
if not force and checksum and not checksum_mismatch:
# allow file attribute changes
file_args = module.load_file_common_arguments(module.params, path=dest)
result['changed'] = module.set_fs_attributes_if_different(file_args, False)
if result['changed']:
module.exit_json(msg="file already exists but file attributes changed", **result)
module.exit_json(msg="file already exists", **result)
# If the file already exists, prepare the last modified time for the
# request.
mtime = os.path.getmtime(dest)
last_mod_time = datetime.datetime.utcfromtimestamp(mtime)
# If the checksum does not match we have to force the download
# because last_mod_time may be newer than on remote
if checksum_mismatch:
force = True
# download to tmpsrc
start = datetime.datetime.utcnow()
tmpsrc, info = url_get(module, url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest)
result['elapsed'] = (datetime.datetime.utcnow() - start).seconds
result['src'] = tmpsrc
# Now the request has completed, we can finally generate the final
# destination file name from the info dict.
if dest_is_dir:
filename = extract_filename_from_headers(info)
if not filename:
# Fall back to extracting the filename from the URL.
# Pluck the URL from the info, since a redirect could have changed
# it.
filename = url_filename(info['url'])
dest = os.path.join(dest, filename)
result['dest'] = dest
# raise an error if there is no tmpsrc file
if not os.path.exists(tmpsrc):
os.remove(tmpsrc)
module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], **result)
if not os.access(tmpsrc, os.R_OK):
os.remove(tmpsrc)
module.fail_json(msg="Source %s is not readable" % (tmpsrc), **result)
result['checksum_src'] = module.sha1(tmpsrc)
# check if there is no dest file
if os.path.exists(dest):
# raise an error if copy has no permission on dest
if not os.access(dest, os.W_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not writable" % (dest), **result)
if not os.access(dest, os.R_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not readable" % (dest), **result)
result['checksum_dest'] = module.sha1(dest)
else:
if not os.path.exists(os.path.dirname(dest)):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s does not exist" % (os.path.dirname(dest)), **result)
if not os.access(os.path.dirname(dest), os.W_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not writable" % (os.path.dirname(dest)), **result)
if module.check_mode:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
result['changed'] = ('checksum_dest' not in result or
result['checksum_src'] != result['checksum_dest'])
module.exit_json(msg=info.get('msg', ''), **result)
backup_file = None
if result['checksum_src'] != result['checksum_dest']:
try:
if backup:
if os.path.exists(dest):
backup_file = module.backup_local(dest)
module.atomic_move(tmpsrc, dest, unsafe_writes=module.params['unsafe_writes'])
except Exception as e:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
module.fail_json(msg="failed to copy %s to %s: %s" % (tmpsrc, dest, to_native(e)),
exception=traceback.format_exc(), **result)
result['changed'] = True
else:
result['changed'] = False
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
if checksum != '':
destination_checksum = module.digest_from_file(dest, algorithm)
if checksum != destination_checksum:
os.remove(dest)
module.fail_json(msg="The checksum for %s did not match %s; it was %s." % (dest, checksum, destination_checksum), **result)
# allow file attribute changes
file_args = module.load_file_common_arguments(module.params, path=dest)
result['changed'] = module.set_fs_attributes_if_different(file_args, result['changed'])
# Backwards compat only. We'll return None on FIPS enabled systems
try:
result['md5sum'] = module.md5(dest)
except ValueError:
result['md5sum'] = None
if backup_file:
result['backup_file'] = backup_file
# Mission complete
module.exit_json(msg=info.get('msg', ''), status_code=info.get('status', ''), **result)
if __name__ == '__main__':
main()
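# A standalone sketch (not part of the module) of the SHA*SUMS lookup that
# main() performs for checksum URLs: the first matching filename wins, and a
# leading "./" on the recorded path is tolerated via strip('./').
def _find_checksum_demo(sums_text, filename):
    for line in sums_text.splitlines():
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[1].strip('./') == filename:
            return parts[0]
    return None

# e.g. _find_checksum_demo("abc123  ./a.zip\ndef456  b.zip", "a.zip") == "abc123"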
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,369 |
get_url: check mode not supported when using checksum URL
|
##### SUMMARY
<!--- Explain the problem briefly below -->
get_url does not support check mode (`--check`) when using the checksum format `<algorithm>:<url>`.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- get_url
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.4
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/nbe/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/nbe/.local/lib/python2.7/site-packages/ansible
executable location = /home/nbe/.local/bin/ansible
python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
- Ubuntu 18.04.3 LTS
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run playbook in check mode: `ansible-playbook --check site.yml`
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: localhost
connection: local
vars:
vault_version: '1.2.2'
tasks:
- name: Download zip archive using checksum
get_url:
url: 'https://releases.hashicorp.com/vault/{{ vault_version }}/vault_{{ vault_version }}_linux_amd64.zip'
dest: '/tmp/vault_{{ vault_version }}_linux_amd64.zip'
checksum: 'sha256:7725b35d9ca8be3668abe63481f0731ca4730509419b4eb29fa0b0baa4798458'
- name: Download zip archive using checksum url
get_url:
url: 'https://releases.hashicorp.com/vault/{{ vault_version }}/vault_{{ vault_version }}_linux_amd64.zip'
dest: '/tmp/vault_{{ vault_version }}_linux_amd64_2.zip'
checksum: 'sha256:https://releases.hashicorp.com/vault/{{ vault_version }}/vault_{{ vault_version }}_SHA256SUMS'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The second task `'Download zip archive using checksum url'` using the format `<algorithm>:<url>` must not fail and should show the same result as the first task (ok/changed) in check mode.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The second task `'Download zip archive using checksum url'` using the format `<algorithm>:<url>` fails in check mode:
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"attributes": null,
"backup": null,
"checksum": "sha256:https://releases.hashicorp.com/vault/1.2.2/vault_1.2.2_SHA256SUMS",
"client_cert": null,
"client_key": null,
"content": null,
"delimiter": null,
"dest": "/tmp/vault_1.2.2_linux_amd64.zip",
"directory_mode": null,
"follow": false,
"force": false,
"force_basic_auth": false,
"group": null,
"headers": null,
"http_agent": "ansible-httpget",
"mode": null,
"owner": null,
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"sha256sum": "",
"src": null,
"timeout": 10,
"tmp_dest": null,
"unsafe_writes": null,
"url": "https://releases.hashicorp.com/vault/1.2.2/vault_1.2.2_linux_amd64.zip",
"url_password": null,
"url_username": null,
"use_proxy": true,
"validate_certs": true
}
},
"msg": "Unable to find a checksum for file 'vault_1.2.2_linux_amd64.zip' in 'https://releases.hashicorp.com/vault/1.2.2/vault_1.2.2_SHA256SUMS'"
}
```
|
https://github.com/ansible/ansible/issues/61369
|
https://github.com/ansible/ansible/pull/66700
|
aa56a2ff6a56697342908cc0cc85a537ecea4325
|
42bc03f0f5740f2340fcdbe75557920552622ac3
| 2019-08-27T12:32:44Z |
python
| 2020-12-22T16:04:42Z |
test/integration/targets/get_url/tasks/main.yml
|
# Test code for the get_url module
# (c) 2014, Richard Isaacson <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <https://www.gnu.org/licenses/>.
- name: Determine if python looks like it will support modern ssl features like SNI
command: "{{ ansible_python.executable }} -c 'from ssl import SSLContext'"
ignore_errors: True
register: python_test
- name: Set python_has_sslcontext if we have it
set_fact:
python_has_ssl_context: True
when: python_test.rc == 0
- name: Set python_has_sslcontext False if we don't have it
set_fact:
python_has_ssl_context: False
when: python_test.rc != 0
- name: Define test files for file schema
set_fact:
geturl_srcfile: "{{ remote_tmp_dir }}/aurlfile.txt"
geturl_dstfile: "{{ remote_tmp_dir }}/aurlfile_copy.txt"
- name: Create source file
copy:
dest: "{{ geturl_srcfile }}"
content: "foobar"
register: source_file_copied
- name: test file fetch
get_url:
url: "file://{{ source_file_copied.dest }}"
dest: "{{ geturl_dstfile }}"
register: result
- name: assert success and change
assert:
that:
- result is changed
- '"OK" in result.msg'
- name: test nonexisting file fetch
get_url:
url: "file://{{ source_file_copied.dest }}NOFILE"
dest: "{{ geturl_dstfile }}NOFILE"
register: result
ignore_errors: True
- name: assert fetch failure for nonexistent file
assert:
that:
- result is failed
- name: test HTTP HEAD request for file in check mode
get_url:
url: "https://{{ httpbin_host }}/get"
dest: "{{ remote_tmp_dir }}/get_url_check.txt"
force: yes
check_mode: True
register: result
- name: assert that the HEAD request was successful in check mode
assert:
that:
- result is changed
- '"OK" in result.msg'
- name: test HTTP HEAD for nonexistent URL in check mode
get_url:
url: "https://{{ httpbin_host }}/DOESNOTEXIST"
dest: "{{ remote_tmp_dir }}/shouldnotexist.html"
force: yes
check_mode: True
register: result
ignore_errors: True
- name: assert that HEAD request for nonexistent URL failed
assert:
that:
- result is failed
- name: test https fetch
get_url: url="https://{{ httpbin_host }}/get" dest={{remote_tmp_dir}}/get_url.txt force=yes
register: result
- name: assert the get_url call was successful
assert:
that:
- result is changed
- '"OK" in result.msg'
- name: test https fetch to a site with mismatched hostname and certificate
get_url:
url: "https://{{ badssl_host }}/"
dest: "{{ remote_tmp_dir }}/shouldnotexist.html"
ignore_errors: True
register: result
- stat:
path: "{{ remote_tmp_dir }}/shouldnotexist.html"
register: stat_result
- name: Assert that the file was not downloaded
assert:
that:
- "result is failed"
- "'Failed to validate the SSL certificate' in result.msg or 'Hostname mismatch' in result.msg or ( result.msg is match('hostname .* doesn.t match .*'))"
- "stat_result.stat.exists == false"
- name: test https fetch to a site with mismatched hostname and certificate and validate_certs=no
get_url:
url: "https://{{ badssl_host }}/"
dest: "{{ remote_tmp_dir }}/get_url_no_validate.html"
validate_certs: no
register: result
- stat:
path: "{{ remote_tmp_dir }}/get_url_no_validate.html"
register: stat_result
- name: Assert that the file was downloaded
assert:
that:
- result is changed
- "stat_result.stat.exists == true"
# SNI Tests
# SNI is only built into the stdlib from python-2.7.9 onwards
- name: Test that SNI works
get_url:
url: 'https://{{ sni_host }}/'
dest: "{{ remote_tmp_dir }}/sni.html"
register: get_url_result
ignore_errors: True
- command: "grep '{{ sni_host }}' {{ remote_tmp_dir}}/sni.html"
register: data_result
when: python_has_ssl_context
- debug:
var: get_url_result
- name: Assert that SNI works with this python version
assert:
that:
- 'data_result.rc == 0'
when: python_has_ssl_context
# If the client doesn't support SNI then get_url should have failed with a certificate mismatch
- name: Assert that hostname verification failed because SNI is not supported on this version of python
assert:
that:
- 'get_url_result is failed'
when: not python_has_ssl_context
# These tests are just side effects of how the site is hosted. It's not
# specifically a test site. So the tests may break due to the hosting changing
- name: Test that SNI works
get_url:
url: 'https://{{ sni_host }}/'
dest: "{{ remote_tmp_dir }}/sni.html"
register: get_url_result
ignore_errors: True
- command: "grep '{{ sni_host }}' {{ remote_tmp_dir}}/sni.html"
register: data_result
when: python_has_ssl_context
- debug:
var: get_url_result
- name: Assert that SNI works with this python version
assert:
that:
- 'data_result.rc == 0'
- 'get_url_result is not failed'
when: python_has_ssl_context
# If the client doesn't support SNI then get_url should have failed with a certificate mismatch
- name: Assert that hostname verification failed because SNI is not supported on this version of python
assert:
that:
- 'get_url_result is failed'
when: not python_has_ssl_context
# End hacky SNI test section
- name: Test get_url with redirect
get_url:
url: 'https://{{ httpbin_host }}/redirect/6'
dest: "{{ remote_tmp_dir }}/redirect.json"
- name: Test that setting file modes work
get_url:
url: 'https://{{ httpbin_host }}/'
dest: '{{ remote_tmp_dir }}/test'
mode: '0707'
register: result
- stat:
path: "{{ remote_tmp_dir }}/test"
register: stat_result
- name: Assert that the file has the right permissions
assert:
that:
- result is changed
- "stat_result.stat.mode == '0707'"
- name: Test that setting file modes on an already downloaded file work
get_url:
url: 'https://{{ httpbin_host }}/'
dest: '{{ remote_tmp_dir }}/test'
mode: '0070'
register: result
- stat:
path: "{{ remote_tmp_dir }}/test"
register: stat_result
- name: Assert that the file has the right permissions
assert:
that:
- result is changed
- "stat_result.stat.mode == '0070'"
# https://github.com/ansible/ansible/pull/65307/
- name: Test that on http status 304, we get a status_code field.
get_url:
url: 'https://{{ httpbin_host }}/status/304'
dest: '{{ remote_tmp_dir }}/test'
register: result
- name: Assert that we get the appropriate status_code
assert:
that:
- "'status_code' in result"
- "result.status_code == 304"
# https://github.com/ansible/ansible/issues/29614
- name: Change mode on an already downloaded file and specify checksum
get_url:
url: 'https://{{ httpbin_host }}/base64/cHR1eA=='
dest: '{{ remote_tmp_dir }}/test'
checksum: 'sha256:b1b6ce5073c8fac263a8fc5edfffdbd5dec1980c784e09c5bc69f8fb6056f006.'
mode: '0775'
register: result
- stat:
path: "{{ remote_tmp_dir }}/test"
register: stat_result
- name: Assert that file permissions on already downloaded file were changed
assert:
that:
- result is changed
- "stat_result.stat.mode == '0775'"
- name: test checksum match in check mode
get_url:
url: 'https://{{ httpbin_host }}/base64/cHR1eA=='
dest: '{{ remote_tmp_dir }}/test'
checksum: 'sha256:b1b6ce5073c8fac263a8fc5edfffdbd5dec1980c784e09c5bc69f8fb6056f006.'
check_mode: True
register: result
- name: Assert that check mode was green
assert:
that:
- result is not changed
- name: Get a file that already exists with a checksum
get_url:
url: 'https://{{ httpbin_host }}/cache'
dest: '{{ remote_tmp_dir }}/test'
checksum: 'sha1:{{ stat_result.stat.checksum }}'
register: result
- name: Assert that the file was not downloaded
assert:
that:
- result.msg == 'file already exists'
- name: Get a file that already exists
get_url:
url: 'https://{{ httpbin_host }}/cache'
dest: '{{ remote_tmp_dir }}/test'
register: result
- name: Assert that we didn't re-download unnecessarily
assert:
that:
- result is not changed
- "'304' in result.msg"
- name: get a file that doesn't respond to If-Modified-Since without checksum
get_url:
url: 'https://{{ httpbin_host }}/get'
dest: '{{ remote_tmp_dir }}/test'
register: result
- name: Assert that we downloaded the file
assert:
that:
- result is changed
# https://github.com/ansible/ansible/issues/27617
- name: set role facts
set_fact:
http_port: 27617
files_dir: '{{ remote_tmp_dir }}/files'
- name: create files_dir
file:
dest: "{{ files_dir }}"
state: directory
- name: create src file
copy:
dest: '{{ files_dir }}/27617.txt'
content: "ptux"
- name: create duplicate src file
copy:
dest: '{{ files_dir }}/71420.txt'
content: "ptux"
- name: create sha1 checksum file of src
copy:
dest: '{{ files_dir }}/sha1sum.txt'
content: |
a97e6837f60cec6da4491bab387296bbcd72bdba 27617.txt
a97e6837f60cec6da4491bab387296bbcd72bdba 71420.txt
3911340502960ca33aece01129234460bfeb2791 not_target1.txt
1b4b6adf30992cedb0f6edefd6478ff0a593b2e4 not_target2.txt
- name: create sha256 checksum file of src
copy:
dest: '{{ files_dir }}/sha256sum.txt'
content: |
b1b6ce5073c8fac263a8fc5edfffdbd5dec1980c784e09c5bc69f8fb6056f006. 27617.txt
b1b6ce5073c8fac263a8fc5edfffdbd5dec1980c784e09c5bc69f8fb6056f006. 71420.txt
30949cc401e30ac494d695ab8764a9f76aae17c5d73c67f65e9b558f47eff892 not_target1.txt
d0dbfc1945bc83bf6606b770e442035f2c4e15c886ee0c22fb3901ba19900b5b not_target2.txt
- name: create sha256 checksum file of src with a dot leading path
copy:
dest: '{{ files_dir }}/sha256sum_with_dot.txt'
content: |
b1b6ce5073c8fac263a8fc5edfffdbd5dec1980c784e09c5bc69f8fb6056f006. ./27617.txt
b1b6ce5073c8fac263a8fc5edfffdbd5dec1980c784e09c5bc69f8fb6056f006. ./71420.txt
30949cc401e30ac494d695ab8764a9f76aae17c5d73c67f65e9b558f47eff892 ./not_target1.txt
d0dbfc1945bc83bf6606b770e442035f2c4e15c886ee0c22fb3901ba19900b5b ./not_target2.txt
- copy:
src: "testserver.py"
dest: "{{ remote_tmp_dir }}/testserver.py"
- name: start SimpleHTTPServer for issues 27617
shell: cd {{ files_dir }} && {{ ansible_python.executable }} {{ remote_tmp_dir}}/testserver.py {{ http_port }}
async: 90
poll: 0
- name: Wait for SimpleHTTPServer to come up online
wait_for:
host: 'localhost'
port: '{{ http_port }}'
state: started
- name: download src with sha1 checksum url
get_url:
url: 'http://localhost:{{ http_port }}/27617.txt'
dest: '{{ remote_tmp_dir }}'
checksum: 'sha1:http://localhost:{{ http_port }}/sha1sum.txt'
register: result_sha1
- stat:
path: "{{ remote_tmp_dir }}/27617.txt"
register: stat_result_sha1
- name: download src with sha256 checksum url
get_url:
url: 'http://localhost:{{ http_port }}/27617.txt'
dest: '{{ remote_tmp_dir }}/27617sha256.txt'
checksum: 'sha256:http://localhost:{{ http_port }}/sha256sum.txt'
register: result_sha256
- stat:
path: "{{ remote_tmp_dir }}/27617.txt"
register: stat_result_sha256
- name: download src with sha256 checksum url with dot leading paths
get_url:
url: 'http://localhost:{{ http_port }}/27617.txt'
dest: '{{ remote_tmp_dir }}/27617sha256_with_dot.txt'
checksum: 'sha256:http://localhost:{{ http_port }}/sha256sum_with_dot.txt'
register: result_sha256_with_dot
- stat:
path: "{{ remote_tmp_dir }}/27617sha256_with_dot.txt"
register: stat_result_sha256_with_dot
- name: download src with sha256 checksum url with file scheme
get_url:
url: 'http://localhost:{{ http_port }}/27617.txt'
dest: '{{ remote_tmp_dir }}/27617sha256_with_file_scheme.txt'
checksum: 'sha256:file://{{ files_dir }}/sha256sum.txt'
register: result_sha256_with_file_scheme
- stat:
path: "{{ remote_tmp_dir }}/27617sha256_with_dot.txt"
register: stat_result_sha256_with_file_scheme
- name: download 71420.txt with sha1 checksum url
get_url:
url: 'http://localhost:{{ http_port }}/71420.txt'
dest: '{{ remote_tmp_dir }}'
checksum: 'sha1:http://localhost:{{ http_port }}/sha1sum.txt'
register: result_sha1_71420
- stat:
path: "{{ remote_tmp_dir }}/71420.txt"
register: stat_result_sha1_71420
- name: download 71420.txt with sha256 checksum url
get_url:
url: 'http://localhost:{{ http_port }}/71420.txt'
dest: '{{ remote_tmp_dir }}/71420sha256.txt'
checksum: 'sha256:http://localhost:{{ http_port }}/sha256sum.txt'
register: result_sha256_71420
- stat:
path: "{{ remote_tmp_dir }}/71420.txt"
register: stat_result_sha256_71420
- name: download 71420.txt with sha256 checksum url with dot leading paths
get_url:
url: 'http://localhost:{{ http_port }}/71420.txt'
dest: '{{ remote_tmp_dir }}/71420sha256_with_dot.txt'
checksum: 'sha256:http://localhost:{{ http_port }}/sha256sum_with_dot.txt'
register: result_sha256_with_dot_71420
- stat:
path: "{{ remote_tmp_dir }}/71420sha256_with_dot.txt"
register: stat_result_sha256_with_dot_71420
- name: download 71420.txt with sha256 checksum url with file scheme
get_url:
url: 'http://localhost:{{ http_port }}/71420.txt'
dest: '{{ remote_tmp_dir }}/71420sha256_with_file_scheme.txt'
checksum: 'sha256:file://{{ files_dir }}/sha256sum.txt'
register: result_sha256_with_file_scheme_71420
- stat:
path: "{{ remote_tmp_dir }}/71420sha256_with_dot.txt"
register: stat_result_sha256_with_file_scheme_71420
- name: Assert that the file was downloaded
assert:
that:
- result_sha1 is changed
- result_sha256 is changed
- result_sha256_with_dot is changed
- result_sha256_with_file_scheme is changed
- "stat_result_sha1.stat.exists == true"
- "stat_result_sha256.stat.exists == true"
- "stat_result_sha256_with_dot.stat.exists == true"
- "stat_result_sha256_with_file_scheme.stat.exists == true"
- result_sha1_71420 is changed
- result_sha256_71420 is changed
- result_sha256_with_dot_71420 is changed
- result_sha256_with_file_scheme_71420 is changed
- "stat_result_sha1_71420.stat.exists == true"
- "stat_result_sha256_71420.stat.exists == true"
- "stat_result_sha256_with_dot_71420.stat.exists == true"
- "stat_result_sha256_with_file_scheme_71420.stat.exists == true"
#https://github.com/ansible/ansible/issues/16191
- name: Test url split with no filename
get_url:
url: https://{{ httpbin_host }}
dest: "{{ remote_tmp_dir }}"
- name: Test headers dict
get_url:
url: https://{{ httpbin_host }}/headers
headers:
Foo: bar
Baz: qux
dest: "{{ remote_tmp_dir }}/headers_dict.json"
- name: Get downloaded file
slurp:
src: "{{ remote_tmp_dir }}/headers_dict.json"
register: result
- name: Test headers dict
assert:
that:
- (result.content | b64decode | from_json).headers.get('Foo') == 'bar'
- (result.content | b64decode | from_json).headers.get('Baz') == 'qux'
- name: Test client cert auth, with certs
get_url:
url: "https://ansible.http.tests/ssl_client_verify"
client_cert: "{{ remote_tmp_dir }}/client.pem"
client_key: "{{ remote_tmp_dir }}/client.key"
dest: "{{ remote_tmp_dir }}/ssl_client_verify"
when: has_httptester
- name: Get downloaded file
slurp:
src: "{{ remote_tmp_dir }}/ssl_client_verify"
register: result
when: has_httptester
- name: Assert that the ssl_client_verify file contains the correct content
assert:
that:
- '(result.content | b64decode) == "ansible.http.tests:SUCCESS"'
when: has_httptester
- name: Test use_gssapi=True
include_tasks:
file: use_gssapi.yml
apply:
environment:
KRB5_CONFIG: '{{ krb5_config }}'
KRB5CCNAME: FILE:{{ remote_tmp_dir }}/krb5.cc
when: krb5_config is defined
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,331 |
Collect facts from truenas, variable missing : ansible_distribution_major_version
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When I collect facts from TrueNAS with the command `ansible truenas -m setup | grep "ansible_distribution"`,
the variable "ansible_distribution_major_version" is missing.
When I use the same command on a Debian host, the variable is still there.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/facts
##### ANSIBLE VERSION
```
ansible 2.10.2
config file = /srv/ansible/ansible.cfg
configured module search path = ['/srv/ansible/library']
ansible python module location = /srv/ansible-venv/lib/python3.7/site-packages/ansible
executable location = /srv/ansible-venv/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "" between quotes -->
```
ANSIBLE_FORCE_COLOR(/srv/ansible/ansible.cfg) = True
DEFAULT_HOST_LIST(/srv/ansible/ansible.cfg) = ['/srv/ansible/site.host']
DEFAULT_JINJA2_EXTENSIONS(/srv/ansible/ansible.cfg) = jinja2.ext.do
DEFAULT_MANAGED_STR(/srv/ansible/ansible.cfg) = Gestion par Ansible: {file} modifie le %Y-%m-%d %H:%M:%S par {uid} depuis {host}
DEFAULT_MODULE_PATH(/srv/ansible/ansible.cfg) = ['/srv/ansible/library']
DEFAULT_ROLES_PATH(/srv/ansible/ansible.cfg) = ['/srv/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/srv/ansible/ansible.cfg) = yaml
DISPLAY_SKIPPED_HOSTS(/srv/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/srv/ansible/ansible.cfg) = auto_silent
RETRY_FILES_ENABLED(/srv/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
ansible truenas -m setup | grep "ansible_distribution"
##### EXPECTED RESULTS
```
...
"ansible_distribution": "FreeBSD",
"ansible_distribution_release": "12.2-RC3",
"ansible_distribution_major_version": "12",
"ansible_distribution_version": "FreeBSD 12.2-RC3 7c4ec6ff02c(HEAD) TRUENAS",
...
```
##### ACTUAL RESULTS
```
...
"ansible_distribution": "FreeBSD",
"ansible_distribution_release": "12.2-RC3",
"ansible_distribution_version": "FreeBSD 12.2-RC3 7c4ec6ff02c(HEAD) TRUENAS",
...
```
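A minimal sketch of deriving the major version from an RC-style FreeBSD release string (an illustration only, not necessarily the actual fix in the linked PR):
```python
import re

release = "12.2-RC3"
m = re.match(r'(\d+)\.(\d+)', release)
major = m.group(1) if m else None
assert major == '12'
```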
|
https://github.com/ansible/ansible/issues/72331
|
https://github.com/ansible/ansible/pull/73020
|
003a9e890db3a2660fe1a2d95f00dec356b2f3e7
|
20509b65071291ab6cbe8a279c734902fb4e8383
| 2020-10-24T19:19:13Z |
python
| 2021-01-05T15:16:59Z |
changelogs/fragments/72331-truenas-rc-major-version.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,331 |
Collect facts from truenas, variable missing : ansible_distribution_major_version
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When I collect facts from TrueNAS with the command `ansible truenas -m setup | grep "ansible_distribution"`,
the variable "ansible_distribution_major_version" is missing.
When I use the same command on a Debian host, the variable is still there.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/facts
##### ANSIBLE VERSION
```
ansible 2.10.2
config file = /srv/ansible/ansible.cfg
configured module search path = ['/srv/ansible/library']
ansible python module location = /srv/ansible-venv/lib/python3.7/site-packages/ansible
executable location = /srv/ansible-venv/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "" between quotes -->
```
ANSIBLE_FORCE_COLOR(/srv/ansible/ansible.cfg) = True
DEFAULT_HOST_LIST(/srv/ansible/ansible.cfg) = ['/srv/ansible/site.host']
DEFAULT_JINJA2_EXTENSIONS(/srv/ansible/ansible.cfg) = jinja2.ext.do
DEFAULT_MANAGED_STR(/srv/ansible/ansible.cfg) = Gestion par Ansible: {file} modifie le %Y-%m-%d %H:%M:%S par {uid} depuis {host}
DEFAULT_MODULE_PATH(/srv/ansible/ansible.cfg) = ['/srv/ansible/library']
DEFAULT_ROLES_PATH(/srv/ansible/ansible.cfg) = ['/srv/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/srv/ansible/ansible.cfg) = yaml
DISPLAY_SKIPPED_HOSTS(/srv/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/srv/ansible/ansible.cfg) = auto_silent
RETRY_FILES_ENABLED(/srv/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
ansible truenas -m setup | grep "ansible_distribution"
##### EXPECTED RESULTS
```
...
"ansible_distribution": "FreeBSD",
"ansible_distribution_release": "12.2-RC3",
"ansible_distribution_major_version": "12",
"ansible_distribution_version": "FreeBSD 12.2-RC3 7c4ec6ff02c(HEAD) TRUENAS",
...
```
##### ACTUAL RESULTS
```
...
"ansible_distribution": "FreeBSD",
"ansible_distribution_release": "12.2-RC3",
"ansible_distribution_version": "FreeBSD 12.2-RC3 7c4ec6ff02c(HEAD) TRUENAS",
...
```
|
https://github.com/ansible/ansible/issues/72331
|
https://github.com/ansible/ansible/pull/73020
|
003a9e890db3a2660fe1a2d95f00dec356b2f3e7
|
20509b65071291ab6cbe8a279c734902fb4e8383
| 2020-10-24T19:19:13Z |
python
| 2021-01-05T15:16:59Z |
lib/ansible/module_utils/facts/system/distribution.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import platform
import re
from ansible.module_utils.common.sys_info import get_distribution, get_distribution_version, \
get_distribution_codename
from ansible.module_utils.facts.utils import get_file_content
from ansible.module_utils.facts.collector import BaseFactCollector
def get_uname(module, flags=('-v',)):
if isinstance(flags, str):
flags = flags.split()
command = ['uname']
command.extend(flags)
rc, out, err = module.run_command(command)
if rc == 0:
return out
return None
def _file_exists(path, allow_empty=False):
# not finding the file, exit early
if not os.path.exists(path):
return False
# if just the path needs to exists (ie, it can be empty) we are done
if allow_empty:
return True
    # file exists but is empty and we don't allow_empty
if os.path.getsize(path) == 0:
return False
# file exists with some content
return True
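# Illustrative behavior (not part of the module): an empty file is treated as
# "missing" unless allow_empty=True, which is why markers such as
# /etc/arch-release carry 'allowempty': True in OSDIST_LIST below.
#   _file_exists(path_to_empty_file)                   -> False
#   _file_exists(path_to_empty_file, allow_empty=True) -> True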
class DistributionFiles:
    '''Has various distro file parsers (os-release, etc.) and the logic for finding the right one.'''
# every distribution name mentioned here, must have one of
# - allowempty == True
# - be listed in SEARCH_STRING
# - have a function get_distribution_DISTNAME implemented
# keep names in sync with Conditionals page of docs
OSDIST_LIST = (
{'path': '/etc/altlinux-release', 'name': 'Altlinux'},
{'path': '/etc/oracle-release', 'name': 'OracleLinux'},
{'path': '/etc/slackware-version', 'name': 'Slackware'},
{'path': '/etc/redhat-release', 'name': 'RedHat'},
{'path': '/etc/vmware-release', 'name': 'VMwareESX', 'allowempty': True},
{'path': '/etc/openwrt_release', 'name': 'OpenWrt'},
{'path': '/etc/system-release', 'name': 'Amazon'},
{'path': '/etc/alpine-release', 'name': 'Alpine'},
{'path': '/etc/arch-release', 'name': 'Archlinux', 'allowempty': True},
{'path': '/etc/os-release', 'name': 'Archlinux'},
{'path': '/etc/os-release', 'name': 'SUSE'},
{'path': '/etc/SuSE-release', 'name': 'SUSE'},
{'path': '/etc/gentoo-release', 'name': 'Gentoo'},
{'path': '/etc/os-release', 'name': 'Debian'},
{'path': '/etc/lsb-release', 'name': 'Debian'},
{'path': '/etc/lsb-release', 'name': 'Mandriva'},
{'path': '/etc/sourcemage-release', 'name': 'SMGL'},
{'path': '/usr/lib/os-release', 'name': 'ClearLinux'},
{'path': '/etc/coreos/update.conf', 'name': 'Coreos'},
{'path': '/etc/flatcar/update.conf', 'name': 'Flatcar'},
{'path': '/etc/os-release', 'name': 'NA'},
)
SEARCH_STRING = {
'OracleLinux': 'Oracle Linux',
'RedHat': 'Red Hat',
'Altlinux': 'ALT',
'SMGL': 'Source Mage GNU/Linux',
}
# We can't include this in SEARCH_STRING because a name match on its keys
# causes a fallback to using the first whitespace separated item from the file content
# as the name. For os-release, that is in the form 'NAME=Arch'
OS_RELEASE_ALIAS = {
'Archlinux': 'Arch Linux'
}
STRIP_QUOTES = r'\'\"\\'
def __init__(self, module):
self.module = module
def _get_file_content(self, path):
return get_file_content(path)
def _get_dist_file_content(self, path, allow_empty=False):
# can't find that dist file or it is incorrectly empty
if not _file_exists(path, allow_empty=allow_empty):
return False, None
data = self._get_file_content(path)
return True, data
def _parse_dist_file(self, name, dist_file_content, path, collected_facts):
dist_file_dict = {}
dist_file_content = dist_file_content.strip(DistributionFiles.STRIP_QUOTES)
if name in self.SEARCH_STRING:
# look for the distribution string in the data and replace according to RELEASE_NAME_MAP
# only the distribution name is set, the version is assumed to be correct from distro.linux_distribution()
if self.SEARCH_STRING[name] in dist_file_content:
# this sets distribution=RedHat if 'Red Hat' shows up in data
dist_file_dict['distribution'] = name
dist_file_dict['distribution_file_search_string'] = self.SEARCH_STRING[name]
else:
# this sets distribution to what's in the data, e.g. CentOS, Scientific, ...
dist_file_dict['distribution'] = dist_file_content.split()[0]
return True, dist_file_dict
if name in self.OS_RELEASE_ALIAS:
if self.OS_RELEASE_ALIAS[name] in dist_file_content:
dist_file_dict['distribution'] = name
return True, dist_file_dict
return False, dist_file_dict
# call a dedicated function for parsing the file content
# TODO: replace with a map or a class
try:
# FIXME: most of these don't actually look at the dist file contents, but random other stuff
distfunc_name = 'parse_distribution_file_' + name
distfunc = getattr(self, distfunc_name)
parsed, dist_file_dict = distfunc(name, dist_file_content, path, collected_facts)
return parsed, dist_file_dict
except AttributeError as exc:
self.module.debug('exc: %s' % exc)
# this should never happen, but if it does fail quietly and not with a traceback
return False, dist_file_dict
return True, dist_file_dict
# to debug multiple matching release files, one can use:
# self.facts['distribution_debug'].append({path + ' ' + name:
# (parsed,
# self.facts['distribution'],
# self.facts['distribution_version'],
# self.facts['distribution_release'],
# )})
def _guess_distribution(self):
# try to find out which linux distribution this is
dist = (get_distribution(), get_distribution_version(), get_distribution_codename())
distribution_guess = {
'distribution': dist[0] or 'NA',
'distribution_version': dist[1] or 'NA',
# distribution_release can be the empty string
'distribution_release': 'NA' if dist[2] is None else dist[2]
}
distribution_guess['distribution_major_version'] = distribution_guess['distribution_version'].split('.')[0] or 'NA'
return distribution_guess
def process_dist_files(self):
# Try to handle the exceptions now ...
# self.facts['distribution_debug'] = []
dist_file_facts = {}
dist_guess = self._guess_distribution()
dist_file_facts.update(dist_guess)
for ddict in self.OSDIST_LIST:
name = ddict['name']
path = ddict['path']
allow_empty = ddict.get('allowempty', False)
has_dist_file, dist_file_content = self._get_dist_file_content(path, allow_empty=allow_empty)
# the dist file may be empty, but we allow_empty. For example, ArchLinux with an empty /etc/arch-release and a
# /etc/os-release with a different name
if has_dist_file and allow_empty:
dist_file_facts['distribution'] = name
dist_file_facts['distribution_file_path'] = path
dist_file_facts['distribution_file_variety'] = name
break
if not has_dist_file:
# keep looking
continue
parsed_dist_file, parsed_dist_file_facts = self._parse_dist_file(name, dist_file_content, path, dist_file_facts)
# finally found the right os dist file and were able to parse it
if parsed_dist_file:
dist_file_facts['distribution'] = name
dist_file_facts['distribution_file_path'] = path
# distribution and file_variety are the same here, but distribution
# will be changed/mapped to a more specific name.
# ie, dist=Fedora, file_variety=RedHat
dist_file_facts['distribution_file_variety'] = name
dist_file_facts['distribution_file_parsed'] = parsed_dist_file
dist_file_facts.update(parsed_dist_file_facts)
break
return dist_file_facts
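# Illustrative only (added commentary, not part of the original module): on a
# Debian 10 host this method would typically return something like:
#   {'distribution': 'Debian',
#    'distribution_version': '10',
#    'distribution_major_version': '10',
#    'distribution_release': 'buster',
#    'distribution_file_path': '/etc/os-release',
#    'distribution_file_variety': 'Debian',
#    'distribution_file_parsed': True}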
# TODO: FIXME: split distro file parsing into its own module or class
def parse_distribution_file_Slackware(self, name, data, path, collected_facts):
slackware_facts = {}
if 'Slackware' not in data:
return False, slackware_facts # TODO: remove
slackware_facts['distribution'] = name
version = re.findall(r'\w+[.]\w+\+?', data)
if version:
slackware_facts['distribution_version'] = version[0]
return True, slackware_facts
def parse_distribution_file_Amazon(self, name, data, path, collected_facts):
amazon_facts = {}
if 'Amazon' not in data:
return False, amazon_facts
amazon_facts['distribution'] = 'Amazon'
version = [n for n in data.split() if n.isdigit()]
version = version[0] if version else 'NA'
amazon_facts['distribution_version'] = version
return True, amazon_facts
def parse_distribution_file_OpenWrt(self, name, data, path, collected_facts):
openwrt_facts = {}
if 'OpenWrt' not in data:
return False, openwrt_facts # TODO: remove
openwrt_facts['distribution'] = name
version = re.search('DISTRIB_RELEASE="(.*)"', data)
if version:
openwrt_facts['distribution_version'] = version.groups()[0]
release = re.search('DISTRIB_CODENAME="(.*)"', data)
if release:
openwrt_facts['distribution_release'] = release.groups()[0]
return True, openwrt_facts
def parse_distribution_file_Alpine(self, name, data, path, collected_facts):
alpine_facts = {}
alpine_facts['distribution'] = 'Alpine'
alpine_facts['distribution_version'] = data
return True, alpine_facts
def parse_distribution_file_SUSE(self, name, data, path, collected_facts):
suse_facts = {}
if 'suse' not in data.lower():
return False, suse_facts # TODO: remove if tested without this
if path == '/etc/os-release':
for line in data.splitlines():
distribution = re.search("^NAME=(.*)", line)
if distribution:
suse_facts['distribution'] = distribution.group(1).strip('"')
# example patterns are 13.04, 13.0, 13
distribution_version = re.search(r'^VERSION_ID="?([0-9]+\.?[0-9]*)"?', line)
if distribution_version:
suse_facts['distribution_version'] = distribution_version.group(1)
suse_facts['distribution_major_version'] = distribution_version.group(1).split('.')[0]
if 'open' in data.lower():
release = re.search(r'^VERSION_ID="?[0-9]+\.?([0-9]*)"?', line)
if release:
suse_facts['distribution_release'] = release.groups()[0]
elif 'enterprise' in data.lower() and 'VERSION_ID' in line:
# SLES doesn't have funny release names
release = re.search(r'^VERSION_ID="?[0-9]+\.?([0-9]*)"?', line)
if release.group(1):
release = release.group(1)
else:
release = "0" # no minor number, so it is the first release
suse_facts['distribution_release'] = release
elif path == '/etc/SuSE-release':
if 'open' in data.lower():
data = data.splitlines()
distdata = get_file_content(path).splitlines()[0]
suse_facts['distribution'] = distdata.split()[0]
for line in data:
release = re.search('CODENAME *= *([^\n]+)', line)
if release:
suse_facts['distribution_release'] = release.groups()[0].strip()
elif 'enterprise' in data.lower():
lines = data.splitlines()
distribution = lines[0].split()[0]
if "Server" in data:
suse_facts['distribution'] = "SLES"
elif "Desktop" in data:
suse_facts['distribution'] = "SLED"
for line in lines:
release = re.search('PATCHLEVEL = ([0-9]+)', line)  # SLES doesn't have funny release names
if release:
suse_facts['distribution_release'] = release.group(1)
suse_facts['distribution_version'] = collected_facts['distribution_version'] + '.' + release.group(1)
# See https://www.suse.com/support/kb/doc/?id=000019341 for SLES for SAP
if os.path.islink('/etc/products.d/baseproduct') and os.path.realpath('/etc/products.d/baseproduct').endswith('SLES_SAP.prod'):
suse_facts['distribution'] = 'SLES_SAP'
return True, suse_facts
def parse_distribution_file_Debian(self, name, data, path, collected_facts):
debian_facts = {}
if 'Debian' in data or 'Raspbian' in data:
debian_facts['distribution'] = 'Debian'
release = re.search(r"PRETTY_NAME=[^(]+ \(?([^)]+?)\)", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
# Last resort: try to find release from tzdata as either lsb is missing or this is very old debian
if collected_facts['distribution_release'] == 'NA' and 'Debian' in data:
dpkg_cmd = self.module.get_bin_path('dpkg')
if dpkg_cmd:
cmd = "%s --status tzdata|grep Provides|cut -f2 -d'-'" % dpkg_cmd
rc, out, err = self.module.run_command(cmd)
if rc == 0:
debian_facts['distribution_release'] = out.strip()
elif 'Ubuntu' in data:
debian_facts['distribution'] = 'Ubuntu'
# nothing else to do, Ubuntu gets correct info from python functions
elif 'SteamOS' in data:
debian_facts['distribution'] = 'SteamOS'
# nothing else to do, SteamOS gets correct info from python functions
elif path in ('/etc/lsb-release', '/etc/os-release') and ('Kali' in data or 'Parrot' in data):
if 'Kali' in data:
# Kali does not provide /etc/lsb-release anymore
debian_facts['distribution'] = 'Kali'
elif 'Parrot' in data:
debian_facts['distribution'] = 'Parrot'
release = re.search('DISTRIB_RELEASE=(.*)', data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
elif 'Devuan' in data:
debian_facts['distribution'] = 'Devuan'
release = re.search(r"PRETTY_NAME=\"?[^(\"]+ \(?([^) \"]+)\)?", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1)
elif 'Cumulus' in data:
debian_facts['distribution'] = 'Cumulus Linux'
version = re.search(r"VERSION_ID=(.*)", data)
if version:
major, _minor, _dummy_ver = version.group(1).split(".")
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = major
release = re.search(r'VERSION="(.*)"', data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
elif "Mint" in data:
debian_facts['distribution'] = 'Linux Mint'
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1).split('.')[0]
else:
return False, debian_facts
return True, debian_facts
def parse_distribution_file_Mandriva(self, name, data, path, collected_facts):
mandriva_facts = {}
if 'Mandriva' in data:
mandriva_facts['distribution'] = 'Mandriva'
version = re.search('DISTRIB_RELEASE="(.*)"', data)
if version:
mandriva_facts['distribution_version'] = version.groups()[0]
release = re.search('DISTRIB_CODENAME="(.*)"', data)
if release:
mandriva_facts['distribution_release'] = release.groups()[0]
mandriva_facts['distribution'] = name
else:
return False, mandriva_facts
return True, mandriva_facts
def parse_distribution_file_NA(self, name, data, path, collected_facts):
na_facts = {}
for line in data.splitlines():
distribution = re.search("^NAME=(.*)", line)
if distribution and name == 'NA':
na_facts['distribution'] = distribution.group(1).strip('"')
version = re.search("^VERSION=(.*)", line)
if version and collected_facts['distribution_version'] == 'NA':
na_facts['distribution_version'] = version.group(1).strip('"')
return True, na_facts
def parse_distribution_file_Coreos(self, name, data, path, collected_facts):
coreos_facts = {}
# FIXME: pass in ro copy of facts for this kind of thing
distro = get_distribution()
if distro.lower() == 'coreos':
if not data:
# include fix from #15230, #15228
# TODO: verify this is ok for above bugs
return False, coreos_facts
release = re.search("^GROUP=(.*)", data)
if release:
coreos_facts['distribution_release'] = release.group(1).strip('"')
else:
return False, coreos_facts # TODO: remove if tested without this
return True, coreos_facts
def parse_distribution_file_Flatcar(self, name, data, path, collected_facts):
flatcar_facts = {}
distro = get_distribution()
if distro.lower() == 'flatcar':
if not data:
return False, flatcar_facts
release = re.search("^GROUP=(.*)", data)
if release:
flatcar_facts['distribution_release'] = release.group(1).strip('"')
else:
return False, flatcar_facts
return True, flatcar_facts
def parse_distribution_file_ClearLinux(self, name, data, path, collected_facts):
clear_facts = {}
if "clearlinux" not in name.lower():
return False, clear_facts
pname = re.search('NAME="(.*)"', data)
if pname:
if 'Clear Linux' not in pname.groups()[0]:
return False, clear_facts
clear_facts['distribution'] = pname.groups()[0]
version = re.search('VERSION_ID=(.*)', data)
if version:
clear_facts['distribution_major_version'] = version.groups()[0]
clear_facts['distribution_version'] = version.groups()[0]
release = re.search('ID=(.*)', data)
if release:
clear_facts['distribution_release'] = release.groups()[0]
return True, clear_facts
class Distribution(object):
"""
This subclass of Facts fills the distribution, distribution_version and distribution_release variables
To do so it checks the existence and content of typical files in /etc containing distribution information
This is unit tested. Please extend the tests to cover all distributions if you have them available.
"""
# every distribution name mentioned here, must have one of
# - allowempty == True
# - be listed in SEARCH_STRING
# - have a function get_distribution_DISTNAME implemented
OSDIST_LIST = (
{'path': '/etc/oracle-release', 'name': 'OracleLinux'},
{'path': '/etc/slackware-version', 'name': 'Slackware'},
{'path': '/etc/redhat-release', 'name': 'RedHat'},
{'path': '/etc/vmware-release', 'name': 'VMwareESX', 'allowempty': True},
{'path': '/etc/openwrt_release', 'name': 'OpenWrt'},
{'path': '/etc/system-release', 'name': 'Amazon'},
{'path': '/etc/alpine-release', 'name': 'Alpine'},
{'path': '/etc/arch-release', 'name': 'Archlinux', 'allowempty': True},
{'path': '/etc/os-release', 'name': 'SUSE'},
{'path': '/etc/SuSE-release', 'name': 'SUSE'},
{'path': '/etc/gentoo-release', 'name': 'Gentoo'},
{'path': '/etc/os-release', 'name': 'Debian'},
{'path': '/etc/lsb-release', 'name': 'Mandriva'},
{'path': '/etc/altlinux-release', 'name': 'Altlinux'},
{'path': '/etc/sourcemage-release', 'name': 'SMGL'},
{'path': '/usr/lib/os-release', 'name': 'ClearLinux'},
{'path': '/etc/coreos/update.conf', 'name': 'Coreos'},
{'path': '/etc/flatcar/update.conf', 'name': 'Flatcar'},
{'path': '/etc/os-release', 'name': 'NA'},
)
SEARCH_STRING = {
'OracleLinux': 'Oracle Linux',
'RedHat': 'Red Hat',
'Altlinux': 'ALT Linux',
'ClearLinux': 'Clear Linux Software for Intel Architecture',
'SMGL': 'Source Mage GNU/Linux',
}
# keep keys in sync with Conditionals page of docs
OS_FAMILY_MAP = {'RedHat': ['RedHat', 'Fedora', 'CentOS', 'Scientific', 'SLC',
'Ascendos', 'CloudLinux', 'PSBM', 'OracleLinux', 'OVS',
'OEL', 'Amazon', 'Virtuozzo', 'XenServer', 'Alibaba',
'EulerOS', 'openEuler'],
'Debian': ['Debian', 'Ubuntu', 'Raspbian', 'Neon', 'KDE neon',
'Linux Mint', 'SteamOS', 'Devuan', 'Kali', 'Cumulus Linux',
'Pop!_OS', 'Parrot', 'Pardus GNU/Linux'],
'Suse': ['SuSE', 'SLES', 'SLED', 'openSUSE', 'openSUSE Tumbleweed',
'SLES_SAP', 'SUSE_LINUX', 'openSUSE Leap'],
'Archlinux': ['Archlinux', 'Antergos', 'Manjaro'],
'Mandrake': ['Mandrake', 'Mandriva'],
'Solaris': ['Solaris', 'Nexenta', 'OmniOS', 'OpenIndiana', 'SmartOS'],
'Slackware': ['Slackware'],
'Altlinux': ['Altlinux'],
'SGML': ['SGML'],
'Gentoo': ['Gentoo', 'Funtoo'],
'Alpine': ['Alpine'],
'AIX': ['AIX'],
'HP-UX': ['HPUX'],
'Darwin': ['MacOSX'],
'FreeBSD': ['FreeBSD', 'TrueOS'],
'ClearLinux': ['Clear Linux OS', 'Clear Linux Mix'],
'DragonFly': ['DragonflyBSD', 'DragonFlyBSD', 'Gentoo/DragonflyBSD', 'Gentoo/DragonFlyBSD'],
'NetBSD': ['NetBSD'], }
OS_FAMILY = {}
for family, names in OS_FAMILY_MAP.items():
for name in names:
OS_FAMILY[name] = family
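# Illustrative only (added commentary): after this loop OS_FAMILY is a flat
# reverse lookup, e.g.
#   OS_FAMILY['Ubuntu'] -> 'Debian'
#   OS_FAMILY['CentOS'] -> 'RedHat'
#   OS_FAMILY['TrueOS'] -> 'FreeBSD'
# Distributions with no entry fall back to their own name in
# get_distribution_facts() below.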
def __init__(self, module):
self.module = module
def get_distribution_facts(self):
distribution_facts = {}
# The platform module provides information about the running
# system/distribution. Use this as a baseline and fix buggy systems
# afterwards
system = platform.system()
distribution_facts['distribution'] = system
distribution_facts['distribution_release'] = platform.release()
distribution_facts['distribution_version'] = platform.version()
systems_implemented = ('AIX', 'HP-UX', 'Darwin', 'FreeBSD', 'OpenBSD', 'SunOS', 'DragonFly', 'NetBSD')
if system in systems_implemented:
cleanedname = system.replace('-', '')
distfunc = getattr(self, 'get_distribution_' + cleanedname)
dist_func_facts = distfunc()
distribution_facts.update(dist_func_facts)
elif system == 'Linux':
distribution_files = DistributionFiles(module=self.module)
# linux_distribution_facts = LinuxDistribution(module).get_distribution_facts()
dist_file_facts = distribution_files.process_dist_files()
distribution_facts.update(dist_file_facts)
distro = distribution_facts['distribution']
# look for an os family alias for the 'distribution'; if there isn't one, use 'distribution'
distribution_facts['os_family'] = self.OS_FAMILY.get(distro, None) or distro
return distribution_facts
def get_distribution_AIX(self):
aix_facts = {}
rc, out, err = self.module.run_command("/usr/bin/oslevel")
data = out.split('.')
aix_facts['distribution_major_version'] = data[0]
if len(data) > 1:
aix_facts['distribution_version'] = '%s.%s' % (data[0], data[1])
aix_facts['distribution_release'] = data[1]
else:
aix_facts['distribution_version'] = data[0]
return aix_facts
def get_distribution_HPUX(self):
hpux_facts = {}
rc, out, err = self.module.run_command(r"/usr/sbin/swlist |egrep 'HPUX.*OE.*[AB].[0-9]+\.[0-9]+'", use_unsafe_shell=True)
data = re.search(r'HPUX.*OE.*([AB].[0-9]+\.[0-9]+)\.([0-9]+).*', out)
if data:
hpux_facts['distribution_version'] = data.groups()[0]
hpux_facts['distribution_release'] = data.groups()[1]
return hpux_facts
def get_distribution_Darwin(self):
darwin_facts = {}
darwin_facts['distribution'] = 'MacOSX'
rc, out, err = self.module.run_command("/usr/bin/sw_vers -productVersion")
data = out.split()[-1]
if data:
darwin_facts['distribution_major_version'] = data.split('.')[0]
darwin_facts['distribution_version'] = data
return darwin_facts
def get_distribution_FreeBSD(self):
freebsd_facts = {}
freebsd_facts['distribution_release'] = platform.release()
data = re.search(r'(\d+)\.(\d+)-(RELEASE|STABLE|CURRENT).*', freebsd_facts['distribution_release'])
if 'trueos' in platform.version():
freebsd_facts['distribution'] = 'TrueOS'
if data:
freebsd_facts['distribution_major_version'] = data.group(1)
freebsd_facts['distribution_version'] = '%s.%s' % (data.group(1), data.group(2))
return freebsd_facts
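# Illustrative only (added commentary, not part of the original module): the
# regex above is the likely cause of the missing fact reported in this issue.
# It only accepts the -RELEASE, -STABLE and -CURRENT suffixes, so a TrueNAS
# host whose platform.release() is '12.2-RC3' never matches and therefore gets
# no distribution_major_version:
#   >>> import re
#   >>> re.search(r'(\d+)\.(\d+)-(RELEASE|STABLE|CURRENT).*', '12.2-RELEASE') is None
#   False
#   >>> re.search(r'(\d+)\.(\d+)-(RELEASE|STABLE|CURRENT).*', '12.2-RC3') is None
#   True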
def get_distribution_OpenBSD(self):
openbsd_facts = {}
openbsd_facts['distribution_version'] = platform.release()
rc, out, err = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.match(r'OpenBSD\s[0-9]+.[0-9]+-(\S+)\s.*', out)
if match:
openbsd_facts['distribution_release'] = match.groups()[0]
else:
openbsd_facts['distribution_release'] = 'release'
return openbsd_facts
def get_distribution_DragonFly(self):
dragonfly_facts = {
'distribution_release': platform.release()
}
rc, out, dummy = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.search(r'v(\d+)\.(\d+)\.(\d+)-(RELEASE|STABLE|CURRENT).*', out)
if match:
dragonfly_facts['distribution_major_version'] = match.group(1)
dragonfly_facts['distribution_version'] = '%s.%s.%s' % match.groups()[:3]
return dragonfly_facts
def get_distribution_NetBSD(self):
netbsd_facts = {}
platform_release = platform.release()
netbsd_facts['distribution_release'] = platform_release
rc, out, dummy = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.match(r'NetBSD\s(\d+)\.(\d+)\s\((GENERIC)\).*', out)
if match:
netbsd_facts['distribution_major_version'] = match.group(1)
netbsd_facts['distribution_version'] = '%s.%s' % match.groups()[:2]
else:
netbsd_facts['distribution_major_version'] = platform_release.split('.')[0]
netbsd_facts['distribution_version'] = platform_release
return netbsd_facts
def get_distribution_SMGL(self):
smgl_facts = {}
smgl_facts['distribution'] = 'Source Mage GNU/Linux'
return smgl_facts
def get_distribution_SunOS(self):
sunos_facts = {}
data = get_file_content('/etc/release').splitlines()[0]
if 'Solaris' in data:
# for solaris 10 uname_r will contain 5.10, for solaris 11 it will have 5.11
uname_r = get_uname(self.module, flags=['-r'])
ora_prefix = ''
if 'Oracle Solaris' in data:
data = data.replace('Oracle ', '')
ora_prefix = 'Oracle '
sunos_facts['distribution'] = data.split()[0]
sunos_facts['distribution_version'] = data.split()[1]
sunos_facts['distribution_release'] = ora_prefix + data
sunos_facts['distribution_major_version'] = uname_r.split('.')[1].rstrip()
return sunos_facts
uname_v = get_uname(self.module, flags=['-v'])
distribution_version = None
if 'SmartOS' in data:
sunos_facts['distribution'] = 'SmartOS'
if _file_exists('/etc/product'):
product_data = dict([l.split(': ', 1) for l in get_file_content('/etc/product').splitlines() if ': ' in l])
if 'Image' in product_data:
distribution_version = product_data.get('Image').split()[-1]
elif 'OpenIndiana' in data:
sunos_facts['distribution'] = 'OpenIndiana'
elif 'OmniOS' in data:
sunos_facts['distribution'] = 'OmniOS'
distribution_version = data.split()[-1]
elif uname_v is not None and 'NexentaOS_' in uname_v:
sunos_facts['distribution'] = 'Nexenta'
distribution_version = data.split()[-1].lstrip('v')
if sunos_facts.get('distribution', '') in ('SmartOS', 'OpenIndiana', 'OmniOS', 'Nexenta'):
sunos_facts['distribution_release'] = data.strip()
if distribution_version is not None:
sunos_facts['distribution_version'] = distribution_version
elif uname_v is not None:
sunos_facts['distribution_version'] = uname_v.splitlines()[0].strip()
return sunos_facts
return sunos_facts
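# Illustrative only (added commentary, not part of the original module): on
# Solaris 11 the first line of /etc/release is typically
# '  Oracle Solaris 11.4 SPARC' and `uname -r` prints '5.11', so the 'Solaris'
# branch at the top of this method yields distribution='Solaris',
# distribution_version='11.4' and distribution_major_version='11'.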
class DistributionFactCollector(BaseFactCollector):
name = 'distribution'
_fact_ids = set(['distribution_version',
'distribution_release',
'distribution_major_version',
'os_family'])
def collect(self, module=None, collected_facts=None):
collected_facts = collected_facts or {}
facts_dict = {}
if not module:
return facts_dict
distribution = Distribution(module=module)
distro_facts = distribution.get_distribution_facts()
return distro_facts
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,331 |
Collect facts from truenas, variable missing : ansible_distribution_major_version
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When I collect facts from TrueNAS with the command `ansible truenas -m setup | grep "ansible_distribution"`,
the variable "ansible_distribution_major_version" is missing.
When I use the same command on a Debian host, the variable is still there.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/facts
##### ANSIBLE VERSION
```
ansible 2.10.2
config file = /srv/ansible/ansible.cfg
configured module search path = ['/srv/ansible/library']
ansible python module location = /srv/ansible-venv/lib/python3.7/site-packages/ansible
executable location = /srv/ansible-venv/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ANSIBLE_FORCE_COLOR(/srv/ansible/ansible.cfg) = True
DEFAULT_HOST_LIST(/srv/ansible/ansible.cfg) = ['/srv/ansible/site.host']
DEFAULT_JINJA2_EXTENSIONS(/srv/ansible/ansible.cfg) = jinja2.ext.do
DEFAULT_MANAGED_STR(/srv/ansible/ansible.cfg) = Gestion par Ansible: {file} modifie le %Y-%m-%d %H:%M:%S par {uid} depuis {host}
DEFAULT_MODULE_PATH(/srv/ansible/ansible.cfg) = ['/srv/ansible/library']
DEFAULT_ROLES_PATH(/srv/ansible/ansible.cfg) = ['/srv/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/srv/ansible/ansible.cfg) = yaml
DISPLAY_SKIPPED_HOSTS(/srv/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/srv/ansible/ansible.cfg) = auto_silent
RETRY_FILES_ENABLED(/srv/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
ansible truenas -m setup | grep "ansible_distribution"
##### EXPECTED RESULTS
```
...
"ansible_distribution": "FreeBSD",
"ansible_distribution_release": "12.2-RC3",
"ansible_distribution_major_version": "12",
"ansible_distribution_version": "FreeBSD 12.2-RC3 7c4ec6ff02c(HEAD) TRUENAS",
...
```
##### ACTUAL RESULTS
```
...
"ansible_distribution": "FreeBSD",
"ansible_distribution_release": "12.2-RC3",
"ansible_distribution_version": "FreeBSD 12.2-RC3 7c4ec6ff02c(HEAD) TRUENAS",
...
```
|
https://github.com/ansible/ansible/issues/72331
|
https://github.com/ansible/ansible/pull/73020
|
003a9e890db3a2660fe1a2d95f00dec356b2f3e7
|
20509b65071291ab6cbe8a279c734902fb4e8383
| 2020-10-24T19:19:13Z |
python
| 2021-01-05T15:16:59Z |
test/units/module_utils/facts/system/distribution/fixtures/truenas_12.0rc1.json
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,019 |
bad header rendering on latest docs
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When visiting https://docs.ansible.com/ansible/latest/collections/community/general/docker_container_module.html the header seems oddly rendered, as it contains two messages right now:

<!--- Explain the problem briefly below, add suggestions to wording or structure -->
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docsite
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/73019
|
https://github.com/ansible/ansible/pull/73119
|
de3844cba9b801c99119c525e1dc6881be3b5ca3
|
0a3c5d9dcc5d8b17ba73634b7d5398ba22fa3c5e
| 2020-12-18T08:12:47Z |
python
| 2021-01-05T21:40:20Z |
docs/docsite/_themes/sphinx_rtd_theme/ansible_banner.html
|
<!--- Based on sphinx versionwarning extension. Extension currently only works on READTHEDOCS -->
<script>
startsWith = function(str, needle) {
return str.slice(0, needle.length) == needle
}
// Create a banner if we're not on the official docs site
if (location.host == "docs.testing.ansible.com") {
document.write('<div id="testing_banner_id" class="admonition important">');
document.write('<p>This is the testing site for Ansible Documentation. Unless you are reviewing pre-production changes, please visit the <a href="https://docs.ansible.com/ansible/latest/">official documentation website</a>.</p> <p></p>');
document.write('</div>');
}
{% if (not READTHEDOCS) and (available_versions is defined) %}
// Create a banner if we're not the latest version
current_url_path = window.location.pathname;
if (startsWith(current_url_path, "/ansible/latest/") || startsWith(current_url_path, "/ansible/{{ latest_version }}/")) {
/* temp banner to advertise survey */
document.write('<div id="banner_id" class="admonition important">');
document.write('<br><p>Please take our <a href="https://www.surveymonkey.co.uk/r/B9V3CDY">Docs survey</a> before December 31 to help us improve Ansible documentation.</p>');
document.write('<div id="banner_id" class="admonition caution">');
document.write('<p>You are reading the latest community version of the Ansible documentation. Red Hat subscribers, select <b>2.9</b> in the version selection to the left for the most recent Red Hat release.</p>');
document.write('</div>');
document.write('</div>');
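// Note (added commentary, not in the original file): the temporary survey
// banner <div> opened above wraps the version caution banner, so two
// admonitions render nested here -- plausibly the odd double header
// reported in this issue.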
} else if (startsWith(current_url_path, "/ansible/2.9/")) {
document.write('<div id="banner_id" class="admonition caution">');
document.write('<p>You are reading the latest Red Hat released version of the Ansible documentation. Community users can use this, or select any version in version selection to the left, including <b>latest</b> for the most recent community version.</p>');
document.write('</div>');
} else if (startsWith(current_url_path, "/ansible/devel/")) {
/* temp banner to advertise survey */
document.write('<div id="banner_id" class="admonition important">');
document.write('<br><p>Please take our <a href="https://www.surveymonkey.co.uk/r/B9V3CDY">Docs survey</a> before December 31 to help us improve Ansible documentation.</p><br>');
document.write('</div>');
document.write('<div id="banner_id" class="admonition caution">');
document.write('<p>You are reading the <b>devel</b> version of the Ansible documentation - this version is not guaranteed stable. Use the version selection to the left if you want the latest stable released version.</p>');
document.write('</div>');
} else {
document.write('<div id="banner_id" class="admonition caution">');
document.write('<p>You are reading an older version of the Ansible documentation. Use the version selection to the left if you want the latest stable released version.</p>');
document.write('</div>');
}
{% endif %}
</script>
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,950 |
Rewrite Docker scenario guide
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The documentation says you need the `docker` pip package installed on the host running ansible. However, I do not have that package installed (`pip3 freeze | grep docker` returns nothing) but Ansible gives me an error when it's not installed on the server I try to provision. Therefore I am assuming that you need it on remote but not on the host.
Documentation page for the issue is `https://docs.ansible.com/ansible/latest/scenario_guides/guide_docker.html`
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docker_*
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.10.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/natsu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
```
(It is empty, no output)
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
Ubuntu running inside WSL2, version is 20.04.1
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
I think this could be caused by using Docker with the WSL2 integration. However, since Ansible tries to use the Python package and I ensured it really isn't installed inside my WSL2 instance (by running `import docker` inside an interactive Python shell), I believe this is more a bug in the documentation than something to do with said integration.
If helpful, I can also try to replicate this in an Ubuntu VM rather than WSL2.
|
https://github.com/ansible/ansible/issues/72950
|
https://github.com/ansible/ansible/pull/73069
|
1d8760779c25b2dbbe902bda799408b1cb1f0556
|
c9f28c1735e892cf09654028b3f85e71ada1e032
| 2020-12-11T17:39:33Z |
python
| 2021-01-06T18:57:14Z |
docs/docsite/rst/scenario_guides/guide_docker.rst
|
Docker Guide
============
Ansible offers the following modules for orchestrating Docker containers:
docker_compose
Use your existing Docker compose files to orchestrate containers on a single Docker daemon or on
Swarm. Supports compose versions 1 and 2.
docker_container
Manages the container lifecycle by providing the ability to create, update, stop, start and destroy a
container.
docker_image
Provides full control over images, including: build, pull, push, tag and remove.
docker_image_info
Inspects one or more images in the Docker host's image cache, providing the information for making
decision or assertions in a playbook.
docker_login
Authenticates with Docker Hub or any Docker registry and updates the Docker Engine config file, which
in turn provides password-free pushing and pulling of images to and from the registry.
docker (dynamic inventory)
Dynamically builds an inventory of all the available containers from a set of one or more Docker hosts.
Ansible 2.1.0 includes major updates to the Docker modules, marking the start of a project to create a complete and
integrated set of tools for orchestrating containers. In addition to the above modules, we are also working on the
following:
Still using Dockerfile to build images? Check out `ansible-bender <https://github.com/ansible-community/ansible-bender>`_,
and start building images from your Ansible playbooks.
Use `Ansible Operator <https://learn.openshift.com/ansibleop/ansible-operator-overview/>`_
to launch your docker-compose file on `OpenShift <https://www.okd.io/>`_. Go from an app on your laptop to a fully
scalable app in the cloud with Kubernetes in just a few moments.
There's more planned. See the latest ideas and thinking at the `Ansible proposal repo <https://github.com/ansible/proposals/tree/master/docker>`_.
Requirements
------------
Using the docker modules requires having the `Docker SDK for Python <https://docker-py.readthedocs.io/en/stable/>`_
installed on the host running Ansible. You will need to have >= 1.7.0 installed. For Python 2.7 or
Python 3, you can install it as follows:
.. code-block:: bash
$ pip install docker
For Python 2.6, you need a version before 2.0. For these versions, the SDK was called ``docker-py``,
so you need to install it as follows:
.. code-block:: bash
$ pip install 'docker-py>=1.7.0'
Please note that only one of ``docker`` and ``docker-py`` must be installed. Installing both will result in
a broken installation. If this happens, Ansible will detect it and inform you about it::
Cannot have both the docker-py and docker python modules installed together as they use the same
namespace and cause a corrupt installation. Please uninstall both packages, and re-install only
the docker-py or docker python module. It is recommended to install the docker module if no support
for Python 2.6 is required. Please note that simply uninstalling one of the modules can leave the
other module in a broken state.
The docker_compose module also requires `docker-compose <https://github.com/docker/compose>`_
.. code-block:: bash
$ pip install 'docker-compose>=1.7.0'
Connecting to the Docker API
----------------------------
You can connect to a local or remote API using parameters passed to each task or by setting environment variables.
The order of precedence is command line parameters and then environment variables. If neither a command line
option nor an environment variable is found, a default value will be used. The default values are provided under
`Parameters`_.
Parameters
..........
Control how modules connect to the Docker API by passing the following parameters:
docker_host
The URL or Unix socket path used to connect to the Docker API. Defaults to ``unix://var/run/docker.sock``.
To connect to a remote host, provide the TCP connection string. For example: ``tcp://192.0.2.23:2376``. If
TLS is used to encrypt the connection to the API, then the module will automatically replace 'tcp' in the
connection URL with 'https'.
api_version
The version of the Docker API running on the Docker Host. Defaults to the latest version of the API supported
by docker-py.
timeout
The maximum amount of time in seconds to wait on a response from the API. Defaults to 60 seconds.
tls
Secure the connection to the API by using TLS without verifying the authenticity of the Docker host server.
Defaults to False.
tls_verify
Secure the connection to the API by using TLS and verifying the authenticity of the Docker host server.
Default is False.
cacert_path
Use a CA certificate when performing server verification by providing the path to a CA certificate file.
cert_path
Path to the client's TLS certificate file.
key_path
Path to the client's TLS key file.
tls_hostname
When verifying the authenticity of the Docker Host server, provide the expected name of the server. Defaults
to 'localhost'.
ssl_version
Provide a valid SSL version number. Default value determined by docker-py, which at the time of this writing
was 1.0
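As a quick illustration, these parameters can be passed per task (a minimal
sketch; the host address and certificate paths below are placeholders, not
defaults):

.. code-block:: yaml

    - name: Start a container against a remote, TLS-protected daemon
      docker_container:
        docker_host: tcp://192.0.2.23:2376
        tls_verify: yes
        cacert_path: /path/to/ca.pem
        cert_path: /path/to/cert.pem
        key_path: /path/to/key.pem
        name: web
        image: nginx:alpine
        state: started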
Environment Variables
.....................
Control how the modules connect to the Docker API by setting the following variables in the environment of the host
running Ansible:
DOCKER_HOST
The URL or Unix socket path used to connect to the Docker API.
DOCKER_API_VERSION
The version of the Docker API running on the Docker Host. Defaults to the latest version of the API supported
by docker-py.
DOCKER_TIMEOUT
The maximum amount of time in seconds to wait on a response from the API.
DOCKER_CERT_PATH
Path to the directory containing the client certificate, client key and CA certificate.
DOCKER_SSL_VERSION
Provide a valid SSL version number.
DOCKER_TLS
Secure the connection to the API by using TLS without verifying the authenticity of the Docker Host.
DOCKER_TLS_VERIFY
Secure the connection to the API by using TLS and verify the authenticity of the Docker Host.
Dynamic Inventory Script
------------------------
The inventory script generates dynamic inventory by making API requests to one or more Docker APIs. It's dynamic
because the inventory is generated at run-time rather than being read from a static file. The script generates the
inventory by connecting to one or many Docker APIs and inspecting the containers it finds at each API. Which APIs the
script contacts can be defined using environment variables or a configuration file.
Groups
......
The script will create the following host groups:
- container id
- container name
- container short id
- image_name (image_<image name>)
- docker_host
- running
- stopped
Examples
........
You can run the script interactively from the command line or pass it as the inventory to a playbook. Here are a few
examples to get you started:
.. code-block:: bash
# Connect to the Docker API on localhost port 4243 and format the JSON output
DOCKER_HOST=tcp://localhost:4243 ./docker.py --pretty
# Any container's ssh port exposed on 0.0.0.0 will be mapped to
# another IP address (where Ansible will attempt to connect via SSH)
DOCKER_DEFAULT_IP=192.0.2.5 ./docker.py --pretty
# Run as input to a playbook:
ansible-playbook -i ./docker.py docker_inventory_test.yml
# Simple playbook to invoke with the above example:
- name: Test docker_inventory, this will not connect to any hosts
hosts: all
gather_facts: no
tasks:
- debug:
msg: "Container - {{ inventory_hostname }}"
Configuration
.............
You can control the behavior of the inventory script by defining environment variables, or
creating a docker.yml file (sample provided in https://raw.githubusercontent.com/ansible-collections/community.general/main/scripts/inventory/docker.py). The order of precedence is the docker.yml
file and then environment variables.
Environment Variables
;;;;;;;;;;;;;;;;;;;;;;
To connect to a single Docker API the following variables can be defined in the environment to control the connection
options. These are the same environment variables used by the Docker modules.
DOCKER_HOST
The URL or Unix socket path used to connect to the Docker API. Defaults to unix://var/run/docker.sock.
DOCKER_API_VERSION:
The version of the Docker API running on the Docker Host. Defaults to the latest version of the API supported
by docker-py.
DOCKER_TIMEOUT:
The maximum amount of time in seconds to wait on a response from the API. Defaults to 60 seconds.
DOCKER_TLS:
Secure the connection to the API by using TLS without verifying the authenticity of the Docker host server.
Defaults to False.
DOCKER_TLS_VERIFY:
Secure the connection to the API by using TLS and verifying the authenticity of the Docker host server.
Default is False
DOCKER_TLS_HOSTNAME:
When verifying the authenticity of the Docker Host server, provide the expected name of the server. Defaults
to localhost.
DOCKER_CERT_PATH:
Path to the directory containing the client certificate, client key and CA certificate.
DOCKER_SSL_VERSION:
Provide a valid SSL version number. Default value determined by docker-py, which at the time of this writing
was 1.0
In addition to the connection variables there are a couple variables used to control the execution and output of the
script:
DOCKER_CONFIG_FILE
Path to the configuration file. Defaults to ./docker.yml.
DOCKER_PRIVATE_SSH_PORT:
The private port (container port) on which SSH is listening for connections. Defaults to 22.
DOCKER_DEFAULT_IP:
The IP address to assign to ansible_host when the container's SSH port is mapped to interface '0.0.0.0'.
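For example (a hedged sketch; the address and paths below are placeholders):

.. code-block:: bash

    # Point the script at a TLS-protected daemon and remap SSH targets
    export DOCKER_HOST=tcp://192.0.2.10:2376
    export DOCKER_TLS_VERIFY=1
    export DOCKER_CERT_PATH=/etc/docker/certs
    export DOCKER_DEFAULT_IP=192.0.2.5
    ./docker.py --pretty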
Configuration File
;;;;;;;;;;;;;;;;;;
Using a configuration file provides a means for defining a set of Docker APIs from which to build an inventory.
The default name of the file is derived from the name of the inventory script. By default the script will look for
the basename of the script (in other words, docker) with an extension of '.yml'.
You can also override the default name of the script by defining DOCKER_CONFIG_FILE in the environment.
Here's what you can define in docker_inventory.yml:
defaults
Defines a default connection. Defaults will be taken from this and applied to any values not provided
for a host defined in the hosts list.
hosts
If you wish to get inventory from more than one Docker host, define a hosts list.
For the default host and each host in the hosts list define the following attributes:
.. code-block:: yaml
host:
description: The URL or Unix socket path used to connect to the Docker API.
required: yes
tls:
description: Connect using TLS without verifying the authenticity of the Docker host server.
default: false
required: false
tls_verify:
description: Connect using TLS and verify the authenticity of the Docker host server.
default: false
required: false
cert_path:
description: Path to the client's TLS certificate file.
default: null
required: false
cacert_path:
description: Use a CA certificate when performing server verification by providing the path to a CA certificate file.
default: null
required: false
key_path:
description: Path to the client's TLS key file.
default: null
required: false
version:
description: The Docker API version.
required: false
default: will be supplied by the docker-py module.
timeout:
description: The amount of time in seconds to wait on an API response.
required: false
default: 60
default_ip:
description: The IP address to assign to ansible_host when the container's SSH port is mapped to interface
'0.0.0.0'.
required: false
default: 127.0.0.1
private_ssh_port:
description: The port containers use for SSH
required: false
default: 22
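Putting this together, a minimal docker.yml might look like the following (a
hedged sketch; the socket path, addresses and certificate locations are
examples only):

.. code-block:: yaml

    defaults:
      host: unix://var/run/docker.sock
      private_ssh_port: 22
      default_ip: 127.0.0.1
    hosts:
      - host: tcp://192.0.2.10:2376
        tls_verify: true
        cacert_path: /etc/docker/certs/ca.pem
        cert_path: /etc/docker/certs/cert.pem
        key_path: /etc/docker/certs/key.pem
      - host: tcp://192.0.2.11:4243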
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,809 |
Yum/DNF module does not remove package with version specified
|
##### SUMMARY
When using dnf or yum to remove packages, Ansible will not detect that the package is installed if the version is appended to the package name
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
yum/dnf
##### ANSIBLE VERSION
```
ansible 2.10.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
```
##### OS / ENVIRONMENT
CentOS Linux release 8.0.1905 (Core)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
- hosts: hosts
become: yes
tasks:
- name: install package
yum:
name: wget-1.19.5
state: present
- name: delete package
dnf:
name: wget-1.19.5
state: absent
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Expected wget to be removed
##### ACTUAL RESULTS
Wget is registered as a package that does not exist and does nothing
```
TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************************
task path: /ansible_collections/test.yml:1
ok: [test]
META: ran handlers
Including module_utils file ansible/__init__.py
Including module_utils file ansible/module_utils/__init__.py
Including module_utils file ansible/module_utils/_text.py
Including module_utils file ansible/module_utils/basic.py
Including module_utils file ansible/module_utils/common/_collections_compat.py
Including module_utils file ansible/module_utils/common/__init__.py
Including module_utils file ansible/module_utils/common/_json_compat.py
Including module_utils file ansible/module_utils/common/_utils.py
Including module_utils file ansible/module_utils/common/file.py
Including module_utils file ansible/module_utils/common/parameters.py
Including module_utils file ansible/module_utils/common/collections.py
Including module_utils file ansible/module_utils/common/process.py
Including module_utils file ansible/module_utils/common/sys_info.py
Including module_utils file ansible/module_utils/common/text/converters.py
Including module_utils file ansible/module_utils/common/text/__init__.py
Including module_utils file ansible/module_utils/common/text/formatters.py
Including module_utils file ansible/module_utils/common/validation.py
Including module_utils file ansible/module_utils/common/warnings.py
Including module_utils file ansible/module_utils/compat/selectors.py
Including module_utils file ansible/module_utils/compat/__init__.py
Including module_utils file ansible/module_utils/compat/_selectors2.py
Including module_utils file ansible/module_utils/distro/__init__.py
Including module_utils file ansible/module_utils/distro/_distro.py
Including module_utils file ansible/module_utils/parsing/convert_bool.py
Including module_utils file ansible/module_utils/parsing/__init__.py
Including module_utils file ansible/module_utils/pycompat24.py
Including module_utils file ansible/module_utils/six/__init__.py
Including module_utils file ansible/module_utils/urls.py
Including module_utils file ansible/module_utils/yumdnf.py
Using module file /usr/local/lib/python3.6/site-packages/ansible/modules/dnf.py
Pipelining is enabled.
<test> ESTABLISH SSH CONNECTION FOR USER: None
<test> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=30m)
<test> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<test> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<test> SSH: PlayContext set ssh_common_args: ()
<test> SSH: PlayContext set ssh_extra_args: ()
<test> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/user/.ansible/cp/02319c625e)
<test> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=30m -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/user/.ansible/cp/02319c625e test '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-uqvwnlbzvsvksqefnqqjojbqzwgxtddn ; /usr/libexec/platform-python'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<test> (0, b'\n{"msg": "", "changed": true, "results": ["Installed: wget-1.19.5-8.el8_1.1.x86_64"], "rc": 0, "invocation": {"module_args": {"name": ["wget-1.19.5"], "state": "present", "allow_downgrade": false, "autoremove": false, "bugfix": false, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": [], "download_only": false, "enable_plugin": [], "enablerepo": [], "exclude": [], "installroot": "/", "install_repoquery": true, "install_weak_deps": true, "security": false, "skip_broken": false, "update_cache": false, "update_only": false, "validate_certs": true, "lock_timeout": 30, "allowerasing": false, "conf_file": null, "disable_excludes": null, "download_dir": null, "list": null, "releasever": null}}}\n', b'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /home/user/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 539\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
TASK [install package] ****************************************************************************************************************************************************************************************************************************************
task path: /ansible_collections/test.yml:5
changed: [test] => {
"changed": true,
"invocation": {
"module_args": {
"allow_downgrade": false,
"allowerasing": false,
"autoremove": false,
"bugfix": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"wget-1.19.5"
],
"releasever": null,
"security": false,
"skip_broken": false,
"state": "present",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "",
"rc": 0,
"results": [
"Installed: wget-1.19.5-8.el8_1.1.x86_64"
]
}
Using module file /usr/local/lib/python3.6/site-packages/ansible/modules/dnf.py
Pipelining is enabled.
<test> ESTABLISH SSH CONNECTION FOR USER: None
<test> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=30m)
<test> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<test> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<test> SSH: PlayContext set ssh_common_args: ()
<test> SSH: PlayContext set ssh_extra_args: ()
<test> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/user/.ansible/cp/02319c625e)
<test> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=30m -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/user/.ansible/cp/02319c625e test '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-siwumenjqvrisnpnndvvpgvpcvagjbyn ; /usr/libexec/platform-python'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<test> (0, b'\n{"msg": "Nothing to do", "changed": false, "results": [], "rc": 0, "invocation": {"module_args": {"name": ["wget-1.19.5"], "state": "absent", "allow_downgrade": false, "autoremove": false, "bugfix": false, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": [], "download_only": false, "enable_plugin": [], "enablerepo": [], "exclude": [], "installroot": "/", "install_repoquery": true, "install_weak_deps": true, "security": false, "skip_broken": false, "update_cache": false, "update_only": false, "validate_certs": true, "lock_timeout": 30, "allowerasing": false, "conf_file": null, "disable_excludes": null, "download_dir": null, "list": null, "releasever": null}}}\n', b'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /home/user/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 539\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
TASK [delete package] *****************************************************************************************************************************************************************************************************************************************
task path: /ansible_collections/test.yml:10
ok: [test] => {
"changed": false,
"invocation": {
"module_args": {
"allow_downgrade": false,
"allowerasing": false,
"autoremove": false,
"bugfix": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"wget-1.19.5"
],
"releasever": null,
"security": false,
"skip_broken": false,
"state": "absent",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "Nothing to do",
"rc": 0,
"results": []
}
META: ran handlers
META: ran handlers
PLAY RECAP ****************************************************************************************************************************************************************************************************************************************************
test : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/72809
|
https://github.com/ansible/ansible/pull/73033
|
77942acefcacae5d44d8a5f3b4f8c7228633ca1d
|
44ee04bd1f7d683fce246c16e752ace04d244b4c
| 2020-12-03T01:08:47Z |
python
| 2021-01-07T17:32:06Z |
changelogs/fragments/72809-dnf-remove-NV.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,809 |
Yum/DNF module does not remove package with version specified
|
##### SUMMARY
When using dnf or yum to remove a package, Ansible does not detect that the package is installed if a version is appended to the package name.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
yum/dnf
##### ANSIBLE VERSION
```
ansible 2.10.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
```
##### OS / ENVIRONMENT
CentOS Linux release 8.0.1905 (Core)
##### STEPS TO REPRODUCE
```
- hosts: hosts
become: yes
tasks:
- name: install package
yum:
name: wget-1.19.5
state: present
- name: delete package
dnf:
      name: wget-1.19.5
state: absent
```
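Below is a minimal sketch (an editor's illustration, not part of the original report) of why a name-version spec goes unrecognized: the module's NEVR pattern requires a release component.
```python
import re

# Same pattern as _packagename_dict() in lib/ansible/modules/dnf.py
rpm_nevr_re = re.compile(r'(\S+)-(?:(\d*):)?(.*)-(~?\w+[\w.+]*)')

# A full name-version-release spec parses:
print(rpm_nevr_re.match('wget-1.19.5-8.el8_1.1').groups())
# -> ('wget', None, '1.19.5', '8.el8_1.1')

# A name-version spec, as used in the task above, does not match at all,
# so the module cannot resolve it for removal:
print(rpm_nevr_re.match('wget-1.19.5'))
# -> None
```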
##### EXPECTED RESULTS
Expected wget to be removed
##### ACTUAL RESULTS
Wget is treated as a package that is not installed, so nothing is removed
```
TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************************
task path: /ansible_collections/test.yml:1
ok: [test]
META: ran handlers
Including module_utils file ansible/__init__.py
Including module_utils file ansible/module_utils/__init__.py
Including module_utils file ansible/module_utils/_text.py
Including module_utils file ansible/module_utils/basic.py
Including module_utils file ansible/module_utils/common/_collections_compat.py
Including module_utils file ansible/module_utils/common/__init__.py
Including module_utils file ansible/module_utils/common/_json_compat.py
Including module_utils file ansible/module_utils/common/_utils.py
Including module_utils file ansible/module_utils/common/file.py
Including module_utils file ansible/module_utils/common/parameters.py
Including module_utils file ansible/module_utils/common/collections.py
Including module_utils file ansible/module_utils/common/process.py
Including module_utils file ansible/module_utils/common/sys_info.py
Including module_utils file ansible/module_utils/common/text/converters.py
Including module_utils file ansible/module_utils/common/text/__init__.py
Including module_utils file ansible/module_utils/common/text/formatters.py
Including module_utils file ansible/module_utils/common/validation.py
Including module_utils file ansible/module_utils/common/warnings.py
Including module_utils file ansible/module_utils/compat/selectors.py
Including module_utils file ansible/module_utils/compat/__init__.py
Including module_utils file ansible/module_utils/compat/_selectors2.py
Including module_utils file ansible/module_utils/distro/__init__.py
Including module_utils file ansible/module_utils/distro/_distro.py
Including module_utils file ansible/module_utils/parsing/convert_bool.py
Including module_utils file ansible/module_utils/parsing/__init__.py
Including module_utils file ansible/module_utils/pycompat24.py
Including module_utils file ansible/module_utils/six/__init__.py
Including module_utils file ansible/module_utils/urls.py
Including module_utils file ansible/module_utils/yumdnf.py
Using module file /usr/local/lib/python3.6/site-packages/ansible/modules/dnf.py
Pipelining is enabled.
<test> ESTABLISH SSH CONNECTION FOR USER: None
<test> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=30m)
<test> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<test> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<test> SSH: PlayContext set ssh_common_args: ()
<test> SSH: PlayContext set ssh_extra_args: ()
<test> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/user/.ansible/cp/02319c625e)
<test> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=30m -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/user/.ansible/cp/02319c625e test '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-uqvwnlbzvsvksqefnqqjojbqzwgxtddn ; /usr/libexec/platform-python'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<test> (0, b'\n{"msg": "", "changed": true, "results": ["Installed: wget-1.19.5-8.el8_1.1.x86_64"], "rc": 0, "invocation": {"module_args": {"name": ["wget-1.19.5"], "state": "present", "allow_downgrade": false, "autoremove": false, "bugfix": false, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": [], "download_only": false, "enable_plugin": [], "enablerepo": [], "exclude": [], "installroot": "/", "install_repoquery": true, "install_weak_deps": true, "security": false, "skip_broken": false, "update_cache": false, "update_only": false, "validate_certs": true, "lock_timeout": 30, "allowerasing": false, "conf_file": null, "disable_excludes": null, "download_dir": null, "list": null, "releasever": null}}}\n', b'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /home/user/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 539\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
TASK [install package] ****************************************************************************************************************************************************************************************************************************************
task path: /ansible_collections/test.yml:5
changed: [test] => {
"changed": true,
"invocation": {
"module_args": {
"allow_downgrade": false,
"allowerasing": false,
"autoremove": false,
"bugfix": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"wget-1.19.5"
],
"releasever": null,
"security": false,
"skip_broken": false,
"state": "present",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "",
"rc": 0,
"results": [
"Installed: wget-1.19.5-8.el8_1.1.x86_64"
]
}
Using module file /usr/local/lib/python3.6/site-packages/ansible/modules/dnf.py
Pipelining is enabled.
<test> ESTABLISH SSH CONNECTION FOR USER: None
<test> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=30m)
<test> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<test> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<test> SSH: PlayContext set ssh_common_args: ()
<test> SSH: PlayContext set ssh_extra_args: ()
<test> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/user/.ansible/cp/02319c625e)
<test> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=30m -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/user/.ansible/cp/02319c625e test '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-siwumenjqvrisnpnndvvpgvpcvagjbyn ; /usr/libexec/platform-python'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<test> (0, b'\n{"msg": "Nothing to do", "changed": false, "results": [], "rc": 0, "invocation": {"module_args": {"name": ["wget-1.19.5"], "state": "absent", "allow_downgrade": false, "autoremove": false, "bugfix": false, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": [], "download_only": false, "enable_plugin": [], "enablerepo": [], "exclude": [], "installroot": "/", "install_repoquery": true, "install_weak_deps": true, "security": false, "skip_broken": false, "update_cache": false, "update_only": false, "validate_certs": true, "lock_timeout": 30, "allowerasing": false, "conf_file": null, "disable_excludes": null, "download_dir": null, "list": null, "releasever": null}}}\n', b'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /home/user/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 539\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
TASK [delete package] *****************************************************************************************************************************************************************************************************************************************
task path: /ansible_collections/test.yml:10
ok: [test] => {
"changed": false,
"invocation": {
"module_args": {
"allow_downgrade": false,
"allowerasing": false,
"autoremove": false,
"bugfix": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"wget-1.19.5"
],
"releasever": null,
"security": false,
"skip_broken": false,
"state": "absent",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "Nothing to do",
"rc": 0,
"results": []
}
META: ran handlers
META: ran handlers
PLAY RECAP ****************************************************************************************************************************************************************************************************************************************************
test : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/72809
|
https://github.com/ansible/ansible/pull/73033
|
77942acefcacae5d44d8a5f3b4f8c7228633ca1d
|
44ee04bd1f7d683fce246c16e752ace04d244b4c
| 2020-12-03T01:08:47Z |
python
| 2021-01-07T17:32:06Z |
lib/ansible/modules/dnf.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright 2015 Cristian van Ee <cristian at cvee.org>
# Copyright 2015 Igor Gnatenko <[email protected]>
# Copyright 2018 Adam Miller <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: dnf
version_added: 1.9
short_description: Manages packages with the I(dnf) package manager
description:
     - Installs, upgrades, removes, and lists packages and groups with the I(dnf) package manager.
options:
name:
description:
- "A package name or package specifier with version, like C(name-1.0).
When using state=latest, this can be '*' which means run: dnf -y update.
You can also pass a url or a local path to a rpm file.
To operate on several packages this can accept a comma separated string of packages or a list of packages."
- Comparison operators for package version are valid here C(>), C(<), C(>=), C(<=). Example - C(name>=1.0)
required: true
aliases:
- pkg
type: list
elements: str
list:
description:
- Various (non-idempotent) commands for usage with C(/usr/bin/ansible) and I(not) playbooks. See examples.
type: str
state:
description:
- Whether to install (C(present), C(latest)), or remove (C(absent)) a package.
      - Default is C(None); however, in effect the default action is C(present) unless the C(autoremove) option is
        enabled for this module, in which case C(absent) is inferred.
choices: ['absent', 'present', 'installed', 'removed', 'latest']
type: str
enablerepo:
description:
- I(Repoid) of repositories to enable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
disablerepo:
description:
- I(Repoid) of repositories to disable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
type: list
elements: str
conf_file:
description:
- The remote dnf configuration file to use for the transaction.
type: str
disable_gpg_check:
description:
- Whether to disable the GPG checking of signatures of packages being
installed. Has an effect only if state is I(present) or I(latest).
- This setting affects packages installed from a repository as well as
"local" packages installed from the filesystem or a URL.
type: bool
default: 'no'
installroot:
description:
- Specifies an alternative installroot, relative to which all packages
will be installed.
version_added: "2.3"
default: "/"
type: str
releasever:
description:
- Specifies an alternative release from which all packages will be
installed.
version_added: "2.6"
type: str
autoremove:
description:
- If C(yes), removes all "leaf" packages from the system that were originally
installed as dependencies of user-installed packages but which are no longer
        required by any such package. Should be used alone or when state is I(absent).
type: bool
default: "no"
version_added: "2.4"
exclude:
description:
- Package name(s) to exclude when state=present, or latest. This can be a
list or a comma separated string.
version_added: "2.7"
type: list
elements: str
skip_broken:
description:
      - Skip packages that have broken dependencies (depsolve) and are causing problems.
type: bool
default: "no"
version_added: "2.7"
update_cache:
description:
- Force dnf to check if cache is out of date and redownload if needed.
Has an effect only if state is I(present) or I(latest).
type: bool
default: "no"
aliases: [ expire-cache ]
version_added: "2.7"
update_only:
description:
- When using latest, only update installed packages. Do not install packages.
      - Has an effect only if state is I(latest).
default: "no"
type: bool
version_added: "2.7"
security:
description:
      - If set to C(yes) and C(state=latest), only install updates that have been marked security related.
- Note that, similar to ``dnf upgrade-minimal``, this filter applies to dependencies as well.
type: bool
default: "no"
version_added: "2.7"
bugfix:
description:
      - If set to C(yes) and C(state=latest), only install updates that have been marked bugfix related.
- Note that, similar to ``dnf upgrade-minimal``, this filter applies to dependencies as well.
default: "no"
type: bool
version_added: "2.7"
enable_plugin:
description:
- I(Plugin) name to enable for the install/update operation.
The enabled plugin will not persist beyond the transaction.
version_added: "2.7"
type: list
elements: str
disable_plugin:
description:
- I(Plugin) name to disable for the install/update operation.
The disabled plugins will not persist beyond the transaction.
version_added: "2.7"
type: list
elements: str
disable_excludes:
description:
- Disable the excludes defined in DNF config files.
- If set to C(all), disables all excludes.
- If set to C(main), disable excludes defined in [main] in dnf.conf.
- If set to C(repoid), disable excludes defined for given repo id.
version_added: "2.7"
type: str
validate_certs:
description:
      - This only applies if using an https url as the source of the rpm, e.g. for localinstall. If set to C(no), the SSL certificates will not be validated.
      - This should only be set to C(no) when used on personally controlled sites using self-signed certificates, as it avoids verifying the source site.
type: bool
default: "yes"
version_added: "2.7"
allow_downgrade:
description:
      - Specify whether the named package and version is allowed to downgrade
        an already installed higher version of that package.
Note that setting allow_downgrade=True can make this module
behave in a non-idempotent way. The task could end up with a set
of packages that does not match the complete list of specified
packages to install (because dependencies between the downgraded
package and others can cause changes to the packages which were
in the earlier transaction).
type: bool
default: "no"
version_added: "2.7"
install_repoquery:
description:
      - This is effectively a no-op with DNF, as it is not needed there, but it is an accepted parameter for
        feature parity/compatibility with the I(yum) module.
type: bool
default: "yes"
version_added: "2.7"
download_only:
description:
- Only download the packages, do not install them.
default: "no"
type: bool
version_added: "2.7"
lock_timeout:
description:
- Amount of time to wait for the dnf lockfile to be freed.
required: false
default: 30
type: int
version_added: "2.8"
install_weak_deps:
description:
- Will also install all packages linked by a weak dependency relation.
type: bool
default: "yes"
version_added: "2.8"
download_dir:
description:
- Specifies an alternate directory to store packages.
- Has an effect only if I(download_only) is specified.
type: str
version_added: "2.8"
allowerasing:
description:
- If C(yes) it allows erasing of installed packages to resolve dependencies.
required: false
type: bool
default: "no"
version_added: "2.10"
nobest:
description:
- Set best option to False, so that transactions are not limited to best candidates only.
required: false
type: bool
default: "no"
version_added: "2.11"
notes:
  - When used with a `loop:` each package will be processed individually; it is much more efficient to pass the list directly to the `name` option.
- Group removal doesn't work if the group was installed with Ansible because
upstream dnf's API doesn't properly mark groups as installed, therefore upon
removal the module is unable to detect that the group is installed
(https://bugzilla.redhat.com/show_bug.cgi?id=1620324)
requirements:
- "python >= 2.6"
- python-dnf
  - "for the autoremove option you need dnf >= 2.0.1"
author:
- Igor Gnatenko (@ignatenkobrain) <[email protected]>
- Cristian van Ee (@DJMuggs) <cristian at cvee.org>
- Berend De Schouwer (@berenddeschouwer)
- Adam Miller (@maxamillion) <[email protected]>
'''
EXAMPLES = '''
- name: Install the latest version of Apache
dnf:
name: httpd
state: latest
- name: Install Apache >= 2.4
dnf:
name: httpd>=2.4
state: present
- name: Install the latest version of Apache and MariaDB
dnf:
name:
- httpd
- mariadb-server
state: latest
- name: Remove the Apache package
dnf:
name: httpd
state: absent
- name: Install the latest version of Apache from the testing repo
dnf:
name: httpd
enablerepo: testing
state: present
- name: Upgrade all packages
dnf:
name: "*"
state: latest
- name: Install the nginx rpm from a remote repo
dnf:
name: 'http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm'
state: present
- name: Install nginx rpm from a local file
dnf:
name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: Install the 'Development tools' package group
dnf:
name: '@Development tools'
state: present
- name: Autoremove unneeded packages installed as dependencies
dnf:
autoremove: yes
- name: Uninstall httpd but keep its dependencies
dnf:
name: httpd
state: absent
autoremove: no
- name: Install a modularity appstream with defined stream and profile
dnf:
name: '@postgresql:9.6/client'
state: present
- name: Install a modularity appstream with defined stream
dnf:
name: '@postgresql:9.6'
state: present
- name: Install a modularity appstream with defined profile
dnf:
name: '@postgresql/client'
state: present
'''
import os
import re
import sys
try:
import dnf
import dnf.cli
import dnf.const
import dnf.exceptions
import dnf.subject
import dnf.util
HAS_DNF = True
except ImportError:
HAS_DNF = False
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.urls import fetch_file
from ansible.module_utils.six import PY2, text_type
from distutils.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.yumdnf import YumDnf, yumdnf_argument_spec
class DnfModule(YumDnf):
"""
DNF Ansible module back-end implementation
"""
def __init__(self, module):
# This populates instance vars for all argument spec params
super(DnfModule, self).__init__(module)
self._ensure_dnf()
self.lockfile = "/var/cache/dnf/*_lock.pid"
self.pkg_mgr_name = "dnf"
try:
self.with_modules = dnf.base.WITH_MODULES
except AttributeError:
self.with_modules = False
# DNF specific args that are not part of YumDnf
self.allowerasing = self.module.params['allowerasing']
self.nobest = self.module.params['nobest']
def is_lockfile_pid_valid(self):
# FIXME? it looks like DNF takes care of invalid lock files itself?
# https://github.com/ansible/ansible/issues/57189
return True
def _sanitize_dnf_error_msg_install(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to filter in an install scenario. Do that here.
"""
if (
to_text("no package matched") in to_text(error) or
to_text("No match for argument:") in to_text(error)
):
return "No package {0} available.".format(spec)
return error
def _sanitize_dnf_error_msg_remove(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to ignore in a removal scenario as known benign
failures. Do that here.
"""
if (
'no package matched' in to_native(error) or
'No match for argument:' in to_native(error)
):
return (False, "{0} is not installed".format(spec))
# Return value is tuple of:
# ("Is this actually a failure?", "Error Message")
return (True, error)
def _package_dict(self, package):
"""Return a dictionary of information for the package."""
# NOTE: This no longer contains the 'dnfstate' field because it is
# already known based on the query type.
result = {
'name': package.name,
'arch': package.arch,
'epoch': str(package.epoch),
'release': package.release,
'version': package.version,
'repo': package.repoid}
result['nevra'] = '{epoch}:{name}-{version}-{release}.{arch}'.format(
**result)
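        # Editor's example: for the wget build seen in this report this yields
        # nevra '0:wget-1.19.5-8.el8_1.1.x86_64' (the epoch defaults to 0).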
if package.installtime == 0:
result['yumstate'] = 'available'
else:
result['yumstate'] = 'installed'
return result
def _packagename_dict(self, packagename):
"""
Return a dictionary of information for a package name string or None
if the package name doesn't contain at least all NVR elements
"""
if packagename[-4:] == '.rpm':
packagename = packagename[:-4]
# This list was auto generated on a Fedora 28 system with the following one-liner
# printf '[ '; for arch in $(ls /usr/lib/rpm/platform); do printf '"%s", ' ${arch%-linux}; done; printf ']\n'
redhat_rpm_arches = [
"aarch64", "alphaev56", "alphaev5", "alphaev67", "alphaev6", "alpha",
"alphapca56", "amd64", "armv3l", "armv4b", "armv4l", "armv5tejl", "armv5tel",
"armv5tl", "armv6hl", "armv6l", "armv7hl", "armv7hnl", "armv7l", "athlon",
"geode", "i386", "i486", "i586", "i686", "ia32e", "ia64", "m68k", "mips64el",
"mips64", "mips64r6el", "mips64r6", "mipsel", "mips", "mipsr6el", "mipsr6",
"noarch", "pentium3", "pentium4", "ppc32dy4", "ppc64iseries", "ppc64le", "ppc64",
"ppc64p7", "ppc64pseries", "ppc8260", "ppc8560", "ppciseries", "ppc", "ppcpseries",
"riscv64", "s390", "s390x", "sh3", "sh4a", "sh4", "sh", "sparc64", "sparc64v",
"sparc", "sparcv8", "sparcv9", "sparcv9v", "x86_64"
]
rpm_arch_re = re.compile(r'(.*)\.(.*)')
rpm_nevr_re = re.compile(r'(\S+)-(?:(\d*):)?(.*)-(~?\w+[\w.+]*)')
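        # Editor's note on the pattern above (illustrative):
        #   'wget-1.19.5-8.el8_1.1' -> ('wget', None, '1.19.5', '8.el8_1.1')
        #   'wget-1.19.5'           -> no match (release is missing), so this
        #                              method returns None for name-version specs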
try:
arch = None
rpm_arch_match = rpm_arch_re.match(packagename)
if rpm_arch_match:
nevr, arch = rpm_arch_match.groups()
if arch in redhat_rpm_arches:
packagename = nevr
rpm_nevr_match = rpm_nevr_re.match(packagename)
if rpm_nevr_match:
name, epoch, version, release = rpm_nevr_re.match(packagename).groups()
if not version or not version.split('.')[0].isdigit():
return None
else:
return None
except AttributeError as e:
self.module.fail_json(
msg='Error attempting to parse package: %s, %s' % (packagename, to_native(e)),
rc=1,
results=[]
)
if not epoch:
epoch = "0"
if ':' in name:
epoch_name = name.split(":")
epoch = epoch_name[0]
name = ''.join(epoch_name[1:])
result = {
'name': name,
'epoch': epoch,
'release': release,
'version': version,
}
return result
# Original implementation from yum.rpmUtils.miscutils (GPLv2+)
# http://yum.baseurl.org/gitweb?p=yum.git;a=blob;f=rpmUtils/miscutils.py
def _compare_evr(self, e1, v1, r1, e2, v2, r2):
# return 1: a is newer than b
# 0: a and b are the same version
# -1: b is newer than a
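        # (editor's example: ('0', '1.19.5', '8.el8_1.1') vs an identical triple
        #  returns 0; a higher release in the first triple returns 1)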
if e1 is None:
e1 = '0'
else:
e1 = str(e1)
v1 = str(v1)
r1 = str(r1)
if e2 is None:
e2 = '0'
else:
e2 = str(e2)
v2 = str(v2)
r2 = str(r2)
# print '%s, %s, %s vs %s, %s, %s' % (e1, v1, r1, e2, v2, r2)
rc = dnf.rpm.rpm.labelCompare((e1, v1, r1), (e2, v2, r2))
# print '%s, %s, %s vs %s, %s, %s = %s' % (e1, v1, r1, e2, v2, r2, rc)
return rc
def _ensure_dnf(self):
if not HAS_DNF:
if PY2:
package = 'python2-dnf'
else:
package = 'python3-dnf'
if self.module.check_mode:
self.module.fail_json(
                    msg="`{0}` is not installed, but it is required "
"for the Ansible dnf module.".format(package),
results=[],
)
rc, stdout, stderr = self.module.run_command(['dnf', 'install', '-y', package])
global dnf
try:
import dnf
import dnf.cli
import dnf.const
import dnf.exceptions
import dnf.subject
import dnf.util
except ImportError:
self.module.fail_json(
msg="Could not import the dnf python module using {0} ({1}). "
"Please install `{2}` package or ensure you have specified the "
"correct ansible_python_interpreter.".format(sys.executable, sys.version.replace('\n', ''),
package),
results=[],
cmd='dnf install -y {0}'.format(package),
rc=rc,
stdout=stdout,
stderr=stderr,
)
def _configure_base(self, base, conf_file, disable_gpg_check, installroot='/'):
"""Configure the dnf Base object."""
conf = base.conf
# Change the configuration file path if provided, this must be done before conf.read() is called
if conf_file:
# Fail if we can't read the configuration file.
if not os.access(conf_file, os.R_OK):
self.module.fail_json(
msg="cannot read configuration file", conf_file=conf_file,
results=[],
)
else:
conf.config_file_path = conf_file
# Read the configuration file
conf.read()
# Turn off debug messages in the output
conf.debuglevel = 0
# Set whether to check gpg signatures
conf.gpgcheck = not disable_gpg_check
conf.localpkg_gpgcheck = not disable_gpg_check
# Don't prompt for user confirmations
conf.assumeyes = True
# Set installroot
conf.installroot = installroot
# Load substitutions from the filesystem
conf.substitutions.update_from_etc(installroot)
# Handle different DNF versions immutable mutable datatypes and
# dnf v1/v2/v3
#
        # In DNF < 3.0 these are lists, and modifying them works
        # In DNF >= 3.0 < 3.6 these are lists, but modifying them doesn't work
        # In DNF >= 3.6 these have been turned into tuples, to communicate that modifying them doesn't work
#
# https://www.happyassassin.net/2018/06/27/adams-debugging-adventures-the-immutable-mutable-object/
#
# Set excludes
if self.exclude:
_excludes = list(conf.exclude)
_excludes.extend(self.exclude)
conf.exclude = _excludes
# Set disable_excludes
if self.disable_excludes:
_disable_excludes = list(conf.disable_excludes)
if self.disable_excludes not in _disable_excludes:
_disable_excludes.append(self.disable_excludes)
conf.disable_excludes = _disable_excludes
# Set releasever
if self.releasever is not None:
conf.substitutions['releasever'] = self.releasever
# Set skip_broken (in dnf this is strict=0)
if self.skip_broken:
conf.strict = 0
# Set best
if self.nobest:
conf.best = 0
if self.download_only:
conf.downloadonly = True
if self.download_dir:
conf.destdir = self.download_dir
# Default in dnf upstream is true
conf.clean_requirements_on_remove = self.autoremove
# Default in dnf (and module default) is True
conf.install_weak_deps = self.install_weak_deps
def _specify_repositories(self, base, disablerepo, enablerepo):
"""Enable and disable repositories matching the provided patterns."""
base.read_all_repos()
repos = base.repos
# Disable repositories
for repo_pattern in disablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.disable()
# Enable repositories
for repo_pattern in enablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.enable()
def _base(self, conf_file, disable_gpg_check, disablerepo, enablerepo, installroot):
"""Return a fully configured dnf Base object."""
base = dnf.Base()
self._configure_base(base, conf_file, disable_gpg_check, installroot)
try:
# this method has been supported in dnf-4.2.17-6 or later
# https://bugzilla.redhat.com/show_bug.cgi?id=1788212
base.setup_loggers()
except AttributeError:
pass
try:
base.init_plugins(set(self.disable_plugin), set(self.enable_plugin))
base.pre_configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
self._specify_repositories(base, disablerepo, enablerepo)
try:
base.configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
try:
if self.update_cache:
try:
base.update_cache()
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
base.fill_sack(load_system_repo='auto')
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
filters = []
if self.bugfix:
key = {'advisory_type__eq': 'bugfix'}
filters.append(base.sack.query().upgrades().filter(**key))
if self.security:
key = {'advisory_type__eq': 'security'}
filters.append(base.sack.query().upgrades().filter(**key))
if filters:
base._update_security_filters = filters
return base
def list_items(self, command):
"""List package info based on the command."""
# Rename updates to upgrades
if command == 'updates':
command = 'upgrades'
# Return the corresponding packages
if command in ['installed', 'upgrades', 'available']:
results = [
self._package_dict(package)
for package in getattr(self.base.sack.query(), command)()]
# Return the enabled repository ids
elif command in ['repos', 'repositories']:
results = [
{'repoid': repo.id, 'state': 'enabled'}
for repo in self.base.repos.iter_enabled()]
# Return any matching packages
else:
packages = dnf.subject.Subject(command).get_best_query(self.base.sack)
results = [self._package_dict(package) for package in packages]
self.module.exit_json(msg="", results=results)
def _is_installed(self, pkg):
installed = self.base.sack.query().installed()
if installed.filter(name=pkg):
return True
else:
return False
def _is_newer_version_installed(self, pkg_name):
candidate_pkg = self._packagename_dict(pkg_name)
if not candidate_pkg:
            # The spec does not parse as a full NVR (bare name, or name-version
            # without a release), so no version comparison is possible
return False
installed = self.base.sack.query().installed()
installed_pkg = installed.filter(name=candidate_pkg['name']).run()
if installed_pkg:
installed_pkg = installed_pkg[0]
# this looks weird but one is a dict and the other is a dnf.Package
evr_cmp = self._compare_evr(
installed_pkg.epoch, installed_pkg.version, installed_pkg.release,
candidate_pkg['epoch'], candidate_pkg['version'], candidate_pkg['release'],
)
if evr_cmp == 1:
return True
else:
return False
else:
return False
def _mark_package_install(self, pkg_spec, upgrade=False):
"""Mark the package for install."""
is_newer_version_installed = self._is_newer_version_installed(pkg_spec)
is_installed = self._is_installed(pkg_spec)
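        # Editor's note: both helpers above take the raw spec string, and
        # _is_installed() filters on name only, so a versioned spec such as
        # 'wget-1.19.5' is reported as not installed here.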
try:
if is_newer_version_installed:
if self.allow_downgrade:
# dnf only does allow_downgrade, we have to handle this ourselves
# because it allows a possibility for non-idempotent transactions
# on a system's package set (pending the yum repo has many old
# NVRs indexed)
if upgrade:
if is_installed:
self.base.upgrade(pkg_spec)
else:
self.base.install(pkg_spec)
else:
self.base.install(pkg_spec)
else: # Nothing to do, report back
pass
            elif is_installed:  # A potentially older (or same) version is installed
if upgrade:
self.base.upgrade(pkg_spec)
else: # Nothing to do, report back
pass
else: # The package is not installed, simply install it
self.base.install(pkg_spec)
return {'failed': False, 'msg': '', 'failure': '', 'rc': 0}
except dnf.exceptions.MarkingError as e:
return {
'failed': True,
'msg': "No package {0} available.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.DepsolveError as e:
return {
'failed': True,
                'msg': "Depsolve Error occurred for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
return {'failed': False, 'msg': '', 'failure': ''}
else:
return {
'failed': True,
                    'msg': "Unknown Error occurred for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
def _whatprovides(self, filepath):
available = self.base.sack.query().available()
pkg_spec = available.filter(provides=filepath).run()
if pkg_spec:
return pkg_spec[0].name
def _parse_spec_group_file(self):
pkg_specs, grp_specs, module_specs, filenames = [], [], [], []
already_loaded_comps = False # Only load this if necessary, it's slow
for name in self.names:
if '://' in name:
name = fetch_file(self.module, name)
filenames.append(name)
elif name.endswith(".rpm"):
filenames.append(name)
elif name.startswith("@") or ('/' in name):
# like "dnf install /usr/bin/vi"
if '/' in name:
pkg_spec = self._whatprovides(name)
if pkg_spec:
pkg_specs.append(pkg_spec)
continue
if not already_loaded_comps:
self.base.read_comps()
already_loaded_comps = True
grp_env_mdl_candidate = name[1:].strip()
if self.with_modules:
mdl = self.module_base._get_modules(grp_env_mdl_candidate)
if mdl[0]:
module_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
pkg_specs.append(name)
return pkg_specs, grp_specs, module_specs, filenames
def _update_only(self, pkgs):
not_installed = []
for pkg in pkgs:
if self._is_installed(pkg):
try:
if isinstance(to_text(pkg), text_type):
self.base.upgrade(pkg)
else:
self.base.package_upgrade(pkg)
except Exception as e:
self.module.fail_json(
                    msg="Error occurred attempting update_only operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
else:
not_installed.append(pkg)
return not_installed
def _install_remote_rpms(self, filenames):
if int(dnf.__version__.split(".")[0]) >= 2:
pkgs = list(sorted(self.base.add_remote_rpms(list(filenames)), reverse=True))
else:
pkgs = []
try:
for filename in filenames:
pkgs.append(self.base.add_remote_rpm(filename))
except IOError as e:
if to_text("Can not load RPM file") in to_text(e):
self.module.fail_json(
                        msg="Error occurred attempting remote rpm install of package: {0}. {1}".format(filename, to_native(e)),
results=[],
rc=1,
)
if self.update_only:
self._update_only(pkgs)
else:
for pkg in pkgs:
try:
if self._is_newer_version_installed(self._package_dict(pkg)['nevra']):
if self.allow_downgrade:
self.base.package_install(pkg)
else:
self.base.package_install(pkg)
except Exception as e:
self.module.fail_json(
                        msg="Error occurred attempting remote rpm operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
def _is_module_installed(self, module_spec):
if self.with_modules:
module_spec = module_spec.strip()
module_list, nsv = self.module_base._get_modules(module_spec)
enabled_streams = self.base._moduleContainer.getEnabledStream(nsv.name)
if enabled_streams:
if nsv.stream:
if nsv.stream in enabled_streams:
return True # The provided stream was found
else:
return False # The provided stream was not found
else:
return True # No stream provided, but module found
return False # seems like a sane default
def ensure(self):
response = {
'msg': "",
'changed': False,
'results': [],
'rc': 0
}
# Accumulate failures. Package management modules install what they can
# and fail with a message about what they can't.
failure_response = {
'msg': "",
'failures': [],
'results': [],
'rc': 1
}
# Autoremove is called alone
# Jump to remove path where base.autoremove() is run
if not self.names and self.autoremove:
self.names = []
self.state = 'absent'
if self.names == ['*'] and self.state == 'latest':
try:
self.base.upgrade_all()
except dnf.exceptions.DepsolveError as e:
                failure_response['msg'] = "Depsolve Error occurred attempting to upgrade all packages"
self.module.fail_json(**failure_response)
else:
pkg_specs, group_specs, module_specs, filenames = self._parse_spec_group_file()
pkg_specs = [p.strip() for p in pkg_specs]
filenames = [f.strip() for f in filenames]
groups = []
environments = []
for group_spec in (g.strip() for g in group_specs):
group = self.base.comps.group_by_pattern(group_spec)
if group:
groups.append(group.id)
else:
environment = self.base.comps.environment_by_pattern(group_spec)
if environment:
environments.append(environment.id)
else:
self.module.fail_json(
msg="No group {0} available.".format(group_spec),
results=[],
)
if self.state in ['installed', 'present']:
# Install files.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Install modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if not self._is_module_installed(module):
response['results'].append("Module {0} installed.".format(module))
self.module_base.install([module])
self.module_base.enable([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
# Install groups.
for group in groups:
try:
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.DepsolveError as e:
                        failure_response['msg'] = "Depsolve Error occurred attempting to install group: {0}".format(group)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
# In dnf 2.0 if all the mandatory packages in a group do
# not install, an error is raised. We want to capture
# this but still install as much as possible.
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
                        failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if module_specs and not self.with_modules:
# This means that the group or env wasn't found in comps
self.module.fail_json(
msg="No group {0} available.".format(module_specs[0]),
results=[],
)
# Install packages.
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
install_result = self._mark_package_install(pkg_spec)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
elif self.state == 'latest':
# "latest" is same as "installed" for filenames.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Upgrade modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} upgraded.".format(module))
self.module_base.upgrade([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
try:
self.base.group_upgrade(group)
response['results'].append("Group {0} upgraded.".format(group))
except dnf.exceptions.CompsError:
if not self.update_only:
# If not already installed, try to install.
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
try:
self.base.environment_upgrade(environment)
except dnf.exceptions.CompsError:
# If not already installed, try to install.
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
                        failure_response['msg'] = "Depsolve Error occurred attempting to install environment: {0}".format(environment)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
                    # best effort causes the latest package to be installed
                    # even when it was not previously installed
self.base.conf.best = True
install_result = self._mark_package_install(pkg_spec, upgrade=True)
if install_result['failed']:
if install_result['msg']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg_install(pkg_spec, install_result['failure']))
else:
if install_result['msg']:
response['results'].append(install_result['msg'])
else:
# state == absent
if filenames:
self.module.fail_json(
msg="Cannot remove paths -- please specify package name.",
results=[],
)
# Remove modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} removed.".format(module))
self.module_base.remove([module])
self.module_base.disable([module])
self.module_base.reset([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
self.base.group_remove(group)
except dnf.exceptions.CompsError:
# Group is already uninstalled.
pass
except AttributeError:
# Group either isn't installed or wasn't marked installed at install time
# because of DNF bug
#
# This is necessary until the upstream dnf API bug is fixed where installing
# a group via the dnf API doesn't actually mark the group as installed
# https://bugzilla.redhat.com/show_bug.cgi?id=1620324
pass
for environment in environments:
try:
self.base.environment_remove(environment)
except dnf.exceptions.CompsError:
# Environment is already uninstalled.
pass
installed = self.base.sack.query().installed()
for pkg_spec in pkg_specs:
# short-circuit installed check for wildcard matching
if '*' in pkg_spec:
try:
self.base.remove(pkg_spec)
except dnf.exceptions.MarkingError as e:
is_failure, handled_remove_error = self._sanitize_dnf_error_msg_remove(pkg_spec, to_native(e))
if is_failure:
failure_response['failures'].append('{0} - {1}'.format(pkg_spec, to_native(e)))
else:
response['results'].append(handled_remove_error)
continue
installed_pkg = list(map(str, installed.filter(name=pkg_spec).run()))
if installed_pkg:
candidate_pkg = self._packagename_dict(installed_pkg[0])
installed_pkg = installed.filter(name=candidate_pkg['name']).run()
else:
candidate_pkg = self._packagename_dict(pkg_spec)
installed_pkg = installed.filter(nevra=pkg_spec).run()
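                        # Editor's note: a bare name-version spec such as
                        # 'wget-1.19.5' matches neither the name filter nor this
                        # nevra filter, so nothing is marked for removal (the
                        # behaviour reported in this issue).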
if installed_pkg:
installed_pkg = installed_pkg[0]
evr_cmp = self._compare_evr(
installed_pkg.epoch, installed_pkg.version, installed_pkg.release,
candidate_pkg['epoch'], candidate_pkg['version'], candidate_pkg['release'],
)
if evr_cmp == 0:
self.base.remove(pkg_spec)
# Like the dnf CLI we want to allow recursive removal of dependent
# packages
self.allowerasing = True
if self.autoremove:
self.base.autoremove()
try:
if not self.base.resolve(allow_erasing=self.allowerasing):
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
response['msg'] = "Nothing to do"
self.module.exit_json(**response)
else:
response['changed'] = True
# If packages got installed/removed, add them to the results.
# We do this early so we can use it for both check_mode and not.
if self.download_only:
install_action = 'Downloaded'
else:
install_action = 'Installed'
for package in self.base.transaction.install_set:
response['results'].append("{0}: {1}".format(install_action, package))
for package in self.base.transaction.remove_set:
response['results'].append("Removed: {0}".format(package))
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
if self.module.check_mode:
response['msg'] = "Check mode: No changes made, but would have if not in check mode"
self.module.exit_json(**response)
try:
if self.download_only and self.download_dir and self.base.conf.destdir:
dnf.util.ensure_dir(self.base.conf.destdir)
self.base.repos.all().pkgdir = self.base.conf.destdir
self.base.download_packages(self.base.transaction.install_set)
except dnf.exceptions.DownloadError as e:
self.module.fail_json(
msg="Failed to download packages: {0}".format(to_text(e)),
results=[],
)
# Validate GPG. This is NOT done in dnf.Base (it's done in the
# upstream CLI subclass of dnf.Base)
if not self.disable_gpg_check:
for package in self.base.transaction.install_set:
fail = False
gpgres, gpgerr = self.base._sig_check_pkg(package)
if gpgres == 0: # validated successfully
continue
elif gpgres == 1: # validation failed, install cert?
try:
self.base._get_key_for_package(package)
except dnf.exceptions.Error as e:
fail = True
else: # fatal error
fail = True
if fail:
msg = 'Failed to validate GPG signature for {0}'.format(package)
self.module.fail_json(msg)
if self.download_only:
# No further work left to do, and the results were already updated above.
# Just return them.
self.module.exit_json(**response)
else:
self.base.do_transaction()
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.exit_json(**response)
self.module.exit_json(**response)
except dnf.exceptions.DepsolveError as e:
            failure_response['msg'] = "Depsolve Error occurred: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
response['changed'] = False
response['results'].append("Package already installed: {0}".format(to_native(e)))
self.module.exit_json(**response)
else:
                failure_response['msg'] = "Unknown Error occurred: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
@staticmethod
def has_dnf():
return HAS_DNF
def run(self):
"""The main function."""
# Check if autoremove is called correctly
if self.autoremove:
if LooseVersion(dnf.__version__) < LooseVersion('2.0.1'):
self.module.fail_json(
msg="Autoremove requires dnf>=2.0.1. Current dnf version is %s" % dnf.__version__,
results=[],
)
# Check if download_dir is called correctly
if self.download_dir:
if LooseVersion(dnf.__version__) < LooseVersion('2.6.2'):
self.module.fail_json(
msg="download_dir requires dnf>=2.6.2. Current dnf version is %s" % dnf.__version__,
results=[],
)
if self.update_cache and not self.names and not self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
)
self.module.exit_json(
msg="Cache updated",
changed=False,
results=[],
rc=0
)
# Set state as installed by default
# This is not set in AnsibleModule() because the following shouldn't happen
# - dnf: autoremove=yes state=installed
if self.state is None:
self.state = 'installed'
if self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
)
self.list_items(self.list)
else:
# Note: base takes a long time to run so we want to check for failure
# before running it.
if not dnf.util.am_i_root():
self.module.fail_json(
msg="This command has to be run under the root user.",
results=[],
)
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
)
if self.with_modules:
self.module_base = dnf.module.module_base.ModuleBase(self.base)
self.ensure()
def main():
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
#
# informational commands:
# list=installed
# list=updates
# list=available
# list=repos
# list=pkgspec
# Extend yumdnf_argument_spec with dnf-specific features that will never be
# backported to yum because yum is now in "maintenance mode" upstream
yumdnf_argument_spec['argument_spec']['allowerasing'] = dict(default=False, type='bool')
yumdnf_argument_spec['argument_spec']['nobest'] = dict(default=False, type='bool')
module = AnsibleModule(
**yumdnf_argument_spec
)
module_implementation = DnfModule(module)
try:
module_implementation.run()
except dnf.exceptions.RepoError as de:
module.fail_json(
msg="Failed to synchronize repodata: {0}".format(to_native(de)),
rc=1,
results=[],
changed=False
)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,809 |
Yum/DNF module does not remove package with version specified
|
##### SUMMARY
When using dnf or yum to remove a package, Ansible does not detect that the package is installed if a version is appended to the package name.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
yum/dnf
##### ANSIBLE VERSION
```
ansible 2.10.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
```
##### OS / ENVIRONMENT
CentOS Linux release 8.0.1905 (Core)
##### STEPS TO REPRODUCE
```
- hosts: hosts
become: yes
tasks:
- name: install package
yum:
name: wget-1.19.5
state: present
- name: delete package
dnf:
      name: wget-1.19.5
state: absent
```
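As a brief editor's sketch (reusing the parsing patterns from `_packagename_dict()` in `lib/ansible/modules/dnf.py`), the fully qualified NEVRA reported by the install task parses, while the bare name-version spec does not:
```python
import re

# Patterns as in _packagename_dict() (lib/ansible/modules/dnf.py)
rpm_arch_re = re.compile(r'(.*)\.(.*)')
rpm_nevr_re = re.compile(r'(\S+)-(?:(\d*):)?(.*)-(~?\w+[\w.+]*)')

nevr, arch = rpm_arch_re.match('wget-1.19.5-8.el8_1.1.x86_64').groups()
print(arch)                              # -> 'x86_64'
print(rpm_nevr_re.match(nevr).groups())  # -> ('wget', None, '1.19.5', '8.el8_1.1')
print(rpm_nevr_re.match('wget-1.19.5'))  # -> None, so removal finds nothing
```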
##### EXPECTED RESULTS
Expected wget to be removed
##### ACTUAL RESULTS
Wget is treated as a package that is not installed, so nothing is removed
```
TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************************
task path: /ansible_collections/test.yml:1
ok: [test]
META: ran handlers
Including module_utils file ansible/__init__.py
Including module_utils file ansible/module_utils/__init__.py
Including module_utils file ansible/module_utils/_text.py
Including module_utils file ansible/module_utils/basic.py
Including module_utils file ansible/module_utils/common/_collections_compat.py
Including module_utils file ansible/module_utils/common/__init__.py
Including module_utils file ansible/module_utils/common/_json_compat.py
Including module_utils file ansible/module_utils/common/_utils.py
Including module_utils file ansible/module_utils/common/file.py
Including module_utils file ansible/module_utils/common/parameters.py
Including module_utils file ansible/module_utils/common/collections.py
Including module_utils file ansible/module_utils/common/process.py
Including module_utils file ansible/module_utils/common/sys_info.py
Including module_utils file ansible/module_utils/common/text/converters.py
Including module_utils file ansible/module_utils/common/text/__init__.py
Including module_utils file ansible/module_utils/common/text/formatters.py
Including module_utils file ansible/module_utils/common/validation.py
Including module_utils file ansible/module_utils/common/warnings.py
Including module_utils file ansible/module_utils/compat/selectors.py
Including module_utils file ansible/module_utils/compat/__init__.py
Including module_utils file ansible/module_utils/compat/_selectors2.py
Including module_utils file ansible/module_utils/distro/__init__.py
Including module_utils file ansible/module_utils/distro/_distro.py
Including module_utils file ansible/module_utils/parsing/convert_bool.py
Including module_utils file ansible/module_utils/parsing/__init__.py
Including module_utils file ansible/module_utils/pycompat24.py
Including module_utils file ansible/module_utils/six/__init__.py
Including module_utils file ansible/module_utils/urls.py
Including module_utils file ansible/module_utils/yumdnf.py
Using module file /usr/local/lib/python3.6/site-packages/ansible/modules/dnf.py
Pipelining is enabled.
<test> ESTABLISH SSH CONNECTION FOR USER: None
<test> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=30m)
<test> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<test> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<test> SSH: PlayContext set ssh_common_args: ()
<test> SSH: PlayContext set ssh_extra_args: ()
<test> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/user/.ansible/cp/02319c625e)
<test> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=30m -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/user/.ansible/cp/02319c625e test '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-uqvwnlbzvsvksqefnqqjojbqzwgxtddn ; /usr/libexec/platform-python'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<test> (0, b'\n{"msg": "", "changed": true, "results": ["Installed: wget-1.19.5-8.el8_1.1.x86_64"], "rc": 0, "invocation": {"module_args": {"name": ["wget-1.19.5"], "state": "present", "allow_downgrade": false, "autoremove": false, "bugfix": false, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": [], "download_only": false, "enable_plugin": [], "enablerepo": [], "exclude": [], "installroot": "/", "install_repoquery": true, "install_weak_deps": true, "security": false, "skip_broken": false, "update_cache": false, "update_only": false, "validate_certs": true, "lock_timeout": 30, "allowerasing": false, "conf_file": null, "disable_excludes": null, "download_dir": null, "list": null, "releasever": null}}}\n', b'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /home/user/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 539\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
TASK [install package] ****************************************************************************************************************************************************************************************************************************************
task path: /ansible_collections/test.yml:5
changed: [test] => {
"changed": true,
"invocation": {
"module_args": {
"allow_downgrade": false,
"allowerasing": false,
"autoremove": false,
"bugfix": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"wget-1.19.5"
],
"releasever": null,
"security": false,
"skip_broken": false,
"state": "present",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "",
"rc": 0,
"results": [
"Installed: wget-1.19.5-8.el8_1.1.x86_64"
]
}
Using module file /usr/local/lib/python3.6/site-packages/ansible/modules/dnf.py
Pipelining is enabled.
<test> ESTABLISH SSH CONNECTION FOR USER: None
<test> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=30m)
<test> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<test> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<test> SSH: PlayContext set ssh_common_args: ()
<test> SSH: PlayContext set ssh_extra_args: ()
<test> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/user/.ansible/cp/02319c625e)
<test> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=30m -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/user/.ansible/cp/02319c625e test '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-siwumenjqvrisnpnndvvpgvpcvagjbyn ; /usr/libexec/platform-python'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<test> (0, b'\n{"msg": "Nothing to do", "changed": false, "results": [], "rc": 0, "invocation": {"module_args": {"name": ["wget-1.19.5"], "state": "absent", "allow_downgrade": false, "autoremove": false, "bugfix": false, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": [], "download_only": false, "enable_plugin": [], "enablerepo": [], "exclude": [], "installroot": "/", "install_repoquery": true, "install_weak_deps": true, "security": false, "skip_broken": false, "update_cache": false, "update_only": false, "validate_certs": true, "lock_timeout": 30, "allowerasing": false, "conf_file": null, "disable_excludes": null, "download_dir": null, "list": null, "releasever": null}}}\n', b'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /home/user/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 539\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
TASK [delete package] *****************************************************************************************************************************************************************************************************************************************
task path: /ansible_collections/test.yml:10
ok: [test] => {
"changed": false,
"invocation": {
"module_args": {
"allow_downgrade": false,
"allowerasing": false,
"autoremove": false,
"bugfix": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"wget-1.19.5"
],
"releasever": null,
"security": false,
"skip_broken": false,
"state": "absent",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "Nothing to do",
"rc": 0,
"results": []
}
META: ran handlers
META: ran handlers
PLAY RECAP ****************************************************************************************************************************************************************************************************************************************************
test : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/72809
|
https://github.com/ansible/ansible/pull/73033
|
77942acefcacae5d44d8a5f3b4f8c7228633ca1d
|
44ee04bd1f7d683fce246c16e752ace04d244b4c
| 2020-12-03T01:08:47Z |
python
| 2021-01-07T17:32:06Z |
test/integration/targets/dnf/tasks/dnf.yml
|
# UNINSTALL 'python2-dnf'
# The `dnf` module has the smarts to auto-install the relevant python
# bindings. To test, we will first uninstall python2-dnf (so that the tests
# on python2 will require python2-dnf)
- name: check python2-dnf with rpm
shell: rpm -q python2-dnf
register: rpm_result
ignore_errors: true
args:
warn: no
# Don't uninstall python2-dnf with the `dnf` module in case it needs to load
# some dnf python files after the package is uninstalled.
- name: uninstall python2-dnf with shell
shell: dnf -y remove python2-dnf
when: rpm_result is successful
# UNINSTALL
# With 'python2-dnf' uninstalled, the first call to 'dnf' should install
# python2-dnf.
- name: uninstall sos
dnf:
name: sos
state: removed
register: dnf_result
- name: check sos with rpm
shell: rpm -q sos
failed_when: False
register: rpm_result
- name: verify uninstallation of sos
assert:
that:
- "not dnf_result.failed | default(False)"
- "rpm_result.rc == 1"
# UNINSTALL AGAIN
- name: uninstall sos
dnf:
name: sos
state: removed
register: dnf_result
- name: verify no change on re-uninstall
assert:
that:
- "not dnf_result.changed"
# INSTALL
- name: install sos (check_mode)
dnf:
name: sos
state: present
update_cache: True
check_mode: True
register: dnf_result
- assert:
that:
- dnf_result is success
- dnf_result.results|length > 0
- "dnf_result.results[0].startswith('Installed: ')"
- name: install sos
dnf:
name: sos
state: present
update_cache: True
register: dnf_result
- name: check sos with rpm
shell: rpm -q sos
failed_when: False
register: rpm_result
- name: verify installation of sos
assert:
that:
- "not dnf_result.failed | default(False)"
- "dnf_result.changed"
- "rpm_result.rc == 0"
- name: verify dnf module outputs
assert:
that:
- "'changed' in dnf_result"
- "'results' in dnf_result"
# INSTALL AGAIN
- name: install sos again (check_mode)
dnf:
name: sos
state: present
check_mode: True
register: dnf_result
- assert:
that:
- dnf_result is not changed
- dnf_result.results|length == 0
- name: install sos again
dnf:
name: sos
state: present
register: dnf_result
- name: verify no change on second install
assert:
that:
- "not dnf_result.changed"
# Multiple packages
- name: uninstall sos and dos2unix
dnf: name=sos,dos2unix state=removed
register: dnf_result
- name: check sos with rpm
shell: rpm -q sos
failed_when: False
register: rpm_sos_result
- name: check dos2unix with rpm
shell: rpm -q dos2unix
failed_when: False
register: rpm_dos2unix_result
- name: verify packages removed
assert:
that:
- "rpm_sos_result.rc != 0"
- "rpm_dos2unix_result.rc != 0"
- name: install sos and dos2unix as comma separated
dnf: name=sos,dos2unix state=present
register: dnf_result
- name: check sos with rpm
shell: rpm -q sos
failed_when: False
register: rpm_sos_result
- name: check dos2unix with rpm
shell: rpm -q dos2unix
failed_when: False
register: rpm_dos2unix_result
- name: verify packages installed
assert:
that:
- "not dnf_result.failed | default(False)"
- "dnf_result.changed"
- "rpm_sos_result.rc == 0"
- "rpm_dos2unix_result.rc == 0"
- name: uninstall sos and dos2unix
dnf: name=sos,dos2unix state=removed
register: dnf_result
- name: install sos and dos2unix as list
dnf:
name:
- sos
- dos2unix
state: present
register: dnf_result
- name: check sos with rpm
shell: rpm -q sos
failed_when: False
register: rpm_sos_result
- name: check dos2unix with rpm
shell: rpm -q dos2unix
failed_when: False
register: rpm_dos2unix_result
- name: verify packages installed
assert:
that:
- "not dnf_result.failed | default(False)"
- "dnf_result.changed"
- "rpm_sos_result.rc == 0"
- "rpm_dos2unix_result.rc == 0"
- name: uninstall sos and dos2unix
dnf:
name: "sos,dos2unix"
state: removed
register: dnf_result
- name: install sos and dos2unix as comma separated with spaces
dnf:
name: "sos, dos2unix"
state: present
register: dnf_result
- name: check sos with rpm
shell: rpm -q sos
failed_when: False
register: rpm_sos_result
- name: check dos2unix with rpm
shell: rpm -q dos2unix
failed_when: False
register: rpm_dos2unix_result
- name: verify packages installed
assert:
that:
- "not dnf_result.failed | default(False)"
- "dnf_result.changed"
- "rpm_sos_result.rc == 0"
- "rpm_dos2unix_result.rc == 0"
- name: uninstall sos and dos2unix (check_mode)
dnf:
name:
- sos
- dos2unix
state: removed
check_mode: True
register: dnf_result
- assert:
that:
- dnf_result is success
- dnf_result.results|length == 2
- "dnf_result.results[0].startswith('Removed: ')"
- "dnf_result.results[1].startswith('Removed: ')"
- name: uninstall sos and dos2unix
dnf:
name:
- sos
- dos2unix
state: removed
register: dnf_result
- assert:
that:
- dnf_result is changed
- name: install non-existent rpm
dnf:
name: does-not-exist
register: non_existent_rpm
ignore_errors: True
- name: check non-existent rpm install failed
assert:
that:
- non_existent_rpm is failed
# Install in installroot='/'. This should be identical to default
- name: install sos in /
dnf: name=sos state=present installroot='/'
register: dnf_result
- name: check sos with rpm in /
shell: rpm -q sos --root=/
failed_when: False
register: rpm_result
- name: verify installation of sos in /
assert:
that:
- "not dnf_result.failed | default(False)"
- "dnf_result.changed"
- "rpm_result.rc == 0"
- name: verify dnf module outputs in /
assert:
that:
- "'changed' in dnf_result"
- "'results' in dnf_result"
- name: uninstall sos in /
dnf: name=sos state=removed installroot='/'
register: dnf_result
- name: uninstall sos for downloadonly test
dnf:
name: sos
state: absent
- name: Test download_only (check_mode)
dnf:
name: sos
state: latest
download_only: true
check_mode: true
register: dnf_result
- assert:
that:
- dnf_result is success
- "dnf_result.results[0].startswith('Downloaded: ')"
- name: Test download_only
dnf:
name: sos
state: latest
download_only: true
register: dnf_result
- name: verify download of sos (part 1 -- dnf "install" succeeded)
assert:
that:
- "dnf_result is success"
- "dnf_result is changed"
- name: uninstall sos (noop)
dnf:
name: sos
state: absent
register: dnf_result
- name: verify download of sos (part 2 -- nothing removed during uninstall)
assert:
that:
- "dnf_result is success"
- "not dnf_result is changed"
- name: uninstall sos for downloadonly/downloaddir test
dnf:
name: sos
state: absent
- name: Test download_only/download_dir
dnf:
name: sos
state: latest
download_only: true
download_dir: "/var/tmp/packages"
register: dnf_result
- name: verify dnf output
assert:
that:
- "dnf_result is success"
- "dnf_result is changed"
- command: "ls /var/tmp/packages"
register: ls_out
- name: Verify specified download_dir was used
assert:
that:
- "'sos' in ls_out.stdout"
# GROUP INSTALL
- name: install Custom Group group
dnf:
name: "@Custom Group"
state: present
register: dnf_result
- name: check dinginessentail with rpm
command: rpm -q dinginessentail
failed_when: False
register: dinginessentail_result
- name: verify installation of the group
assert:
that:
- not dnf_result is failed
- dnf_result is changed
- "'results' in dnf_result"
- dinginessentail_result.rc == 0
- name: install the group again
dnf:
name: "@Custom Group"
state: present
register: dnf_result
- name: verify nothing changed
assert:
that:
- not dnf_result is changed
- "'msg' in dnf_result"
- name: verify that landsidescalping is not installed
dnf:
name: landsidescalping
state: absent
- name: install the group again but also with a package that is not yet installed
dnf:
name:
- "@Custom Group"
- landsidescalping
state: present
register: dnf_result
- name: check landsidescalping with rpm
command: rpm -q landsidescalping
failed_when: False
register: landsidescalping_result
- name: verify landsidescalping is installed
assert:
that:
- dnf_result is changed
- "'results' in dnf_result"
- landsidescalping_result.rc == 0
- name: try to install the group again, with --check to check 'changed'
dnf:
name: "@Custom Group"
state: present
check_mode: yes
register: dnf_result
- name: verify nothing changed
assert:
that:
- not dnf_result is changed
- "'msg' in dnf_result"
- name: remove landsidescalping after test
dnf:
name: landsidescalping
state: absent
# cleanup until https://github.com/ansible/ansible/issues/27377 is resolved
- shell: 'dnf -y group install "Custom Group" && dnf -y group remove "Custom Group"'
register: shell_dnf_result
# GROUP UPGRADE - this will go to the same method as group install
# but through group_update - it is its invocation we're testing here
# see commit 119c9e5d6eb572c4a4800fbe8136095f9063c37b
- name: install latest Custom Group
dnf:
name: "@Custom Group"
state: latest
register: dnf_result
- name: verify installation of the group
assert:
that:
- not dnf_result is failed
- dnf_result is changed
- "'results' in dnf_result"
# cleanup until https://github.com/ansible/ansible/issues/27377 is resolved
- shell: dnf -y group install "Custom Group" && dnf -y group remove "Custom Group"
- name: try to install non existing group
dnf:
name: "@non-existing-group"
state: present
register: dnf_result
ignore_errors: True
- name: verify installation of the non existing group failed
assert:
that:
- "not dnf_result.changed"
- "dnf_result is failed"
- name: verify dnf module outputs
assert:
that:
- "'changed' in dnf_result"
- "'msg' in dnf_result"
- name: try to install non existing file
dnf:
name: /tmp/non-existing-1.0.0.fc26.noarch.rpm
state: present
register: dnf_result
ignore_errors: yes
- name: verify installation failed
assert:
that:
- "dnf_result is failed"
- "not dnf_result.changed"
- name: verify dnf module outputs
assert:
that:
- "'changed' in dnf_result"
- "'msg' in dnf_result"
- name: try to install from non existing url
dnf:
name: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/dnf/non-existing-1.0.0.fc26.noarch.rpm
state: present
register: dnf_result
ignore_errors: yes
- name: verify installation failed
assert:
that:
- "dnf_result is failed"
- "not dnf_result.changed"
- name: verify dnf module outputs
assert:
that:
- "'changed' in dnf_result"
- "'msg' in dnf_result"
# ENVIRONMENT UPGRADE
# see commit de299ef77c03a64a8f515033a79ac6b7db1bc710
- name: install Custom Environment Group
dnf:
name: "@Custom Environment Group"
state: latest
register: dnf_result
- name: check landsidescalping with rpm
command: rpm -q landsidescalping
register: landsidescalping_result
- name: verify installation of the environment
assert:
that:
- not dnf_result is failed
- dnf_result is changed
- "'results' in dnf_result"
- landsidescalping_result.rc == 0
# Fedora 28 (DNF 2) does not support this, just remove the package itself
- name: remove landsidescalping package on Fedora 28
dnf:
name: landsidescalping
state: absent
when: ansible_distribution == 'Fedora' and ansible_distribution_major_version|int <= 28
# cleanup until https://github.com/ansible/ansible/issues/27377 is resolved
- name: remove Custom Environment Group
shell: dnf -y group install "Custom Environment Group" && dnf -y group remove "Custom Environment Group"
when: not (ansible_distribution == 'Fedora' and ansible_distribution_major_version|int <= 28)
# https://github.com/ansible/ansible/issues/39704
- name: install non-existent rpm, state=latest
dnf:
name: non-existent-rpm
state: latest
ignore_errors: yes
register: dnf_result
- name: verify the result
assert:
that:
- "dnf_result is failed"
- "'non-existent-rpm' in dnf_result['failures'][0]"
- "'No package non-existent-rpm available' in dnf_result['failures'][0]"
- "'Failed to install some of the specified packages' in dnf_result['msg']"
- name: use latest to install httpd
dnf:
name: httpd
state: latest
register: dnf_result
- name: verify httpd was installed
assert:
that:
- "'changed' in dnf_result"
- name: uninstall httpd
dnf:
name: httpd
state: removed
- name: update httpd only if it exists
dnf:
name: httpd
state: latest
update_only: yes
register: dnf_result
- name: verify httpd not installed
assert:
that:
- "not dnf_result is changed"
- name: try to install incompatible arch rpm, should fail
dnf:
name: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/dnf/banner-1.3.4-3.el7.ppc64le.rpm
state: present
register: dnf_result
ignore_errors: True
- name: verify that dnf failed
assert:
that:
- "not dnf_result is changed"
- "dnf_result is failed"
# setup for testing installing an RPM from url
- set_fact:
pkg_name: fpaste
- name: cleanup
dnf:
name: "{{ pkg_name }}"
state: absent
- set_fact:
pkg_url: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/dnf/fpaste-0.3.9.1-1.fc27.noarch.rpm
# setup end
- name: download an rpm
get_url:
url: "{{ pkg_url }}"
dest: "/tmp/{{ pkg_name }}.rpm"
- name: install the downloaded rpm
dnf:
name: "/tmp/{{ pkg_name }}.rpm"
state: present
disable_gpg_check: true
register: dnf_result
- name: verify installation
assert:
that:
- "dnf_result is success"
- "dnf_result is changed"
- name: install the downloaded rpm again
dnf:
name: "/tmp/{{ pkg_name }}.rpm"
state: present
register: dnf_result
- name: verify installation
assert:
that:
- "dnf_result is success"
- "not dnf_result is changed"
- name: clean up
dnf:
name: "{{ pkg_name }}"
state: absent
- name: install from url
dnf:
name: "{{ pkg_url }}"
state: present
disable_gpg_check: true
register: dnf_result
- name: verify installation
assert:
that:
- "dnf_result is success"
- "dnf_result is changed"
- "dnf_result is not failed"
- name: verify dnf module outputs
assert:
that:
- "'changed' in dnf_result"
- "'results' in dnf_result"
- name: Create a temp RPM file which does not contain nevra information
file:
name: "/tmp/non_existent_pkg.rpm"
state: touch
- name: Try installing RPM file which does not contain nevra information
dnf:
name: "/tmp/non_existent_pkg.rpm"
state: present
register: no_nevra_info_result
ignore_errors: yes
- name: Verify RPM failed to install
assert:
that:
- "'changed' in no_nevra_info_result"
- "'msg' in no_nevra_info_result"
- name: Delete a temp RPM file
file:
name: "/tmp/non_existent_pkg.rpm"
state: absent
- name: uninstall lsof
dnf:
name: lsof
state: removed
- name: check lsof with rpm
shell: rpm -q lsof
ignore_errors: True
register: rpm_lsof_result
- name: verify lsof is uninstalled
assert:
that:
- "rpm_lsof_result is failed"
- name: create conf file that excludes lsof
copy:
content: |
[main]
exclude=lsof*
dest: '{{ output_dir }}/test-dnf.conf'
register: test_dnf_copy
- block:
# begin test case where disable_excludes is supported
- name: Try install lsof without disable_excludes
dnf: name=lsof state=latest conf_file={{ test_dnf_copy.dest }}
register: dnf_lsof_result
ignore_errors: True
- name: verify lsof did not install because it is in the exclude list
assert:
that:
- "dnf_lsof_result is failed"
- name: install lsof with disable_excludes
dnf: name=lsof state=latest disable_excludes=all conf_file={{ test_dnf_copy.dest }}
register: dnf_lsof_result_using_excludes
- name: verify lsof did install using disable_excludes=all
assert:
that:
- "dnf_lsof_result_using_excludes is success"
- "dnf_lsof_result_using_excludes is changed"
- "dnf_lsof_result_using_excludes is not failed"
always:
- name: remove exclude lsof conf file
file:
path: '{{ output_dir }}/test-dnf.conf'
state: absent
# end test case where disable_excludes is supported
- name: Test "dnf install /usr/bin/vi"
block:
- name: Clean vim-minimal
dnf:
name: vim-minimal
state: absent
- name: Install vim-minimal by specifying "/usr/bin/vi"
dnf:
name: /usr/bin/vi
state: present
- name: Get rpm output
command: rpm -q vim-minimal
register: rpm_output
- name: Check installation was successful
assert:
that:
- "'vim-minimal' in rpm_output.stdout"
when:
- ansible_distribution == 'Fedora'
- name: Remove wildcard package that isn't installed
dnf:
name: firefox*
state: absent
register: wildcard_absent
- assert:
that:
- wildcard_absent is successful
- wildcard_absent is not changed
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,809 |
Yum/DNF module does not remove package with version specified
|
##### SUMMARY
When using dnf or yum to remove packages, Ansible will not detect that the package is installed if a version is appended to the package name
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
yum/dnf
##### ANSIBLE VERSION
```
ansible 2.10.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Nov 16 2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
```
##### OS / ENVIRONMENT
CentOS Linux release 8.0.1905 (Core)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
- hosts: hosts
become: yes
tasks:
- name: install package
yum:
name: wget-1.19.5
state: present
- name: delete package
dnf:
name: wget-1.19.5
state: absent
```
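A minimal sketch of the symptom (illustrative only, not the module's actual code; all names below are hypothetical): if removal matching is done against bare installed package names, a name-version spec such as `wget-1.19.5` never matches, so the module reports "Nothing to do".
```python
# Hypothetical illustration of the symptom -- not dnf.py's real logic.
installed_names = {"wget", "curl"}  # bare names of installed packages

def would_remove(spec: str) -> bool:
    # Naive matching compares the full user-supplied spec to bare names,
    # so a "name-version" spec can never match.
    return spec in installed_names

print(would_remove("wget"))         # True  -> package would be removed
print(would_remove("wget-1.19.5"))  # False -> "Nothing to do"
```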
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Expected wget to be removed
##### ACTUAL RESULTS
Wget is reported as a package that is not installed, so nothing is removed
```
TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************************************
task path: /ansible_collections/test.yml:1
ok: [test]
META: ran handlers
Including module_utils file ansible/__init__.py
Including module_utils file ansible/module_utils/__init__.py
Including module_utils file ansible/module_utils/_text.py
Including module_utils file ansible/module_utils/basic.py
Including module_utils file ansible/module_utils/common/_collections_compat.py
Including module_utils file ansible/module_utils/common/__init__.py
Including module_utils file ansible/module_utils/common/_json_compat.py
Including module_utils file ansible/module_utils/common/_utils.py
Including module_utils file ansible/module_utils/common/file.py
Including module_utils file ansible/module_utils/common/parameters.py
Including module_utils file ansible/module_utils/common/collections.py
Including module_utils file ansible/module_utils/common/process.py
Including module_utils file ansible/module_utils/common/sys_info.py
Including module_utils file ansible/module_utils/common/text/converters.py
Including module_utils file ansible/module_utils/common/text/__init__.py
Including module_utils file ansible/module_utils/common/text/formatters.py
Including module_utils file ansible/module_utils/common/validation.py
Including module_utils file ansible/module_utils/common/warnings.py
Including module_utils file ansible/module_utils/compat/selectors.py
Including module_utils file ansible/module_utils/compat/__init__.py
Including module_utils file ansible/module_utils/compat/_selectors2.py
Including module_utils file ansible/module_utils/distro/__init__.py
Including module_utils file ansible/module_utils/distro/_distro.py
Including module_utils file ansible/module_utils/parsing/convert_bool.py
Including module_utils file ansible/module_utils/parsing/__init__.py
Including module_utils file ansible/module_utils/pycompat24.py
Including module_utils file ansible/module_utils/six/__init__.py
Including module_utils file ansible/module_utils/urls.py
Including module_utils file ansible/module_utils/yumdnf.py
Using module file /usr/local/lib/python3.6/site-packages/ansible/modules/dnf.py
Pipelining is enabled.
<test> ESTABLISH SSH CONNECTION FOR USER: None
<test> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=30m)
<test> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<test> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<test> SSH: PlayContext set ssh_common_args: ()
<test> SSH: PlayContext set ssh_extra_args: ()
<test> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/user/.ansible/cp/02319c625e)
<test> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=30m -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/user/.ansible/cp/02319c625e test '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-uqvwnlbzvsvksqefnqqjojbqzwgxtddn ; /usr/libexec/platform-python'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<test> (0, b'\n{"msg": "", "changed": true, "results": ["Installed: wget-1.19.5-8.el8_1.1.x86_64"], "rc": 0, "invocation": {"module_args": {"name": ["wget-1.19.5"], "state": "present", "allow_downgrade": false, "autoremove": false, "bugfix": false, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": [], "download_only": false, "enable_plugin": [], "enablerepo": [], "exclude": [], "installroot": "/", "install_repoquery": true, "install_weak_deps": true, "security": false, "skip_broken": false, "update_cache": false, "update_only": false, "validate_certs": true, "lock_timeout": 30, "allowerasing": false, "conf_file": null, "disable_excludes": null, "download_dir": null, "list": null, "releasever": null}}}\n', b'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /home/user/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 539\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
TASK [install package] ****************************************************************************************************************************************************************************************************************************************
task path: /ansible_collections/test.yml:5
changed: [test] => {
"changed": true,
"invocation": {
"module_args": {
"allow_downgrade": false,
"allowerasing": false,
"autoremove": false,
"bugfix": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"wget-1.19.5"
],
"releasever": null,
"security": false,
"skip_broken": false,
"state": "present",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "",
"rc": 0,
"results": [
"Installed: wget-1.19.5-8.el8_1.1.x86_64"
]
}
Using module file /usr/local/lib/python3.6/site-packages/ansible/modules/dnf.py
Pipelining is enabled.
<test> ESTABLISH SSH CONNECTION FOR USER: None
<test> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=30m)
<test> SSH: ansible_password/ansible_ssh_password not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<test> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<test> SSH: PlayContext set ssh_common_args: ()
<test> SSH: PlayContext set ssh_extra_args: ()
<test> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/user/.ansible/cp/02319c625e)
<test> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=30m -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/home/user/.ansible/cp/02319c625e test '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-siwumenjqvrisnpnndvvpgvpcvagjbyn ; /usr/libexec/platform-python'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<test> (0, b'\n{"msg": "Nothing to do", "changed": false, "results": [], "rc": 0, "invocation": {"module_args": {"name": ["wget-1.19.5"], "state": "absent", "allow_downgrade": false, "autoremove": false, "bugfix": false, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": [], "download_only": false, "enable_plugin": [], "enablerepo": [], "exclude": [], "installroot": "/", "install_repoquery": true, "install_weak_deps": true, "security": false, "skip_broken": false, "update_cache": false, "update_only": false, "validate_certs": true, "lock_timeout": 30, "allowerasing": false, "conf_file": null, "disable_excludes": null, "download_dir": null, "list": null, "releasever": null}}}\n', b'OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017\r\ndebug1: Reading configuration data /home/user/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 58: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 539\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
TASK [delete package] *****************************************************************************************************************************************************************************************************************************************
task path: /ansible_collections/test.yml:10
ok: [test] => {
"changed": false,
"invocation": {
"module_args": {
"allow_downgrade": false,
"allowerasing": false,
"autoremove": false,
"bugfix": false,
"conf_file": null,
"disable_excludes": null,
"disable_gpg_check": false,
"disable_plugin": [],
"disablerepo": [],
"download_dir": null,
"download_only": false,
"enable_plugin": [],
"enablerepo": [],
"exclude": [],
"install_repoquery": true,
"install_weak_deps": true,
"installroot": "/",
"list": null,
"lock_timeout": 30,
"name": [
"wget-1.19.5"
],
"releasever": null,
"security": false,
"skip_broken": false,
"state": "absent",
"update_cache": false,
"update_only": false,
"validate_certs": true
}
},
"msg": "Nothing to do",
"rc": 0,
"results": []
}
META: ran handlers
META: ran handlers
PLAY RECAP ****************************************************************************************************************************************************************************************************************************************************
test : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/72809
|
https://github.com/ansible/ansible/pull/73033
|
77942acefcacae5d44d8a5f3b4f8c7228633ca1d
|
44ee04bd1f7d683fce246c16e752ace04d244b4c
| 2020-12-03T01:08:47Z |
python
| 2021-01-07T17:32:06Z |
test/integration/targets/dnf/tasks/test_sos_removal.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,104 |
Ansible does not respect NOCOLOR environment variable
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I know you can disable colors in Ansible by setting ```nocolor = true``` in ```ansible.cfg```, but this seems redundant for those that already have colors disabled in their environment (e.g. a colorblind colleague of mine).
Would it be possible to make Ansible respect the ```NO_COLOR``` environment variable as well, if it is set?
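A minimal sketch of the requested behavior, assuming the informal `NO_COLOR` convention (any non-empty value disables color); the function name here is illustrative, not Ansible's actual implementation:
```python
import os
import sys

def colors_enabled() -> bool:
    # NO_COLOR set to any non-empty value disables color entirely
    if os.environ.get('NO_COLOR'):
        return False
    # otherwise fall back to the usual "is stdout a TTY" check
    return sys.stdout.isatty()

if __name__ == '__main__':
    print('color' if colors_enabled() else 'no color')
```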
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.4
config file = /opt/ansible/projects/home/ansible.cfg
configured module search path = ['/opt/ansible/projects/home/library/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
CACHE_PLUGIN(/opt/ansible/projects/home/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/opt/ansible/projects/home/ansible.cfg) = cache
CACHE_PLUGIN_TIMEOUT(/opt/ansible/projects/home/ansible.cfg) = 300
COLLECTIONS_PATHS(/opt/ansible/projects/home/ansible.cfg) = ['/opt/ansible/projects/home/collections']
DEFAULT_ACTION_PLUGIN_PATH(/opt/ansible/projects/home/ansible.cfg) = ['/opt/ansible/projects/home/library/action']
DEFAULT_CALLBACK_PLUGIN_PATH(/opt/ansible/projects/home/ansible.cfg) = ['/opt/ansible/projects/home/library/callback'
DEFAULT_FORKS(/opt/ansible/projects/home/ansible.cfg) = 5
DEFAULT_HOST_LIST(/opt/ansible/projects/home/ansible.cfg) = ['/opt/ansible/projects/home/inventory']
DEFAULT_MANAGED_STR(/opt/ansible/projects/home/ansible.cfg) = This file is managed with Ansible, your changes will be
DEFAULT_MODULE_PATH(/opt/ansible/projects/home/ansible.cfg) = ['/opt/ansible/projects/home/library/modules']
DEFAULT_REMOTE_USER(/opt/ansible/projects/home/ansible.cfg) = root
DEFAULT_ROLES_PATH(/opt/ansible/projects/home/ansible.cfg) = ['/opt/ansible/projects/home/roles']
DEFAULT_STDOUT_CALLBACK(/opt/ansible/projects/home/ansible.cfg) = community.general.yaml
DEFAULT_VAULT_PASSWORD_FILE(/opt/ansible/projects/home/ansible.cfg) = /opt/ansible/projects/home/.ansible-vault
INTERPRETER_PYTHON(/opt/ansible/projects/home/ansible.cfg) = auto_silent
INVENTORY_ENABLED(/opt/ansible/projects/home/ansible.cfg) = ['community.general.proxmox', 'constructed', 'yaml', 'ini
RETRY_FILES_ENABLED(/opt/ansible/projects/home/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Debian 10.7
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
export NO_COLOR=true
ansible-playbook playbook.yml
# Colorless output
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
```
export NO_COLOR=true
ansible-playbook playbook.yml
# Colored output
```
|
https://github.com/ansible/ansible/issues/73104
|
https://github.com/ansible/ansible/pull/73105
|
995e76c6e37517b26390e2834dab4fc860c79783
|
b1ee1a285a6c888fa1a334b60fc4acc2ef86027b
| 2021-01-04T08:49:14Z |
python
| 2021-01-07T20:00:31Z |
changelogs/fragments/added_existing_nocolor.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,104 |
Ansible does not respect NOCOLOR environment variable
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I know you can disable colors in Ansible by setting ```nocolor = true``` in ```ansible.cfg```, but this seems redundant for those that already have colors disabled in their environment (e.g. a colorblind colleague of mine).
Would it be possible to make Ansible respect the ```NO_COLOR``` environment variable as well, if it is set?
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.4
config file = /opt/ansible/projects/home/ansible.cfg
configured module search path = ['/opt/ansible/projects/home/library/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
CACHE_PLUGIN(/opt/ansible/projects/home/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/opt/ansible/projects/home/ansible.cfg) = cache
CACHE_PLUGIN_TIMEOUT(/opt/ansible/projects/home/ansible.cfg) = 300
COLLECTIONS_PATHS(/opt/ansible/projects/home/ansible.cfg) = ['/opt/ansible/projects/home/collections']
DEFAULT_ACTION_PLUGIN_PATH(/opt/ansible/projects/home/ansible.cfg) = ['/opt/ansible/projects/home/library/action']
DEFAULT_CALLBACK_PLUGIN_PATH(/opt/ansible/projects/home/ansible.cfg) = ['/opt/ansible/projects/home/library/callback'
DEFAULT_FORKS(/opt/ansible/projects/home/ansible.cfg) = 5
DEFAULT_HOST_LIST(/opt/ansible/projects/home/ansible.cfg) = ['/opt/ansible/projects/home/inventory']
DEFAULT_MANAGED_STR(/opt/ansible/projects/home/ansible.cfg) = This file is managed with Ansible, your changes will be
DEFAULT_MODULE_PATH(/opt/ansible/projects/home/ansible.cfg) = ['/opt/ansible/projects/home/library/modules']
DEFAULT_REMOTE_USER(/opt/ansible/projects/home/ansible.cfg) = root
DEFAULT_ROLES_PATH(/opt/ansible/projects/home/ansible.cfg) = ['/opt/ansible/projects/home/roles']
DEFAULT_STDOUT_CALLBACK(/opt/ansible/projects/home/ansible.cfg) = community.general.yaml
DEFAULT_VAULT_PASSWORD_FILE(/opt/ansible/projects/home/ansible.cfg) = /opt/ansible/projects/home/.ansible-vault
INTERPRETER_PYTHON(/opt/ansible/projects/home/ansible.cfg) = auto_silent
INVENTORY_ENABLED(/opt/ansible/projects/home/ansible.cfg) = ['community.general.proxmox', 'constructed', 'yaml', 'ini
RETRY_FILES_ENABLED(/opt/ansible/projects/home/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Debian 10.7
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
export NO_COLOR=true
ansible-playbook playbook.yml
# Colorless output
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
```
export NO_COLOR=true
ansible-playbook playbook.yml
# Colored output
```
|
https://github.com/ansible/ansible/issues/73104
|
https://github.com/ansible/ansible/pull/73105
|
995e76c6e37517b26390e2834dab4fc860c79783
|
b1ee1a285a6c888fa1a334b60fc4acc2ef86027b
| 2021-01-04T08:49:14Z |
python
| 2021-01-07T20:00:31Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ALLOW_WORLD_READABLE_TMPFILES:
name: Allow world-readable temporary files
deprecated:
why: moved to a per plugin approach that is more flexible
version: "2.14"
alternatives: mostly the same config will work, but now controlled from the plugin itself and not using the general constant.
default: False
description:
- This makes the temporary files created on the machine world-readable and will issue a warning instead of failing the task.
- It is useful when becoming an unprivileged user.
env: []
ini:
- {key: allow_world_readable_tmpfiles, section: defaults}
type: boolean
yaml: {key: defaults.allow_world_readable_tmpfiles}
version_added: "2.1"
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
description: Acceptance list of cowsay templates that are 'safe' to use, set to an empty list if you want to enable all installed templates.
env:
- name: ANSIBLE_COW_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_COW_ACCEPTLIST'
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cow_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'cowsay_enabled_stencils'
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This option forces color mode even when running without a TTY, or when the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env: [{name: ANSIBLE_NOCOLOR}]
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- This can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This option is disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
env:
- name: ANSIBLE_PIPELINING
- name: ANSIBLE_SSH_PIPELINING
ini:
- section: connection
key: pipelining
- section: ssh_connection
key: pipelining
type: boolean
yaml: {key: plugins.connection.pipelining}
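# Illustrative usage of the entry above (a usage note, not part of the schema):
# given the 'env' and 'ini' keys, pipelining can be enabled either via
#   export ANSIBLE_PIPELINING=True
# or in ansible.cfg:
#   [ssh_connection]
#   pipelining = True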
ANSIBLE_SSH_ARGS:
# TODO: move to ssh plugin
default: -C -o ControlMaster=auto -o ControlPersist=60s
description:
- If set, this will override the Ansible default ssh arguments.
- In particular, users may wish to raise the ControlPersist time to encourage performance. A value of 30 minutes may be appropriate.
- Be aware that if `-o ControlPath` is set in ssh_args, the control path setting is not used.
env: [{name: ANSIBLE_SSH_ARGS}]
ini:
- {key: ssh_args, section: ssh_connection}
yaml: {key: ssh_connection.ssh_args}
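# Example (illustrative): raising ControlPersist as the description above
# suggests, via ansible.cfg:
#   [ssh_connection]
#   ssh_args = -C -o ControlMaster=auto -o ControlPersist=30m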
ANSIBLE_SSH_CONTROL_PATH:
# TODO: move to ssh plugin
default: null
description:
- This is the location to save ssh's ControlPath sockets, it uses ssh's variable substitution.
- Since 2.3, if null, ansible will generate a unique hash. Use `%(directory)s` to indicate where to use the control dir path setting.
- Before 2.3 it defaulted to `control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r`.
- Be aware that this setting is ignored if `-o ControlPath` is set in ssh args.
env: [{name: ANSIBLE_SSH_CONTROL_PATH}]
ini:
- {key: control_path, section: ssh_connection}
yaml: {key: ssh_connection.control_path}
ANSIBLE_SSH_CONTROL_PATH_DIR:
# TODO: move to ssh plugin
default: ~/.ansible/cp
description:
- This sets the directory to use for ssh control path if the control path setting is null.
- Also, provides the `%(directory)s` variable for the control path setting.
env: [{name: ANSIBLE_SSH_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: ssh_connection}
yaml: {key: ssh_connection.control_path_dir}
ANSIBLE_SSH_EXECUTABLE:
# TODO: move to ssh plugin, note that ssh_utils refs this and needs to be updated if removed
default: ssh
description:
- This defines the location of the ssh binary. It defaults to `ssh` which will use the first ssh binary available in $PATH.
- This option is usually not required, it might be useful when access to system ssh is restricted,
or when using ssh wrappers to connect to remote hosts.
env: [{name: ANSIBLE_SSH_EXECUTABLE}]
ini:
- {key: ssh_executable, section: ssh_connection}
yaml: {key: ssh_connection.ssh_executable}
version_added: "2.2"
ANSIBLE_SSH_RETRIES:
# TODO: move to ssh plugin
default: 0
description: Number of attempts to establish a connection before we give up and report the host as 'UNREACHABLE'
env: [{name: ANSIBLE_SSH_RETRIES}]
ini:
- {key: retries, section: ssh_connection}
type: integer
yaml: {key: ssh_connection.retries}
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
description: Sets the default value for the any_errors_fatal keyword; if True, task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
description: This setting controls if become is skipped when remote user and become user are the same, i.e. root sudo to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
description: Chooses which cache plugin to use; the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
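# Illustrative ansible.cfg combining the three fact-cache entries above
# (plugin and path values are examples only):
#   [defaults]
#   fact_caching = jsonfile
#   fact_caching_connection = /tmp/ansible_facts
#   fact_caching_timeout = 86400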
COLLECTIONS_SCAN_SYS_PATH:
name: enable/disable scanning sys.path for installed collections
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: ordered list of root paths for loading installed Ansible collections content
description: >
Colon separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``~/.ansible/collections``,
and you want to add ``my.collection`` to that directory, it must be saved as
``~/.ansible/collections/ansible_collections/my/collection``.
default: ~/.ansible/collections:/usr/share/ansible/collections
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases.
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
- key: collections_path
section: defaults
version_added: '2.10'
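# Illustrative layout for the description above: a collection 'my.collection'
# installed under the default user path would live at
#   ~/.ansible/collections/ansible_collections/my/collection/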
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
- When a collection is loaded that does not support the running Ansible version (via the collection metadata key
`requires_ansible`), the default behavior is to issue a warning and continue anyway. Setting this value to `ignore`
skips the warning entirely, while setting it to `error` will immediately halt Ansible execution.
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: [error, warning, ignore]
default: warning
COLOR_CHANGED:
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
yaml: {key: display.colors.changed}
COLOR_CONSOLE_PROMPT:
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
yaml: {key: display.colors.debug}
COLOR_DEPRECATE:
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
yaml: {key: display.colors.deprecate}
COLOR_DIFF_ADD:
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
name: Color for highlighting
default: white
description: Defines the color to use for highlighting
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
name: Color for verbose messages
default: blue
description: Defines the color to use when emitting verbose messages, i.e. those that show with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONDITIONAL_BARE_VARS:
name: Allow bare variable evaluation in conditionals
default: False
type: boolean
description:
- With this setting on (True), in conditional evaluation 'var' is treated differently from 'var.subkey': the first is evaluated
directly while the second goes through the Jinja2 parser. But 'false' strings in 'var' get evaluated as booleans.
- With this setting off they both evaluate the same but in cases in which 'var' was 'false' (a string) it won't get evaluated as a boolean anymore.
- Currently this setting defaults to 'True' but will soon change to 'False' and the setting itself will be removed in the future.
- Expect that this setting eventually will be deprecated after 2.12
env: [{name: ANSIBLE_CONDITIONAL_BARE_VARS}]
ini:
- {key: conditional_bare_variables, section: defaults}
version_added: "2.8"
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports to.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
- Only files that match the path glob will have its coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
- By default Ansible will show warnings received from a task action (module or action plugin)
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
COMMAND_WARNINGS:
name: Command module warnings
default: False
description:
- Ansible can issue a warning when the shell or command module is used and the command appears to be similar to an existing Ansible module.
- These warnings can be silenced by adjusting this setting to False. You can also control this at the task level with the module option ``warn``.
- As of version 2.11, this is disabled by default.
env: [{name: ANSIBLE_COMMAND_WARNINGS}]
ini:
- {key: command_warnings, section: defaults}
type: boolean
version_added: "1.8"
deprecated:
why: the command warnings feature is being removed
version: "2.14"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: ~/.ansible/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments
description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: ~/.ansible/plugins/action:/usr/share/ansible/plugins/action
description: Colon separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
as this could represent a security risk. This option is provided to allow for backwards-compatibility,
however, users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data
which may be run through the templating engine late.
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'Executable to use for privilege escalation; otherwise Ansible will depend on PATH'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ''
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: ~/.ansible/plugins/become:/usr/share/ansible/plugins/become
description: Colon separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
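# Illustrative only (a comment sketch, not part of the config schema): the become options above
# could be combined in an ansible.cfg using the ini sections/keys they define; the values are examples.
#   [privilege_escalation]
#   become = True
#   become_method = sudo
#   become_user = root
#   become_ask_pass = False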
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: ~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache
description: Colon separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
CALLABLE_ACCEPT_LIST:
name: Template 'callable' accept list
default: []
description: Whitelist of callable methods to be made available to template evaluation
env:
- name: ANSIBLE_CALLABLE_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLABLE_ENABLED'
- name: ANSIBLE_CALLABLE_ENABLED
version_added: '2.11'
ini:
- key: callable_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callable_enabled'
- key: callable_enabled
section: defaults
version_added: '2.11'
type: list
CONTROLLER_PYTHON_WARNING:
name: Running Older than Python 3.8 Warning
default: True
description: Toggle to control showing warnings related to running a Python version
older than Python 3.8 on the controller
env: [{name: ANSIBLE_CONTROLLER_PYTHON_WARNING}]
ini:
- {key: controller_python_warning, section: defaults}
type: boolean
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: ~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback
description: Colon separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACK_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLBACKS_ENABLED'
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callback_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callback_enabled'
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
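# Illustrative only (a comment sketch, not part of the config schema): enabling callbacks via the
# new-style key defined above; 'timer' and 'profile_tasks' are assumed example callback names.
#   [defaults]
#   callbacks_enabled = timer, profile_tasks
# or: export ANSIBLE_CALLBACKS_ENABLED=timer,profile_tasks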
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: ~/.ansible/plugins/cliconf:/usr/share/ansible/plugins/cliconf
description: Colon separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: ~/.ansible/plugins/connection:/usr/share/ansible/plugins/connection
description: Colon separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
default: ~
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied M(ansible.builtin.setup) task when using fact gathering."
- "If not set, it will fallback to the default from the M(ansible.builtin.setup) module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the M(ansible.builtin.setup) module."
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
yaml: {key: facts.gathering.fact_path}
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: ~/.ansible/plugins/filter:/usr/share/ansible/plugins/filter
description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "When 'implicit' (the default), the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
- "When 'explicit' the inverse is true, facts will not be gathered unless directly requested in the play."
- "The 'smart' value means each new host that has no facts discovered will be scanned,
but if the same host is addressed in multiple plays it will not be contacted again in the playbook run."
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices: ['smart', 'explicit', 'implicit']
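# Illustrative only (a comment sketch, not part of the config schema): choosing a fact gathering
# policy with the key/variable defined above.
#   [defaults]
#   gathering = explicit
# or: export ANSIBLE_GATHERING=smart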
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
default: ['all']
description:
- Set the `gather_subset` option for the M(ansible.builtin.setup) task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined M(ansible.builtin.setup) tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
default: 10
description:
- Set the timeout in seconds for the implicit fact gathering.
- "It does **not** apply to user defined M(ansible.builtin.setup) tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
yaml: {key: defaults.gather_timeout}
DEFAULT_HANDLER_INCLUDES_STATIC:
name: Make handler M(ansible.builtin.include) static
default: False
description:
- "Since 2.0 M(ansible.builtin.include) can be 'dynamic', this setting (if True) forces that if the include appears in a ``handlers`` section to be 'static'."
env: [{name: ANSIBLE_HANDLER_INCLUDES_STATIC}]
ini:
- {key: handler_includes_static, section: defaults}
type: boolean
deprecated:
why: include itself is deprecated and this setting will not matter in the future
version: "2.12"
alternatives: none, as it is already built into the decision between include_tasks and import_tasks
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices: ["replace", "merge"]
description:
- This setting controls how variables merge in Ansible.
By default Ansible will override variables in specific precedence orders, as described in Variables.
When a variable of higher precedence wins, it will replace the other value.
- "Some users prefer that variables that are hashes (aka 'dictionaries' in Python terms) are merged.
This setting is called 'merge'. This is not the default behavior and it does not affect variables whose values are scalars
(integers, strings) or arrays. We generally recommend not using this setting unless you think you have an absolute need for it,
and playbooks in the official examples repos do not use this setting"
- In version 2.0 a ``combine`` filter was added to allow doing this for a particular variable (described in Filters).
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
deprecated:
why: this feature is fragile and not portable, leading to continual confusion and misuse
version: "2.13"
alternatives: the ``combine`` filter explicitly
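# Illustrative only (a comment sketch, not part of the config schema): the per-variable alternative
# to hash_behaviour=merge is the ``combine`` filter named above; the variable names are hypothetical.
#   - name: Merge two dictionaries explicitly instead of relying on hash_behaviour
#     ansible.builtin.set_fact:
#       merged_conf: "{{ base_conf | combine(override_conf) }}"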
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
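# Illustrative only (a comment sketch, not part of the config schema): multiple inventory sources
# in the comma separated form described above; the paths are hypothetical.
#   [defaults]
#   inventory = /etc/ansible/hosts,~/inventories/staging.yml
# or: export ANSIBLE_INVENTORY=~/inventories/staging.yml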
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: ~/.ansible/plugins/httpapi:/usr/share/ansible/plugins/httpapi
description: Colon separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios,
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: ~/.ansible/plugins/inventory:/usr/share/ansible/plugins/inventory
description: Colon separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations. This requires Jinja2 >= 2.10.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
- Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux."
env:
- name: LIBVIRT_LXC_NOSECLABEL
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_LIBVIRT_LXC_NOSECLABEL`` environment variable
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: ~/.ansible/tmp
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
description: File to which Ansible will log on the controller. When empty logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon separated paths in which Ansible will search for Lookup Plugins.
default: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for M(ansible.builtin.template) and M(ansible.windows.win_template) modules. This is only relevant for those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ''
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon separated paths in which Ansible will search for Modules.
default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: ~/.ansible/plugins/module_utils:/usr/share/ansible/plugins/module_utils
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: ~/.ansible/plugins/netconf:/usr/share/ansible/plugins/netconf
description: Colon separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
- Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will prevent
newer-style PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: none
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
- Option for connections using a certificate or key file to authenticate, rather than an agent or passwords,
you can set the default value here to avoid re-specifying --private-key with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections, when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
default:
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
description: Colon separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SCP_IF_SSH:
# TODO: move to ssh plugin
default: smart
description:
- "Preferred method to use when transferring files over ssh."
- When set to smart, Ansible will try them until one succeeds or they all fail.
- If set to True, it will force 'scp', if False it will use 'sftp'.
env: [{name: ANSIBLE_SCP_IF_SSH}]
ini:
- {key: scp_if_ssh, section: ssh_connection}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
this setting makes Ansible 'tolerate' those in the list without causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_SFTP_BATCH_MODE:
# TODO: move to ssh plugin
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_SFTP_BATCH_MODE}]
ini:
- {key: sftp_batch_mode, section: ssh_connection}
type: boolean
yaml: {key: ssh_connection.sftp_batch_mode}
DEFAULT_SSH_TRANSFER_METHOD:
# TODO: move to ssh plugin
default:
description: 'unused?'
# - "Preferred method to use when transferring files over ssh"
# - Setting to smart will try them until one succeeds or they all fail
#choices: ['sftp', 'scp', 'dd', 'smart']
env: [{name: ANSIBLE_SSH_TRANSFER_METHOD}]
ini:
- {key: transfer_method, section: ssh_connection}
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output, you can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
- Whether or not to enable the task debugger, this previously was done as a strategy plugin.
- Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
a task is failed or unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon separated paths in which Ansible will search for Strategy Plugins.
default: ~/.ansible/plugins/strategy:/usr/share/ansible/plugins/strategy
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TASK_INCLUDES_STATIC:
name: Task include static
default: False
description:
- The `include` tasks can be static or dynamic; this toggles the default expected behaviour if autodetection fails and it is not explicitly set in the task.
env: [{name: ANSIBLE_TASK_INCLUDES_STATIC}]
ini:
- {key: task_includes_static, section: defaults}
type: boolean
version_added: "2.1"
deprecated:
why: include itself is deprecated and this setting will not matter in the future
version: "2.12"
alternatives: None, as it is already built into the decision between include_tasks and import_tasks
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: ~/.ansible/plugins/terminal:/usr/share/ansible/plugins/terminal
description: Colon separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
default: ~/.ansible/plugins/test:/usr/share/ansible/plugins/test
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
# note that ssh_utils refs this and needs to be updated if removed
name: Connection plugin
default: smart
description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions"
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
- When True, this causes ansible templating to fail steps that reference variable names that are likely typoed.
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: ~/.ansible/plugins/vars:/usr/share/ansible/plugins/vars
description: Colon separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
description: 'The label to use for the default vault id label in cases where a vault id label is not provided'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
default:
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
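# Illustrative only (a comment sketch, not part of the config schema): a default vault-id list,
# equivalent to multiple --vault-id args as described above; labels and paths are hypothetical.
#   [defaults]
#   vault_identity_list = dev@~/.vault_pass_dev, prod@prompt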
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description: 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: How many lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
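# Illustrative only (a comment sketch, not part of the config schema): always showing diffs with
# extra context, using the [diff] section keys defined above.
#   [diff]
#   always = True
#   context = 5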
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values.
See How do I keep secret data in my playbook? for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
env:
- name: DISPLAY_SKIPPED_HOSTS
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_DISPLAY_SKIPPED_HOSTS`` environment variable
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
- These warnings can be silenced by setting this to 'ignore'; setting it to 'error' makes duplicate keys a fatal error instead.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: ['warn', 'error', 'ignore']
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
env: [{name: ANSIBLE_CONNECTION_FACTS_MODULES}]
ini:
- {key: connection_facts_modules, section: defaults}
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description: "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
default: False
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_ROLE_SKELETON:
name: Galaxy role or collection skeleton directory
default:
description: Role or collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
- The order of servers in this list is used as the order in which a collection is resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
GALAXY_TOKEN_PATH:
default: ~/.ansible/galaxy_token
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
- Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
outputting the stdout to a file.
- This config option controls whether the display wheel is shown or not.
- The default is to show the display wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: ~/.ansible/galaxy_cache
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
HOST_KEY_CHECKING:
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
description: This setting changes the behaviour of mismatched host patterns; it allows you to force a fatal error, a warning, or to just ignore it
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices: ['warning', 'error', 'ignore']
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto_legacy
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto``, ``auto_silent``, and ``auto_legacy`` (the default). All discovery modes
employ a lookup table to use the included system Python (on distributions known to include one), falling back to a
fixed ordered list of well-known Python interpreter locations if a platform-specific default is not available. The
fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters installed
later may change which one is used). This warning behavior can be disabled by setting ``auto_silent``. The default
value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility with older Ansible releases
that always defaulted to ``/usr/bin/python``, will use that interpreter if present (and issue a warning that the
default behavior will change to that of ``auto`` in a future Ansible release).
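# Illustrative only (a comment sketch, not part of the config schema): pinning interpreter discovery
# globally, or per host via the variable defined above; the interpreter path is hypothetical.
#   [defaults]
#   interpreter_python = auto_silent
# or, in inventory/group_vars: ansible_python_interpreter: /usr/bin/python3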
INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
centos: &rhelish
'6': /usr/bin/python
'8': /usr/libexec/platform-python
debian:
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
redhat: *rhelish
rhel: *rhelish
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- /usr/bin/python
- python3.7
- python3.6
- python3.5
- python2.7
- python2.6
- /usr/libexec/platform-python
- /usr/bin/python3
- python
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
- If 'never' it will allow for the group name but warn about the issue.
- When 'ignore', it does the same as 'never', without issuing a warning.
- When 'always' it will replace any invalid characters with '_' (underscore) and warn the user.
- When 'silently', it does the same as 'always', without issuing a warning.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices: ['always', 'never', 'ignore', 'silently']
version_added: '2.8'
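# Illustrative only (a comment sketch, not part of the config schema): silent group-name sanitising
# via the ini key defined above (note the ini key is force_valid_group_names, not the setting name).
#   [defaults]
#   force_valid_group_names = silently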
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparseable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description: Toggle to turn on inventory caching
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description: The plugin for caching inventory. If INVENTORY_CACHE_PLUGIN is not provided CACHE_PLUGIN can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description: The inventory cache connection. If INVENTORY_CACHE_PLUGIN_CONNECTION is not provided CACHE_PLUGIN_CONNECTION can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description: The table prefix for the cache plugin. If INVENTORY_CACHE_PLUGIN_PREFIX is not provided CACHE_PLUGIN_PREFIX can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_facts
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description: Expiration timeout for the inventory cache plugin data. If INVENTORY_CACHE_TIMEOUT is not provided CACHE_TIMEOUT can be used instead.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
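# Illustrative only (a comment sketch, not part of the config schema): wiring the inventory cache
# settings above together; 'jsonfile' is an assumed example cache plugin and the path is hypothetical.
#   [inventory]
#   cache = True
#   cache_plugin = jsonfile
#   cache_connection = /tmp/ansible_inventory_cache
#   cache_timeout = 7200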
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins, it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or if it is optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse, otherwise this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
description: 'TODO: write it'
env:
- name: NETWORK_GROUP_MODULES
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_NETWORK_GROUP_MODULES`` environment variable
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
- Facts are available inside the `ansible_facts` variable, this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load
- This is for rejecting script and binary module fallback extensions
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
OLD_PLUGIN_CACHE_CLEARING:
description: Previously Ansible would only clear some of the plugin loading caches when loading new roles; this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows you to return to that behaviour.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PARAMIKO_HOST_KEY_AUTO_ADD:
# TODO: move to plugin
default: False
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: ~/.ansible/pc
description: Path to socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
description: This controls the retry timeout for persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
description: This controls the amount of time to wait for a response from the remote device before timing out the persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars
- The ``top`` option follows the traditional behaviour of using the top playbook in the chain to find the root directory.
- The ``bottom`` option follows the 2.4.0 behaviour of using the current playbook to find the root directory.
- The ``all`` option examines from the first parent to the current playbook.
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices: [ top, bottom, all ]
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: default
deprecated:
why: specifying "plugin_filters_cfg" under the "default" section is deprecated
version: "2.12"
alternatives: the "defaults" section instead
- key: plugin_filters_cfg
section: defaults
type: path
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
- This setting can be used to optimize vars_plugin usage depending on user's inventory size and play selection.
- Setting to C(demand) will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
- Setting to C(start) will run vars_plugins relative to inventory sources after importing that inventory source.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices: ['demand', 'start']
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables"
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts)
- These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
description: Default list of tags to run in your plays; Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
description: Default list of tags to skip in your plays; has precedence over Run Tags
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
- After this limit is reached any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Whitelist for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
  description: Allows changing the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
NETCONF_SSH_CONFIG:
  description: This variable is used to enable a bastion/jump host with the netconf connection. If set to True, the bastion/jump
    host SSH settings should be present in the ~/.ssh/config file; alternatively, it can be set
    to a custom SSH configuration file path from which the bastion/jump host settings are read.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,429 |
'use' arg breaks hostname module for RHEL 7 target
|
##### SUMMARY
Ironically, specifying `use: redhat` for the Ansible `hostname` module makes the task fail against a Red Hat managed node, while dropping the argument makes it succeed.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
- `hostname` module, `RedHatStrategy`
- https://github.com/ansible/ansible/blob/stable-2.9/lib/ansible/modules/system/hostname.py
##### ANSIBLE VERSION
```text
ansible 2.9.10
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/ec2-user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Mar 20 2020, 17:08:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
```text
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible.log
LOCALHOST_WARNING(env: ANSIBLE_LOCALHOST_WARNING) = False
```
##### OS / ENVIRONMENT
Managed node: RHEL 7.6 AWS VM
Contents of `/etc/os-release`:
```
NAME="Red Hat Enterprise Linux Server"
VERSION="7.6 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.6"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.6 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.6:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.6
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.6"
```
Contents of `cat /etc/sysconfig/network`:
```
# Created by cloud-init on instance boot automatically, do not edit.
#
NETWORKING=yes
```
Snippet from `hostnamectl`:
```
Virtualization: xen
Operating System: Red Hat Enterprise Linux Server 7.6 (Maipo)
CPE OS Name: cpe:/o:redhat:enterprise_linux:7.6:GA:server
Kernel: Linux 3.10.0-957.21.3.el7.x86_64
Architecture: x86-64
```
##### STEPS TO REPRODUCE
**BROKEN version**:
```yaml
- name: change hostname
hosts: all
tasks:
- hostname:
name: newhost
use: redhat
become: true
```
**WORKING version** (remove `use`):
```yaml
- name: change hostname
hosts: all
tasks:
- hostname:
name: newhost
become: true
```
```
##### EXPECTED RESULTS
`use` should make things more reliable rather than less.
##### ACTUAL RESULTS
```paste below
TASK [hostname] ********************************
fatal: [xx.xx.xx.xx]: FAILED! => {"changed": false, "msg": "Command failed rc=1, out=, err=Unknown operation newhost\n"}
```
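
For context, a paraphrased sketch of how the module resolves the `use` argument (the class names below are abridged stand-ins for the strategy classes in the module source reproduced later in this document):

```python
class RedHatStrategy:    # stand-in for the real strategy class
    pass

class SystemdStrategy:   # stand-in for the real strategy class
    pass

STRATS = {'redhat': RedHatStrategy, 'systemd': SystemdStrategy}  # abridged

# use=redhat selects RedHatStrategy even on a systemd-managed host, bypassing
# the hostnamectl-based SystemdStrategy that autodetection would have chosen.
strategy = STRATS['redhat']()
print(type(strategy).__name__)  # -> RedHatStrategy
```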
|
https://github.com/ansible/ansible/issues/72429
|
https://github.com/ansible/ansible/pull/72444
|
a7e834071c13b49aabbb17886a4be1f79222b994
|
98726ad86c27b4cbd607f7be97ae0f56461fcc03
| 2020-11-01T15:48:42Z |
python
| 2021-01-11T20:10:26Z |
lib/ansible/modules/hostname.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2013, Hiroaki Nakamura <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: hostname
author:
- Adrian Likins (@alikins)
- Hideki Saito (@saito-hideki)
version_added: "1.4"
short_description: Manage hostname
requirements: [ hostname ]
description:
- Set system's hostname. Supports most OSs/Distributions including those using C(systemd).
- Windows, HP-UX, and AIX are not currently supported.
notes:
- This module does B(NOT) modify C(/etc/hosts). You need to modify it yourself using other modules like M(template) or M(replace).
- On macOS, this module uses C(scutil) to set C(HostName), C(ComputerName), and C(LocalHostName). Since C(LocalHostName)
cannot contain spaces or most special characters, this module will replace characters when setting C(LocalHostName).
- Supports C(check_mode).
options:
name:
description:
- Name of the host.
- If the value is a fully qualified domain name that does not resolve from the given host,
this will cause the module to hang for a few seconds while waiting for the name resolution attempt to timeout.
type: str
required: true
use:
description:
- Which strategy to use to update the hostname.
- If not set we try to autodetect, but this can be problematic, particularly with containers as they can present misleading information.
choices: ['alpine', 'debian', 'freebsd', 'generic', 'macos', 'macosx', 'darwin', 'openbsd', 'openrc', 'redhat', 'sles', 'solaris', 'systemd']
type: str
version_added: '2.9'
'''
EXAMPLES = '''
- name: Set a hostname
ansible.builtin.hostname:
name: web01
- name: Set a hostname specifying strategy
ansible.builtin.hostname:
name: web01
    use: systemd
'''
import os
import platform
import socket
import traceback
from ansible.module_utils.basic import (
AnsibleModule,
get_distribution,
get_distribution_version,
)
from ansible.module_utils.common.sys_info import get_platform_subclass
from ansible.module_utils.facts.system.service_mgr import ServiceMgrFactCollector
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.six import PY3, text_type
STRATS = {
'alpine': 'Alpine',
'debian': 'Debian',
'freebsd': 'FreeBSD',
'generic': 'Generic',
'macos': 'Darwin',
'macosx': 'Darwin',
'darwin': 'Darwin',
'openbsd': 'OpenBSD',
'openrc': 'OpenRC',
'redhat': 'RedHat',
'sles': 'SLES',
'solaris': 'Solaris',
'systemd': 'Systemd',
}
class UnimplementedStrategy(object):
def __init__(self, module):
self.module = module
def update_current_and_permanent_hostname(self):
self.unimplemented_error()
def update_current_hostname(self):
self.unimplemented_error()
def update_permanent_hostname(self):
self.unimplemented_error()
def get_current_hostname(self):
self.unimplemented_error()
def set_current_hostname(self, name):
self.unimplemented_error()
def get_permanent_hostname(self):
self.unimplemented_error()
def set_permanent_hostname(self, name):
self.unimplemented_error()
def unimplemented_error(self):
system = platform.system()
distribution = get_distribution()
if distribution is not None:
msg_platform = '%s (%s)' % (system, distribution)
else:
msg_platform = system
self.module.fail_json(
msg='hostname module cannot be used on platform %s' % msg_platform)
class Hostname(object):
"""
This is a generic Hostname manipulation class that is subclassed
based on platform.
    A subclass may wish to assign a different strategy instance to self.strategy.
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None
strategy_class = UnimplementedStrategy
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(Hostname)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.name = module.params['name']
self.use = module.params['use']
if self.use is not None:
strat = globals()['%sStrategy' % STRATS[self.use]]
self.strategy = strat(module)
elif self.platform == 'Linux' and ServiceMgrFactCollector.is_systemd_managed(module):
self.strategy = SystemdStrategy(module)
else:
self.strategy = self.strategy_class(module)
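        # Strategy resolution order (descriptive note): an explicit 'use' argument
        # wins, then systemd autodetection on Linux, and finally the platform or
        # distribution default declared via strategy_class.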
def update_current_and_permanent_hostname(self):
return self.strategy.update_current_and_permanent_hostname()
def get_current_hostname(self):
return self.strategy.get_current_hostname()
def set_current_hostname(self, name):
self.strategy.set_current_hostname(name)
def get_permanent_hostname(self):
return self.strategy.get_permanent_hostname()
def set_permanent_hostname(self, name):
self.strategy.set_permanent_hostname(name)
class GenericStrategy(object):
"""
This is a generic Hostname manipulation strategy class.
A subclass may wish to override some or all of these methods.
- get_current_hostname()
- get_permanent_hostname()
- set_current_hostname(name)
- set_permanent_hostname(name)
"""
def __init__(self, module):
self.module = module
self.changed = False
self.hostname_cmd = self.module.get_bin_path('hostname', True)
def update_current_and_permanent_hostname(self):
self.update_current_hostname()
self.update_permanent_hostname()
return self.changed
def update_current_hostname(self):
name = self.module.params['name']
current_name = self.get_current_hostname()
if current_name != name:
if not self.module.check_mode:
self.set_current_hostname(name)
self.changed = True
def update_permanent_hostname(self):
name = self.module.params['name']
permanent_name = self.get_permanent_hostname()
if permanent_name != name:
if not self.module.check_mode:
self.set_permanent_hostname(name)
self.changed = True
def get_current_hostname(self):
cmd = [self.hostname_cmd]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
return 'UNKNOWN'
def set_permanent_hostname(self, name):
pass
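        # Note: the generic strategy has no persistent store, so
        # get_permanent_hostname() above reports 'UNKNOWN' and this method is
        # intentionally a no-op; persistence is supplied by the subclasses below.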
class DebianStrategy(GenericStrategy):
"""
This is a Debian family Hostname manipulation strategy class - it edits
the /etc/hostname file.
"""
HOSTNAME_FILE = '/etc/hostname'
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class SLESStrategy(GenericStrategy):
"""
This is a SLES Hostname strategy class - it edits the
/etc/HOSTNAME file.
"""
HOSTNAME_FILE = '/etc/HOSTNAME'
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class RedHatStrategy(GenericStrategy):
"""
This is a Redhat Hostname strategy class - it edits the
/etc/sysconfig/network file.
"""
NETWORK_FILE = '/etc/sysconfig/network'
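    # The file is a shell-style key=value list; the permanent hostname lives on a
    # line of the form HOSTNAME=<name>, which the methods below read and rewrite.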
def get_permanent_hostname(self):
try:
            f = open(self.NETWORK_FILE, 'r')  # text mode so lines compare as str on Python 3
try:
for line in f.readlines():
if line.startswith('HOSTNAME'):
                        k, v = line.split('=', 1)  # split on the first '=' only
return v.strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
lines = []
found = False
            f = open(self.NETWORK_FILE, 'r')  # text mode so lines compare as str on Python 3
try:
for line in f.readlines():
if line.startswith('HOSTNAME'):
lines.append("HOSTNAME=%s\n" % name)
found = True
else:
lines.append(line)
finally:
f.close()
if not found:
lines.append("HOSTNAME=%s\n" % name)
f = open(self.NETWORK_FILE, 'w+')
try:
f.writelines(lines)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class AlpineStrategy(GenericStrategy):
"""
    This is an Alpine Linux Hostname manipulation strategy class - it edits
    the /etc/hostname file and then runs hostname -F /etc/hostname.
"""
HOSTNAME_FILE = '/etc/hostname'
def update_current_and_permanent_hostname(self):
self.update_permanent_hostname()
self.update_current_hostname()
return self.changed
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, '-F', self.HOSTNAME_FILE]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class SystemdStrategy(GenericStrategy):
"""
This is a Systemd hostname manipulation strategy class - it uses
the hostnamectl command.
"""
def __init__(self, module):
super(SystemdStrategy, self).__init__(module)
self.hostname_cmd = self.module.get_bin_path('hostnamectl', True)
def get_current_hostname(self):
cmd = [self.hostname_cmd, '--transient', 'status']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = [self.hostname_cmd, '--transient', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
cmd = [self.hostname_cmd, '--static', 'status']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = [self.hostname_cmd, '--pretty', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
cmd = [self.hostname_cmd, '--static', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class OpenRCStrategy(GenericStrategy):
"""
This is a Gentoo (OpenRC) Hostname manipulation strategy class - it edits
the /etc/conf.d/hostname file.
"""
HOSTNAME_FILE = '/etc/conf.d/hostname'
def get_permanent_hostname(self):
        name = 'UNKNOWN'
        f = None
try:
try:
f = open(self.HOSTNAME_FILE, 'r')
for line in f:
line = line.strip()
if line.startswith('hostname='):
name = line[10:].strip('"')
break
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
finally:
            if f:
                f.close()
return name
def set_permanent_hostname(self, name):
        f = None
        try:
try:
f = open(self.HOSTNAME_FILE, 'r')
lines = [x.strip() for x in f]
for i, line in enumerate(lines):
if line.startswith('hostname='):
lines[i] = 'hostname="%s"' % name
break
f.close()
f = open(self.HOSTNAME_FILE, 'w')
f.write('\n'.join(lines) + '\n')
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
finally:
            if f:
                f.close()
class OpenBSDStrategy(GenericStrategy):
"""
    This is an OpenBSD family Hostname manipulation strategy class - it edits
the /etc/myname file.
"""
HOSTNAME_FILE = '/etc/myname'
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class SolarisStrategy(GenericStrategy):
"""
    This is a Solaris 11 or later Hostname manipulation strategy class - it
    executes the hostname command.
"""
def set_current_hostname(self, name):
cmd_option = '-t'
cmd = [self.hostname_cmd, cmd_option, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
fmri = 'svc:/system/identity:node'
pattern = 'config/nodename'
cmd = '/usr/sbin/svccfg -s %s listprop -o value %s' % (fmri, pattern)
rc, out, err = self.module.run_command(cmd, use_unsafe_shell=True)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class FreeBSDStrategy(GenericStrategy):
"""
This is a FreeBSD hostname manipulation strategy class - it edits
the /etc/rc.conf.d/hostname file.
"""
HOSTNAME_FILE = '/etc/rc.conf.d/hostname'
def get_permanent_hostname(self):
        name = 'UNKNOWN'
        f = None
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("hostname=temporarystub\n")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
try:
f = open(self.HOSTNAME_FILE, 'r')
for line in f:
line = line.strip()
if line.startswith('hostname='):
name = line[10:].strip('"')
break
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
finally:
            if f:
                f.close()
return name
def set_permanent_hostname(self, name):
        f = None
        try:
try:
f = open(self.HOSTNAME_FILE, 'r')
lines = [x.strip() for x in f]
for i, line in enumerate(lines):
if line.startswith('hostname='):
lines[i] = 'hostname="%s"' % name
break
f.close()
f = open(self.HOSTNAME_FILE, 'w')
f.write('\n'.join(lines) + '\n')
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
finally:
            if f:
                f.close()
class DarwinStrategy(GenericStrategy):
"""
This is a macOS hostname manipulation strategy class. It uses
/usr/sbin/scutil to set ComputerName, HostName, and LocalHostName.
HostName corresponds to what most platforms consider to be hostname.
It controls the name used on the command line and SSH.
However, macOS also has LocalHostName and ComputerName settings.
LocalHostName controls the Bonjour/ZeroConf name, used by services
like AirDrop. This class implements a method, _scrub_hostname(), that mimics
    the transformations macOS makes on hostnames when entered in the Sharing
preference pane. It replaces spaces with dashes and removes all special
characters.
ComputerName is the name used for user-facing GUI services, like the
System Preferences/Sharing pane and when users connect to the Mac over the network.
"""
def __init__(self, module):
super(DarwinStrategy, self).__init__(module)
self.scutil = self.module.get_bin_path('scutil', True)
self.name_types = ('HostName', 'ComputerName', 'LocalHostName')
self.scrubbed_name = self._scrub_hostname(self.module.params['name'])
def _make_translation(self, replace_chars, replacement_chars, delete_chars):
if PY3:
return str.maketrans(replace_chars, replacement_chars, delete_chars)
if not isinstance(replace_chars, text_type) or not isinstance(replacement_chars, text_type):
raise ValueError('replace_chars and replacement_chars must both be strings')
if len(replace_chars) != len(replacement_chars):
raise ValueError('replacement_chars must be the same length as replace_chars')
table = dict(zip((ord(c) for c in replace_chars), replacement_chars))
for char in delete_chars:
table[ord(char)] = None
return table
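    # Illustrative (hypothetical inputs): _make_translation(u'ab', u'xy', u'.')
    # maps 'a'->'x' and 'b'->'y' and deletes '.', mirroring str.maketrans() on
    # Python 3.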
def _scrub_hostname(self, name):
"""
LocalHostName only accepts valid DNS characters while HostName and ComputerName
accept a much wider range of characters. This function aims to mimic how macOS
translates a friendly name to the LocalHostName.
"""
# Replace all these characters with a single dash
name = to_text(name)
replace_chars = u'\'"~`!@#$%^&*(){}[]/=?+\\|-_ '
delete_chars = u".'"
table = self._make_translation(replace_chars, u'-' * len(replace_chars), delete_chars)
name = name.translate(table)
# Replace multiple dashes with a single dash
while '-' * 2 in name:
            name = name.replace('-' * 2, '-')
name = name.rstrip('-')
return name
def get_current_hostname(self):
cmd = [self.scutil, '--get', 'HostName']
rc, out, err = self.module.run_command(cmd)
if rc != 0 and 'HostName: not set' not in err:
self.module.fail_json(msg="Failed to get current hostname rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def get_permanent_hostname(self):
cmd = [self.scutil, '--get', 'ComputerName']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Failed to get permanent hostname rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
for hostname_type in self.name_types:
cmd = [self.scutil, '--set', hostname_type]
if hostname_type == 'LocalHostName':
cmd.append(to_native(self.scrubbed_name))
else:
cmd.append(to_native(name))
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Failed to set {3} to '{2}': {0} {1}".format(to_native(out), to_native(err), to_native(name), hostname_type))
def set_current_hostname(self, name):
pass
def update_current_hostname(self):
pass
def update_permanent_hostname(self):
name = self.module.params['name']
# Get all the current host name values in the order of self.name_types
all_names = tuple(self.module.run_command([self.scutil, '--get', name_type])[1].strip() for name_type in self.name_types)
# Get the expected host name values based on the order in self.name_types
expected_names = tuple(self.scrubbed_name if n == 'LocalHostName' else name for n in self.name_types)
# Ensure all three names are updated
if all_names != expected_names:
if not self.module.check_mode:
self.set_permanent_hostname(name)
self.changed = True
class FedoraHostname(Hostname):
platform = 'Linux'
distribution = 'Fedora'
strategy_class = SystemdStrategy
class SLESHostname(Hostname):
platform = 'Linux'
distribution = 'Sles'
try:
distribution_version = get_distribution_version()
# cast to float may raise ValueError on non SLES, we use float for a little more safety over int
if distribution_version and 10 <= float(distribution_version) <= 12:
strategy_class = SLESStrategy
else:
raise ValueError()
except ValueError:
strategy_class = UnimplementedStrategy
class OpenSUSEHostname(Hostname):
platform = 'Linux'
distribution = 'Opensuse'
strategy_class = SystemdStrategy
class OpenSUSELeapHostname(Hostname):
platform = 'Linux'
distribution = 'Opensuse-leap'
strategy_class = SystemdStrategy
class OpenSUSETumbleweedHostname(Hostname):
platform = 'Linux'
distribution = 'Opensuse-tumbleweed'
strategy_class = SystemdStrategy
class AsteraHostname(Hostname):
platform = 'Linux'
distribution = '"astralinuxce"'
strategy_class = SystemdStrategy
class ArchHostname(Hostname):
platform = 'Linux'
distribution = 'Arch'
strategy_class = SystemdStrategy
class ArchARMHostname(Hostname):
platform = 'Linux'
distribution = 'Archarm'
strategy_class = SystemdStrategy
class ManjaroHostname(Hostname):
platform = 'Linux'
distribution = 'Manjaro'
strategy_class = SystemdStrategy
class ManjaroARMHostname(Hostname):
platform = 'Linux'
distribution = 'Manjaro-arm'
strategy_class = SystemdStrategy
class RHELHostname(Hostname):
platform = 'Linux'
distribution = 'Redhat'
strategy_class = RedHatStrategy
class CentOSHostname(Hostname):
platform = 'Linux'
distribution = 'Centos'
strategy_class = RedHatStrategy
class ClearLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Clear-linux-os'
strategy_class = SystemdStrategy
class CloudlinuxserverHostname(Hostname):
platform = 'Linux'
distribution = 'Cloudlinuxserver'
strategy_class = RedHatStrategy
class CloudlinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Cloudlinux'
strategy_class = RedHatStrategy
class AlinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Alinux'
strategy_class = RedHatStrategy
class CoreosHostname(Hostname):
platform = 'Linux'
distribution = 'Coreos'
strategy_class = SystemdStrategy
class FlatcarHostname(Hostname):
platform = 'Linux'
distribution = 'Flatcar'
strategy_class = SystemdStrategy
class ScientificHostname(Hostname):
platform = 'Linux'
distribution = 'Scientific'
strategy_class = RedHatStrategy
class OracleLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Oracle'
strategy_class = RedHatStrategy
class VirtuozzoLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Virtuozzo'
strategy_class = RedHatStrategy
class AmazonLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Amazon'
strategy_class = RedHatStrategy
class DebianHostname(Hostname):
platform = 'Linux'
distribution = 'Debian'
strategy_class = DebianStrategy
class KylinHostname(Hostname):
platform = 'Linux'
distribution = 'Kylin'
strategy_class = DebianStrategy
class CumulusHostname(Hostname):
platform = 'Linux'
distribution = 'Cumulus-linux'
strategy_class = DebianStrategy
class KaliHostname(Hostname):
platform = 'Linux'
distribution = 'Kali'
strategy_class = DebianStrategy
class ParrotHostname(Hostname):
platform = 'Linux'
distribution = 'Parrot'
strategy_class = DebianStrategy
class UbuntuHostname(Hostname):
platform = 'Linux'
distribution = 'Ubuntu'
strategy_class = DebianStrategy
class LinuxmintHostname(Hostname):
platform = 'Linux'
distribution = 'Linuxmint'
strategy_class = DebianStrategy
class LinaroHostname(Hostname):
platform = 'Linux'
distribution = 'Linaro'
strategy_class = DebianStrategy
class DevuanHostname(Hostname):
platform = 'Linux'
distribution = 'Devuan'
strategy_class = DebianStrategy
class RaspbianHostname(Hostname):
platform = 'Linux'
distribution = 'Raspbian'
strategy_class = DebianStrategy
class GentooHostname(Hostname):
platform = 'Linux'
distribution = 'Gentoo'
strategy_class = OpenRCStrategy
class ALTLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Altlinux'
strategy_class = RedHatStrategy
class AlpineLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Alpine'
strategy_class = AlpineStrategy
class OpenBSDHostname(Hostname):
platform = 'OpenBSD'
distribution = None
strategy_class = OpenBSDStrategy
class SolarisHostname(Hostname):
platform = 'SunOS'
distribution = None
strategy_class = SolarisStrategy
class FreeBSDHostname(Hostname):
platform = 'FreeBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NetBSDHostname(Hostname):
platform = 'NetBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NeonHostname(Hostname):
platform = 'Linux'
distribution = 'Neon'
strategy_class = DebianStrategy
class DarwinHostname(Hostname):
platform = 'Darwin'
distribution = None
strategy_class = DarwinStrategy
class OsmcHostname(Hostname):
platform = 'Linux'
distribution = 'Osmc'
strategy_class = SystemdStrategy
class PardusHostname(Hostname):
platform = 'Linux'
distribution = 'Pardus'
strategy_class = SystemdStrategy
class VoidLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Void'
strategy_class = DebianStrategy
class PopHostname(Hostname):
platform = 'Linux'
distribution = 'Pop'
strategy_class = DebianStrategy
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str', required=True),
use=dict(type='str', choices=STRATS.keys())
),
supports_check_mode=True,
)
hostname = Hostname(module)
name = module.params['name']
current_hostname = hostname.get_current_hostname()
permanent_hostname = hostname.get_permanent_hostname()
changed = hostname.update_current_and_permanent_hostname()
    if name != current_hostname:
        name_before = current_hostname
    else:
        name_before = permanent_hostname
# NOTE: socket.getfqdn() calls gethostbyaddr(socket.gethostname()), which can be
# slow to return if the name does not resolve correctly.
kw = dict(changed=changed, name=name,
ansible_facts=dict(ansible_hostname=name.split('.')[0],
ansible_nodename=name,
ansible_fqdn=socket.getfqdn(),
ansible_domain='.'.join(socket.getfqdn().split('.')[1:])))
if changed:
kw['diff'] = {'after': 'hostname = ' + name + '\n',
'before': 'hostname = ' + name_before + '\n'}
module.exit_json(**kw)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,027 |
CentOS Stream should be reported as a different distribution from CentOS Linux
|
##### SUMMARY
Ansible reports CentOS Linux and CentOS Stream as the same distribution, so there is no way to differentiate between them in the standard Ansible facts.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible_facts
##### ANSIBLE VERSION
```
ansible 2.10.4
config file = /home/aaron/git/astrolinux-bootstrap/ansible.cfg
configured module search path = ['/home/xxx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/xxx/virtualenv/ansible_py3/lib/python3.6/site-packages/ansible
executable location = /home/xxx/virtualenv/ansible_py3/bin/ansible
python version = 3.6.8 (default, Dec 3 2020, 18:11:24) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
```
##### CONFIGURATION
n/a
##### OS / ENVIRONMENT
OS: CentOS Stream 8
##### STEPS TO REPRODUCE
n/a
##### EXPECTED RESULTS
CentOS Stream should be reported as a different distribution from CentOS Linux.
##### ACTUAL RESULTS
n/a
|
https://github.com/ansible/ansible/issues/73027
|
https://github.com/ansible/ansible/pull/73034
|
4164cb26f0ef79eb9744518f0ac0d351cd89bf87
|
7f0eb7ad799e531a8fbe5cc4f46046a4b1aeb093
| 2020-12-18T22:52:23Z |
python
| 2021-01-13T22:54:04Z |
changelogs/fragments/73027-differentiate-centos-stream.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,027 |
CentOS Stream should be reported as a different distribution from CentOS Linux
|
##### SUMMARY
Ansible reports CentOS Linux and CentOS Stream as the same distribution, so there is no way to differentiate between them in the standard Ansible facts.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible_facts
##### ANSIBLE VERSION
```
ansible 2.10.4
config file = /home/aaron/git/astrolinux-bootstrap/ansible.cfg
configured module search path = ['/home/xxx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/xxx/virtualenv/ansible_py3/lib/python3.6/site-packages/ansible
executable location = /home/xxx/virtualenv/ansible_py3/bin/ansible
python version = 3.6.8 (default, Dec 3 2020, 18:11:24) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
```
##### CONFIGURATION
n/a
##### OS / ENVIRONMENT
OS: CentOS Stream 8
##### STEPS TO REPRODUCE
n/a
##### EXPECTED RESULTS
CentOS Stream should be reported as a different distribution from CentOS Linux.
##### ACTUAL RESULTS
n/a
|
https://github.com/ansible/ansible/issues/73027
|
https://github.com/ansible/ansible/pull/73034
|
4164cb26f0ef79eb9744518f0ac0d351cd89bf87
|
7f0eb7ad799e531a8fbe5cc4f46046a4b1aeb093
| 2020-12-18T22:52:23Z |
python
| 2021-01-13T22:54:04Z |
hacking/tests/gen_distribution_version_testcase.py
|
#!/usr/bin/env python
"""
This script generates test cases for test_distribution_version.py.
To do so, it outputs the relevant files from /etc/*release, the output of distro.linux_distribution()
and the current ansible_facts regarding the distribution version.
This assumes a working ansible version in the path.
"""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os.path
import platform
import subprocess
import sys
from ansible.module_utils import distro
from ansible.module_utils._text import to_text
filelist = [
'/etc/oracle-release',
'/etc/slackware-version',
'/etc/redhat-release',
'/etc/vmware-release',
'/etc/openwrt_release',
'/etc/system-release',
'/etc/alpine-release',
'/etc/release',
'/etc/arch-release',
'/etc/os-release',
'/etc/SuSE-release',
'/etc/gentoo-release',
'/etc/os-release',
'/etc/lsb-release',
'/etc/altlinux-release',
'/etc/os-release',
'/etc/coreos/update.conf',
'/etc/flatcar/update.conf',
'/usr/lib/os-release',
]
fcont = {}
for f in filelist:
if os.path.exists(f):
s = os.path.getsize(f)
if s > 0 and s < 10000:
with open(f) as fh:
fcont[f] = fh.read()
dist = distro.linux_distribution(full_distribution_name=False)
facts = ['distribution', 'distribution_version', 'distribution_release', 'distribution_major_version', 'os_family']
try:
b_ansible_out = subprocess.check_output(
['ansible', 'localhost', '-m', 'setup'])
except subprocess.CalledProcessError as e:
print("ERROR: ansible run failed, output was: \n")
print(e.output)
sys.exit(e.returncode)
ansible_out = to_text(b_ansible_out)
parsed = json.loads(ansible_out[ansible_out.index('{'):])
ansible_facts = {}
for fact in facts:
try:
ansible_facts[fact] = parsed['ansible_facts']['ansible_' + fact]
except Exception:
ansible_facts[fact] = "N/A"
nicename = ansible_facts['distribution'] + ' ' + ansible_facts['distribution_version']
output = {
'name': nicename,
'distro': {
'codename': distro.codename(),
'id': distro.id(),
'name': distro.name(),
'version': distro.version(),
'version_best': distro.version(best=True),
'lsb_release_info': distro.lsb_release_info(),
'os_release_info': distro.os_release_info(),
},
'input': fcont,
'platform.dist': dist,
'result': ansible_facts,
}
system = platform.system()
if system != 'Linux':
output['platform.system'] = system
release = platform.release()
if release:
output['platform.release'] = release
print(json.dumps(output, indent=4))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,027 |
CentOS Stream should be reported as a different distribution from CentOS Linux
|
##### SUMMARY
Ansible reports CentOS Linux and CentOS Stream as the same distribution, so there is no way to differentiate between them in the standard Ansible facts.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible_facts
##### ANSIBLE VERSION
```
ansible 2.10.4
config file = /home/aaron/git/astrolinux-bootstrap/ansible.cfg
configured module search path = ['/home/xxx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/xxx/virtualenv/ansible_py3/lib/python3.6/site-packages/ansible
executable location = /home/xxx/virtualenv/ansible_py3/bin/ansible
python version = 3.6.8 (default, Dec 3 2020, 18:11:24) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
```
##### CONFIGURATION
n/a
##### OS / ENVIRONMENT
OS: CentOS Stream 8
##### STEPS TO REPRODUCE
n/a
##### EXPECTED RESULTS
CentOS Stream should be reported as a different distribution from CentOS Linux.
##### ACTUAL RESULTS
n/a
|
https://github.com/ansible/ansible/issues/73027
|
https://github.com/ansible/ansible/pull/73034
|
4164cb26f0ef79eb9744518f0ac0d351cd89bf87
|
7f0eb7ad799e531a8fbe5cc4f46046a4b1aeb093
| 2020-12-18T22:52:23Z |
python
| 2021-01-13T22:54:04Z |
lib/ansible/module_utils/facts/system/distribution.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import platform
import re
from ansible.module_utils.common.sys_info import get_distribution, get_distribution_version, \
get_distribution_codename
from ansible.module_utils.facts.utils import get_file_content
from ansible.module_utils.facts.collector import BaseFactCollector
def get_uname(module, flags=('-v')):
if isinstance(flags, str):
flags = flags.split()
command = ['uname']
command.extend(flags)
rc, out, err = module.run_command(command)
if rc == 0:
return out
return None
def _file_exists(path, allow_empty=False):
# not finding the file, exit early
if not os.path.exists(path):
return False
# if just the path needs to exists (ie, it can be empty) we are done
if allow_empty:
return True
    # file exists but is empty and we don't allow_empty
if os.path.getsize(path) == 0:
return False
# file exists with some content
return True
class DistributionFiles:
    '''Holds the various distro file parsers (os-release, etc.) and the logic for finding the right one.'''
# every distribution name mentioned here, must have one of
# - allowempty == True
# - be listed in SEARCH_STRING
# - have a function get_distribution_DISTNAME implemented
# keep names in sync with Conditionals page of docs
OSDIST_LIST = (
{'path': '/etc/altlinux-release', 'name': 'Altlinux'},
{'path': '/etc/oracle-release', 'name': 'OracleLinux'},
{'path': '/etc/slackware-version', 'name': 'Slackware'},
{'path': '/etc/redhat-release', 'name': 'RedHat'},
{'path': '/etc/vmware-release', 'name': 'VMwareESX', 'allowempty': True},
{'path': '/etc/openwrt_release', 'name': 'OpenWrt'},
{'path': '/etc/system-release', 'name': 'Amazon'},
{'path': '/etc/alpine-release', 'name': 'Alpine'},
{'path': '/etc/arch-release', 'name': 'Archlinux', 'allowempty': True},
{'path': '/etc/os-release', 'name': 'Archlinux'},
{'path': '/etc/os-release', 'name': 'SUSE'},
{'path': '/etc/SuSE-release', 'name': 'SUSE'},
{'path': '/etc/gentoo-release', 'name': 'Gentoo'},
{'path': '/etc/os-release', 'name': 'Debian'},
{'path': '/etc/lsb-release', 'name': 'Debian'},
{'path': '/etc/lsb-release', 'name': 'Mandriva'},
{'path': '/etc/sourcemage-release', 'name': 'SMGL'},
{'path': '/usr/lib/os-release', 'name': 'ClearLinux'},
{'path': '/etc/coreos/update.conf', 'name': 'Coreos'},
{'path': '/etc/flatcar/update.conf', 'name': 'Flatcar'},
{'path': '/etc/os-release', 'name': 'NA'},
)
SEARCH_STRING = {
'OracleLinux': 'Oracle Linux',
'RedHat': 'Red Hat',
'Altlinux': 'ALT',
'SMGL': 'Source Mage GNU/Linux',
}
# We can't include this in SEARCH_STRING because a name match on its keys
# causes a fallback to using the first whitespace separated item from the file content
# as the name. For os-release, that is in form 'NAME=Arch'
OS_RELEASE_ALIAS = {
'Archlinux': 'Arch Linux'
}
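    # e.g. (illustrative) /etc/os-release on Arch contains NAME="Arch Linux"; the
    # alias above lets that content match the 'Archlinux' entry from OSDIST_LIST.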
STRIP_QUOTES = r'\'\"\\'
def __init__(self, module):
self.module = module
def _get_file_content(self, path):
return get_file_content(path)
def _get_dist_file_content(self, path, allow_empty=False):
        # can't find that dist file, or it is incorrectly empty
if not _file_exists(path, allow_empty=allow_empty):
return False, None
data = self._get_file_content(path)
return True, data
def _parse_dist_file(self, name, dist_file_content, path, collected_facts):
dist_file_dict = {}
dist_file_content = dist_file_content.strip(DistributionFiles.STRIP_QUOTES)
if name in self.SEARCH_STRING:
# look for the distribution string in the data and replace according to RELEASE_NAME_MAP
# only the distribution name is set, the version is assumed to be correct from distro.linux_distribution()
if self.SEARCH_STRING[name] in dist_file_content:
# this sets distribution=RedHat if 'Red Hat' shows up in data
dist_file_dict['distribution'] = name
dist_file_dict['distribution_file_search_string'] = self.SEARCH_STRING[name]
else:
# this sets distribution to what's in the data, e.g. CentOS, Scientific, ...
dist_file_dict['distribution'] = dist_file_content.split()[0]
return True, dist_file_dict
if name in self.OS_RELEASE_ALIAS:
if self.OS_RELEASE_ALIAS[name] in dist_file_content:
dist_file_dict['distribution'] = name
return True, dist_file_dict
return False, dist_file_dict
# call a dedicated function for parsing the file content
# TODO: replace with a map or a class
try:
            # FIXME: most of these don't actually look at the dist file contents, but at other system state
distfunc_name = 'parse_distribution_file_' + name
distfunc = getattr(self, distfunc_name)
parsed, dist_file_dict = distfunc(name, dist_file_content, path, collected_facts)
return parsed, dist_file_dict
except AttributeError as exc:
self.module.debug('exc: %s' % exc)
# this should never happen, but if it does fail quietly and not with a traceback
return False, dist_file_dict
return True, dist_file_dict
# to debug multiple matching release files, one can use:
# self.facts['distribution_debug'].append({path + ' ' + name:
# (parsed,
# self.facts['distribution'],
# self.facts['distribution_version'],
# self.facts['distribution_release'],
# )})
def _guess_distribution(self):
# try to find out which linux distribution this is
dist = (get_distribution(), get_distribution_version(), get_distribution_codename())
distribution_guess = {
'distribution': dist[0] or 'NA',
'distribution_version': dist[1] or 'NA',
# distribution_release can be the empty string
'distribution_release': 'NA' if dist[2] is None else dist[2]
}
distribution_guess['distribution_major_version'] = distribution_guess['distribution_version'].split('.')[0] or 'NA'
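        # Illustrative shape of the result (actual values depend on the host), e.g.
        # {'distribution': 'Fedora', 'distribution_version': '33',
        #  'distribution_release': 'NA', 'distribution_major_version': '33'}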
return distribution_guess
def process_dist_files(self):
# Try to handle the exceptions now ...
# self.facts['distribution_debug'] = []
dist_file_facts = {}
dist_guess = self._guess_distribution()
dist_file_facts.update(dist_guess)
for ddict in self.OSDIST_LIST:
name = ddict['name']
path = ddict['path']
allow_empty = ddict.get('allowempty', False)
has_dist_file, dist_file_content = self._get_dist_file_content(path, allow_empty=allow_empty)
            # a dist file may exist but be empty; that is fine when we allow_empty.
            # For example, Arch Linux ships an empty /etc/arch-release alongside an
            # /etc/os-release with a different name
if has_dist_file and allow_empty:
dist_file_facts['distribution'] = name
dist_file_facts['distribution_file_path'] = path
dist_file_facts['distribution_file_variety'] = name
break
if not has_dist_file:
# keep looking
continue
parsed_dist_file, parsed_dist_file_facts = self._parse_dist_file(name, dist_file_content, path, dist_file_facts)
# finally found the right os dist file and were able to parse it
if parsed_dist_file:
dist_file_facts['distribution'] = name
dist_file_facts['distribution_file_path'] = path
# distribution and file_variety are the same here, but distribution
# will be changed/mapped to a more specific name.
# ie, dist=Fedora, file_variety=RedHat
dist_file_facts['distribution_file_variety'] = name
dist_file_facts['distribution_file_parsed'] = parsed_dist_file
dist_file_facts.update(parsed_dist_file_facts)
break
return dist_file_facts
# TODO: FIXME: split distro file parsing into its own module or class
def parse_distribution_file_Slackware(self, name, data, path, collected_facts):
slackware_facts = {}
if 'Slackware' not in data:
return False, slackware_facts # TODO: remove
slackware_facts['distribution'] = name
version = re.findall(r'\w+[.]\w+\+?', data)
if version:
slackware_facts['distribution_version'] = version[0]
return True, slackware_facts
def parse_distribution_file_Amazon(self, name, data, path, collected_facts):
amazon_facts = {}
if 'Amazon' not in data:
return False, amazon_facts
amazon_facts['distribution'] = 'Amazon'
version = [n for n in data.split() if n.isdigit()]
version = version[0] if version else 'NA'
amazon_facts['distribution_version'] = version
return True, amazon_facts
def parse_distribution_file_OpenWrt(self, name, data, path, collected_facts):
openwrt_facts = {}
if 'OpenWrt' not in data:
return False, openwrt_facts # TODO: remove
openwrt_facts['distribution'] = name
version = re.search('DISTRIB_RELEASE="(.*)"', data)
if version:
openwrt_facts['distribution_version'] = version.groups()[0]
release = re.search('DISTRIB_CODENAME="(.*)"', data)
if release:
openwrt_facts['distribution_release'] = release.groups()[0]
return True, openwrt_facts
def parse_distribution_file_Alpine(self, name, data, path, collected_facts):
alpine_facts = {}
alpine_facts['distribution'] = 'Alpine'
alpine_facts['distribution_version'] = data
return True, alpine_facts
def parse_distribution_file_SUSE(self, name, data, path, collected_facts):
suse_facts = {}
if 'suse' not in data.lower():
return False, suse_facts # TODO: remove if tested without this
if path == '/etc/os-release':
for line in data.splitlines():
distribution = re.search("^NAME=(.*)", line)
if distribution:
suse_facts['distribution'] = distribution.group(1).strip('"')
# example pattern are 13.04 13.0 13
distribution_version = re.search(r'^VERSION_ID="?([0-9]+\.?[0-9]*)"?', line)
if distribution_version:
suse_facts['distribution_version'] = distribution_version.group(1)
suse_facts['distribution_major_version'] = distribution_version.group(1).split('.')[0]
if 'open' in data.lower():
release = re.search(r'^VERSION_ID="?[0-9]+\.?([0-9]*)"?', line)
if release:
suse_facts['distribution_release'] = release.groups()[0]
elif 'enterprise' in data.lower() and 'VERSION_ID' in line:
                    # SLES doesn't have fancy release names
release = re.search(r'^VERSION_ID="?[0-9]+\.?([0-9]*)"?', line)
if release.group(1):
release = release.group(1)
else:
release = "0" # no minor number, so it is the first release
suse_facts['distribution_release'] = release
elif path == '/etc/SuSE-release':
if 'open' in data.lower():
data = data.splitlines()
distdata = get_file_content(path).splitlines()[0]
suse_facts['distribution'] = distdata.split()[0]
for line in data:
release = re.search('CODENAME *= *([^\n]+)', line)
if release:
suse_facts['distribution_release'] = release.groups()[0].strip()
elif 'enterprise' in data.lower():
lines = data.splitlines()
distribution = lines[0].split()[0]
if "Server" in data:
suse_facts['distribution'] = "SLES"
elif "Desktop" in data:
suse_facts['distribution'] = "SLED"
for line in lines:
                release = re.search('PATCHLEVEL = ([0-9]+)', line)  # SLES doesn't have fancy release names
if release:
suse_facts['distribution_release'] = release.group(1)
suse_facts['distribution_version'] = collected_facts['distribution_version'] + '.' + release.group(1)
# See https://www.suse.com/support/kb/doc/?id=000019341 for SLES for SAP
if os.path.islink('/etc/products.d/baseproduct') and os.path.realpath('/etc/products.d/baseproduct').endswith('SLES_SAP.prod'):
suse_facts['distribution'] = 'SLES_SAP'
return True, suse_facts
def parse_distribution_file_Debian(self, name, data, path, collected_facts):
debian_facts = {}
if 'Debian' in data or 'Raspbian' in data:
debian_facts['distribution'] = 'Debian'
release = re.search(r"PRETTY_NAME=[^(]+ \(?([^)]+?)\)", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
# Last resort: try to find release from tzdata as either lsb is missing or this is very old debian
if collected_facts['distribution_release'] == 'NA' and 'Debian' in data:
dpkg_cmd = self.module.get_bin_path('dpkg')
if dpkg_cmd:
cmd = "%s --status tzdata|grep Provides|cut -f2 -d'-'" % dpkg_cmd
rc, out, err = self.module.run_command(cmd)
if rc == 0:
debian_facts['distribution_release'] = out.strip()
elif 'Ubuntu' in data:
debian_facts['distribution'] = 'Ubuntu'
# nothing else to do, Ubuntu gets correct info from python functions
elif 'SteamOS' in data:
debian_facts['distribution'] = 'SteamOS'
# nothing else to do, SteamOS gets correct info from python functions
elif path in ('/etc/lsb-release', '/etc/os-release') and ('Kali' in data or 'Parrot' in data):
if 'Kali' in data:
# Kali does not provide /etc/lsb-release anymore
debian_facts['distribution'] = 'Kali'
elif 'Parrot' in data:
debian_facts['distribution'] = 'Parrot'
release = re.search('DISTRIB_RELEASE=(.*)', data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
elif 'Devuan' in data:
debian_facts['distribution'] = 'Devuan'
release = re.search(r"PRETTY_NAME=\"?[^(\"]+ \(?([^) \"]+)\)?", data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1)
elif 'Cumulus' in data:
debian_facts['distribution'] = 'Cumulus Linux'
version = re.search(r"VERSION_ID=(.*)", data)
if version:
major, _minor, _dummy_ver = version.group(1).split(".")
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = major
release = re.search(r'VERSION="(.*)"', data)
if release:
debian_facts['distribution_release'] = release.groups()[0]
elif "Mint" in data:
debian_facts['distribution'] = 'Linux Mint'
version = re.search(r"VERSION_ID=\"(.*)\"", data)
if version:
debian_facts['distribution_version'] = version.group(1)
debian_facts['distribution_major_version'] = version.group(1).split('.')[0]
else:
return False, debian_facts
return True, debian_facts
def parse_distribution_file_Mandriva(self, name, data, path, collected_facts):
mandriva_facts = {}
if 'Mandriva' in data:
mandriva_facts['distribution'] = 'Mandriva'
version = re.search('DISTRIB_RELEASE="(.*)"', data)
if version:
mandriva_facts['distribution_version'] = version.groups()[0]
release = re.search('DISTRIB_CODENAME="(.*)"', data)
if release:
mandriva_facts['distribution_release'] = release.groups()[0]
mandriva_facts['distribution'] = name
else:
return False, mandriva_facts
return True, mandriva_facts
def parse_distribution_file_NA(self, name, data, path, collected_facts):
na_facts = {}
for line in data.splitlines():
distribution = re.search("^NAME=(.*)", line)
if distribution and name == 'NA':
na_facts['distribution'] = distribution.group(1).strip('"')
version = re.search("^VERSION=(.*)", line)
if version and collected_facts['distribution_version'] == 'NA':
na_facts['distribution_version'] = version.group(1).strip('"')
return True, na_facts
def parse_distribution_file_Coreos(self, name, data, path, collected_facts):
coreos_facts = {}
# FIXME: pass in ro copy of facts for this kind of thing
distro = get_distribution()
if distro.lower() == 'coreos':
if not data:
# include fix from #15230, #15228
# TODO: verify this is ok for above bugs
return False, coreos_facts
release = re.search("^GROUP=(.*)", data)
if release:
coreos_facts['distribution_release'] = release.group(1).strip('"')
else:
return False, coreos_facts # TODO: remove if tested without this
return True, coreos_facts
def parse_distribution_file_Flatcar(self, name, data, path, collected_facts):
flatcar_facts = {}
distro = get_distribution()
if distro.lower() == 'flatcar':
if not data:
return False, flatcar_facts
release = re.search("^GROUP=(.*)", data)
if release:
flatcar_facts['distribution_release'] = release.group(1).strip('"')
else:
return False, flatcar_facts
return True, flatcar_facts
def parse_distribution_file_ClearLinux(self, name, data, path, collected_facts):
clear_facts = {}
if "clearlinux" not in name.lower():
return False, clear_facts
pname = re.search('NAME="(.*)"', data)
if pname:
if 'Clear Linux' not in pname.groups()[0]:
return False, clear_facts
clear_facts['distribution'] = pname.groups()[0]
version = re.search('VERSION_ID=(.*)', data)
if version:
clear_facts['distribution_major_version'] = version.groups()[0]
clear_facts['distribution_version'] = version.groups()[0]
release = re.search('ID=(.*)', data)
if release:
clear_facts['distribution_release'] = release.groups()[0]
return True, clear_facts
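# --- Illustrative sketch (editor's addition, not part of the original module) ---
# The parse_distribution_file_* methods above all follow one convention: they
# take (name, data, path, collected_facts) and return a (parsed, facts) tuple.
# A minimal, hypothetical dispatcher for that convention could look like this;
# the helper name resolve_parser() is illustrative only:
#
#     def resolve_parser(dist_files, name):
#         # e.g. name='ClearLinux' -> dist_files.parse_distribution_file_ClearLinux
#         return getattr(dist_files, 'parse_distribution_file_' + name, None)
#
#     # parser = resolve_parser(distribution_files, 'Debian')
#     # parsed, facts = parser('Debian', data, '/etc/os-release', collected_facts)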
class Distribution(object):
"""
This subclass of Facts fills the distribution, distribution_version and distribution_release variables
To do so it checks the existence and content of typical files in /etc containing distribution information
This is unit tested. Please extend the tests to cover all distributions if you have them available.
"""
# every distribution name mentioned here must have one of
# - allowempty == True
# - be listed in SEARCH_STRING
# - have a function get_distribution_DISTNAME implemented
OSDIST_LIST = (
{'path': '/etc/oracle-release', 'name': 'OracleLinux'},
{'path': '/etc/slackware-version', 'name': 'Slackware'},
{'path': '/etc/redhat-release', 'name': 'RedHat'},
{'path': '/etc/vmware-release', 'name': 'VMwareESX', 'allowempty': True},
{'path': '/etc/openwrt_release', 'name': 'OpenWrt'},
{'path': '/etc/system-release', 'name': 'Amazon'},
{'path': '/etc/alpine-release', 'name': 'Alpine'},
{'path': '/etc/arch-release', 'name': 'Archlinux', 'allowempty': True},
{'path': '/etc/os-release', 'name': 'SUSE'},
{'path': '/etc/SuSE-release', 'name': 'SUSE'},
{'path': '/etc/gentoo-release', 'name': 'Gentoo'},
{'path': '/etc/os-release', 'name': 'Debian'},
{'path': '/etc/lsb-release', 'name': 'Mandriva'},
{'path': '/etc/altlinux-release', 'name': 'Altlinux'},
{'path': '/etc/sourcemage-release', 'name': 'SMGL'},
{'path': '/usr/lib/os-release', 'name': 'ClearLinux'},
{'path': '/etc/coreos/update.conf', 'name': 'Coreos'},
{'path': '/etc/flatcar/update.conf', 'name': 'Flatcar'},
{'path': '/etc/os-release', 'name': 'NA'},
)
SEARCH_STRING = {
'OracleLinux': 'Oracle Linux',
'RedHat': 'Red Hat',
'Altlinux': 'ALT Linux',
'ClearLinux': 'Clear Linux Software for Intel Architecture',
'SMGL': 'Source Mage GNU/Linux',
}
# keep keys in sync with Conditionals page of docs
OS_FAMILY_MAP = {'RedHat': ['RedHat', 'Fedora', 'CentOS', 'Scientific', 'SLC',
'Ascendos', 'CloudLinux', 'PSBM', 'OracleLinux', 'OVS',
'OEL', 'Amazon', 'Virtuozzo', 'XenServer', 'Alibaba',
'EulerOS', 'openEuler'],
'Debian': ['Debian', 'Ubuntu', 'Raspbian', 'Neon', 'KDE neon',
'Linux Mint', 'SteamOS', 'Devuan', 'Kali', 'Cumulus Linux',
'Pop!_OS', 'Parrot', 'Pardus GNU/Linux'],
'Suse': ['SuSE', 'SLES', 'SLED', 'openSUSE', 'openSUSE Tumbleweed',
'SLES_SAP', 'SUSE_LINUX', 'openSUSE Leap'],
'Archlinux': ['Archlinux', 'Antergos', 'Manjaro'],
'Mandrake': ['Mandrake', 'Mandriva'],
'Solaris': ['Solaris', 'Nexenta', 'OmniOS', 'OpenIndiana', 'SmartOS'],
'Slackware': ['Slackware'],
'Altlinux': ['Altlinux'],
'SMGL': ['SMGL'],
'Gentoo': ['Gentoo', 'Funtoo'],
'Alpine': ['Alpine'],
'AIX': ['AIX'],
'HP-UX': ['HPUX'],
'Darwin': ['MacOSX'],
'FreeBSD': ['FreeBSD', 'TrueOS'],
'ClearLinux': ['Clear Linux OS', 'Clear Linux Mix'],
'DragonFly': ['DragonflyBSD', 'DragonFlyBSD', 'Gentoo/DragonflyBSD', 'Gentoo/DragonFlyBSD'],
'NetBSD': ['NetBSD'], }
OS_FAMILY = {}
for family, names in OS_FAMILY_MAP.items():
for name in names:
OS_FAMILY[name] = family
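# The inverted lookup maps each specific distribution name to its family,
# e.g. OS_FAMILY['Ubuntu'] == 'Debian'; names with no alias fall back to
# themselves in get_distribution_facts() below.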
def __init__(self, module):
self.module = module
def get_distribution_facts(self):
distribution_facts = {}
# The platform module provides information about the running
# system/distribution. Use this as a baseline and fix buggy systems
# afterwards
system = platform.system()
distribution_facts['distribution'] = system
distribution_facts['distribution_release'] = platform.release()
distribution_facts['distribution_version'] = platform.version()
systems_implemented = ('AIX', 'HP-UX', 'Darwin', 'FreeBSD', 'OpenBSD', 'SunOS', 'DragonFly', 'NetBSD')
if system in systems_implemented:
cleanedname = system.replace('-', '')
distfunc = getattr(self, 'get_distribution_' + cleanedname)
dist_func_facts = distfunc()
distribution_facts.update(dist_func_facts)
elif system == 'Linux':
distribution_files = DistributionFiles(module=self.module)
# linux_distribution_facts = LinuxDistribution(module).get_distribution_facts()
dist_file_facts = distribution_files.process_dist_files()
distribution_facts.update(dist_file_facts)
distro = distribution_facts['distribution']
# look for an os family alias for the 'distribution'; if there isn't one, use 'distribution'
distribution_facts['os_family'] = self.OS_FAMILY.get(distro, None) or distro
return distribution_facts
def get_distribution_AIX(self):
aix_facts = {}
rc, out, err = self.module.run_command("/usr/bin/oslevel")
data = out.split('.')
aix_facts['distribution_major_version'] = data[0]
if len(data) > 1:
aix_facts['distribution_version'] = '%s.%s' % (data[0], data[1])
aix_facts['distribution_release'] = data[1]
else:
aix_facts['distribution_version'] = data[0]
return aix_facts
def get_distribution_HPUX(self):
hpux_facts = {}
rc, out, err = self.module.run_command(r"/usr/sbin/swlist |egrep 'HPUX.*OE.*[AB].[0-9]+\.[0-9]+'", use_unsafe_shell=True)
data = re.search(r'HPUX.*OE.*([AB].[0-9]+\.[0-9]+)\.([0-9]+).*', out)
if data:
hpux_facts['distribution_version'] = data.groups()[0]
hpux_facts['distribution_release'] = data.groups()[1]
return hpux_facts
def get_distribution_Darwin(self):
darwin_facts = {}
darwin_facts['distribution'] = 'MacOSX'
rc, out, err = self.module.run_command("/usr/bin/sw_vers -productVersion")
data = out.split()[-1]
if data:
darwin_facts['distribution_major_version'] = data.split('.')[0]
darwin_facts['distribution_version'] = data
return darwin_facts
def get_distribution_FreeBSD(self):
freebsd_facts = {}
freebsd_facts['distribution_release'] = platform.release()
data = re.search(r'(\d+)\.(\d+)-(RELEASE|STABLE|CURRENT|RC|PRERELEASE).*', freebsd_facts['distribution_release'])
if 'trueos' in platform.version():
freebsd_facts['distribution'] = 'TrueOS'
if data:
freebsd_facts['distribution_major_version'] = data.group(1)
freebsd_facts['distribution_version'] = '%s.%s' % (data.group(1), data.group(2))
return freebsd_facts
def get_distribution_OpenBSD(self):
openbsd_facts = {}
openbsd_facts['distribution_version'] = platform.release()
rc, out, err = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.match(r'OpenBSD\s[0-9]+.[0-9]+-(\S+)\s.*', out)
if match:
openbsd_facts['distribution_release'] = match.groups()[0]
else:
openbsd_facts['distribution_release'] = 'release'
return openbsd_facts
def get_distribution_DragonFly(self):
dragonfly_facts = {
'distribution_release': platform.release()
}
rc, out, dummy = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.search(r'v(\d+)\.(\d+)\.(\d+)-(RELEASE|STABLE|CURRENT).*', out)
if match:
dragonfly_facts['distribution_major_version'] = match.group(1)
dragonfly_facts['distribution_version'] = '%s.%s.%s' % match.groups()[:3]
return dragonfly_facts
def get_distribution_NetBSD(self):
netbsd_facts = {}
platform_release = platform.release()
netbsd_facts['distribution_release'] = platform_release
rc, out, dummy = self.module.run_command("/sbin/sysctl -n kern.version")
match = re.match(r'NetBSD\s(\d+)\.(\d+)\s\((GENERIC)\).*', out)
if match:
netbsd_facts['distribution_major_version'] = match.group(1)
netbsd_facts['distribution_version'] = '%s.%s' % match.groups()[:2]
else:
netbsd_facts['distribution_major_version'] = platform_release.split('.')[0]
netbsd_facts['distribution_version'] = platform_release
return netbsd_facts
def get_distribution_SMGL(self):
smgl_facts = {}
smgl_facts['distribution'] = 'Source Mage GNU/Linux'
return smgl_facts
def get_distribution_SunOS(self):
sunos_facts = {}
data = get_file_content('/etc/release').splitlines()[0]
if 'Solaris' in data:
# for solaris 10 uname_r will contain 5.10, for solaris 11 it will have 5.11
uname_r = get_uname(self.module, flags=['-r'])
ora_prefix = ''
if 'Oracle Solaris' in data:
data = data.replace('Oracle ', '')
ora_prefix = 'Oracle '
sunos_facts['distribution'] = data.split()[0]
sunos_facts['distribution_version'] = data.split()[1]
sunos_facts['distribution_release'] = ora_prefix + data
sunos_facts['distribution_major_version'] = uname_r.split('.')[1].rstrip()
return sunos_facts
uname_v = get_uname(self.module, flags=['-v'])
distribution_version = None
if 'SmartOS' in data:
sunos_facts['distribution'] = 'SmartOS'
if _file_exists('/etc/product'):
product_data = dict([l.split(': ', 1) for l in get_file_content('/etc/product').splitlines() if ': ' in l])
if 'Image' in product_data:
distribution_version = product_data.get('Image').split()[-1]
elif 'OpenIndiana' in data:
sunos_facts['distribution'] = 'OpenIndiana'
elif 'OmniOS' in data:
sunos_facts['distribution'] = 'OmniOS'
distribution_version = data.split()[-1]
elif uname_v is not None and 'NexentaOS_' in uname_v:
sunos_facts['distribution'] = 'Nexenta'
distribution_version = data.split()[-1].lstrip('v')
if sunos_facts.get('distribution', '') in ('SmartOS', 'OpenIndiana', 'OmniOS', 'Nexenta'):
sunos_facts['distribution_release'] = data.strip()
if distribution_version is not None:
sunos_facts['distribution_version'] = distribution_version
elif uname_v is not None:
sunos_facts['distribution_version'] = uname_v.splitlines()[0].strip()
return sunos_facts
return sunos_facts
class DistributionFactCollector(BaseFactCollector):
name = 'distribution'
_fact_ids = set(['distribution_version',
'distribution_release',
'distribution_major_version',
'os_family'])
def collect(self, module=None, collected_facts=None):
collected_facts = collected_facts or {}
facts_dict = {}
if not module:
return facts_dict
distribution = Distribution(module=module)
distro_facts = distribution.get_distribution_facts()
return distro_facts
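# --- Illustrative usage sketch (editor's addition, not part of the original
# file). It assumes an AnsibleModule-like object exposing run_command() and
# get_bin_path(); the resulting keys follow the collector above:
#
#     collector = DistributionFactCollector()
#     facts = collector.collect(module=module)
#     # e.g. {'distribution': 'Ubuntu', 'distribution_version': '20.04',
#     #       'os_family': 'Debian', ...}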
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,027 |
CentOS Stream should be reported as a different distribution from CentOS Linux
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Ansible reports CentOS Linux and CentOS Stream as the same distribution, so there is no way to differentiate between them in the standard Ansible facts.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.10.4
config file = /home/aaron/git/astrolinux-bootstrap/ansible.cfg
configured module search path = ['/home/xxx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/xxx/virtualenv/ansible_py3/lib/python3.6/site-packages/ansible
executable location = /home/xxx/virtualenv/ansible_py3/bin/ansible
python version = 3.6.8 (default, Dec 3 2020, 18:11:24) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
```
##### CONFIGURATION
n/a
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
OS: CentOS Stream 8
##### STEPS TO REPRODUCE
n/a
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
CentOS Stream should be reported as a different distribution from CentOS Linux.
##### ACTUAL RESULTS
n/a
|
https://github.com/ansible/ansible/issues/73027
|
https://github.com/ansible/ansible/pull/73034
|
4164cb26f0ef79eb9744518f0ac0d351cd89bf87
|
7f0eb7ad799e531a8fbe5cc4f46046a4b1aeb093
| 2020-12-18T22:52:23Z |
python
| 2021-01-13T22:54:04Z |
test/units/module_utils/facts/system/distribution/fixtures/centos_8_1.json
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,027 |
CentOS Stream should be reported as a different distribution from CentOS Linux
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Ansible reports CentOS Linux and CentOS Stream as the same distribution, so there is no way to differentiate between them in the standard Ansible facts.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.10.4
config file = /home/aaron/git/astrolinux-bootstrap/ansible.cfg
configured module search path = ['/home/xxx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/xxx/virtualenv/ansible_py3/lib/python3.6/site-packages/ansible
executable location = /home/xxx/virtualenv/ansible_py3/bin/ansible
python version = 3.6.8 (default, Dec 3 2020, 18:11:24) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
```
##### CONFIGURATION
n/a
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
OS: CentOS Stream 8
##### STEPS TO REPRODUCE
n/a
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
CentOS Stream should be reported as a different distribution from CentOS Linux.
##### ACTUAL RESULTS
n/a
|
https://github.com/ansible/ansible/issues/73027
|
https://github.com/ansible/ansible/pull/73034
|
4164cb26f0ef79eb9744518f0ac0d351cd89bf87
|
7f0eb7ad799e531a8fbe5cc4f46046a4b1aeb093
| 2020-12-18T22:52:23Z |
python
| 2021-01-13T22:54:04Z |
test/units/module_utils/facts/system/distribution/fixtures/centos_stream_8.json
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,042 |
Not able to suppress warning from the plugin pause
|
##### SUMMARY
After the merge of [pause - do not hang if run in the background](https://github.com/ansible/ansible/pull/72065) into stable-2.10, we are getting a lot of (useless) warnings saying "[Not waiting for response to prompt as stdin is not interactive](https://github.com/ansible/ansible/blob/791d3fc25ae12e7faf068013fbc611a5249daa9c/lib/ansible/plugins/action/pause.py#L234)".
This is really annoying because we monitor all warnings in Ansible to make sure everything is working as expected.
Can we have an option to suppress this warning, just as we can for almost any other warning in Ansible, since we already know that "stdin is not interactive"?
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
The component is the plugin [pause](https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/action/pause.py)
##### ANSIBLE VERSION
```
ansible 2.10.4
config file = /data/ansible/ansible-patchmgmt/ansible.cfg
configured module search path = ['/data/ansible/ansible-patchmgmt/library']
ansible python module location = /data/ansible/venv/lib/python3.6/site-packages/ansible
executable location = /data/ansible/venv/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
```
##### CONFIGURATION
```paste below
ANSIBLE_PIPELINING(/data/ansible/ansible-patchmgmt/ansible.cfg) = True
DEFAULT_CALLBACK_PLUGIN_PATH(/data/ansible/ansible-patchmgmt/ansible.cfg) = ['/data/ansible/ansible-patchmgmt/plugins/callbacks']
DEFAULT_FILTER_PLUGIN_PATH(/data/ansible/ansible-patchmgmt/ansible.cfg) = ['/data/ansible/ansible-patchmgmt/filter_plugins']
DEFAULT_FORKS(/data/ansible/ansible-patchmgmt/ansible.cfg) = 50
DEFAULT_MANAGED_STR(/data/ansible/ansible-patchmgmt/ansible.cfg) = Managed by Ansible
DEFAULT_MODULE_PATH(/data/ansible/ansible-patchmgmt/ansible.cfg) = ['/data/ansible/ansible-patchmgmt/library']
DEFAULT_TIMEOUT(/data/ansible/ansible-patchmgmt/ansible.cfg) = 300
HOST_KEY_CHECKING(/data/ansible/ansible-patchmgmt/ansible.cfg) = False
INTERPRETER_PYTHON(/data/ansible/ansible-patchmgmt/ansible.cfg) = python3
TRANSFORM_INVALID_GROUP_CHARS(/data/ansible/ansible-patchmgmt/ansible.cfg) = ignore
```
##### OS / ENVIRONMENT
Ubuntu 18.04+
##### STEPS TO REPRODUCE
This will produce the problem
```bash
sleep 0 | ansible localhost -m pause -a "seconds=1"
```
##### EXPECTED RESULTS
I expected not to get the warning
```
[WARNING]: Not waiting for response to prompt as stdin is not interactive
```
or being able to suppress the warning with something like `warn=false`
```
sleep 0 | ansible localhost -m pause -a "seconds=1 warn=false"
```
or maybe a variable in ansible.cfg; it is not really that important how I suppress the warning, as long as it is possible.
##### ACTUAL RESULTS
```bash
sleep 0 | ansible localhost -m pause -a "seconds=1" -vvvv
```
```
ansible 2.10.4
config file = /data/ansible/ansible-patchmgmt/ansible.cfg
configured module search path = ['/data/ansible/ansible-patchmgmt/library']
ansible python module location = /data/ansible/venv/lib/python3.6/site-packages/ansible
executable location = /data/ansible/venv/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
Using /data/ansible/ansible-patchmgmt/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /data/ansible/venv/lib/python3.6/site-packages/ansible/plugins/callback/minimal.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
META: ran handlers
Pausing for 1 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[WARNING]: Not waiting for response to prompt as stdin is not interactive
localhost | SUCCESS => {
"changed": false,
"delta": 1,
"echo": true,
"rc": 0,
"start": "2020-12-21 16:02:39.725712",
"stderr": "",
"stdout": "Paused for 1.0 seconds",
"stop": "2020-12-21 16:02:40.725892",
"user_input": ""
}
META: ran handlers
META: ran handlers
```
|
https://github.com/ansible/ansible/issues/73042
|
https://github.com/ansible/ansible/pull/73182
|
7f0eb7ad799e531a8fbe5cc4f46046a4b1aeb093
|
0e6c334115976e1df5de7765131d0ccdf01624bf
| 2020-12-21T15:05:04Z |
python
| 2021-01-14T14:35:39Z |
changelogs/fragments/pause-do-not-warn-background-with-seconds.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,042 |
Not able to suppress warning from the plugin pause
|
##### SUMMARY
After the merge of [pause - do not hang if run in the background](https://github.com/ansible/ansible/pull/72065) into stable-2.10, we are getting a lot of (useless) warnings saying "[Not waiting for response to prompt as stdin is not interactive](https://github.com/ansible/ansible/blob/791d3fc25ae12e7faf068013fbc611a5249daa9c/lib/ansible/plugins/action/pause.py#L234)".
This is really annoying because we monitor all warnings in Ansible to make sure everything is working as expected.
Can we have an option to suppress this warning, just as we can for almost any other warning in Ansible, since we already know that "stdin is not interactive"?
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
The component is the plugin [pause](https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/action/pause.py)
##### ANSIBLE VERSION
```
ansible 2.10.4
config file = /data/ansible/ansible-patchmgmt/ansible.cfg
configured module search path = ['/data/ansible/ansible-patchmgmt/library']
ansible python module location = /data/ansible/venv/lib/python3.6/site-packages/ansible
executable location = /data/ansible/venv/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
```
##### CONFIGURATION
```paste below
ANSIBLE_PIPELINING(/data/ansible/ansible-patchmgmt/ansible.cfg) = True
DEFAULT_CALLBACK_PLUGIN_PATH(/data/ansible/ansible-patchmgmt/ansible.cfg) = ['/data/ansible/ansible-patchmgmt/plugins/callbacks']
DEFAULT_FILTER_PLUGIN_PATH(/data/ansible/ansible-patchmgmt/ansible.cfg) = ['/data/ansible/ansible-patchmgmt/filter_plugins']
DEFAULT_FORKS(/data/ansible/ansible-patchmgmt/ansible.cfg) = 50
DEFAULT_MANAGED_STR(/data/ansible/ansible-patchmgmt/ansible.cfg) = Managed by Ansible
DEFAULT_MODULE_PATH(/data/ansible/ansible-patchmgmt/ansible.cfg) = ['/data/ansible/ansible-patchmgmt/library']
DEFAULT_TIMEOUT(/data/ansible/ansible-patchmgmt/ansible.cfg) = 300
HOST_KEY_CHECKING(/data/ansible/ansible-patchmgmt/ansible.cfg) = False
INTERPRETER_PYTHON(/data/ansible/ansible-patchmgmt/ansible.cfg) = python3
TRANSFORM_INVALID_GROUP_CHARS(/data/ansible/ansible-patchmgmt/ansible.cfg) = ignore
```
##### OS / ENVIRONMENT
Ubuntu 18.04+
##### STEPS TO REPRODUCE
This will produce the problem
```bash
sleep 0 | ansible localhost -m pause -a "seconds=1"
```
##### EXPECTED RESULTS
I expected not to get the warning
```
[WARNING]: Not waiting for response to prompt as stdin is not interactive
```
or being able to suppress the warning with something like `warn=false`
```
sleep 0 | ansible localhost -m pause -a "seconds=1 warn=false"
```
or maybe a variable in ansible.cfg; it is not really that important how I suppress the warning, as long as it is possible.
##### ACTUAL RESULTS
```bash
sleep 0 | ansible localhost -m pause -a "seconds=1" -vvvv
```
```
ansible 2.10.4
config file = /data/ansible/ansible-patchmgmt/ansible.cfg
configured module search path = ['/data/ansible/ansible-patchmgmt/library']
ansible python module location = /data/ansible/venv/lib/python3.6/site-packages/ansible
executable location = /data/ansible/venv/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
Using /data/ansible/ansible-patchmgmt/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /data/ansible/venv/lib/python3.6/site-packages/ansible/plugins/callback/minimal.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
META: ran handlers
Pausing for 1 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[WARNING]: Not waiting for response to prompt as stdin is not interactive
localhost | SUCCESS => {
"changed": false,
"delta": 1,
"echo": true,
"rc": 0,
"start": "2020-12-21 16:02:39.725712",
"stderr": "",
"stdout": "Paused for 1.0 seconds",
"stop": "2020-12-21 16:02:40.725892",
"user_input": ""
}
META: ran handlers
META: ran handlers
```
|
https://github.com/ansible/ansible/issues/73042
|
https://github.com/ansible/ansible/pull/73182
|
7f0eb7ad799e531a8fbe5cc4f46046a4b1aeb093
|
0e6c334115976e1df5de7765131d0ccdf01624bf
| 2020-12-21T15:05:04Z |
python
| 2021-01-14T14:35:39Z |
lib/ansible/plugins/action/pause.py
|
# Copyright 2012, Tim Bielawa <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import datetime
import signal
import sys
import termios
import time
import tty
from os import (
getpgrp,
isatty,
tcgetpgrp,
)
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils.six import PY3
from ansible.plugins.action import ActionBase
from ansible.utils.display import Display
display = Display()
try:
import curses
# Nest the try except since curses.error is not available if curses did not import
try:
curses.setupterm()
HAS_CURSES = True
except (curses.error, TypeError):
HAS_CURSES = False
except ImportError:
HAS_CURSES = False
if HAS_CURSES:
MOVE_TO_BOL = curses.tigetstr('cr')
CLEAR_TO_EOL = curses.tigetstr('el')
else:
MOVE_TO_BOL = b'\r'
CLEAR_TO_EOL = b'\x1b[K'
class AnsibleTimeoutExceeded(Exception):
pass
def timeout_handler(signum, frame):
raise AnsibleTimeoutExceeded
def clear_line(stdout):
stdout.write(b'\x1b[%s' % MOVE_TO_BOL)
stdout.write(b'\x1b[%s' % CLEAR_TO_EOL)
def is_interactive(fd=None):
if fd is None:
return False
if isatty(fd):
# Compare the current process group to the process group associated
# with terminal of the given file descriptor to determine if the process
# is running in the background.
return getpgrp() == tcgetpgrp(fd)
else:
return False
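# --- Illustrative note (editor's addition, not part of the original plugin) ---
# is_interactive() returns False in two cases: when fd is not a tty at all
# (e.g. `sleep 0 | ansible ...`), and when fd is a tty but the process runs in
# the background (`ansible-playbook ... &`), where the process group no longer
# matches the terminal's foreground group. A quick check from a Python shell:
#
#     import sys
#     print(is_interactive(sys.stdin.fileno()))  # True only in a foreground tty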
class ActionModule(ActionBase):
''' pauses execution for a length of time, or until input is received '''
BYPASS_HOST_LOOP = True
_VALID_ARGS = frozenset(('echo', 'minutes', 'prompt', 'seconds'))
def run(self, tmp=None, task_vars=None):
''' run the pause action module '''
if task_vars is None:
task_vars = dict()
result = super(ActionModule, self).run(tmp, task_vars)
del tmp # tmp no longer has any effect
duration_unit = 'minutes'
prompt = None
seconds = None
echo = True
echo_prompt = ''
result.update(dict(
changed=False,
rc=0,
stderr='',
stdout='',
start=None,
stop=None,
delta=None,
echo=echo
))
# Should keystrokes be echoed to stdout?
if 'echo' in self._task.args:
try:
echo = boolean(self._task.args['echo'])
except TypeError as e:
result['failed'] = True
result['msg'] = to_native(e)
return result
# Add a note saying the output is hidden if echo is disabled
if not echo:
echo_prompt = ' (output is hidden)'
# Is 'prompt' a key in 'args'?
if 'prompt' in self._task.args:
prompt = "[%s]\n%s%s:" % (self._task.get_name().strip(), self._task.args['prompt'], echo_prompt)
else:
# If no custom prompt is specified, set a default prompt
prompt = "[%s]\n%s%s:" % (self._task.get_name().strip(), 'Press enter to continue, Ctrl+C to interrupt', echo_prompt)
# Are 'minutes' or 'seconds' keys that exist in 'args'?
if 'minutes' in self._task.args or 'seconds' in self._task.args:
try:
if 'minutes' in self._task.args:
# The time() command operates in seconds so we need to
# recalculate for minutes=X values.
seconds = int(self._task.args['minutes']) * 60
else:
seconds = int(self._task.args['seconds'])
duration_unit = 'seconds'
except ValueError as e:
result['failed'] = True
result['msg'] = u"non-integer value given for prompt duration:\n%s" % to_text(e)
return result
########################################################################
# Begin the hard work!
start = time.time()
result['start'] = to_text(datetime.datetime.now())
result['user_input'] = b''
stdin_fd = None
old_settings = None
try:
if seconds is not None:
if seconds < 1:
seconds = 1
# setup the alarm handler
signal.signal(signal.SIGALRM, timeout_handler)
signal.alarm(seconds)
# show the timer and control prompts
display.display("Pausing for %d seconds%s" % (seconds, echo_prompt))
display.display("(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)\r"),
# show the prompt specified in the task
if 'prompt' in self._task.args:
display.display(prompt)
else:
display.display(prompt)
# save the attributes on the existing (duped) stdin so
# that we can restore them later after we set raw mode
stdin_fd = None
stdout_fd = None
try:
if PY3:
stdin = self._connection._new_stdin.buffer
stdout = sys.stdout.buffer
else:
stdin = self._connection._new_stdin
stdout = sys.stdout
stdin_fd = stdin.fileno()
stdout_fd = stdout.fileno()
except (ValueError, AttributeError):
# ValueError: someone is using a closed file descriptor as stdin
# AttributeError: someone is using a null file descriptor as stdin on windoze
stdin = None
interactive = is_interactive(stdin_fd)
if interactive:
# grab actual Ctrl+C sequence
try:
intr = termios.tcgetattr(stdin_fd)[6][termios.VINTR]
except Exception:
# unsupported/not present, use default
intr = b'\x03' # value for Ctrl+C
# get backspace sequences
try:
backspace = termios.tcgetattr(stdin_fd)[6][termios.VERASE]
except Exception:
backspace = [b'\x7f', b'\x08']
old_settings = termios.tcgetattr(stdin_fd)
tty.setraw(stdin_fd)
# Only set stdout to raw mode if it is a TTY. This is needed when redirecting
# stdout to a file since a file cannot be set to raw mode.
if isatty(stdout_fd):
tty.setraw(stdout_fd)
# Only echo input if no timeout is specified
if not seconds and echo:
new_settings = termios.tcgetattr(stdin_fd)
new_settings[3] = new_settings[3] | termios.ECHO
termios.tcsetattr(stdin_fd, termios.TCSANOW, new_settings)
# flush the buffer to make sure no previous key presses
# are read in below
termios.tcflush(stdin, termios.TCIFLUSH)
while True:
if not interactive:
display.warning("Not waiting for response to prompt as stdin is not interactive")
if seconds is not None:
# Give the signal handler enough time to timeout
time.sleep(seconds + 1)
break
try:
key_pressed = stdin.read(1)
if key_pressed == intr: # value for Ctrl+C
clear_line(stdout)
raise KeyboardInterrupt
# read key presses and act accordingly
if key_pressed in (b'\r', b'\n'):
clear_line(stdout)
break
elif key_pressed in backspace:
# delete a character if backspace is pressed
result['user_input'] = result['user_input'][:-1]
clear_line(stdout)
if echo:
stdout.write(result['user_input'])
stdout.flush()
else:
result['user_input'] += key_pressed
except KeyboardInterrupt:
signal.alarm(0)
display.display("Press 'C' to continue the play or 'A' to abort \r"),
if self._c_or_a(stdin):
clear_line(stdout)
break
clear_line(stdout)
raise AnsibleError('user requested abort!')
except AnsibleTimeoutExceeded:
# this is the exception we expect when the alarm signal
# fires, so we simply ignore it to move into the cleanup
pass
finally:
# cleanup and save some information
# restore the old settings for the duped stdin stdin_fd
if not(None in (stdin_fd, old_settings)) and isatty(stdin_fd):
termios.tcsetattr(stdin_fd, termios.TCSADRAIN, old_settings)
duration = time.time() - start
result['stop'] = to_text(datetime.datetime.now())
result['delta'] = int(duration)
if duration_unit == 'minutes':
duration = round(duration / 60.0, 2)
else:
duration = round(duration, 2)
result['stdout'] = "Paused for %s %s" % (duration, duration_unit)
result['user_input'] = to_text(result['user_input'], errors='surrogate_or_strict')
return result
def _c_or_a(self, stdin):
while True:
key_pressed = stdin.read(1)
if key_pressed.lower() == b'a':
return False
elif key_pressed.lower() == b'c':
return True
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,042 |
Not able to suppress warning from the plugin pause
|
##### SUMMARY
After the merge of [pause - do not hang if run in the background](https://github.com/ansible/ansible/pull/72065) into stable-2.10, we are getting a lot of (useless) warnings saying "[Not waiting for response to prompt as stdin is not interactive](https://github.com/ansible/ansible/blob/791d3fc25ae12e7faf068013fbc611a5249daa9c/lib/ansible/plugins/action/pause.py#L234)".
This is really annoying because we monitor all warnings in Ansible to make sure everything is working as expected.
Can we have an option to suppress this warning, just as we can for almost any other warning in Ansible, since we already know that "stdin is not interactive"?
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
The component is the plugin [pause](https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/action/pause.py)
##### ANSIBLE VERSION
```
ansible 2.10.4
config file = /data/ansible/ansible-patchmgmt/ansible.cfg
configured module search path = ['/data/ansible/ansible-patchmgmt/library']
ansible python module location = /data/ansible/venv/lib/python3.6/site-packages/ansible
executable location = /data/ansible/venv/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
```
##### CONFIGURATION
```paste below
ANSIBLE_PIPELINING(/data/ansible/ansible-patchmgmt/ansible.cfg) = True
DEFAULT_CALLBACK_PLUGIN_PATH(/data/ansible/ansible-patchmgmt/ansible.cfg) = ['/data/ansible/ansible-patchmgmt/plugins/callbacks']
DEFAULT_FILTER_PLUGIN_PATH(/data/ansible/ansible-patchmgmt/ansible.cfg) = ['/data/ansible/ansible-patchmgmt/filter_plugins']
DEFAULT_FORKS(/data/ansible/ansible-patchmgmt/ansible.cfg) = 50
DEFAULT_MANAGED_STR(/data/ansible/ansible-patchmgmt/ansible.cfg) = Managed by Ansible
DEFAULT_MODULE_PATH(/data/ansible/ansible-patchmgmt/ansible.cfg) = ['/data/ansible/ansible-patchmgmt/library']
DEFAULT_TIMEOUT(/data/ansible/ansible-patchmgmt/ansible.cfg) = 300
HOST_KEY_CHECKING(/data/ansible/ansible-patchmgmt/ansible.cfg) = False
INTERPRETER_PYTHON(/data/ansible/ansible-patchmgmt/ansible.cfg) = python3
TRANSFORM_INVALID_GROUP_CHARS(/data/ansible/ansible-patchmgmt/ansible.cfg) = ignore
```
##### OS / ENVIRONMENT
Ubuntu 18.04+
##### STEPS TO REPRODUCE
This will produce the problem
```bash
sleep 0 | ansible localhost -m pause -a "seconds=1"
```
##### EXPECTED RESULTS
I expected not to get the warning
```
[WARNING]: Not waiting for response to prompt as stdin is not interactive
```
or being able to suppress the warning with something like `warn=false`
```
sleep 0 | ansible localhost -m pause -a "seconds=1 warn=false"
```
or maybe a variable in ansible.cfg; it is not really that important how I suppress the warning, as long as it is possible.
##### ACTUAL RESULTS
```bash
sleep 0 | ansible localhost -m pause -a "seconds=1" -vvvv
```
```
ansible 2.10.4
config file = /data/ansible/ansible-patchmgmt/ansible.cfg
configured module search path = ['/data/ansible/ansible-patchmgmt/library']
ansible python module location = /data/ansible/venv/lib/python3.6/site-packages/ansible
executable location = /data/ansible/venv/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
Using /data/ansible/ansible-patchmgmt/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin minimal of type stdout, v2.0 from /data/ansible/venv/lib/python3.6/site-packages/ansible/plugins/callback/minimal.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
META: ran handlers
Pausing for 1 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
[WARNING]: Not waiting for response to prompt as stdin is not interactive
localhost | SUCCESS => {
"changed": false,
"delta": 1,
"echo": true,
"rc": 0,
"start": "2020-12-21 16:02:39.725712",
"stderr": "",
"stdout": "Paused for 1.0 seconds",
"stop": "2020-12-21 16:02:40.725892",
"user_input": ""
}
META: ran handlers
META: ran handlers
```
|
https://github.com/ansible/ansible/issues/73042
|
https://github.com/ansible/ansible/pull/73182
|
7f0eb7ad799e531a8fbe5cc4f46046a4b1aeb093
|
0e6c334115976e1df5de7765131d0ccdf01624bf
| 2020-12-21T15:05:04Z |
python
| 2021-01-14T14:35:39Z |
test/integration/targets/pause/runme.sh
|
#!/usr/bin/env bash
set -eux
ANSIBLE_ROLES_PATH=../ ansible-playbook setup.yml
# Test pause module when no tty and non-interactive. This is to prevent playbooks
# from hanging in cron and Tower jobs.
/usr/bin/env bash << EOF
ansible-playbook test-pause-no-tty.yml 2>&1 | \
grep '\[WARNING\]: Not waiting for response to prompt as stdin is not interactive' && {
echo 'Successfully skipped pause in no TTY mode' >&2
exit 0
} || {
echo 'Failed to skip pause module' >&2
exit 1
}
EOF
# Test redirecting stdout
# Issue #41717
ansible-playbook pause-3.yml > /dev/null \
&& echo "Successfully redirected stdout" \
|| echo "Failure when attempting to redirect stdout"
# Test pause with seconds and minutes specified
ansible-playbook test-pause.yml "$@"
# Interactively test pause
python test-pause.py "$@"
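# (Editor's sketch, not part of the original script.) A plausible companion
# check for the behaviour requested in issue 73042 -- pause with 'seconds'
# set should stay quiet when stdin is non-interactive -- might look like:
#
#   sleep 0 | ansible localhost -m pause -a 'seconds=1' 2>&1 | \
#       grep 'Not waiting for response' && echo 'warning still emitted' >&2 || true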
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,167 |
Virtualization detection fails for Bhyve guests
|
##### SUMMARY
When I collect facts from a FreeBSD bhyve guest with the command `ansible xxx -m setup | grep "ansible_virtualization_type"`,
the variables "ansible_virtualization_type" and "ansible_virtualization_role" are "NA".
Result of dmidecode -s system-product-name
```
dmidecode -s system-product-name
BHYVE
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
module_utils/facts/virtual/linux.py
##### ANSIBLE VERSION
```
ansible 2.10.4
config file = /srv/ansible/ansible.cfg
configured module search path = ['/srv/ansible/library']
ansible python module location = /srv/ansible-venv/lib/python3.7/site-packages/ansible
executable location = /srv/ansible-venv/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
```
##### CONFIGURATION
```
ANSIBLE_FORCE_COLOR(/srv/ansible/ansible.cfg) = True
DEFAULT_HOST_LIST(/srv/ansible/ansible.cfg) = ['/srv/ansible/site.host']
DEFAULT_JINJA2_EXTENSIONS(/srv/ansible/ansible.cfg) = jinja2.ext.do
DEFAULT_MANAGED_STR(/srv/ansible/ansible.cfg) = Gestion par Ansible: {file} modifie le %Y-%m-%d %H:%M:%S par {uid} depuis {host}
DEFAULT_MODULE_PATH(/srv/ansible/ansible.cfg) = ['/srv/ansible/library']
DEFAULT_ROLES_PATH(/srv/ansible/ansible.cfg) = ['/srv/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/srv/ansible/ansible.cfg) = yaml
DISPLAY_SKIPPED_HOSTS(/srv/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/srv/ansible/ansible.cfg) = auto_silent
RETRY_FILES_ENABLED(/srv/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Target OS: Debian 10 running as a bhyve guest hosted on TrueNAS.
##### STEPS TO REPRODUCE
Create a VM on a bhyve host, then run the command `ansible -m setup`.
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
"ansible_virtualization_role": "guest",
"ansible_virtualization_type": "bhyve",
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
"ansible_virtualization_role": "NA",
"ansible_virtualization_type": "NA",
|
https://github.com/ansible/ansible/issues/73167
|
https://github.com/ansible/ansible/pull/73204
|
0e6c334115976e1df5de7765131d0ccdf01624bf
|
df451636e74fe6f0021a5555392f84c2bf194432
| 2021-01-09T11:13:11Z |
python
| 2021-01-14T15:53:03Z |
changelogs/fragments/73167-bhyve-facts.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,167 |
Virtualization detection fails for Bhyve guests
|
##### SUMMARY
When I collect facts from a FreeBSD bhyve guest with the command `ansible xxx -m setup | grep "ansible_virtualization_type"`,
the variables "ansible_virtualization_type" and "ansible_virtualization_role" are "NA".
Result of dmidecode -s system-product-name
```
dmidecode -s system-product-name
BHYVE
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
module_utils/facts/virtual/linux.py
##### ANSIBLE VERSION
```
ansible 2.10.4
config file = /srv/ansible/ansible.cfg
configured module search path = ['/srv/ansible/library']
ansible python module location = /srv/ansible-venv/lib/python3.7/site-packages/ansible
executable location = /srv/ansible-venv/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
```
##### CONFIGURATION
```
ANSIBLE_FORCE_COLOR(/srv/ansible/ansible.cfg) = True
DEFAULT_HOST_LIST(/srv/ansible/ansible.cfg) = ['/srv/ansible/site.host']
DEFAULT_JINJA2_EXTENSIONS(/srv/ansible/ansible.cfg) = jinja2.ext.do
DEFAULT_MANAGED_STR(/srv/ansible/ansible.cfg) = Gestion par Ansible: {file} modifie le %Y-%m-%d %H:%M:%S par {uid} depuis {host}
DEFAULT_MODULE_PATH(/srv/ansible/ansible.cfg) = ['/srv/ansible/library']
DEFAULT_ROLES_PATH(/srv/ansible/ansible.cfg) = ['/srv/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/srv/ansible/ansible.cfg) = yaml
DISPLAY_SKIPPED_HOSTS(/srv/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/srv/ansible/ansible.cfg) = auto_silent
RETRY_FILES_ENABLED(/srv/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Target OS: Debian 10 running as a bhyve guest hosted on TrueNAS.
##### STEPS TO REPRODUCE
Create a VM on a bhyve host, then run the command `ansible -m setup`.
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
"ansible_virtualization_role": "guest",
"ansible_virtualization_type": "bhyve",
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
"ansible_virtualization_role": "NA",
"ansible_virtualization_type": "NA",
|
https://github.com/ansible/ansible/issues/73167
|
https://github.com/ansible/ansible/pull/73204
|
0e6c334115976e1df5de7765131d0ccdf01624bf
|
df451636e74fe6f0021a5555392f84c2bf194432
| 2021-01-09T11:13:11Z |
python
| 2021-01-14T15:53:03Z |
lib/ansible/module_utils/facts/virtual/linux.py
|
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import glob
import os
import re
from ansible.module_utils.facts.virtual.base import Virtual, VirtualCollector
from ansible.module_utils.facts.utils import get_file_content, get_file_lines
class LinuxVirtual(Virtual):
"""
This is a Linux-specific subclass of Virtual. It defines
- virtualization_type
- virtualization_role
"""
platform = 'Linux'
# For more information, check: http://people.redhat.com/~rjones/virt-what/
def get_virtual_facts(self):
virtual_facts = {}
# We want to maintain compatibility with the old "virtualization_type"
# and "virtualization_role" entries, so we need to track if we found
# them. We won't return them until the end, but if we found them early,
# we should avoid updating them again.
found_virt = False
# But as we go along, we also want to track virt tech the new way.
host_tech = set()
guest_tech = set()
# lxc/docker
if os.path.exists('/proc/1/cgroup'):
for line in get_file_lines('/proc/1/cgroup'):
if re.search(r'/docker(/|-[0-9a-f]+\.scope)', line):
guest_tech.add('docker')
if not found_virt:
virtual_facts['virtualization_type'] = 'docker'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if re.search('/lxc/', line) or re.search('/machine.slice/machine-lxc', line):
guest_tech.add('lxc')
if not found_virt:
virtual_facts['virtualization_type'] = 'lxc'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if re.search('/system.slice/containerd.service', line):
guest_tech.add('containerd')
if not found_virt:
virtual_facts['virtualization_type'] = 'containerd'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
# lxc does not always appear in cgroups anymore but sets 'container=lxc' environment var, requires root privs
if os.path.exists('/proc/1/environ'):
for line in get_file_lines('/proc/1/environ', line_sep='\x00'):
if re.search('container=lxc', line):
guest_tech.add('lxc')
if not found_virt:
virtual_facts['virtualization_type'] = 'lxc'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if re.search('container=podman', line):
guest_tech.add('podman')
if not found_virt:
virtual_facts['virtualization_type'] = 'podman'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if re.search('^container=.', line):
guest_tech.add('container')
if not found_virt:
virtual_facts['virtualization_type'] = 'container'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if os.path.exists('/proc/vz') and not os.path.exists('/proc/lve'):
virtual_facts['virtualization_type'] = 'openvz'
if os.path.exists('/proc/bc'):
host_tech.add('openvz')
if not found_virt:
virtual_facts['virtualization_role'] = 'host'
else:
guest_tech.add('openvz')
if not found_virt:
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
systemd_container = get_file_content('/run/systemd/container')
if systemd_container:
guest_tech.add(systemd_container)
if not found_virt:
virtual_facts['virtualization_type'] = systemd_container
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
# ensure 'container' guest_tech is appropriately set
if guest_tech.intersection(set(['docker', 'lxc', 'podman', 'openvz', 'containerd'])) or systemd_container:
guest_tech.add('container')
if os.path.exists("/proc/xen"):
is_xen_host = False
try:
for line in get_file_lines('/proc/xen/capabilities'):
if "control_d" in line:
is_xen_host = True
except IOError:
pass
if is_xen_host:
host_tech.add('xen')
if not found_virt:
virtual_facts['virtualization_type'] = 'xen'
virtual_facts['virtualization_role'] = 'host'
else:
if not found_virt:
virtual_facts['virtualization_type'] = 'xen'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
# assume guest for this block
if not found_virt:
virtual_facts['virtualization_role'] = 'guest'
product_name = get_file_content('/sys/devices/virtual/dmi/id/product_name')
if product_name in ('KVM', 'KVM Server', 'Bochs', 'AHV'):
guest_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
found_virt = True
if product_name == 'RHEV Hypervisor':
guest_tech.add('RHEV')
if not found_virt:
virtual_facts['virtualization_type'] = 'RHEV'
found_virt = True
if product_name in ('VMware Virtual Platform', 'VMware7,1'):
guest_tech.add('VMware')
if not found_virt:
virtual_facts['virtualization_type'] = 'VMware'
found_virt = True
if product_name in ('OpenStack Compute', 'OpenStack Nova'):
guest_tech.add('openstack')
if not found_virt:
virtual_facts['virtualization_type'] = 'openstack'
found_virt = True
bios_vendor = get_file_content('/sys/devices/virtual/dmi/id/bios_vendor')
if bios_vendor == 'Xen':
guest_tech.add('xen')
if not found_virt:
virtual_facts['virtualization_type'] = 'xen'
found_virt = True
if bios_vendor == 'innotek GmbH':
guest_tech.add('virtualbox')
if not found_virt:
virtual_facts['virtualization_type'] = 'virtualbox'
found_virt = True
if bios_vendor in ('Amazon EC2', 'DigitalOcean', 'Hetzner'):
guest_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
found_virt = True
sys_vendor = get_file_content('/sys/devices/virtual/dmi/id/sys_vendor')
KVM_SYS_VENDORS = ('QEMU', 'oVirt', 'Amazon EC2', 'DigitalOcean', 'Google', 'Scaleway', 'Nutanix')
if sys_vendor in KVM_SYS_VENDORS:
guest_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
found_virt = True
if sys_vendor == 'KubeVirt':
guest_tech.add('KubeVirt')
if not found_virt:
virtual_facts['virtualization_type'] = 'KubeVirt'
found_virt = True
# FIXME: this also matches Hyper-V
if sys_vendor == 'Microsoft Corporation':
guest_tech.add('VirtualPC')
if not found_virt:
virtual_facts['virtualization_type'] = 'VirtualPC'
found_virt = True
if sys_vendor == 'Parallels Software International Inc.':
guest_tech.add('parallels')
if not found_virt:
virtual_facts['virtualization_type'] = 'parallels'
found_virt = True
if sys_vendor == 'OpenStack Foundation':
guest_tech.add('openstack')
if not found_virt:
virtual_facts['virtualization_type'] = 'openstack'
found_virt = True
# unassume guest
if not found_virt:
del virtual_facts['virtualization_role']
if os.path.exists('/proc/self/status'):
for line in get_file_lines('/proc/self/status'):
if re.match(r'^VxID:\s+\d+', line):
if not found_virt:
virtual_facts['virtualization_type'] = 'linux_vserver'
if re.match(r'^VxID:\s+0', line):
host_tech.add('linux_vserver')
if not found_virt:
virtual_facts['virtualization_role'] = 'host'
else:
guest_tech.add('linux_vserver')
if not found_virt:
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if os.path.exists('/proc/cpuinfo'):
for line in get_file_lines('/proc/cpuinfo'):
if re.match('^model name.*QEMU Virtual CPU', line):
guest_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
elif re.match('^vendor_id.*User Mode Linux', line):
guest_tech.add('uml')
if not found_virt:
virtual_facts['virtualization_type'] = 'uml'
elif re.match('^model name.*UML', line):
guest_tech.add('uml')
if not found_virt:
virtual_facts['virtualization_type'] = 'uml'
elif re.match('^machine.*CHRP IBM pSeries .emulated by qemu.', line):
guest_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
elif re.match('^vendor_id.*PowerVM Lx86', line):
guest_tech.add('powervm_lx86')
if not found_virt:
virtual_facts['virtualization_type'] = 'powervm_lx86'
elif re.match('^vendor_id.*IBM/S390', line):
guest_tech.add('PR/SM')
if not found_virt:
virtual_facts['virtualization_type'] = 'PR/SM'
lscpu = self.module.get_bin_path('lscpu')
if lscpu:
rc, out, err = self.module.run_command(["lscpu"])
if rc == 0:
for line in out.splitlines():
data = line.split(":", 1)
key = data[0].strip()
if key == 'Hypervisor':
tech = data[1].strip()
guest_tech.add(tech)
if not found_virt:
virtual_facts['virtualization_type'] = tech
else:
guest_tech.add('ibm_systemz')
if not found_virt:
virtual_facts['virtualization_type'] = 'ibm_systemz'
else:
continue
if virtual_facts['virtualization_type'] == 'PR/SM':
if not found_virt:
virtual_facts['virtualization_role'] = 'LPAR'
else:
if not found_virt:
virtual_facts['virtualization_role'] = 'guest'
if not found_virt:
found_virt = True
# Beware that we can have both kvm and virtualbox running on a single system
if os.path.exists("/proc/modules") and os.access('/proc/modules', os.R_OK):
modules = []
for line in get_file_lines("/proc/modules"):
data = line.split(" ", 1)
modules.append(data[0])
if 'kvm' in modules:
host_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
virtual_facts['virtualization_role'] = 'host'
if os.path.isdir('/rhev/'):
# Check whether this is a RHEV hypervisor (is vdsm running ?)
for f in glob.glob('/proc/[0-9]*/comm'):
try:
with open(f) as virt_fh:
comm_content = virt_fh.read().rstrip()
if comm_content in ('vdsm', 'vdsmd'):
# We add both kvm and RHEV to host_tech in this case.
# It's accurate. RHEV uses KVM.
host_tech.add('RHEV')
if not found_virt:
virtual_facts['virtualization_type'] = 'RHEV'
break
except Exception:
pass
found_virt = True
if 'vboxdrv' in modules:
host_tech.add('virtualbox')
if not found_virt:
virtual_facts['virtualization_type'] = 'virtualbox'
virtual_facts['virtualization_role'] = 'host'
found_virt = True
if 'virtio' in modules:
host_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
# In older Linux kernel versions the /sys filesystem is not available,
# so dmidecode is the safest option to parse virtualization-related values
dmi_bin = self.module.get_bin_path('dmidecode')
# We still want to continue even if dmidecode is not available
if dmi_bin is not None:
(rc, out, err) = self.module.run_command('%s -s system-product-name' % dmi_bin)
if rc == 0:
# Strip out commented lines (specific dmidecode output)
vendor_name = ''.join([line.strip() for line in out.splitlines() if not line.startswith('#')])
if vendor_name.startswith('VMware'):
guest_tech.add('VMware')
if not found_virt:
virtual_facts['virtualization_type'] = 'VMware'
virtual_facts['virtualization_role'] = 'guest'
found_virt = True
if os.path.exists('/dev/kvm'):
host_tech.add('kvm')
if not found_virt:
virtual_facts['virtualization_type'] = 'kvm'
virtual_facts['virtualization_role'] = 'host'
found_virt = True
# If none of the above matches, return 'NA' for virtualization_type
# and virtualization_role. This allows for proper grouping.
if not found_virt:
virtual_facts['virtualization_type'] = 'NA'
virtual_facts['virtualization_role'] = 'NA'
found_virt = True
virtual_facts['virtualization_tech_guest'] = guest_tech
virtual_facts['virtualization_tech_host'] = host_tech
return virtual_facts
class LinuxVirtualCollector(VirtualCollector):
_fact_class = LinuxVirtual
_platform = 'Linux'
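# --- Illustrative sketch (editor's addition, not part of the original file) ---
# For the bhyve report above: the issue shows `dmidecode -s system-product-name`
# printing 'BHYVE', and the product_name variable in get_virtual_facts() reads
# the same DMI field from /sys, so a guest branch along these lines would fit
# the existing pattern (whether the merged fix does exactly this is not shown
# here):
#
#     if product_name == 'BHYVE':
#         guest_tech.add('bhyve')
#         if not found_virt:
#             virtual_facts['virtualization_type'] = 'bhyve'
#             found_virt = True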
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,167 |
Virtualization detection fails for Bhyve guests
|
##### SUMMARY
When I collect facts from a FreeBSD bhyve guest with the command `ansible xxx -m setup | grep "ansible_virtualization_type"`,
the variables "ansible_virtualization_type" and "ansible_virtualization_role" are "NA".
Result of dmidecode -s system-product-name
```
dmidecode -s system-product-name
BHYVE
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
module_utils/facts/virtual/linux.py
##### ANSIBLE VERSION
```
ansible 2.10.4
config file = /srv/ansible/ansible.cfg
configured module search path = ['/srv/ansible/library']
ansible python module location = /srv/ansible-venv/lib/python3.7/site-packages/ansible
executable location = /srv/ansible-venv/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
```
##### CONFIGURATION
```
ANSIBLE_FORCE_COLOR(/srv/ansible/ansible.cfg) = True
DEFAULT_HOST_LIST(/srv/ansible/ansible.cfg) = ['/srv/ansible/site.host']
DEFAULT_JINJA2_EXTENSIONS(/srv/ansible/ansible.cfg) = jinja2.ext.do
DEFAULT_MANAGED_STR(/srv/ansible/ansible.cfg) = Gestion par Ansible: {file} modifie le %Y-%m-%d %H:%M:%S par {uid} depuis {host}
DEFAULT_MODULE_PATH(/srv/ansible/ansible.cfg) = ['/srv/ansible/library']
DEFAULT_ROLES_PATH(/srv/ansible/ansible.cfg) = ['/srv/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/srv/ansible/ansible.cfg) = yaml
DISPLAY_SKIPPED_HOSTS(/srv/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/srv/ansible/ansible.cfg) = auto_silent
RETRY_FILES_ENABLED(/srv/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Target OS: Debian 10 running as a bhyve guest hosted on TrueNAS.
##### STEPS TO REPRODUCE
Create a VM on a bhyve host, then run the command `ansible -m setup`.
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
"ansible_virtualization_role": "guest",
"ansible_virtualization_type": "bhyve",
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
"ansible_virtualization_role": "NA",
"ansible_virtualization_type": "NA",
|
https://github.com/ansible/ansible/issues/73167
|
https://github.com/ansible/ansible/pull/73204
|
0e6c334115976e1df5de7765131d0ccdf01624bf
|
df451636e74fe6f0021a5555392f84c2bf194432
| 2021-01-09T11:13:11Z |
python
| 2021-01-14T15:53:03Z |
test/units/module_utils/facts/virtual/__init__.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,167 |
Virtualization detection fails for Bhyve guests
|
##### SUMMARY
When I collect facts from a FreeBSD bhyve guest with the command `ansible xxx -m setup | grep "ansible_virtualization_type"`,
the variables "ansible_virtualization_type" and "ansible_virtualization_role" are "NA".
Result of dmidecode -s system-product-name
```
dmidecode -s system-product-name
BHYVE
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
module_utils/facts/virtual/linux.py
##### ANSIBLE VERSION
```
ansible 2.10.4
config file = /srv/ansible/ansible.cfg
configured module search path = ['/srv/ansible/library']
ansible python module location = /srv/ansible-venv/lib/python3.7/site-packages/ansible
executable location = /srv/ansible-venv/bin/ansible
python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
```
##### CONFIGURATION
```
ANSIBLE_FORCE_COLOR(/srv/ansible/ansible.cfg) = True
DEFAULT_HOST_LIST(/srv/ansible/ansible.cfg) = ['/srv/ansible/site.host']
DEFAULT_JINJA2_EXTENSIONS(/srv/ansible/ansible.cfg) = jinja2.ext.do
DEFAULT_MANAGED_STR(/srv/ansible/ansible.cfg) = Gestion par Ansible: {file} modifie le %Y-%m-%d %H:%M:%S par {uid} depuis {host}
DEFAULT_MODULE_PATH(/srv/ansible/ansible.cfg) = ['/srv/ansible/library']
DEFAULT_ROLES_PATH(/srv/ansible/ansible.cfg) = ['/srv/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/srv/ansible/ansible.cfg) = yaml
DISPLAY_SKIPPED_HOSTS(/srv/ansible/ansible.cfg) = False
INTERPRETER_PYTHON(/srv/ansible/ansible.cfg) = auto_silent
RETRY_FILES_ENABLED(/srv/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Target OS: Debian 10 on bhyve hardware, hosted on TrueNAS.
##### STEPS TO REPRODUCE
Create a VM on a bhyve host, then run `ansible -m setup`.
##### EXPECTED RESULTS
"ansible_virtualization_role": "guest",
"ansible_virtualization_type": "bhyve",
##### ACTUAL RESULTS
"ansible_virtualization_role": "NA",
"ansible_virtualization_type": "NA",
|
https://github.com/ansible/ansible/issues/73167
|
https://github.com/ansible/ansible/pull/73204
|
0e6c334115976e1df5de7765131d0ccdf01624bf
|
df451636e74fe6f0021a5555392f84c2bf194432
| 2021-01-09T11:13:11Z |
python
| 2021-01-14T15:53:03Z |
test/units/module_utils/facts/virtual/test_linux.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,618 |
Encrypt ansible-vault text while typing
|
##### SUMMARY
Customer request: when typing the string to encrypt, it shows up as plain text. Since the string is being encrypted, it is safe to assume it is sensitive (e.g. a password) and should therefore be hidden; at the very least this could be a command-line option. If it is hidden, I'd prefer to be forced to enter it twice to avoid a mismatch. To mitigate this I currently use `stty -echo`, but that shouldn't be necessary.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ansible-vault
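A minimal sketch of the requested behaviour using the standard library's `getpass`, with a confirmation pass to catch mismatches. This illustrates the idea only; the actual change in the linked PR may differ:
```python
import getpass

def prompt_encrypt_string():
    """Read the string to encrypt without echoing, confirming with a
    second entry so a typo cannot silently change the stored secret."""
    first = getpass.getpass('String to encrypt: ')
    second = getpass.getpass('Confirm string to encrypt: ')
    if first != second:
        raise ValueError('The two entries do not match, not encrypting')
    if first == '':
        raise ValueError('The plaintext provided was empty, not encrypting')
    return first
```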
|
https://github.com/ansible/ansible/issues/71618
|
https://github.com/ansible/ansible/pull/73263
|
bc60d8ccda7a5a5bf0776c83f76c52663378b59c
|
823c72bcb59a5628c0ce21f2145f37f61bae6db9
| 2020-09-03T13:32:44Z |
python
| 2021-01-20T20:50:24Z |
changelogs/fragments/73263-shadow-encrypt-string.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,618 |
Encrypt ansible-vault text while typing
|
##### SUMMARY
Customer request: when typing the string to encrypt, it shows up as plain text. Since the string is being encrypted, it is safe to assume it is sensitive (e.g. a password) and should therefore be hidden; at the very least this could be a command-line option. If it is hidden, I'd prefer to be forced to enter it twice to avoid a mismatch. To mitigate this I currently use `stty -echo`, but that shouldn't be necessary.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ansible-vault
|
https://github.com/ansible/ansible/issues/71618
|
https://github.com/ansible/ansible/pull/73263
|
bc60d8ccda7a5a5bf0776c83f76c52663378b59c
|
823c72bcb59a5628c0ce21f2145f37f61bae6db9
| 2020-09-03T13:32:44Z |
python
| 2021-01-20T20:50:24Z |
lib/ansible/cli/vault.py
|
# (c) 2014, James Tanner <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import sys
from ansible import constants as C
from ansible import context
from ansible.cli import CLI
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleOptionsError
from ansible.module_utils._text import to_text, to_bytes
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.vault import VaultEditor, VaultLib, match_encrypt_secret
from ansible.utils.display import Display
display = Display()
class VaultCLI(CLI):
''' can encrypt any structured data file used by Ansible.
This can include *group_vars/* or *host_vars/* inventory variables,
variables loaded by *include_vars* or *vars_files*, or variable files
passed on the ansible-playbook command line with *-e @file.yml* or *-e @file.json*.
Role variables and defaults are also included!
Because Ansible tasks, handlers, and other objects are data, these can also be encrypted with vault.
If you'd like to not expose what variables you are using, you can keep an individual task file entirely encrypted.
'''
FROM_STDIN = "stdin"
FROM_ARGS = "the command line args"
FROM_PROMPT = "the interactive prompt"
def __init__(self, args):
self.b_vault_pass = None
self.b_new_vault_pass = None
self.encrypt_string_read_stdin = False
self.encrypt_secret = None
self.encrypt_vault_id = None
self.new_encrypt_secret = None
self.new_encrypt_vault_id = None
super(VaultCLI, self).__init__(args)
def init_parser(self):
super(VaultCLI, self).init_parser(
desc="encryption/decryption utility for Ansible data files",
epilog="\nSee '%s <command> --help' for more information on a specific command.\n\n" % os.path.basename(sys.argv[0])
)
common = opt_help.argparse.ArgumentParser(add_help=False)
opt_help.add_vault_options(common)
opt_help.add_verbosity_options(common)
subparsers = self.parser.add_subparsers(dest='action')
subparsers.required = True
output = opt_help.argparse.ArgumentParser(add_help=False)
output.add_argument('--output', default=None, dest='output_file',
help='output file name for encrypt or decrypt; use - for stdout',
type=opt_help.unfrack_path())
# For encrypting actions, we can also specify which of multiple vault ids should be used for encrypting
vault_id = opt_help.argparse.ArgumentParser(add_help=False)
vault_id.add_argument('--encrypt-vault-id', default=[], dest='encrypt_vault_id',
action='store', type=str,
help='the vault id used to encrypt (required if more than one vault-id is provided)')
create_parser = subparsers.add_parser('create', help='Create new vault encrypted file', parents=[vault_id, common])
create_parser.set_defaults(func=self.execute_create)
create_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*')
decrypt_parser = subparsers.add_parser('decrypt', help='Decrypt vault encrypted file', parents=[output, common])
decrypt_parser.set_defaults(func=self.execute_decrypt)
decrypt_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*')
edit_parser = subparsers.add_parser('edit', help='Edit vault encrypted file', parents=[vault_id, common])
edit_parser.set_defaults(func=self.execute_edit)
edit_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*')
view_parser = subparsers.add_parser('view', help='View vault encrypted file', parents=[common])
view_parser.set_defaults(func=self.execute_view)
view_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*')
encrypt_parser = subparsers.add_parser('encrypt', help='Encrypt YAML file', parents=[common, output, vault_id])
encrypt_parser.set_defaults(func=self.execute_encrypt)
encrypt_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*')
enc_str_parser = subparsers.add_parser('encrypt_string', help='Encrypt a string', parents=[common, output, vault_id])
enc_str_parser.set_defaults(func=self.execute_encrypt_string)
enc_str_parser.add_argument('args', help='String to encrypt', metavar='string_to_encrypt', nargs='*')
enc_str_parser.add_argument('-p', '--prompt', dest='encrypt_string_prompt',
action='store_true',
help="Prompt for the string to encrypt")
enc_str_parser.add_argument('-n', '--name', dest='encrypt_string_names',
action='append',
help="Specify the variable name")
enc_str_parser.add_argument('--stdin-name', dest='encrypt_string_stdin_name',
default=None,
help="Specify the variable name for stdin")
rekey_parser = subparsers.add_parser('rekey', help='Re-key a vault encrypted file', parents=[common, vault_id])
rekey_parser.set_defaults(func=self.execute_rekey)
rekey_new_group = rekey_parser.add_mutually_exclusive_group()
rekey_new_group.add_argument('--new-vault-password-file', default=None, dest='new_vault_password_file',
help="new vault password file for rekey", type=opt_help.unfrack_path())
rekey_new_group.add_argument('--new-vault-id', default=None, dest='new_vault_id', type=str,
help='the new vault identity to use for rekey')
rekey_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*')
def post_process_args(self, options):
options = super(VaultCLI, self).post_process_args(options)
display.verbosity = options.verbosity
if options.vault_ids:
for vault_id in options.vault_ids:
if u';' in vault_id:
raise AnsibleOptionsError("'%s' is not a valid vault id. The character ';' is not allowed in vault ids" % vault_id)
if getattr(options, 'output_file', None) and len(options.args) > 1:
raise AnsibleOptionsError("At most one input file may be used with the --output option")
if options.action == 'encrypt_string':
if '-' in options.args or not options.args or options.encrypt_string_stdin_name:
self.encrypt_string_read_stdin = True
# TODO: prompting from stdin and reading from stdin seem mutually exclusive, but verify that.
if options.encrypt_string_prompt and self.encrypt_string_read_stdin:
raise AnsibleOptionsError('The --prompt option is not supported if also reading input from stdin')
return options
def run(self):
super(VaultCLI, self).run()
loader = DataLoader()
# set default restrictive umask
old_umask = os.umask(0o077)
vault_ids = list(context.CLIARGS['vault_ids'])
# there are 3 types of actions, those that just 'read' (decrypt, view) and only
# need to ask for a password once, and those that 'write' (create, encrypt) that
# ask for a new password and confirm it, and 'read/write' (rekey) that asks for the
# old password, then asks for a new one and confirms it.
default_vault_ids = C.DEFAULT_VAULT_IDENTITY_LIST
vault_ids = default_vault_ids + vault_ids
action = context.CLIARGS['action']
# TODO: instead of prompting for these before, we could let VaultEditor
# call a callback when it needs it.
if action in ['decrypt', 'view', 'rekey', 'edit']:
vault_secrets = self.setup_vault_secrets(loader, vault_ids=vault_ids,
vault_password_files=list(context.CLIARGS['vault_password_files']),
ask_vault_pass=context.CLIARGS['ask_vault_pass'])
if not vault_secrets:
raise AnsibleOptionsError("A vault password is required to use Ansible's Vault")
if action in ['encrypt', 'encrypt_string', 'create']:
encrypt_vault_id = None
# no --encrypt-vault-id context.CLIARGS['encrypt_vault_id'] for 'edit'
if action not in ['edit']:
encrypt_vault_id = context.CLIARGS['encrypt_vault_id'] or C.DEFAULT_VAULT_ENCRYPT_IDENTITY
vault_secrets = None
vault_secrets = \
self.setup_vault_secrets(loader,
vault_ids=vault_ids,
vault_password_files=list(context.CLIARGS['vault_password_files']),
ask_vault_pass=context.CLIARGS['ask_vault_pass'],
create_new_password=True)
if len(vault_secrets) > 1 and not encrypt_vault_id:
raise AnsibleOptionsError("The vault-ids %s are available to encrypt. Specify the vault-id to encrypt with --encrypt-vault-id" %
','.join([x[0] for x in vault_secrets]))
if not vault_secrets:
raise AnsibleOptionsError("A vault password is required to use Ansible's Vault")
encrypt_secret = match_encrypt_secret(vault_secrets,
encrypt_vault_id=encrypt_vault_id)
# only one secret for encrypt for now, use the first vault_id and use its first secret
# TODO: exception if more than one?
self.encrypt_vault_id = encrypt_secret[0]
self.encrypt_secret = encrypt_secret[1]
if action in ['rekey']:
encrypt_vault_id = context.CLIARGS['encrypt_vault_id'] or C.DEFAULT_VAULT_ENCRYPT_IDENTITY
# print('encrypt_vault_id: %s' % encrypt_vault_id)
# print('default_encrypt_vault_id: %s' % default_encrypt_vault_id)
# new_vault_ids should only ever be one item, from
# load the default vault ids if we are using encrypt-vault-id
new_vault_ids = []
if encrypt_vault_id:
new_vault_ids = default_vault_ids
if context.CLIARGS['new_vault_id']:
new_vault_ids.append(context.CLIARGS['new_vault_id'])
new_vault_password_files = []
if context.CLIARGS['new_vault_password_file']:
new_vault_password_files.append(context.CLIARGS['new_vault_password_file'])
new_vault_secrets = \
self.setup_vault_secrets(loader,
vault_ids=new_vault_ids,
vault_password_files=new_vault_password_files,
ask_vault_pass=context.CLIARGS['ask_vault_pass'],
create_new_password=True)
if not new_vault_secrets:
raise AnsibleOptionsError("A new vault password is required to use Ansible's Vault rekey")
# There is only one new_vault_id currently and one new_vault_secret, or we
# use the id specified in --encrypt-vault-id
new_encrypt_secret = match_encrypt_secret(new_vault_secrets,
encrypt_vault_id=encrypt_vault_id)
self.new_encrypt_vault_id = new_encrypt_secret[0]
self.new_encrypt_secret = new_encrypt_secret[1]
loader.set_vault_secrets(vault_secrets)
# FIXME: do we need to create VaultEditor here? it's not reused
vault = VaultLib(vault_secrets)
self.editor = VaultEditor(vault)
context.CLIARGS['func']()
# and restore umask
os.umask(old_umask)
def execute_encrypt(self):
''' encrypt the supplied file using the provided vault secret '''
if not context.CLIARGS['args'] and sys.stdin.isatty():
display.display("Reading plaintext input from stdin", stderr=True)
for f in context.CLIARGS['args'] or ['-']:
# Fixme: use the correct vau
self.editor.encrypt_file(f, self.encrypt_secret,
vault_id=self.encrypt_vault_id,
output_file=context.CLIARGS['output_file'])
if sys.stdout.isatty():
display.display("Encryption successful", stderr=True)
@staticmethod
def format_ciphertext_yaml(b_ciphertext, indent=None, name=None):
indent = indent or 10
block_format_var_name = ""
if name:
block_format_var_name = "%s: " % name
block_format_header = "%s!vault |" % block_format_var_name
lines = []
vault_ciphertext = to_text(b_ciphertext)
lines.append(block_format_header)
for line in vault_ciphertext.splitlines():
lines.append('%s%s' % (' ' * indent, line))
yaml_ciphertext = '\n'.join(lines)
return yaml_ciphertext
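# Illustrative example (not part of the original file): with name='db_pass'
# and the default indent of 10, the output looks like:
#   db_pass: !vault |
#             $ANSIBLE_VAULT;1.1;AES256
#             3161...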
def execute_encrypt_string(self):
''' encrypt the supplied string using the provided vault secret '''
b_plaintext = None
# Holds tuples (the_text, the_source_of_the_string, the variable name if it's provided).
b_plaintext_list = []
# remove the non-option '-' arg (used to indicate 'read from stdin') from the candidate args so
# we don't add it to the plaintext list
args = [x for x in context.CLIARGS['args'] if x != '-']
# We can prompt and read input, or read from stdin, but not both.
if context.CLIARGS['encrypt_string_prompt']:
msg = "String to encrypt: "
name = None
name_prompt_response = display.prompt('Variable name (enter for no name): ')
# TODO: enforce var naming rules?
if name_prompt_response != "":
name = name_prompt_response
# TODO: could prompt for which vault_id to use for each plaintext string
# currently, it will just be the default
# could use private=True for shadowed input if useful
prompt_response = display.prompt(msg)
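# Illustrative sketch only (not the shipped fix): with the private=True
# mode mentioned above, a shadowed, confirmed prompt could look like:
#   prompt_response = display.prompt(msg, private=True)
#   confirm_response = display.prompt('Confirm - ' + msg, private=True)
#   if prompt_response != confirm_response:
#       raise AnsibleOptionsError('The entries do not match, not encrypting')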
if prompt_response == '':
raise AnsibleOptionsError('The plaintext provided from the prompt was empty, not encrypting')
b_plaintext = to_bytes(prompt_response)
b_plaintext_list.append((b_plaintext, self.FROM_PROMPT, name))
# read from stdin
if self.encrypt_string_read_stdin:
if sys.stdout.isatty():
display.display("Reading plaintext input from stdin. (ctrl-d to end input, twice if your content does not already have a newline)", stderr=True)
stdin_text = sys.stdin.read()
if stdin_text == '':
raise AnsibleOptionsError('stdin was empty, not encrypting')
if sys.stdout.isatty() and not stdin_text.endswith("\n"):
display.display("\n")
b_plaintext = to_bytes(stdin_text)
# defaults to None
name = context.CLIARGS['encrypt_string_stdin_name']
b_plaintext_list.append((b_plaintext, self.FROM_STDIN, name))
# use any leftover args as strings to encrypt
# Try to match args up to --name options
if context.CLIARGS.get('encrypt_string_names', False):
name_and_text_list = list(zip(context.CLIARGS['encrypt_string_names'], args))
# Some but not enough --name's to name each var
if len(args) > len(name_and_text_list):
# Trying to avoid ever showing the plaintext in the output, so this warning is vague to avoid that.
display.display('The number of --name options does not match the number of args.',
stderr=True)
display.display('The last named variable will be "%s". The rest will not have'
' names.' % context.CLIARGS['encrypt_string_names'][-1],
stderr=True)
# Add the rest of the args without specifying a name
for extra_arg in args[len(name_and_text_list):]:
name_and_text_list.append((None, extra_arg))
# if no --names are provided, just use the args without a name.
else:
name_and_text_list = [(None, x) for x in args]
# Convert the plaintext text objects to bytestrings and collect
for name_and_text in name_and_text_list:
name, plaintext = name_and_text
if plaintext == '':
raise AnsibleOptionsError('The plaintext provided from the command line args was empty, not encrypting')
b_plaintext = to_bytes(plaintext)
b_plaintext_list.append((b_plaintext, self.FROM_ARGS, name))
# TODO: specify vault_id per string?
# Format the encrypted strings and any corresponding stderr output
outputs = self._format_output_vault_strings(b_plaintext_list, vault_id=self.encrypt_vault_id)
for output in outputs:
err = output.get('err', None)
out = output.get('out', '')
if err:
sys.stderr.write(err)
print(out)
if sys.stdout.isatty():
display.display("Encryption successful", stderr=True)
# TODO: offer block or string ala eyaml
def _format_output_vault_strings(self, b_plaintext_list, vault_id=None):
# If we are only showing one item in the output, we don't need to include commented
# delimiters in the text
show_delimiter = False
if len(b_plaintext_list) > 1:
show_delimiter = True
# list of dicts {'out': '', 'err': ''}
output = []
# Encrypt the plaintext, and format it into a yaml block that can be pasted into a playbook.
# For more than one input, show some differentiating info in the stderr output so we can tell them
# apart. If we have a var name, we include that in the yaml
for index, b_plaintext_info in enumerate(b_plaintext_list):
# (the text itself, which input it came from, its name)
b_plaintext, src, name = b_plaintext_info
b_ciphertext = self.editor.encrypt_bytes(b_plaintext, self.encrypt_secret,
vault_id=vault_id)
# block formatting
yaml_text = self.format_ciphertext_yaml(b_ciphertext, name=name)
err_msg = None
if show_delimiter:
human_index = index + 1
if name:
err_msg = '# The encrypted version of variable ("%s", the string #%d from %s).\n' % (name, human_index, src)
else:
err_msg = '# The encrypted version of the string #%d from %s.\n' % (human_index, src)
output.append({'out': yaml_text, 'err': err_msg})
return output
def execute_decrypt(self):
''' decrypt the supplied file using the provided vault secret '''
if not context.CLIARGS['args'] and sys.stdin.isatty():
display.display("Reading ciphertext input from stdin", stderr=True)
for f in context.CLIARGS['args'] or ['-']:
self.editor.decrypt_file(f, output_file=context.CLIARGS['output_file'])
if sys.stdout.isatty():
display.display("Decryption successful", stderr=True)
def execute_create(self):
''' create and open a file in an editor that will be encrypted with the provided vault secret when closed'''
if len(context.CLIARGS['args']) != 1:
raise AnsibleOptionsError("ansible-vault create can take only one filename argument")
self.editor.create_file(context.CLIARGS['args'][0], self.encrypt_secret,
vault_id=self.encrypt_vault_id)
def execute_edit(self):
''' open and decrypt an existing vaulted file in an editor, that will be encrypted again when closed'''
for f in context.CLIARGS['args']:
self.editor.edit_file(f)
def execute_view(self):
''' open, decrypt and view an existing vaulted file using a pager using the supplied vault secret '''
for f in context.CLIARGS['args']:
# Note: vault should return byte strings because it could encrypt
# and decrypt binary files. We are responsible for changing it to
# unicode here because we are displaying it and therefore can make
# the decision that the display doesn't have to be precisely what
# the input was (leave that to decrypt instead)
plaintext = self.editor.plaintext(f)
self.pager(to_text(plaintext))
def execute_rekey(self):
''' re-encrypt a vaulted file with a new secret, the previous secret is required '''
for f in context.CLIARGS['args']:
# FIXME: plumb in vault_id, use the default new_vault_secret for now
self.editor.rekey_file(f, self.new_encrypt_secret,
self.new_encrypt_vault_id)
display.display("Rekey successful", stderr=True)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 71,618 |
Encrypt ansible-vault text while typing
|
##### SUMMARY
Customer request: when typing the string to encrypt, it shows up as plain text. Since the string is being encrypted, it is safe to assume it is sensitive (e.g. a password) and should therefore be hidden; at the very least this could be a command-line option. If it is hidden, I'd prefer to be forced to enter it twice to avoid a mismatch. To mitigate this I currently use `stty -echo`, but that shouldn't be necessary.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ansible-vault
|
https://github.com/ansible/ansible/issues/71618
|
https://github.com/ansible/ansible/pull/73263
|
bc60d8ccda7a5a5bf0776c83f76c52663378b59c
|
823c72bcb59a5628c0ce21f2145f37f61bae6db9
| 2020-09-03T13:32:44Z |
python
| 2021-01-20T20:50:24Z |
test/units/cli/test_vault.py
|
# -*- coding: utf-8 -*-
# (c) 2017, Adrian Likins <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import pytest
from units.compat import unittest
from units.compat.mock import patch, MagicMock
from units.mock.vault_helper import TextVaultSecret
from ansible import context, errors
from ansible.cli.vault import VaultCLI
from ansible.module_utils._text import to_text
from ansible.utils import context_objects as co
# TODO: make these tests assert something, likely by verifying
# mock calls
@pytest.fixture(autouse='function')
def reset_cli_args():
co.GlobalCLIArgs._Singleton__instance = None
yield
co.GlobalCLIArgs._Singleton__instance = None
class TestVaultCli(unittest.TestCase):
def setUp(self):
self.tty_patcher = patch('ansible.cli.sys.stdin.isatty', return_value=False)
self.mock_isatty = self.tty_patcher.start()
def tearDown(self):
self.tty_patcher.stop()
def test_parse_empty(self):
cli = VaultCLI(['vaultcli'])
self.assertRaises(SystemExit,
cli.parse)
# FIXME: something weird seems to be afoot when parsing actions
# cli = VaultCLI(args=['view', '/dev/null/foo', 'mysecret3'])
# will skip '/dev/null/foo'. something in cli.CLI.set_action() ?
# maybe self.args gets modified in a loop?
def test_parse_view_file(self):
cli = VaultCLI(args=['ansible-vault', 'view', '/dev/null/foo'])
cli.parse()
@patch('ansible.cli.vault.VaultCLI.setup_vault_secrets')
def test_view_missing_file_no_secret(self, mock_setup_vault_secrets):
mock_setup_vault_secrets.return_value = []
cli = VaultCLI(args=['ansible-vault', 'view', '/dev/null/foo'])
cli.parse()
self.assertRaisesRegexp(errors.AnsibleOptionsError,
"A vault password is required to use Ansible's Vault",
cli.run)
@patch('ansible.cli.vault.VaultCLI.setup_vault_secrets')
def test_encrypt_missing_file_no_secret(self, mock_setup_vault_secrets):
mock_setup_vault_secrets.return_value = []
cli = VaultCLI(args=['ansible-vault', 'encrypt', '/dev/null/foo'])
cli.parse()
self.assertRaisesRegexp(errors.AnsibleOptionsError,
"A vault password is required to use Ansible's Vault",
cli.run)
@patch('ansible.cli.vault.VaultCLI.setup_vault_secrets')
@patch('ansible.cli.vault.VaultEditor')
def test_encrypt(self, mock_vault_editor, mock_setup_vault_secrets):
mock_setup_vault_secrets.return_value = [('default', TextVaultSecret('password'))]
cli = VaultCLI(args=['ansible-vault', 'encrypt', '/dev/null/foo'])
cli.parse()
cli.run()
@patch('ansible.cli.vault.VaultCLI.setup_vault_secrets')
@patch('ansible.cli.vault.VaultEditor')
def test_encrypt_string(self, mock_vault_editor, mock_setup_vault_secrets):
mock_setup_vault_secrets.return_value = [('default', TextVaultSecret('password'))]
cli = VaultCLI(args=['ansible-vault', 'encrypt_string',
'some string to encrypt'])
cli.parse()
cli.run()
@patch('ansible.cli.vault.VaultCLI.setup_vault_secrets')
@patch('ansible.cli.vault.VaultEditor')
@patch('ansible.cli.vault.display.prompt', return_value='a_prompt')
def test_encrypt_string_prompt(self, mock_display, mock_vault_editor, mock_setup_vault_secrets):
mock_setup_vault_secrets.return_value = [('default', TextVaultSecret('password'))]
cli = VaultCLI(args=['ansible-vault',
'encrypt_string',
'--prompt',
'some string to encrypt'])
cli.parse()
cli.run()
@patch('ansible.cli.vault.VaultCLI.setup_vault_secrets')
@patch('ansible.cli.vault.VaultEditor')
@patch('ansible.cli.vault.sys.stdin.read', return_value='This is data from stdin')
def test_encrypt_string_stdin(self, mock_stdin_read, mock_vault_editor, mock_setup_vault_secrets):
mock_setup_vault_secrets.return_value = [('default', TextVaultSecret('password'))]
cli = VaultCLI(args=['ansible-vault',
'encrypt_string',
'--stdin-name',
'the_var_from_stdin',
'-'])
cli.parse()
cli.run()
@patch('ansible.cli.vault.VaultCLI.setup_vault_secrets')
@patch('ansible.cli.vault.VaultEditor')
def test_encrypt_string_names(self, mock_vault_editor, mock_setup_vault_secrets):
mock_setup_vault_secrets.return_value = [('default', TextVaultSecret('password'))]
cli = VaultCLI(args=['ansible-vault', 'encrypt_string',
'--name', 'foo1',
'--name', 'foo2',
'some string to encrypt'])
cli.parse()
cli.run()
@patch('ansible.cli.vault.VaultCLI.setup_vault_secrets')
@patch('ansible.cli.vault.VaultEditor')
def test_encrypt_string_more_args_than_names(self, mock_vault_editor, mock_setup_vault_secrets):
mock_setup_vault_secrets.return_value = [('default', TextVaultSecret('password'))]
cli = VaultCLI(args=['ansible-vault', 'encrypt_string',
'--name', 'foo1',
'some string to encrypt',
'other strings',
'a few more string args'])
cli.parse()
cli.run()
@patch('ansible.cli.vault.VaultCLI.setup_vault_secrets')
@patch('ansible.cli.vault.VaultEditor')
def test_create(self, mock_vault_editor, mock_setup_vault_secrets):
mock_setup_vault_secrets.return_value = [('default', TextVaultSecret('password'))]
cli = VaultCLI(args=['ansible-vault', 'create', '/dev/null/foo'])
cli.parse()
cli.run()
@patch('ansible.cli.vault.VaultCLI.setup_vault_secrets')
@patch('ansible.cli.vault.VaultEditor')
def test_edit(self, mock_vault_editor, mock_setup_vault_secrets):
mock_setup_vault_secrets.return_value = [('default', TextVaultSecret('password'))]
cli = VaultCLI(args=['ansible-vault', 'edit', '/dev/null/foo'])
cli.parse()
cli.run()
@patch('ansible.cli.vault.VaultCLI.setup_vault_secrets')
@patch('ansible.cli.vault.VaultEditor')
def test_decrypt(self, mock_vault_editor, mock_setup_vault_secrets):
mock_setup_vault_secrets.return_value = [('default', TextVaultSecret('password'))]
cli = VaultCLI(args=['ansible-vault', 'decrypt', '/dev/null/foo'])
cli.parse()
cli.run()
@patch('ansible.cli.vault.VaultCLI.setup_vault_secrets')
@patch('ansible.cli.vault.VaultEditor')
def test_view(self, mock_vault_editor, mock_setup_vault_secrets):
mock_setup_vault_secrets.return_value = [('default', TextVaultSecret('password'))]
cli = VaultCLI(args=['ansible-vault', 'view', '/dev/null/foo'])
cli.parse()
cli.run()
@patch('ansible.cli.vault.VaultCLI.setup_vault_secrets')
@patch('ansible.cli.vault.VaultEditor')
def test_rekey(self, mock_vault_editor, mock_setup_vault_secrets):
mock_setup_vault_secrets.return_value = [('default', TextVaultSecret('password'))]
cli = VaultCLI(args=['ansible-vault', 'rekey', '/dev/null/foo'])
cli.parse()
cli.run()
@pytest.mark.parametrize('cli_args, expected', [
(['ansible-vault', 'view', 'vault.txt'], 0),
(['ansible-vault', 'view', 'vault.txt', '-vvv'], 3),
(['ansible-vault', '-vv', 'view', 'vault.txt'], 2),
# Due to our manual parsing we want to verify that -v set in the sub parser takes precedence. This behaviour is
# deprecated and tests should be removed when the code that handles it is removed
(['ansible-vault', '-vv', 'view', 'vault.txt', '-v'], 1),
(['ansible-vault', '-vv', 'view', 'vault.txt', '-vvvv'], 4),
])
def test_verbosity_arguments(cli_args, expected, tmp_path_factory, monkeypatch):
# Add a password file so we don't get a prompt in the test
test_dir = to_text(tmp_path_factory.mktemp('test-ansible-vault'))
pass_file = os.path.join(test_dir, 'pass.txt')
with open(pass_file, 'w') as pass_fd:
pass_fd.write('password')
cli_args.extend(['--vault-id', pass_file])
# Mock out the functions so we don't actually execute anything
for func_name in [f for f in dir(VaultCLI) if f.startswith("execute_")]:
monkeypatch.setattr(VaultCLI, func_name, MagicMock())
cli = VaultCLI(args=cli_args)
cli.run()
assert context.CLIARGS['verbosity'] == expected
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,004 |
Confusing error message when using command module with non-existent executable and trying to access result
|
##### SUMMARY
The `ansible.builtin.command` module does not return as expected when the executable is not found
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
command
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.4
config file = /home/valkra/code/ansible-config/ansible.cfg
configured module search path = ['/home/valkra/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/valkra/.local/lib/python3.6/site-packages/ansible
executable location = /home/valkra/.local/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/home/valkra/code/ansible-config/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/home/valkra/code/ansible-config/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s
ANSIBLE_SSH_RETRIES(/home/valkra/code/ansible-config/ansible.cfg) = 4
CACHE_PLUGIN(/home/valkra/code/ansible-config/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/valkra/code/ansible-config/ansible.cfg) = $HOME/.ansible/
CACHE_PLUGIN_TIMEOUT(/home/valkra/code/ansible-config/ansible.cfg) = 36000
DEFAULT_ASK_VAULT_PASS(/home/valkra/code/ansible-config/ansible.cfg) = True
DEFAULT_BECOME(/home/valkra/code/ansible-config/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/home/valkra/code/ansible-config/ansible.cfg) = False
DEFAULT_FORKS(/home/valkra/code/ansible-config/ansible.cfg) = 50
DEFAULT_GATHERING(/home/valkra/code/ansible-config/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/valkra/code/ansible-config/ansible.cfg) = ['/home/valkra/code/ansible-config/hosts']
DEFAULT_NO_TARGET_SYSLOG(/home/valkra/code/ansible-config/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/valkra/code/ansible-config/ansible.cfg) = debug
DEFAULT_TIMEOUT(/home/valkra/code/ansible-config/ansible.cfg) = 60
HOST_KEY_CHECKING(/home/valkra/code/ansible-config/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/valkra/code/ansible-config/ansible.cfg) = False
```
##### OS / ENVIRONMENT
running locally (controller = target host) on ubuntu xenial. output of `uname -a`:
```
Linux kube-gollum 4.15.0-126-generic #129-Ubuntu SMP Mon Nov 23 18:53:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
```
##### STEPS TO REPRODUCE
Just running the following playbook should illustrate the problem:
```yaml
- hosts: localhost
tasks:
- command: lecho hi
register: result
changed_when: "'hi' in result.stdout"
```
##### EXPECTED RESULTS
I would expect to get a message about the executable `lecho` not being found, like one gets when replacing `command` with `shell` in the above playbook:
```
PLAY [localhost] ******************************************************************************************************************************************************************************
TASK [shell] **********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {
"changed": false,
"cmd": "lecho hi",
"delta": "0:00:00.002938",
"end": "2020-12-17 10:15:29.944289",
"rc": 127,
"start": "2020-12-17 10:15:29.941351"
}
STDERR:
/bin/sh: 1: lecho: not found
MSG:
non-zero return code
PLAY RECAP ************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
or if one drops the `changed_when` condition that tries to evaluate the registered result, one also gets a more meaningful message even with `command`:
```
PLAY [localhost] ******************************************************************************************************************************************************************************
TASK [command] ********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {
"changed": false,
"cmd": "lecho hi",
"rc": 2
}
MSG:
[Errno 2] No such file or directory: b'lecho': b'lecho'
PLAY RECAP ************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
I get a confusing error message which tries to tell me that my result variable is not defined, even though there should always (?!) be a return value:
```
PLAY [localhost] ******************************************************************************************************************************************************************************
TASK [command] ********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {}
MSG:
The conditional check ''hi' in result.stdout' failed. The error was: error while evaluating conditional ('hi' in result.stdout): Unable to look up a name or access an attribute in template st
ring ({% if 'hi' in result.stdout %} True {% else %} False {% endif %}).
Make sure your variable name does not contain invalid characters like '-': argument of type 'AnsibleUndefined' is not iterable
PLAY RECAP ************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
Experimenting with `command` with and without the conditional, it seems the whole task fails as soon as we try to evaluate the result, so we never get a result at all; running the same task without the conditional does produce a result, which can be inspected with a `block`/`rescue` debug construction, for example. Very strange. I suspect this is indeed a bug, and not a feature.
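A minimal sketch of the direction such a fix could take (the actual change in the linked PR may differ): have the command runner catch the `OSError` raised for a missing executable and still return defined `rc`/`stdout`/`stderr` fields, so registered results remain usable in conditionals:
```python
import subprocess

def run_command_sketch(args):
    """Run args; if the executable is missing, return a shell-like failure
    result instead of raising, so callers always see rc/stdout/stderr."""
    try:
        proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()
        return {'rc': proc.returncode, 'stdout': out, 'stderr': err}
    except OSError as e:
        # Mirror /bin/sh behaviour: report the failure via rc and stderr,
        # and keep stdout defined (empty) so templates like
        # "'hi' in result.stdout" evaluate instead of raising.
        return {'rc': 2, 'stdout': b'', 'stderr': str(e).encode('utf-8')}
```
With a result shaped like this, `'hi' in result.stdout` evaluates cleanly to false instead of failing on an undefined attribute.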
|
https://github.com/ansible/ansible/issues/73004
|
https://github.com/ansible/ansible/pull/73290
|
1934ca9a550b32f16d6fadffe7d8031059fa2526
|
e6da5443101cc815cb479965ab8d0e81c6d23333
| 2020-12-17T09:38:18Z |
python
| 2021-01-22T07:40:53Z |
changelogs/fragments/73004-let-command-always-return-stdout-and-stderr.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,004 |
Confusing error message when using command module with non-existent executable and trying to access result
|
##### SUMMARY
The `ansible.builtin.command` module does not return as expected when the executable is not found
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
command
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.4
config file = /home/valkra/code/ansible-config/ansible.cfg
configured module search path = ['/home/valkra/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/valkra/.local/lib/python3.6/site-packages/ansible
executable location = /home/valkra/.local/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/home/valkra/code/ansible-config/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/home/valkra/code/ansible-config/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s
ANSIBLE_SSH_RETRIES(/home/valkra/code/ansible-config/ansible.cfg) = 4
CACHE_PLUGIN(/home/valkra/code/ansible-config/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/valkra/code/ansible-config/ansible.cfg) = $HOME/.ansible/
CACHE_PLUGIN_TIMEOUT(/home/valkra/code/ansible-config/ansible.cfg) = 36000
DEFAULT_ASK_VAULT_PASS(/home/valkra/code/ansible-config/ansible.cfg) = True
DEFAULT_BECOME(/home/valkra/code/ansible-config/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/home/valkra/code/ansible-config/ansible.cfg) = False
DEFAULT_FORKS(/home/valkra/code/ansible-config/ansible.cfg) = 50
DEFAULT_GATHERING(/home/valkra/code/ansible-config/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/valkra/code/ansible-config/ansible.cfg) = ['/home/valkra/code/ansible-config/hosts']
DEFAULT_NO_TARGET_SYSLOG(/home/valkra/code/ansible-config/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/valkra/code/ansible-config/ansible.cfg) = debug
DEFAULT_TIMEOUT(/home/valkra/code/ansible-config/ansible.cfg) = 60
HOST_KEY_CHECKING(/home/valkra/code/ansible-config/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/valkra/code/ansible-config/ansible.cfg) = False
```
##### OS / ENVIRONMENT
running locally (controller = target host) on ubuntu xenial. output of `uname -a`:
```
Linux kube-gollum 4.15.0-126-generic #129-Ubuntu SMP Mon Nov 23 18:53:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
```
##### STEPS TO REPRODUCE
Just running the following playbook should illustrate the problem:
```yaml
- hosts: localhost
tasks:
- command: lecho hi
register: result
changed_when: "'hi' in result.stdout"
```
##### EXPECTED RESULTS
I would expect to get a message about the executable `lecho` not being found, like one gets when replacing `command` with `shell` in the above playbook:
```
PLAY [localhost] ******************************************************************************************************************************************************************************
TASK [shell] **********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {
"changed": false,
"cmd": "lecho hi",
"delta": "0:00:00.002938",
"end": "2020-12-17 10:15:29.944289",
"rc": 127,
"start": "2020-12-17 10:15:29.941351"
}
STDERR:
/bin/sh: 1: lecho: not found
MSG:
non-zero return code
PLAY RECAP ************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
or if one drops the `changed_when` condition that tries to evaluate the registered result, one also gets a more meaningful message even with `command`:
```
PLAY [localhost] ******************************************************************************************************************************************************************************
TASK [command] ********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {
"changed": false,
"cmd": "lecho hi",
"rc": 2
}
MSG:
[Errno 2] No such file or directory: b'lecho': b'lecho'
PLAY RECAP ************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
I get a confusing error message which tries to tell me that my result variable is not defined, even though there should always (?!) be a return value:
```
PLAY [localhost] ******************************************************************************************************************************************************************************
TASK [command] ********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {}
MSG:
The conditional check ''hi' in result.stdout' failed. The error was: error while evaluating conditional ('hi' in result.stdout): Unable to look up a name or access an attribute in template st
ring ({% if 'hi' in result.stdout %} True {% else %} False {% endif %}).
Make sure your variable name does not contain invalid characters like '-': argument of type 'AnsibleUndefined' is not iterable
PLAY RECAP ************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
Experimenting with `command` with and without the conditional, it seems the whole task fails as soon as we try to evaluate the result, so we never get a result at all; running the same task without the conditional does produce a result, which can be inspected with a `block`/`rescue` debug construction, for example. Very strange. I suspect this is indeed a bug, and not a feature.
|
https://github.com/ansible/ansible/issues/73004
|
https://github.com/ansible/ansible/pull/73290
|
1934ca9a550b32f16d6fadffe7d8031059fa2526
|
e6da5443101cc815cb479965ab8d0e81c6d23333
| 2020-12-17T09:38:18Z |
python
| 2021-01-22T07:40:53Z |
lib/ansible/module_utils/basic.py
|
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013
# Copyright (c), Toshio Kuratomi <[email protected]> 2016
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
FILE_ATTRIBUTES = {
'A': 'noatime',
'a': 'append',
'c': 'compressed',
'C': 'nocow',
'd': 'nodump',
'D': 'dirsync',
'e': 'extents',
'E': 'encrypted',
'h': 'blocksize',
'i': 'immutable',
'I': 'indexed',
'j': 'journalled',
'N': 'inline',
's': 'zero',
'S': 'synchronous',
't': 'notail',
'T': 'blockroot',
'u': 'undelete',
'X': 'compressedraw',
'Z': 'compresseddirty',
}
# Ansible modules can be written in any language.
# The functions available here can be used to do many common tasks,
# to simplify development of Python modules.
import __main__
import atexit
import errno
import datetime
import grp
import fcntl
import locale
import os
import pwd
import platform
import re
import select
import shlex
import shutil
import signal
import stat
import subprocess
import sys
import tempfile
import time
import traceback
import types
from collections import deque
from itertools import chain, repeat
try:
import syslog
HAS_SYSLOG = True
except ImportError:
HAS_SYSLOG = False
try:
from systemd import journal
# Makes sure that systemd.journal has method sendv()
# Double check that journal has method sendv (some packages don't)
has_journal = hasattr(journal, 'sendv')
except ImportError:
has_journal = False
HAVE_SELINUX = False
try:
import selinux
HAVE_SELINUX = True
except ImportError:
pass
# Python2 & 3 way to get NoneType
NoneType = type(None)
from ansible.module_utils.compat import selectors
from ._text import to_native, to_bytes, to_text
from ansible.module_utils.common.text.converters import (
jsonify,
container_to_bytes as json_dict_unicode_to_bytes,
container_to_text as json_dict_bytes_to_unicode,
)
from ansible.module_utils.common.text.formatters import (
lenient_lowercase,
bytes_to_human,
human_to_bytes,
SIZE_RANGES,
)
try:
from ansible.module_utils.common._json_compat import json
except ImportError as e:
print('\n{{"msg": "Error: ansible requires the stdlib json: {0}", "failed": true}}'.format(to_native(e)))
sys.exit(1)
AVAILABLE_HASH_ALGORITHMS = dict()
try:
import hashlib
# python 2.7.9+ and 2.7.0+
for attribute in ('available_algorithms', 'algorithms'):
algorithms = getattr(hashlib, attribute, None)
if algorithms:
break
if algorithms is None:
# python 2.5+
algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512')
for algorithm in algorithms:
AVAILABLE_HASH_ALGORITHMS[algorithm] = getattr(hashlib, algorithm)
# we may have been able to import md5 but it could still not be available
try:
hashlib.md5()
except ValueError:
AVAILABLE_HASH_ALGORITHMS.pop('md5', None)
except Exception:
import sha
AVAILABLE_HASH_ALGORITHMS = {'sha1': sha.sha}
try:
import md5
AVAILABLE_HASH_ALGORITHMS['md5'] = md5.md5
except Exception:
pass
from ansible.module_utils.common._collections_compat import (
KeysView,
Mapping, MutableMapping,
Sequence, MutableSequence,
Set, MutableSet,
)
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.file import (
_PERM_BITS as PERM_BITS,
_EXEC_PERM_BITS as EXEC_PERM_BITS,
_DEFAULT_PERM as DEFAULT_PERM,
is_executable,
format_attributes,
get_flags_from_attributes,
)
from ansible.module_utils.common.sys_info import (
get_distribution,
get_distribution_version,
get_platform_subclass,
)
from ansible.module_utils.pycompat24 import get_exception, literal_eval
from ansible.module_utils.common.parameters import (
get_unsupported_parameters,
get_type_validator,
handle_aliases,
list_deprecations,
list_no_log_values,
DEFAULT_TYPE_VALIDATORS,
PASS_VARS,
PASS_BOOLS,
)
from ansible.module_utils.six import (
PY2,
PY3,
b,
binary_type,
integer_types,
iteritems,
string_types,
text_type,
)
from ansible.module_utils.six.moves import map, reduce, shlex_quote
from ansible.module_utils.common.validation import (
check_missing_parameters,
check_mutually_exclusive,
check_required_arguments,
check_required_by,
check_required_if,
check_required_one_of,
check_required_together,
count_terms,
check_type_bool,
check_type_bits,
check_type_bytes,
check_type_float,
check_type_int,
check_type_jsonarg,
check_type_list,
check_type_dict,
check_type_path,
check_type_raw,
check_type_str,
safe_eval,
)
from ansible.module_utils.common._utils import get_all_subclasses as _get_all_subclasses
from ansible.module_utils.parsing.convert_bool import BOOLEANS, BOOLEANS_FALSE, BOOLEANS_TRUE, boolean
from ansible.module_utils.common.warnings import (
deprecate,
get_deprecation_messages,
get_warning_messages,
warn,
)
# Note: When getting Sequence from collections, it matches with strings. If
# this matters, make sure to check for strings before checking for sequencetype
SEQUENCETYPE = frozenset, KeysView, Sequence
PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wrd|wd)?)(?:[-_\s].+)?$', re.I)
imap = map
try:
# Python 2
unicode
except NameError:
# Python 3
unicode = text_type
try:
# Python 2
basestring
except NameError:
# Python 3
basestring = string_types
_literal_eval = literal_eval
# End of deprecated names
# Internal global holding passed in params. This is consulted in case
# multiple AnsibleModules are created. Otherwise each AnsibleModule would
# attempt to read from stdin. Other code should not use this directly as it
# is an internal implementation detail
_ANSIBLE_ARGS = None
FILE_COMMON_ARGUMENTS = dict(
# These are things we want. About setting metadata (mode, ownership, permissions in general) on
# created files (these are used by set_fs_attributes_if_different and included in
# load_file_common_arguments)
mode=dict(type='raw'),
owner=dict(type='str'),
group=dict(type='str'),
seuser=dict(type='str'),
serole=dict(type='str'),
selevel=dict(type='str'),
setype=dict(type='str'),
attributes=dict(type='str', aliases=['attr']),
unsafe_writes=dict(type='bool', default=False), # should be available to any module using atomic_move
)
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
# Used for parsing symbolic file perms
MODE_OPERATOR_RE = re.compile(r'[+=-]')
USERS_RE = re.compile(r'[^ugo]')
PERMS_RE = re.compile(r'[^rwxXstugo]')
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
_PY3_MIN = sys.version_info[:2] >= (3, 5)
_PY2_MIN = (2, 6) <= sys.version_info[:2] < (3,)
_PY_MIN = _PY3_MIN or _PY2_MIN
if not _PY_MIN:
print(
'\n{"failed": true, '
'"msg": "Ansible requires a minimum of Python2 version 2.6 or Python3 version 3.5. Current version: %s"}' % ''.join(sys.version.splitlines())
)
sys.exit(1)
#
# Deprecated functions
#
def get_platform():
'''
**Deprecated** Use :py:func:`platform.system` directly.
:returns: Name of the platform the module is running on in a native string
Returns a native string that labels the platform ("Linux", "Solaris", etc). Currently, this is
the result of calling :py:func:`platform.system`.
'''
return platform.system()
# End deprecated functions
#
# Compat shims
#
def load_platform_subclass(cls, *args, **kwargs):
"""**Deprecated**: Use ansible.module_utils.common.sys_info.get_platform_subclass instead"""
platform_cls = get_platform_subclass(cls)
return super(cls, platform_cls).__new__(platform_cls)
def get_all_subclasses(cls):
"""**Deprecated**: Use ansible.module_utils.common._utils.get_all_subclasses instead"""
return list(_get_all_subclasses(cls))
# End compat shims
def _remove_values_conditions(value, no_log_strings, deferred_removals):
"""
Helper function for :meth:`remove_values`.
:arg value: The value to check for strings that need to be stripped
:arg no_log_strings: set of strings which must be stripped out of any values
:arg deferred_removals: List which holds information about nested
containers that have to be iterated for removals. It is passed into
this function so that more entries can be added to it if value is
a container type. The format of each entry is a 2-tuple where the first
element is the ``value`` parameter and the second value is a new
container to copy the elements of ``value`` into once iterated.
:returns: if ``value`` is a scalar, returns ``value`` with two exceptions:
1. :class:`~datetime.datetime` objects which are changed into a string representation.
2. objects which are in no_log_strings are replaced with a placeholder
so that no sensitive data is leaked.
If ``value`` is a container type, returns a new empty container.
``deferred_removals`` is added to as a side-effect of this function.
.. warning:: It is up to the caller to make sure the order in which value
is passed in is correct. For instance, higher level containers need
to be passed in before lower level containers. For example, given
``{'level1': {'level2': {'level3': [True]}}}`` first pass in the
dictionary for ``level1``, then the dict for ``level2``, and finally
the list for ``level3``.
"""
if isinstance(value, (text_type, binary_type)):
# Need native str type
native_str_value = value
if isinstance(value, text_type):
value_is_text = True
if PY2:
native_str_value = to_bytes(value, errors='surrogate_or_strict')
elif isinstance(value, binary_type):
value_is_text = False
if PY3:
native_str_value = to_text(value, errors='surrogate_or_strict')
if native_str_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
native_str_value = native_str_value.replace(omit_me, '*' * 8)
if value_is_text and isinstance(native_str_value, binary_type):
value = to_text(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
elif not value_is_text and isinstance(native_str_value, text_type):
value = to_bytes(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
else:
value = native_str_value
elif isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
stringy_value = to_native(value, encoding='utf-8', errors='surrogate_or_strict')
if stringy_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
if omit_me in stringy_value:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
elif isinstance(value, (datetime.datetime, datetime.date)):
value = value.isoformat()
else:
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
return value
def remove_values(value, no_log_strings):
""" Remove strings in no_log_strings from value. If value is a container
type, then remove a lot more.
Use of deferred_removals exists, rather than a pure recursive solution,
because of the potential to hit the maximum recursion depth when dealing with
large amounts of data (see issue #24560).
"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _remove_values_conditions(value, no_log_strings, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
new_elem = _remove_values_conditions(old_elem, no_log_strings, deferred_removals)
new_data[old_key] = new_elem
else:
for elem in old_data:
new_elem = _remove_values_conditions(elem, no_log_strings, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from output')
return new_value
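# Illustrative behavior of remove_values() (comment-only sketch; the outputs
# shown are what the implementation above produces):
#
#   remove_values('login with hunter2', {'hunter2'})
#   -> 'login with ********'
#   remove_values({'password': 'hunter2'}, {'hunter2'})
#   -> {'password': 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'}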
def _sanitize_keys_conditions(value, no_log_strings, ignore_keys, deferred_removals):
""" Helper method to sanitize_keys() to build deferred_removals and avoid deep recursion. """
if isinstance(value, (text_type, binary_type)):
return value
if isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
return value
if isinstance(value, (datetime.datetime, datetime.date)):
return value
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
def sanitize_keys(obj, no_log_strings, ignore_keys=frozenset()):
""" Sanitize the keys in a container object by removing no_log values from key names.
This is a companion function to the `remove_values()` function. Similar to that function,
we make use of deferred_removals to avoid hitting maximum recursion depth in cases of
large data structures.
:param obj: The container object to sanitize. Non-container objects are returned unmodified.
:param no_log_strings: A set of string values we do not want logged.
:param ignore_keys: A set of string values of keys to not sanitize.
:returns: An object with sanitized keys.
"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _sanitize_keys_conditions(obj, no_log_strings, ignore_keys, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
if old_key in ignore_keys or old_key.startswith('_ansible'):
new_data[old_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals)
else:
# Sanitize the old key. We take advantage of the sanitizing code in
# _remove_values_conditions() rather than recreating it here.
new_key = _remove_values_conditions(old_key, no_log_strings, None)
new_data[new_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals)
else:
for elem in old_data:
new_elem = _sanitize_keys_conditions(elem, no_log_strings, ignore_keys, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from keys')
return new_value
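# Illustrative behavior of sanitize_keys() (comment-only sketch): only keys
# are sanitized; values pass through unchanged (use remove_values() for those):
#
#   sanitize_keys({'hunter2': 'value'}, {'hunter2'})
#   -> {'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER': 'value'}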
def heuristic_log_sanitize(data, no_log_values=None):
''' Remove strings that look like passwords from log messages '''
# Currently filters:
# user:pass@foo/whatever and http://username:pass@wherever/foo
# This code has false positives and consumes parts of logs that are
# not passwds
# begin: start of a passwd containing string
# end: end of a passwd containing string
# sep: char between user and passwd
# prev_begin: where in the overall string to start a search for
# a passwd
# sep_search_end: where in the string to end a search for the sep
data = to_native(data)
output = []
begin = len(data)
prev_begin = begin
sep = 1
while sep:
# Find the potential end of a passwd
try:
end = data.rindex('@', 0, begin)
except ValueError:
# No passwd in the rest of the data
output.insert(0, data[0:begin])
break
# Search for the beginning of a passwd
sep = None
sep_search_end = end
while not sep:
# URL-style username+password
try:
begin = data.rindex('://', 0, sep_search_end)
except ValueError:
# No url style in the data, check for ssh style in the
# rest of the string
begin = 0
# Search for separator
try:
sep = data.index(':', begin + 3, end)
except ValueError:
# No separator; choices:
if begin == 0:
# Searched the whole string so there's no password
# here. Return the remaining data
output.insert(0, data[0:begin])
break
# Search for a different beginning of the password field.
sep_search_end = begin
continue
if sep:
# Password was found; remove it.
output.insert(0, data[end:prev_begin])
output.insert(0, '********')
output.insert(0, data[begin:sep + 1])
prev_begin = begin
output = ''.join(output)
if no_log_values:
output = remove_values(output, no_log_values)
return output
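# Illustrative behavior of heuristic_log_sanitize() (comment-only sketch):
#
#   heuristic_log_sanitize('https://user:[email protected]/repo')
#   -> 'https://user:********@example.com/repo'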
def _load_params():
''' read the modules parameters and store them globally.
This function may be needed for certain very dynamic custom modules which
    want to process the parameters that are being handed to the module. Since
this is so closely tied to the implementation of modules we cannot
guarantee API stability for it (it may change between versions) however we
will try not to break it gratuitously. It is certainly more future-proof
to call this function and consume its outputs than to implement the logic
inside it as a copy in your own code.
'''
global _ANSIBLE_ARGS
if _ANSIBLE_ARGS is not None:
buffer = _ANSIBLE_ARGS
else:
# debug overrides to read args from file or cmdline
# Avoid tracebacks when locale is non-utf8
# We control the args and we pass them as utf8
if len(sys.argv) > 1:
if os.path.isfile(sys.argv[1]):
fd = open(sys.argv[1], 'rb')
buffer = fd.read()
fd.close()
else:
buffer = sys.argv[1]
if PY3:
buffer = buffer.encode('utf-8', errors='surrogateescape')
# default case, read from stdin
else:
if PY2:
buffer = sys.stdin.read()
else:
buffer = sys.stdin.buffer.read()
_ANSIBLE_ARGS = buffer
try:
params = json.loads(buffer.decode('utf-8'))
except ValueError:
# This helper used too early for fail_json to work.
print('\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}')
sys.exit(1)
if PY2:
params = json_dict_unicode_to_bytes(params)
try:
return params['ANSIBLE_MODULE_ARGS']
except KeyError:
# This helper does not have access to fail_json so we have to print
# json output on our own.
print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS in json data from stdin. Unable to figure out what parameters were passed", '
'"failed": true}')
sys.exit(1)
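# Illustrative use of _load_params() in a very dynamic custom module (sketch;
# as the docstring above notes, this API is not guaranteed to be stable):
#
#   raw_params = _load_params()
#   # inspect raw, unvalidated parameters before constructing AnsibleModule
#   if 'state' in raw_params:
#       ...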
def env_fallback(*args, **kwargs):
''' Load value from environment '''
for arg in args:
if arg in os.environ:
return os.environ[arg]
raise AnsibleFallbackNotFound
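# Illustrative argument_spec usage of env_fallback (sketch; 'api_token' and
# the API_TOKEN environment variable are hypothetical names):
#
#   argument_spec = dict(
#       api_token=dict(type='str', no_log=True,
#                      fallback=(env_fallback, ['API_TOKEN'])),
#   )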
def missing_required_lib(library, reason=None, url=None):
hostname = platform.node()
msg = "Failed to import the required Python library (%s) on %s's Python %s." % (library, hostname, sys.executable)
if reason:
msg += " This is required %s." % reason
if url:
msg += " See %s for more info." % url
msg += (" Please read the module documentation and install it in the appropriate location."
" If the required library is installed, but Ansible is using the wrong Python interpreter,"
" please consult the documentation on ansible_python_interpreter")
return msg
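# Illustrative pattern for missing_required_lib() (sketch; 'requests' is just
# an example third-party library):
#
#   try:
#       import requests
#       HAS_REQUESTS = True
#   except ImportError:
#       HAS_REQUESTS = False
#   ...
#   if not HAS_REQUESTS:
#       module.fail_json(msg=missing_required_lib('requests'))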
class AnsibleFallbackNotFound(Exception):
pass
class AnsibleModule(object):
def __init__(self, argument_spec, bypass_checks=False, no_log=False,
mutually_exclusive=None, required_together=None,
required_one_of=None, add_file_common_args=False,
supports_check_mode=False, required_if=None, required_by=None):
'''
Common code for quickly building an ansible module in Python
(although you can write modules with anything that can return JSON).
See :ref:`developing_modules_general` for a general introduction
and :ref:`developing_program_flow_modules` for more detailed explanation.
'''
self._name = os.path.basename(__file__) # initialize name until we can parse from options
self.argument_spec = argument_spec
self.supports_check_mode = supports_check_mode
self.check_mode = False
self.bypass_checks = bypass_checks
self.no_log = no_log
self.mutually_exclusive = mutually_exclusive
self.required_together = required_together
self.required_one_of = required_one_of
self.required_if = required_if
self.required_by = required_by
self.cleanup_files = []
self._debug = False
self._diff = False
self._socket_path = None
self._shell = None
self._syslog_facility = 'LOG_USER'
self._verbosity = 0
# May be used to set modifications to the environment for any
# run_command invocation
self.run_command_environ_update = {}
self._clean = {}
self._string_conversion_action = ''
self.aliases = {}
self._legal_inputs = []
self._options_context = list()
self._tmpdir = None
if add_file_common_args:
for k, v in FILE_COMMON_ARGUMENTS.items():
if k not in self.argument_spec:
self.argument_spec[k] = v
self._load_params()
self._set_fallbacks()
# append to legal_inputs and then possibly check against them
try:
self.aliases = self._handle_aliases()
except (ValueError, TypeError) as e:
# Use exceptions here because it isn't safe to call fail_json until no_log is processed
print('\n{"failed": true, "msg": "Module alias error: %s"}' % to_native(e))
sys.exit(1)
# Save parameter values that should never be logged
self.no_log_values = set()
self._handle_no_log_values()
# check the locale as set by the current environment, and reset to
# a known valid (LANG=C) if it's an invalid/unavailable locale
self._check_locale()
self._set_internal_properties()
self._check_arguments()
# check exclusive early
if not bypass_checks:
self._check_mutually_exclusive(mutually_exclusive)
self._set_defaults(pre=True)
# This is for backwards compatibility only.
self._CHECK_ARGUMENT_TYPES_DISPATCHER = DEFAULT_TYPE_VALIDATORS
if not bypass_checks:
self._check_required_arguments()
self._check_argument_types()
self._check_argument_values()
self._check_required_together(required_together)
self._check_required_one_of(required_one_of)
self._check_required_if(required_if)
self._check_required_by(required_by)
self._set_defaults(pre=False)
# deal with options sub-spec
self._handle_options()
if not self.no_log:
self._log_invocation()
# finally, make sure we're in a sane working dir
self._set_cwd()
@property
def tmpdir(self):
# if _ansible_tmpdir was not set and we have a remote_tmp,
# the module needs to create it and clean it up once finished.
# otherwise we create our own module tmp dir from the system defaults
if self._tmpdir is None:
basedir = None
if self._remote_tmp is not None:
basedir = os.path.expanduser(os.path.expandvars(self._remote_tmp))
if basedir is not None and not os.path.exists(basedir):
try:
os.makedirs(basedir, mode=0o700)
except (OSError, IOError) as e:
self.warn("Unable to use %s as temporary directory, "
"failing back to system: %s" % (basedir, to_native(e)))
basedir = None
else:
self.warn("Module remote_tmp %s did not exist and was "
"created with a mode of 0700, this may cause"
" issues when running as another user. To "
"avoid this, create the remote_tmp dir with "
"the correct permissions manually" % basedir)
basefile = "ansible-moduletmp-%s-" % time.time()
try:
tmpdir = tempfile.mkdtemp(prefix=basefile, dir=basedir)
except (OSError, IOError) as e:
self.fail_json(
msg="Failed to create remote module tmp path at dir %s "
"with prefix %s: %s" % (basedir, basefile, to_native(e))
)
if not self._keep_remote_files:
atexit.register(shutil.rmtree, tmpdir)
self._tmpdir = tmpdir
return self._tmpdir
def warn(self, warning):
warn(warning)
self.log('[WARNING] %s' % warning)
def deprecate(self, msg, version=None, date=None, collection_name=None):
if version is not None and date is not None:
raise AssertionError("implementation error -- version and date must not both be set")
deprecate(msg, version=version, date=date, collection_name=collection_name)
# For compatibility, we accept that neither version nor date is set,
        # and treat that the same as if only version had been set
if date is not None:
self.log('[DEPRECATION WARNING] %s %s' % (msg, date))
else:
self.log('[DEPRECATION WARNING] %s %s' % (msg, version))
def load_file_common_arguments(self, params, path=None):
'''
many modules deal with files, this encapsulates common
options that the file module accepts such that it is directly
available to all modules and they can share code.
        Allows overriding the path/dest module argument by providing path.
'''
if path is None:
path = params.get('path', params.get('dest', None))
if path is None:
return {}
else:
path = os.path.expanduser(os.path.expandvars(path))
b_path = to_bytes(path, errors='surrogate_or_strict')
# if the path is a symlink, and we're following links, get
# the target of the link instead for testing
if params.get('follow', False) and os.path.islink(b_path):
b_path = os.path.realpath(b_path)
path = to_native(b_path)
mode = params.get('mode', None)
owner = params.get('owner', None)
group = params.get('group', None)
# selinux related options
seuser = params.get('seuser', None)
serole = params.get('serole', None)
setype = params.get('setype', None)
selevel = params.get('selevel', None)
secontext = [seuser, serole, setype]
if self.selinux_mls_enabled():
secontext.append(selevel)
default_secontext = self.selinux_default_context(path)
for i in range(len(default_secontext)):
            if secontext[i] == '_default':
secontext[i] = default_secontext[i]
attributes = params.get('attributes', None)
return dict(
path=path, mode=mode, owner=owner, group=group,
seuser=seuser, serole=serole, setype=setype,
selevel=selevel, secontext=secontext, attributes=attributes,
)
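    # Illustrative convention (sketch): modules constructed with
    # add_file_common_args=True typically pair this method with
    # set_fs_attributes_if_different() defined below:
    #
    #   file_args = module.load_file_common_arguments(module.params)
    #   changed = module.set_fs_attributes_if_different(file_args, changed)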
# Detect whether using selinux that is MLS-aware.
# While this means you can set the level/range with
# selinux.lsetfilecon(), it may or may not mean that you
# will get the selevel as part of the context returned
# by selinux.lgetfilecon().
def selinux_mls_enabled(self):
if not HAVE_SELINUX:
return False
if selinux.is_selinux_mls_enabled() == 1:
return True
else:
return False
def selinux_enabled(self):
if not HAVE_SELINUX:
seenabled = self.get_bin_path('selinuxenabled')
if seenabled is not None:
(rc, out, err) = self.run_command(seenabled)
if rc == 0:
self.fail_json(msg="Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!")
return False
if selinux.is_selinux_enabled() == 1:
return True
else:
return False
# Determine whether we need a placeholder for selevel/mls
def selinux_initial_context(self):
context = [None, None, None]
if self.selinux_mls_enabled():
context.append(None)
return context
# If selinux fails to find a default, return an array of None
def selinux_default_context(self, path, mode=0):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.matchpathcon(to_native(path, errors='surrogate_or_strict'), mode)
except OSError:
return context
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def selinux_context(self, path):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict'))
except OSError as e:
if e.errno == errno.ENOENT:
self.fail_json(path=path, msg='path %s does not exist' % path)
else:
self.fail_json(path=path, msg='failed to retrieve selinux context')
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def user_and_group(self, path, expand=True):
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
st = os.lstat(b_path)
uid = st.st_uid
gid = st.st_gid
return (uid, gid)
def find_mount_point(self, path):
path_is_bytes = False
if isinstance(path, binary_type):
path_is_bytes = True
b_path = os.path.realpath(to_bytes(os.path.expanduser(os.path.expandvars(path)), errors='surrogate_or_strict'))
while not os.path.ismount(b_path):
b_path = os.path.dirname(b_path)
if path_is_bytes:
return b_path
return to_text(b_path, errors='surrogate_or_strict')
def is_special_selinux_path(self, path):
"""
        Returns a tuple containing (True, selinux_context) if the given path is on an
NFS or other 'special' fs mount point, otherwise the return will be (False, None).
"""
try:
f = open('/proc/mounts', 'r')
mount_data = f.readlines()
f.close()
except Exception:
return (False, None)
path_mount_point = self.find_mount_point(path)
for line in mount_data:
(device, mount_point, fstype, options, rest) = line.split(' ', 4)
if to_bytes(path_mount_point) == to_bytes(mount_point):
for fs in self._selinux_special_fs:
if fs in fstype:
special_context = self.selinux_context(path_mount_point)
return (True, special_context)
return (False, None)
def set_default_selinux_context(self, path, changed):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
context = self.selinux_default_context(path)
return self.set_context_if_different(path, context, False)
def set_context_if_different(self, path, context, changed, diff=None):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
if self.check_file_absent_if_check_mode(path):
return True
cur_context = self.selinux_context(path)
new_context = list(cur_context)
# Iterate over the current context instead of the
# argument context, which may have selevel.
(is_special_se, sp_context) = self.is_special_selinux_path(path)
if is_special_se:
new_context = sp_context
else:
for i in range(len(cur_context)):
if len(context) > i:
if context[i] is not None and context[i] != cur_context[i]:
new_context[i] = context[i]
elif context[i] is None:
new_context[i] = cur_context[i]
if cur_context != new_context:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['secontext'] = cur_context
if 'after' not in diff:
diff['after'] = {}
diff['after']['secontext'] = new_context
try:
if self.check_mode:
return True
rc = selinux.lsetfilecon(to_native(path), ':'.join(new_context))
except OSError as e:
self.fail_json(path=path, msg='invalid selinux context: %s' % to_native(e),
new_context=new_context, cur_context=cur_context, input_was=context)
if rc != 0:
self.fail_json(path=path, msg='set selinux context failed')
changed = True
return changed
def set_owner_if_different(self, path, owner, changed, diff=None, expand=True):
if owner is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
uid = int(owner)
except ValueError:
try:
uid = pwd.getpwnam(owner).pw_uid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner)
if orig_uid != uid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['owner'] = orig_uid
if 'after' not in diff:
diff['after'] = {}
diff['after']['owner'] = uid
if self.check_mode:
return True
try:
os.lchown(b_path, uid, -1)
except (IOError, OSError) as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: %s' % (to_text(e)))
changed = True
return changed
def set_group_if_different(self, path, group, changed, diff=None, expand=True):
if group is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
gid = int(group)
except ValueError:
try:
gid = grp.getgrnam(group).gr_gid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group)
if orig_gid != gid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['group'] = orig_gid
if 'after' not in diff:
diff['after'] = {}
diff['after']['group'] = gid
if self.check_mode:
return True
try:
os.lchown(b_path, -1, gid)
except OSError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed')
changed = True
return changed
def set_mode_if_different(self, path, mode, changed, diff=None, expand=True):
if mode is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
path_stat = os.lstat(b_path)
if self.check_file_absent_if_check_mode(b_path):
return True
if not isinstance(mode, int):
try:
mode = int(mode, 8)
except Exception:
try:
mode = self._symbolic_mode_to_octal(path_stat, mode)
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path,
msg="mode must be in octal or symbolic form",
details=to_native(e))
if mode != stat.S_IMODE(mode):
            # prevent mode from having extra info or being an invalid long number
path = to_text(b_path)
self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode)
prev_mode = stat.S_IMODE(path_stat.st_mode)
if prev_mode != mode:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['mode'] = '0%03o' % prev_mode
if 'after' not in diff:
diff['after'] = {}
diff['after']['mode'] = '0%03o' % mode
if self.check_mode:
return True
# FIXME: comparison against string above will cause this to be executed
# every time
try:
if hasattr(os, 'lchmod'):
os.lchmod(b_path, mode)
else:
if not os.path.islink(b_path):
os.chmod(b_path, mode)
else:
# Attempt to set the perms of the symlink but be
# careful not to change the perms of the underlying
# file while trying
underlying_stat = os.stat(b_path)
os.chmod(b_path, mode)
new_underlying_stat = os.stat(b_path)
if underlying_stat.st_mode != new_underlying_stat.st_mode:
os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))
except OSError as e:
if os.path.islink(b_path) and e.errno in (
errno.EACCES, # can't access symlink in sticky directory (stat)
errno.EPERM, # can't set mode on symbolic links (chmod)
errno.EROFS, # can't set mode on read-only filesystem
):
pass
elif e.errno in (errno.ENOENT, errno.ELOOP): # Can't set mode on broken symbolic links
pass
else:
raise
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chmod failed', details=to_native(e),
exception=traceback.format_exc())
path_stat = os.lstat(b_path)
new_mode = stat.S_IMODE(path_stat.st_mode)
if new_mode != prev_mode:
changed = True
return changed
def set_attributes_if_different(self, path, attributes, changed, diff=None, expand=True):
if attributes is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
existing = self.get_file_attributes(b_path, include_version=False)
attr_mod = '='
if attributes.startswith(('-', '+')):
attr_mod = attributes[0]
attributes = attributes[1:]
if existing.get('attr_flags', '') != attributes or attr_mod == '-':
attrcmd = self.get_bin_path('chattr')
if attrcmd:
attrcmd = [attrcmd, '%s%s' % (attr_mod, attributes), b_path]
changed = True
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['attributes'] = existing.get('attr_flags')
if 'after' not in diff:
diff['after'] = {}
diff['after']['attributes'] = '%s%s' % (attr_mod, attributes)
if not self.check_mode:
try:
rc, out, err = self.run_command(attrcmd)
if rc != 0 or err:
raise Exception("Error while setting attributes: %s" % (out + err))
except Exception as e:
self.fail_json(path=to_text(b_path), msg='chattr failed',
details=to_native(e), exception=traceback.format_exc())
return changed
def get_file_attributes(self, path, include_version=True):
output = {}
attrcmd = self.get_bin_path('lsattr', False)
if attrcmd:
flags = '-vd' if include_version else '-d'
attrcmd = [attrcmd, flags, path]
try:
rc, out, err = self.run_command(attrcmd)
if rc == 0:
res = out.split()
attr_flags_idx = 0
if include_version:
attr_flags_idx = 1
output['version'] = res[0].strip()
output['attr_flags'] = res[attr_flags_idx].replace('-', '').strip()
output['attributes'] = format_attributes(output['attr_flags'])
except Exception:
pass
return output
@classmethod
def _symbolic_mode_to_octal(cls, path_stat, symbolic_mode):
"""
        This enables symbolic chmod string parsing as stated in the chmod man-page.
This includes things like: "u=rw-x+X,g=r-x+X,o=r-x+X"
"""
new_mode = stat.S_IMODE(path_stat.st_mode)
# Now parse all symbolic modes
for mode in symbolic_mode.split(','):
# Per single mode. This always contains a '+', '-' or '='
# Split it on that
permlist = MODE_OPERATOR_RE.split(mode)
# And find all the operators
opers = MODE_OPERATOR_RE.findall(mode)
# The user(s) where it's all about is the first element in the
# 'permlist' list. Take that and remove it from the list.
# An empty user or 'a' means 'all'.
users = permlist.pop(0)
use_umask = (users == '')
if users == 'a' or users == '':
users = 'ugo'
# Check if there are illegal characters in the user list
# They can end up in 'users' because they are not split
if USERS_RE.match(users):
raise ValueError("bad symbolic permission for mode: %s" % mode)
# Now we have two list of equal length, one contains the requested
# permissions and one with the corresponding operators.
for idx, perms in enumerate(permlist):
# Check if there are illegal characters in the permissions
if PERMS_RE.match(perms):
raise ValueError("bad symbolic permission for mode: %s" % mode)
for user in users:
mode_to_apply = cls._get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask)
new_mode = cls._apply_operation_to_mode(user, opers[idx], mode_to_apply, new_mode)
return new_mode
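    # Illustrative conversions (sketch; path_stat is an os.stat_result for a
    # non-directory whose current mode is 0o644):
    #
    #   AnsibleModule._symbolic_mode_to_octal(path_stat, 'u+x')  -> 0o744
    #   AnsibleModule._symbolic_mode_to_octal(path_stat, 'go=')  -> 0o600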
@staticmethod
def _apply_operation_to_mode(user, operator, mode_to_apply, current_mode):
if operator == '=':
if user == 'u':
mask = stat.S_IRWXU | stat.S_ISUID
elif user == 'g':
mask = stat.S_IRWXG | stat.S_ISGID
elif user == 'o':
mask = stat.S_IRWXO | stat.S_ISVTX
# mask out u, g, or o permissions from current_mode and apply new permissions
inverse_mask = mask ^ PERM_BITS
new_mode = (current_mode & inverse_mask) | mode_to_apply
elif operator == '+':
new_mode = current_mode | mode_to_apply
elif operator == '-':
new_mode = current_mode - (current_mode & mode_to_apply)
return new_mode
@staticmethod
def _get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask):
prev_mode = stat.S_IMODE(path_stat.st_mode)
is_directory = stat.S_ISDIR(path_stat.st_mode)
has_x_permissions = (prev_mode & EXEC_PERM_BITS) > 0
apply_X_permission = is_directory or has_x_permissions
# Get the umask, if the 'user' part is empty, the effect is as if (a) were
# given, but bits that are set in the umask are not affected.
# We also need the "reversed umask" for masking
umask = os.umask(0)
os.umask(umask)
rev_umask = umask ^ PERM_BITS
# Permission bits constants documented at:
# http://docs.python.org/2/library/stat.html#stat.S_ISUID
if apply_X_permission:
X_perms = {
'u': {'X': stat.S_IXUSR},
'g': {'X': stat.S_IXGRP},
'o': {'X': stat.S_IXOTH},
}
else:
X_perms = {
'u': {'X': 0},
'g': {'X': 0},
'o': {'X': 0},
}
user_perms_to_modes = {
'u': {
'r': rev_umask & stat.S_IRUSR if use_umask else stat.S_IRUSR,
'w': rev_umask & stat.S_IWUSR if use_umask else stat.S_IWUSR,
'x': rev_umask & stat.S_IXUSR if use_umask else stat.S_IXUSR,
's': stat.S_ISUID,
't': 0,
'u': prev_mode & stat.S_IRWXU,
'g': (prev_mode & stat.S_IRWXG) << 3,
'o': (prev_mode & stat.S_IRWXO) << 6},
'g': {
'r': rev_umask & stat.S_IRGRP if use_umask else stat.S_IRGRP,
'w': rev_umask & stat.S_IWGRP if use_umask else stat.S_IWGRP,
'x': rev_umask & stat.S_IXGRP if use_umask else stat.S_IXGRP,
's': stat.S_ISGID,
't': 0,
'u': (prev_mode & stat.S_IRWXU) >> 3,
'g': prev_mode & stat.S_IRWXG,
'o': (prev_mode & stat.S_IRWXO) << 3},
'o': {
'r': rev_umask & stat.S_IROTH if use_umask else stat.S_IROTH,
'w': rev_umask & stat.S_IWOTH if use_umask else stat.S_IWOTH,
'x': rev_umask & stat.S_IXOTH if use_umask else stat.S_IXOTH,
's': 0,
't': stat.S_ISVTX,
'u': (prev_mode & stat.S_IRWXU) >> 6,
'g': (prev_mode & stat.S_IRWXG) >> 3,
'o': prev_mode & stat.S_IRWXO},
}
# Insert X_perms into user_perms_to_modes
for key, value in X_perms.items():
user_perms_to_modes[key].update(value)
def or_reduce(mode, perm):
return mode | user_perms_to_modes[user][perm]
return reduce(or_reduce, perms, 0)
def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True):
# set modes owners and context as needed
changed = self.set_context_if_different(
file_args['path'], file_args['secontext'], changed, diff
)
changed = self.set_owner_if_different(
file_args['path'], file_args['owner'], changed, diff, expand
)
changed = self.set_group_if_different(
file_args['path'], file_args['group'], changed, diff, expand
)
changed = self.set_mode_if_different(
file_args['path'], file_args['mode'], changed, diff, expand
)
changed = self.set_attributes_if_different(
file_args['path'], file_args['attributes'], changed, diff, expand
)
return changed
def check_file_absent_if_check_mode(self, file_path):
return self.check_mode and not os.path.exists(file_path)
def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def set_file_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def add_path_info(self, kwargs):
'''
for results that are files, supplement the info about the file
        in the return data with stats about the file path.
'''
path = kwargs.get('path', kwargs.get('dest', None))
if path is None:
return kwargs
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.exists(b_path):
(uid, gid) = self.user_and_group(path)
kwargs['uid'] = uid
kwargs['gid'] = gid
try:
user = pwd.getpwuid(uid)[0]
except KeyError:
user = str(uid)
try:
group = grp.getgrgid(gid)[0]
except KeyError:
group = str(gid)
kwargs['owner'] = user
kwargs['group'] = group
st = os.lstat(b_path)
kwargs['mode'] = '0%03o' % stat.S_IMODE(st[stat.ST_MODE])
# secontext not yet supported
if os.path.islink(b_path):
kwargs['state'] = 'link'
elif os.path.isdir(b_path):
kwargs['state'] = 'directory'
elif os.stat(b_path).st_nlink > 1:
kwargs['state'] = 'hard'
else:
kwargs['state'] = 'file'
if HAVE_SELINUX and self.selinux_enabled():
kwargs['secontext'] = ':'.join(self.selinux_context(path))
kwargs['size'] = st[stat.ST_SIZE]
return kwargs
def _check_locale(self):
'''
Uses the locale module to test the currently set locale
(per the LANG and LC_CTYPE environment settings)
'''
try:
# setting the locale to '' uses the default locale
# as it would be returned by locale.getdefaultlocale()
locale.setlocale(locale.LC_ALL, '')
except locale.Error:
# fallback to the 'C' locale, which may cause unicode
# issues but is preferable to simply failing because
# of an unknown locale
locale.setlocale(locale.LC_ALL, 'C')
os.environ['LANG'] = 'C'
os.environ['LC_ALL'] = 'C'
os.environ['LC_MESSAGES'] = 'C'
except Exception as e:
self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" %
to_native(e), exception=traceback.format_exc())
def _handle_aliases(self, spec=None, param=None, option_prefix=''):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
# this uses exceptions as it happens before we can safely call fail_json
alias_warnings = []
alias_results, self._legal_inputs = handle_aliases(spec, param, alias_warnings=alias_warnings)
for option, alias in alias_warnings:
warn('Both option %s and its alias %s are set.' % (option_prefix + option, option_prefix + alias))
deprecated_aliases = []
for i in spec.keys():
if 'deprecated_aliases' in spec[i].keys():
for alias in spec[i]['deprecated_aliases']:
deprecated_aliases.append(alias)
for deprecation in deprecated_aliases:
if deprecation['name'] in param.keys():
deprecate("Alias '%s' is deprecated. See the module docs for more information" % deprecation['name'],
version=deprecation.get('version'), date=deprecation.get('date'),
collection_name=deprecation.get('collection_name'))
return alias_results
def _handle_no_log_values(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
self.no_log_values.update(list_no_log_values(spec, param))
except TypeError as te:
self.fail_json(msg="Failure when processing no_log parameters. Module invocation will be hidden. "
"%s" % to_native(te), invocation={'module_args': 'HIDDEN DUE TO FAILURE'})
for message in list_deprecations(spec, param):
deprecate(message['msg'], version=message.get('version'), date=message.get('date'),
collection_name=message.get('collection_name'))
def _set_internal_properties(self, argument_spec=None, module_parameters=None):
if argument_spec is None:
argument_spec = self.argument_spec
if module_parameters is None:
module_parameters = self.params
for k in PASS_VARS:
# handle setting internal properties from internal ansible vars
param_key = '_ansible_%s' % k
if param_key in module_parameters:
if k in PASS_BOOLS:
setattr(self, PASS_VARS[k][0], self.boolean(module_parameters[param_key]))
else:
setattr(self, PASS_VARS[k][0], module_parameters[param_key])
# clean up internal top level params:
if param_key in self.params:
del self.params[param_key]
else:
# use defaults if not already set
if not hasattr(self, PASS_VARS[k][0]):
setattr(self, PASS_VARS[k][0], PASS_VARS[k][1])
def _check_arguments(self, spec=None, param=None, legal_inputs=None):
unsupported_parameters = set()
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
if legal_inputs is None:
legal_inputs = self._legal_inputs
unsupported_parameters = get_unsupported_parameters(spec, param, legal_inputs)
if unsupported_parameters:
msg = "Unsupported parameters for (%s) module: %s" % (self._name, ', '.join(sorted(list(unsupported_parameters))))
if self._options_context:
msg += " found in %s." % " -> ".join(self._options_context)
supported_parameters = list()
for key in sorted(spec.keys()):
if 'aliases' in spec[key] and spec[key]['aliases']:
supported_parameters.append("%s (%s)" % (key, ', '.join(sorted(spec[key]['aliases']))))
else:
supported_parameters.append(key)
msg += " Supported parameters include: %s" % (', '.join(supported_parameters))
self.fail_json(msg=msg)
if self.check_mode and not self.supports_check_mode:
self.exit_json(skipped=True, msg="remote module (%s) does not support check mode" % self._name)
def _count_terms(self, check, param=None):
if param is None:
param = self.params
return count_terms(check, param)
def _check_mutually_exclusive(self, spec, param=None):
if param is None:
param = self.params
try:
check_mutually_exclusive(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_one_of(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_one_of(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_together(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_together(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_by(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_by(spec, param)
except TypeError as e:
self.fail_json(msg=to_native(e))
def _check_required_arguments(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
check_required_arguments(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_if(self, spec, param=None):
        ''' ensure that parameters which are conditionally required are present '''
if spec is None:
return
if param is None:
param = self.params
try:
check_required_if(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_argument_values(self, spec=None, param=None):
''' ensure all arguments have the requested values, and there are no stray arguments '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
choices = v.get('choices', None)
if choices is None:
continue
if isinstance(choices, SEQUENCETYPE) and not isinstance(choices, (binary_type, text_type)):
if k in param:
# Allow one or more when type='list' param with choices
if isinstance(param[k], list):
diff_list = ", ".join([item for item in param[k] if item not in choices])
if diff_list:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one or more of: %s. Got no match for: %s" % (k, choices_str, diff_list)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
elif param[k] not in choices:
# PyYaml converts certain strings to bools. If we can unambiguously convert back, do so before checking
# the value. If we can't figure this out, module author is responsible.
lowered_choices = None
if param[k] == 'False':
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_FALSE.intersection(choices)
if len(overlap) == 1:
# Extract from a set
(param[k],) = overlap
if param[k] == 'True':
if lowered_choices is None:
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_TRUE.intersection(choices)
if len(overlap) == 1:
(param[k],) = overlap
if param[k] not in choices:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one of: %s, got: %s" % (k, choices_str, param[k])
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
else:
msg = "internal error: choices for argument %s are not iterable: %s" % (k, choices)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def safe_eval(self, value, locals=None, include_exceptions=False):
return safe_eval(value, locals, include_exceptions)
def _check_type_str(self, value, param=None, prefix=''):
opts = {
'error': False,
'warn': False,
'ignore': True
}
# Ignore, warn, or error when converting to a string.
allow_conversion = opts.get(self._string_conversion_action, True)
try:
return check_type_str(value, allow_conversion)
except TypeError:
common_msg = 'quote the entire value to ensure it does not change.'
from_msg = '{0!r}'.format(value)
to_msg = '{0!r}'.format(to_text(value))
if param is not None:
if prefix:
param = '{0}{1}'.format(prefix, param)
from_msg = '{0}: {1!r}'.format(param, value)
to_msg = '{0}: {1!r}'.format(param, to_text(value))
if self._string_conversion_action == 'error':
msg = common_msg.capitalize()
raise TypeError(to_native(msg))
elif self._string_conversion_action == 'warn':
msg = ('The value "{0}" (type {1.__class__.__name__}) was converted to "{2}" (type string). '
'If this does not look like what you expect, {3}').format(from_msg, value, to_msg, common_msg)
self.warn(to_native(msg))
return to_native(value, errors='surrogate_or_strict')
def _check_type_list(self, value):
return check_type_list(value)
def _check_type_dict(self, value):
return check_type_dict(value)
def _check_type_bool(self, value):
return check_type_bool(value)
def _check_type_int(self, value):
return check_type_int(value)
def _check_type_float(self, value):
return check_type_float(value)
def _check_type_path(self, value):
return check_type_path(value)
def _check_type_jsonarg(self, value):
return check_type_jsonarg(value)
def _check_type_raw(self, value):
return check_type_raw(value)
def _check_type_bytes(self, value):
return check_type_bytes(value)
def _check_type_bits(self, value):
return check_type_bits(value)
def _handle_options(self, argument_spec=None, params=None, prefix=''):
''' deal with options to create sub spec '''
if argument_spec is None:
argument_spec = self.argument_spec
if params is None:
params = self.params
for (k, v) in argument_spec.items():
wanted = v.get('type', None)
if wanted == 'dict' or (wanted == 'list' and v.get('elements', '') == 'dict'):
spec = v.get('options', None)
if v.get('apply_defaults', False):
if spec is not None:
if params.get(k) is None:
params[k] = {}
else:
continue
elif spec is None or k not in params or params[k] is None:
continue
self._options_context.append(k)
if isinstance(params[k], dict):
elements = [params[k]]
else:
elements = params[k]
for idx, param in enumerate(elements):
if not isinstance(param, dict):
self.fail_json(msg="value of %s must be of type dict or list of dict" % k)
new_prefix = prefix + k
if wanted == 'list':
new_prefix += '[%d]' % idx
new_prefix += '.'
self._set_fallbacks(spec, param)
options_aliases = self._handle_aliases(spec, param, option_prefix=new_prefix)
options_legal_inputs = list(spec.keys()) + list(options_aliases.keys())
self._set_internal_properties(spec, param)
self._check_arguments(spec, param, options_legal_inputs)
# check exclusive early
if not self.bypass_checks:
self._check_mutually_exclusive(v.get('mutually_exclusive', None), param)
self._set_defaults(pre=True, spec=spec, param=param)
if not self.bypass_checks:
self._check_required_arguments(spec, param)
self._check_argument_types(spec, param, new_prefix)
self._check_argument_values(spec, param)
self._check_required_together(v.get('required_together', None), param)
self._check_required_one_of(v.get('required_one_of', None), param)
self._check_required_if(v.get('required_if', None), param)
self._check_required_by(v.get('required_by', None), param)
self._set_defaults(pre=False, spec=spec, param=param)
# handle multi level options (sub argspec)
self._handle_options(spec, param, new_prefix)
self._options_context.pop()
def _get_wanted_type(self, wanted, k):
# Use the private method for 'str' type to handle the string conversion warning.
if wanted == 'str':
type_checker, wanted = self._check_type_str, 'str'
else:
type_checker, wanted = get_type_validator(wanted)
if type_checker is None:
self.fail_json(msg="implementation error: unknown type %s requested for %s" % (wanted, k))
return type_checker, wanted
def _handle_elements(self, wanted, param, values):
type_checker, wanted_name = self._get_wanted_type(wanted, param)
validated_params = []
# Get param name for strings so we can later display this value in a useful error message if needed
# Only pass 'kwargs' to our checkers and ignore custom callable checkers
kwargs = {}
if wanted_name == 'str' and isinstance(wanted, string_types):
if isinstance(param, string_types):
kwargs['param'] = param
elif isinstance(param, dict):
kwargs['param'] = list(param.keys())[0]
for value in values:
try:
validated_params.append(type_checker(value, **kwargs))
except (TypeError, ValueError) as e:
msg = "Elements value for option %s" % param
if self._options_context:
msg += " found in '%s'" % " -> ".join(self._options_context)
msg += " is of type %s and we were unable to convert to %s: %s" % (type(value), wanted_name, to_native(e))
self.fail_json(msg=msg)
return validated_params
def _check_argument_types(self, spec=None, param=None, prefix=''):
''' ensure all arguments have the requested type '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
wanted = v.get('type', None)
if k not in param:
continue
value = param[k]
if value is None:
continue
type_checker, wanted_name = self._get_wanted_type(wanted, k)
# Get param name for strings so we can later display this value in a useful error message if needed
# Only pass 'kwargs' to our checkers and ignore custom callable checkers
kwargs = {}
            if wanted_name == 'str' and isinstance(wanted, string_types):
kwargs['param'] = list(param.keys())[0]
# Get the name of the parent key if this is a nested option
if prefix:
kwargs['prefix'] = prefix
try:
param[k] = type_checker(value, **kwargs)
wanted_elements = v.get('elements', None)
if wanted_elements:
if wanted != 'list' or not isinstance(param[k], list):
msg = "Invalid type %s for option '%s'" % (wanted_name, param)
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += ", elements value check is supported only with 'list' type"
self.fail_json(msg=msg)
param[k] = self._handle_elements(wanted_elements, k, param[k])
except (TypeError, ValueError) as e:
msg = "argument %s is of type %s" % (k, type(value))
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += " and we were unable to convert to %s: %s" % (wanted_name, to_native(e))
self.fail_json(msg=msg)
def _set_defaults(self, pre=True, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
default = v.get('default', None)
if pre is True:
# this prevents setting defaults on required items
if default is not None and k not in param:
param[k] = default
else:
# make sure things without a default still get set None
if k not in param:
param[k] = default
def _set_fallbacks(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
fallback = v.get('fallback', (None,))
fallback_strategy = fallback[0]
fallback_args = []
fallback_kwargs = {}
if k not in param and fallback_strategy is not None:
for item in fallback[1:]:
if isinstance(item, dict):
fallback_kwargs = item
else:
fallback_args = item
try:
param[k] = fallback_strategy(*fallback_args, **fallback_kwargs)
except AnsibleFallbackNotFound:
continue
def _load_params(self):
''' read the input and set the params attribute.
This method is for backwards compatibility. The guts of the function
were moved out in 2.1 so that custom modules could read the parameters.
'''
# debug overrides to read args from file or cmdline
self.params = _load_params()
def _log_to_syslog(self, msg):
if HAS_SYSLOG:
try:
module = 'ansible-%s' % self._name
facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER)
syslog.openlog(str(module), 0, facility)
syslog.syslog(syslog.LOG_INFO, msg)
except TypeError as e:
self.fail_json(
msg='Failed to log to syslog (%s). To proceed anyway, '
'disable syslog logging by setting no_target_syslog '
'to True in your Ansible config.' % to_native(e),
exception=traceback.format_exc(),
msg_to_log=msg,
)
def debug(self, msg):
if self._debug:
self.log('[debug] %s' % msg)
def log(self, msg, log_args=None):
if not self.no_log:
if log_args is None:
log_args = dict()
module = 'ansible-%s' % self._name
if isinstance(module, binary_type):
module = module.decode('utf-8', 'replace')
# 6655 - allow for accented characters
if not isinstance(msg, (binary_type, text_type)):
raise TypeError("msg should be a string (got %s)" % type(msg))
# We want journal to always take text type
# syslog takes bytes on py2, text type on py3
if isinstance(msg, binary_type):
journal_msg = remove_values(msg.decode('utf-8', 'replace'), self.no_log_values)
else:
# TODO: surrogateescape is a danger here on Py3
journal_msg = remove_values(msg, self.no_log_values)
if PY3:
syslog_msg = journal_msg
else:
syslog_msg = journal_msg.encode('utf-8', 'replace')
if has_journal:
journal_args = [("MODULE", os.path.basename(__file__))]
for arg in log_args:
journal_args.append((arg.upper(), str(log_args[arg])))
try:
if HAS_SYSLOG:
# If syslog_facility specified, it needs to convert
# from the facility name to the facility code, and
# set it as SYSLOG_FACILITY argument of journal.send()
facility = getattr(syslog,
self._syslog_facility,
syslog.LOG_USER) >> 3
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
SYSLOG_FACILITY=facility,
**dict(journal_args))
else:
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
**dict(journal_args))
except IOError:
# fall back to syslog since logging to journal failed
self._log_to_syslog(syslog_msg)
else:
self._log_to_syslog(syslog_msg)
def _log_invocation(self):
''' log that ansible ran the module '''
# TODO: generalize a separate log function and make log_invocation use it
# Sanitize possible password argument when logging.
log_args = dict()
for param in self.params:
canon = self.aliases.get(param, param)
arg_opts = self.argument_spec.get(canon, {})
no_log = arg_opts.get('no_log', None)
# try to proactively capture password/passphrase fields
if no_log is None and PASSWORD_MATCH.search(param):
log_args[param] = 'NOT_LOGGING_PASSWORD'
self.warn('Module did not set no_log for %s' % param)
elif self.boolean(no_log):
log_args[param] = 'NOT_LOGGING_PARAMETER'
else:
param_val = self.params[param]
if not isinstance(param_val, (text_type, binary_type)):
param_val = str(param_val)
elif isinstance(param_val, text_type):
param_val = param_val.encode('utf-8')
log_args[param] = heuristic_log_sanitize(param_val, self.no_log_values)
msg = ['%s=%s' % (to_native(arg), to_native(val)) for arg, val in log_args.items()]
if msg:
msg = 'Invoked with %s' % ' '.join(msg)
else:
msg = 'Invoked'
self.log(msg, log_args=log_args)
def _set_cwd(self):
try:
cwd = os.getcwd()
if not os.access(cwd, os.F_OK | os.R_OK):
raise Exception()
return cwd
except Exception:
# we don't have access to the cwd, probably because of sudo.
# Try and move to a neutral location to prevent errors
for cwd in [self.tmpdir, os.path.expandvars('$HOME'), tempfile.gettempdir()]:
try:
if os.access(cwd, os.F_OK | os.R_OK):
os.chdir(cwd)
return cwd
except Exception:
pass
# we won't error here, as it may *not* be a problem,
# and we don't want to break modules unnecessarily
return None
def get_bin_path(self, arg, required=False, opt_dirs=None):
'''
Find system executable in PATH.
:param arg: The executable to find.
:param required: if executable is not found and required is ``True``, fail_json
:param opt_dirs: optional list of directories to search in addition to ``PATH``
:returns: if found return full path; otherwise return None
'''
bin_path = None
try:
bin_path = get_bin_path(arg=arg, opt_dirs=opt_dirs)
except ValueError as e:
if required:
self.fail_json(msg=to_text(e))
else:
return bin_path
return bin_path
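    # Illustrative usage (sketch):
    #
    #   lsattr = module.get_bin_path('lsattr', required=False, opt_dirs=['/usr/sbin'])
    #   # -> e.g. '/usr/bin/lsattr', or None when absent and not required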
def boolean(self, arg):
'''Convert the argument to a boolean'''
if arg is None:
return arg
try:
return boolean(arg)
except TypeError as e:
self.fail_json(msg=to_native(e))
def jsonify(self, data):
try:
return jsonify(data)
except UnicodeError as e:
self.fail_json(msg=to_text(e))
def from_json(self, data):
return json.loads(data)
def add_cleanup_file(self, path):
if path not in self.cleanup_files:
self.cleanup_files.append(path)
def do_cleanup_files(self):
for path in self.cleanup_files:
self.cleanup(path)
def _return_formatted(self, kwargs):
self.add_path_info(kwargs)
if 'invocation' not in kwargs:
kwargs['invocation'] = {'module_args': self.params}
if 'warnings' in kwargs:
if isinstance(kwargs['warnings'], list):
for w in kwargs['warnings']:
self.warn(w)
else:
self.warn(kwargs['warnings'])
warnings = get_warning_messages()
if warnings:
kwargs['warnings'] = warnings
if 'deprecations' in kwargs:
if isinstance(kwargs['deprecations'], list):
for d in kwargs['deprecations']:
if isinstance(d, SEQUENCETYPE) and len(d) == 2:
self.deprecate(d[0], version=d[1])
elif isinstance(d, Mapping):
self.deprecate(d['msg'], version=d.get('version'), date=d.get('date'),
collection_name=d.get('collection_name'))
else:
self.deprecate(d) # pylint: disable=ansible-deprecated-no-version
else:
self.deprecate(kwargs['deprecations']) # pylint: disable=ansible-deprecated-no-version
deprecations = get_deprecation_messages()
if deprecations:
kwargs['deprecations'] = deprecations
kwargs = remove_values(kwargs, self.no_log_values)
print('\n%s' % self.jsonify(kwargs))
def exit_json(self, **kwargs):
''' return from the module, without error '''
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(0)
def fail_json(self, msg, **kwargs):
''' return from the module, with an error message '''
kwargs['failed'] = True
kwargs['msg'] = msg
# Add traceback if debug or high verbosity and it is missing
# NOTE: Badly named as exception, it really always has been a traceback
if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3):
if PY2:
# On Python 2 this is the last (stack frame) exception and as such may be unrelated to the failure
kwargs['exception'] = 'WARNING: The below traceback may *not* be related to the actual failure.\n' +\
''.join(traceback.format_tb(sys.exc_info()[2]))
else:
kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2]))
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(1)
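    # Illustrative module exit patterns (sketch):
    #
    #   module.exit_json(changed=True, msg='done')    # success, process exits 0
    #   module.fail_json(msg='something went wrong')  # failure, process exits 1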
def fail_on_missing_params(self, required_params=None):
if not required_params:
return
try:
check_missing_parameters(self.params, required_params)
except TypeError as e:
self.fail_json(msg=to_native(e))
def digest_from_file(self, filename, algorithm):
''' Return hex digest of local file for a digest_method specified by name, or None if file is not present. '''
b_filename = to_bytes(filename, errors='surrogate_or_strict')
if not os.path.exists(b_filename):
return None
if os.path.isdir(b_filename):
self.fail_json(msg="attempted to take checksum of directory: %s" % filename)
# preserve old behaviour where the third parameter was a hash algorithm object
if hasattr(algorithm, 'hexdigest'):
digest_method = algorithm
else:
try:
digest_method = AVAILABLE_HASH_ALGORITHMS[algorithm]()
except KeyError:
self.fail_json(msg="Could not hash file '%s' with algorithm '%s'. Available algorithms: %s" %
(filename, algorithm, ', '.join(AVAILABLE_HASH_ALGORITHMS)))
blocksize = 64 * 1024
infile = open(os.path.realpath(b_filename), 'rb')
block = infile.read(blocksize)
while block:
digest_method.update(block)
block = infile.read(blocksize)
infile.close()
return digest_method.hexdigest()
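    # Illustrative usage (sketch):
    #
    #   module.digest_from_file('/etc/hosts', 'sha256')
    #   # -> hex digest string, or None if the file does not exist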
def md5(self, filename):
''' Return MD5 hex digest of local file using digest_from_file().
Do not use this function unless you have no other choice for:
1) Optional backwards compatibility
2) Compatibility with a third party protocol
This function will not work on systems complying with FIPS-140-2.
Most uses of this function can use the module.sha1 function instead.
'''
if 'md5' not in AVAILABLE_HASH_ALGORITHMS:
raise ValueError('MD5 not available. Possibly running in FIPS mode')
return self.digest_from_file(filename, 'md5')
def sha1(self, filename):
''' Return SHA1 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha1')
def sha256(self, filename):
''' Return SHA-256 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha256')
def backup_local(self, fn):
        '''make a date-marked backup of the specified file; return the path to the backup, or an empty string if the file does not exist'''
backupdest = ''
if os.path.exists(fn):
# backups named basename.PID.YYYY-MM-DD@HH:MM:SS~
ext = time.strftime("%Y-%m-%d@%H:%M:%S~", time.localtime(time.time()))
backupdest = '%s.%s.%s' % (fn, os.getpid(), ext)
try:
self.preserved_copy(fn, backupdest)
except (shutil.Error, IOError) as e:
self.fail_json(msg='Could not make backup of %s to %s: %s' % (fn, backupdest, to_native(e)))
return backupdest
def cleanup(self, tmpfile):
if os.path.exists(tmpfile):
try:
os.unlink(tmpfile)
except OSError as e:
sys.stderr.write("could not cleanup %s: %s" % (tmpfile, to_native(e)))
def preserved_copy(self, src, dest):
"""Copy a file with preserved ownership, permissions and context"""
# shutil.copy2(src, dst)
# Similar to shutil.copy(), but metadata is copied as well - in fact,
# this is just shutil.copy() followed by copystat(). This is similar
# to the Unix command cp -p.
#
# shutil.copystat(src, dst)
# Copy the permission bits, last access time, last modification time,
# and flags from src to dst. The file contents, owner, and group are
# unaffected. src and dst are path names given as strings.
shutil.copy2(src, dest)
# Set the context
if self.selinux_enabled():
context = self.selinux_context(src)
self.set_context_if_different(dest, context, False)
# chown it
try:
dest_stat = os.stat(src)
tmp_stat = os.stat(dest)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(dest, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
# Set the attributes
current_attribs = self.get_file_attributes(src, include_version=False)
current_attribs = current_attribs.get('attr_flags', '')
self.set_attributes_if_different(dest, current_attribs, True)
def atomic_move(self, src, dest, unsafe_writes=False):
'''atomically move src to dest, copying attributes from dest; uses os.rename because it is an
atomic operation, while the rest of the function works around limitations and corner cases
and preserves the selinux context where possible'''
context = None
dest_stat = None
b_src = to_bytes(src, errors='surrogate_or_strict')
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
try:
dest_stat = os.stat(b_dest)
# copy mode and ownership
os.chmod(b_src, dest_stat.st_mode & PERM_BITS)
os.chown(b_src, dest_stat.st_uid, dest_stat.st_gid)
# try to copy flags if possible
if hasattr(os, 'chflags') and hasattr(dest_stat, 'st_flags'):
try:
os.chflags(b_src, dest_stat.st_flags)
except OSError as e:
for err in 'EOPNOTSUPP', 'ENOTSUP':
if hasattr(errno, err) and e.errno == getattr(errno, err):
break
else:
raise
except OSError as e:
if e.errno != errno.EPERM:
raise
if self.selinux_enabled():
context = self.selinux_context(dest)
else:
if self.selinux_enabled():
context = self.selinux_default_context(dest)
creating = not os.path.exists(b_dest)
try:
# Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
os.rename(b_src, b_dest)
except (IOError, OSError) as e:
if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY, errno.EBUSY]:
# only try workarounds for errno 18 (cross device), 1 (not permitted), 13 (permission denied),
# 16 (device or resource busy) and 26 (text file busy); the latter two happen on vagrant synced
# folders and other 'exotic' non posix file systems
self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, to_native(e)), exception=traceback.format_exc())
else:
# Use bytes here. In the shippable CI, this fails with
# a UnicodeError with surrogateescape'd strings for an unknown
# reason (doesn't happen in a local Ubuntu16.04 VM)
b_dest_dir = os.path.dirname(b_dest)
b_suffix = os.path.basename(b_dest)
error_msg = None
tmp_dest_name = None
try:
tmp_dest_fd, tmp_dest_name = tempfile.mkstemp(prefix=b'.ansible_tmp', dir=b_dest_dir, suffix=b_suffix)
except (OSError, IOError) as e:
error_msg = 'The destination directory (%s) is not writable by the current user. Error was: %s' % (os.path.dirname(dest), to_native(e))
except TypeError:
# We expect that this is happening because python3.4.x and
# below can't handle byte strings in mkstemp().
# Traceback would end in something like:
# file = _os.path.join(dir, pre + name + suf)
# TypeError: can't concat bytes to str
error_msg = ('Failed creating tmp file for atomic move. This usually happens when using a Python3 '
'version older than Python3.5. Please use Python2.x or Python3.5 or greater.')
finally:
if error_msg:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg=error_msg, exception=traceback.format_exc())
if tmp_dest_name:
b_tmp_dest_name = to_bytes(tmp_dest_name, errors='surrogate_or_strict')
try:
try:
# close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host)
os.close(tmp_dest_fd)
# leaves tmp file behind when sudo and not root
try:
shutil.move(b_src, b_tmp_dest_name)
except OSError:
# cleanup will happen by 'rm' of tmpdir
# copy2 will preserve some metadata
shutil.copy2(b_src, b_tmp_dest_name)
if self.selinux_enabled():
self.set_context_if_different(
b_tmp_dest_name, context, False)
try:
tmp_stat = os.stat(b_tmp_dest_name)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(b_tmp_dest_name, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
try:
os.rename(b_tmp_dest_name, b_dest)
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes and e.errno == errno.EBUSY:
self._unsafe_writes(b_tmp_dest_name, b_dest)
else:
self.fail_json(msg='Unable to move %s to %s, failed final rename from %s: %s' %
(src, dest, b_tmp_dest_name, to_native(e)), exception=traceback.format_exc())
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg='Failed to replace file: %s to %s: %s' % (src, dest, to_native(e)), exception=traceback.format_exc())
finally:
self.cleanup(b_tmp_dest_name)
if creating:
# make sure the file has the correct permissions
# based on the current value of umask
umask = os.umask(0)
os.umask(umask)
os.chmod(b_dest, DEFAULT_PERM & ~umask)
try:
os.chown(b_dest, os.geteuid(), os.getegid())
except OSError:
# We're okay with trying our best here. If the user is not
# root (or old Unices) they won't be able to chown.
pass
if self.selinux_enabled():
# rename might not preserve context
self.set_context_if_different(dest, context, False)
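# Illustrative usage sketch, not part of the original module; `module` and the
# destination path are hypothetical. atomic_move is the second half of the
# usual write-to-tempfile-then-rename pattern:
#
#   tmpfd, tmppath = tempfile.mkstemp(dir=module.tmpdir)
#   with os.fdopen(tmpfd, 'wb') as f:
#       f.write(b'new contents\n')
#   module.atomic_move(tmppath, '/etc/example.conf')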
def _unsafe_writes(self, src, dest):
# sadly there are some situations where we cannot ensure atomicity, but only if
# the user insists and we get the appropriate error we update the file unsafely
try:
out_dest = in_src = None
try:
out_dest = open(dest, 'wb')
in_src = open(src, 'rb')
shutil.copyfileobj(in_src, out_dest)
finally: # assuring closed files in 2.4 compatible way
if out_dest:
out_dest.close()
if in_src:
in_src.close()
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Could not write data to file (%s) from (%s): %s' % (dest, src, to_native(e)),
exception=traceback.format_exc())
def _clean_args(self, args):
if not self._clean:
# create a printable version of the command for use in reporting later,
# which strips out things like passwords from the args list
to_clean_args = args
if PY2:
if isinstance(args, text_type):
to_clean_args = to_bytes(args)
else:
if isinstance(args, binary_type):
to_clean_args = to_text(args)
if isinstance(args, (text_type, binary_type)):
to_clean_args = shlex.split(to_clean_args)
clean_args = []
is_passwd = False
for arg in (to_native(a) for a in to_clean_args):
if is_passwd:
is_passwd = False
clean_args.append('********')
continue
if PASSWD_ARG_RE.match(arg):
sep_idx = arg.find('=')
if sep_idx > -1:
clean_args.append('%s=********' % arg[:sep_idx])
continue
else:
is_passwd = True
arg = heuristic_log_sanitize(arg, self.no_log_values)
clean_args.append(arg)
self._clean = ' '.join(shlex_quote(arg) for arg in clean_args)
return self._clean
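# Illustrative behaviour sketch, not part of the original module; the command
# is hypothetical. Password-looking arguments are scrubbed before logging:
#
#   module._clean_args(['mysql', '--user=admin', '--password=s3cret'])
#   # -> "mysql --user=admin '--password=********'"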
def _restore_signal_handlers(self):
# Reset SIGPIPE to SIG_DFL, otherwise in Python2.7 it gets ignored in subprocesses.
if PY2 and sys.platform != 'win32':
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None,
use_unsafe_shell=False, prompt_regex=None, environ_update=None, umask=None, encoding='utf-8', errors='surrogate_or_strict',
expand_user_and_vars=True, pass_fds=None, before_communicate_callback=None, ignore_invalid_cwd=True):
'''
Execute a command, returns rc, stdout, and stderr.
:arg args: is the command to run
* If args is a list, the command will be run with shell=False.
* If args is a string and use_unsafe_shell=False it will split args to a list and run with shell=False
* If args is a string and use_unsafe_shell=True it runs with shell=True.
:kw check_rc: Whether to call fail_json in case of non zero RC.
Default False
:kw close_fds: See documentation for subprocess.Popen(). Default True
:kw executable: See documentation for subprocess.Popen(). Default None
:kw data: If given, information to write to the stdin of the command
:kw binary_data: If False, append a newline to the data. Default False
:kw path_prefix: If given, additional path to find the command in.
This adds to the PATH environment variable so helper commands in
the same directory can also be found
:kw cwd: If given, working directory to run the command inside
:kw use_unsafe_shell: See `args` parameter. Default False
:kw prompt_regex: Regex string (not a compiled regex) which can be
used to detect prompts in the stdout which would otherwise cause
the execution to hang (especially if no input data is specified)
:kw environ_update: dictionary to *update* os.environ with
:kw umask: Umask to be used when running the command. Default None
:kw encoding: Since we return native strings, on python3 we need to
know the encoding to use to transform from bytes to text. If you
want to always get bytes back, use encoding=None. The default is
"utf-8". This does not affect transformation of strings given as
args.
:kw errors: Since we return native strings, on python3 we need to
transform stdout and stderr from bytes to text. If the bytes are
undecodable in the ``encoding`` specified, then use this error
handler to deal with them. The default is ``surrogate_or_strict``
which means that the bytes will be decoded using the
surrogateescape error handler if available (available on all
python3 versions we support) otherwise a UnicodeError traceback
will be raised. This does not affect transformations of strings
given as args.
:kw expand_user_and_vars: When ``use_unsafe_shell=False`` this argument
dictates whether ``~`` is expanded in paths and environment variables
are expanded before running the command. When ``True`` a string such as
``$SHELL`` will be expanded regardless of escaping. When ``False`` and
``use_unsafe_shell=False`` no path or variable expansion will be done.
:kw pass_fds: When running on Python 3 this argument
dictates which file descriptors should be passed
to an underlying ``Popen`` constructor. On Python 2, this will
set ``close_fds`` to False.
:kw before_communicate_callback: This function will be called
after ``Popen`` object will be created
but before communicating to the process.
(``Popen`` object will be passed to callback as a first argument)
:kw ignore_invalid_cwd: This flag indicates whether an invalid ``cwd``
(non-existent or not a directory) should be ignored or should raise
an exception.
:returns: A 3-tuple of return code (integer), stdout (native string),
and stderr (native string). On python2, stdout and stderr are both
byte strings. On python3, stdout and stderr are text strings converted
according to the encoding and errors parameters. If you want byte
strings on python3, use encoding=None to turn decoding to text off.
'''
# used by clean args later on
self._clean = None
if not isinstance(args, (list, binary_type, text_type)):
msg = "Argument 'args' to run_command must be list or string"
self.fail_json(rc=257, cmd=args, msg=msg)
shell = False
if use_unsafe_shell:
# stringify args for unsafe/direct shell usage
if isinstance(args, list):
args = b" ".join([to_bytes(shlex_quote(x), errors='surrogate_or_strict') for x in args])
else:
args = to_bytes(args, errors='surrogate_or_strict')
# not set explicitly, check if set by controller
if executable:
executable = to_bytes(executable, errors='surrogate_or_strict')
args = [executable, b'-c', args]
elif self._shell not in (None, '/bin/sh'):
args = [to_bytes(self._shell, errors='surrogate_or_strict'), b'-c', args]
else:
shell = True
else:
# ensure args are a list
if isinstance(args, (binary_type, text_type)):
# On python2.6 and below, shlex has problems with text type
# On python3, shlex needs a text type.
if PY2:
args = to_bytes(args, errors='surrogate_or_strict')
elif PY3:
args = to_text(args, errors='surrogateescape')
args = shlex.split(args)
# expand ``~`` in paths, and all environment vars
if expand_user_and_vars:
args = [to_bytes(os.path.expanduser(os.path.expandvars(x)), errors='surrogate_or_strict') for x in args if x is not None]
else:
args = [to_bytes(x, errors='surrogate_or_strict') for x in args if x is not None]
prompt_re = None
if prompt_regex:
if isinstance(prompt_regex, text_type):
if PY3:
prompt_regex = to_bytes(prompt_regex, errors='surrogateescape')
elif PY2:
prompt_regex = to_bytes(prompt_regex, errors='surrogate_or_strict')
try:
prompt_re = re.compile(prompt_regex, re.MULTILINE)
except re.error:
self.fail_json(msg="invalid prompt regular expression given to run_command")
rc = 0
msg = None
st_in = None
# Manipulate the environ we'll send to the new process
old_env_vals = {}
# We can set this from both an attribute and per call
for key, val in self.run_command_environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if environ_update:
for key, val in environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if path_prefix:
path = os.environ.get('PATH', '')
old_env_vals['PATH'] = path
if path:
os.environ['PATH'] = "%s:%s" % (path_prefix, path)
else:
os.environ['PATH'] = path_prefix
# If using test-module.py and explode, the remote lib path will resemble:
# /tmp/test_module_scratch/debug_dir/ansible/module_utils/basic.py
# If using ansible or ansible-playbook with a remote system:
# /tmp/ansible_vmweLQ/ansible_modlib.zip/ansible/module_utils/basic.py
# Clean out python paths set by ansiballz
if 'PYTHONPATH' in os.environ:
pypaths = os.environ['PYTHONPATH'].split(':')
pypaths = [x for x in pypaths
if not x.endswith('/ansible_modlib.zip') and
not x.endswith('/debug_dir')]
os.environ['PYTHONPATH'] = ':'.join(pypaths)
if not os.environ['PYTHONPATH']:
del os.environ['PYTHONPATH']
if data:
st_in = subprocess.PIPE
kwargs = dict(
executable=executable,
shell=shell,
close_fds=close_fds,
stdin=st_in,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=self._restore_signal_handlers,
)
if PY3 and pass_fds:
kwargs["pass_fds"] = pass_fds
elif PY2 and pass_fds:
kwargs['close_fds'] = False
# store the pwd
prev_dir = os.getcwd()
# make sure we're in the right working directory
if cwd:
if os.path.isdir(cwd):
cwd = to_bytes(os.path.abspath(os.path.expanduser(cwd)), errors='surrogate_or_strict')
kwargs['cwd'] = cwd
try:
os.chdir(cwd)
except (OSError, IOError) as e:
self.fail_json(rc=e.errno, msg="Could not chdir to %s, %s" % (cwd, to_native(e)),
exception=traceback.format_exc())
elif not ignore_invalid_cwd:
self.fail_json(msg="Provided cwd is not a valid directory: %s" % cwd)
old_umask = None
if umask:
old_umask = os.umask(umask)
try:
if self._debug:
self.log('Executing: ' + self._clean_args(args))
cmd = subprocess.Popen(args, **kwargs)
if before_communicate_callback:
before_communicate_callback(cmd)
# the communication logic here is essentially taken from that
# of the _communicate() function in ssh.py
stdout = b''
stderr = b''
try:
selector = selectors.DefaultSelector()
except (IOError, OSError):
# Failed to detect default selector for the given platform
# Select PollSelector which is supported by major platforms
selector = selectors.PollSelector()
selector.register(cmd.stdout, selectors.EVENT_READ)
selector.register(cmd.stderr, selectors.EVENT_READ)
if os.name == 'posix':
fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
if data:
if not binary_data:
data += '\n'
if isinstance(data, text_type):
data = to_bytes(data)
cmd.stdin.write(data)
cmd.stdin.close()
while True:
events = selector.select(1)
for key, event in events:
b_chunk = key.fileobj.read()
if b_chunk == b(''):
selector.unregister(key.fileobj)
if key.fileobj == cmd.stdout:
stdout += b_chunk
elif key.fileobj == cmd.stderr:
stderr += b_chunk
# if we're checking for prompts, do it now
if prompt_re:
if prompt_re.search(stdout) and not data:
if encoding:
stdout = to_native(stdout, encoding=encoding, errors=errors)
return (257, stdout, "A prompt was encountered while running a command, but no input data was specified")
# only break out if no pipes are left to read or
# the pipes are completely read and
# the process is terminated
if (not events or not selector.get_map()) and cmd.poll() is not None:
break
# No pipes are left to read but process is not yet terminated
# Only then it is safe to wait for the process to be finished
# NOTE: Actually cmd.poll() is always None here if no selectors are left
elif not selector.get_map() and cmd.poll() is None:
cmd.wait()
# The process is terminated. Since no pipes to read from are
# left, there is no need to call select() again.
break
cmd.stdout.close()
cmd.stderr.close()
selector.close()
rc = cmd.returncode
except (OSError, IOError) as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(e)))
self.fail_json(rc=e.errno, msg=to_native(e), cmd=self._clean_args(args))
except Exception as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(traceback.format_exc())))
self.fail_json(rc=257, msg=to_native(e), exception=traceback.format_exc(), cmd=self._clean_args(args))
# Restore env settings
for key, val in old_env_vals.items():
if val is None:
del os.environ[key]
else:
os.environ[key] = val
if old_umask:
os.umask(old_umask)
if rc != 0 and check_rc:
msg = heuristic_log_sanitize(stderr.rstrip(), self.no_log_values)
self.fail_json(cmd=self._clean_args(args), rc=rc, stdout=stdout, stderr=stderr, msg=msg)
# reset the pwd
os.chdir(prev_dir)
if encoding is not None:
return (rc, to_native(stdout, encoding=encoding, errors=errors),
to_native(stderr, encoding=encoding, errors=errors))
return (rc, stdout, stderr)
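# Illustrative usage sketch, not part of the original module; the command and
# result handling are hypothetical:
#
#   rc, out, err = module.run_command(['/usr/bin/id', '-un'])
#   if rc != 0:
#       module.fail_json(msg='id failed', rc=rc, stderr=err)
#   module.exit_json(changed=False, user=out.strip())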
def append_to_file(self, filename, str):
filename = os.path.expandvars(os.path.expanduser(filename))
fh = open(filename, 'a')
fh.write(str)
fh.close()
def bytes_to_human(self, size):
return bytes_to_human(size)
# for backwards compatibility
pretty_bytes = bytes_to_human
def human_to_bytes(self, number, isbits=False):
return human_to_bytes(number, isbits)
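# Illustrative values, not part of the original module; the exact output
# formatting is an assumption:
#
#   module.human_to_bytes('1K')     # -> 1024
#   module.bytes_to_human(1048576)  # -> '1.00 MB'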
#
# Backwards compat
#
# In 2.0, moved from inside the module to the toplevel
is_executable = is_executable
@staticmethod
def get_buffer_size(fd):
try:
# 1032 == F_GETPIPE_SZ (Linux fcntl that returns the pipe buffer size)
buffer_size = fcntl.fcntl(fd, 1032)
except Exception:
try:
# not as exact as above, but should be good enough for most platforms that fail the previous call
buffer_size = select.PIPE_BUF
except Exception:
buffer_size = 9000 # use sane default JIC
return buffer_size
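# Illustrative note, not part of the original module: on Linux the fcntl above
# reports the pipe capacity, e.g.
#
#   r, w = os.pipe()
#   AnsibleModule.get_buffer_size(r)  # typically 65536 on Linux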
def get_module_path():
return os.path.dirname(os.path.realpath(__file__))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,004 |
Confusing error message when using command module with non-existent executable and trying to access result
|
##### SUMMARY
`ansible.builtin.command` module does not return as expected when executable not found
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
command
##### ANSIBLE VERSION
```paste below
ansible 2.9.4
config file = /home/valkra/code/ansible-config/ansible.cfg
configured module search path = ['/home/valkra/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/valkra/.local/lib/python3.6/site-packages/ansible
executable location = /home/valkra/.local/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
```
##### CONFIGURATION
```paste below
ANSIBLE_PIPELINING(/home/valkra/code/ansible-config/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/home/valkra/code/ansible-config/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s
ANSIBLE_SSH_RETRIES(/home/valkra/code/ansible-config/ansible.cfg) = 4
CACHE_PLUGIN(/home/valkra/code/ansible-config/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/valkra/code/ansible-config/ansible.cfg) = $HOME/.ansible/
CACHE_PLUGIN_TIMEOUT(/home/valkra/code/ansible-config/ansible.cfg) = 36000
DEFAULT_ASK_VAULT_PASS(/home/valkra/code/ansible-config/ansible.cfg) = True
DEFAULT_BECOME(/home/valkra/code/ansible-config/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/home/valkra/code/ansible-config/ansible.cfg) = False
DEFAULT_FORKS(/home/valkra/code/ansible-config/ansible.cfg) = 50
DEFAULT_GATHERING(/home/valkra/code/ansible-config/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/valkra/code/ansible-config/ansible.cfg) = ['/home/valkra/code/ansible-config/hosts']
DEFAULT_NO_TARGET_SYSLOG(/home/valkra/code/ansible-config/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/valkra/code/ansible-config/ansible.cfg) = debug
DEFAULT_TIMEOUT(/home/valkra/code/ansible-config/ansible.cfg) = 60
HOST_KEY_CHECKING(/home/valkra/code/ansible-config/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/valkra/code/ansible-config/ansible.cfg) = False
```
##### OS / ENVIRONMENT
running locally (controller = target host) on ubuntu xenial. output of `uname -a`:
```
Linux kube-gollum 4.15.0-126-generic #129-Ubuntu SMP Mon Nov 23 18:53:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
```
##### STEPS TO REPRODUCE
Just running the following playbook should illustrate the problem:
```yaml
- hosts: localhost
tasks:
- command: lecho hi
register: result
changed_when: "'hi' in result.stdout"
```
##### EXPECTED RESULTS
I would expect to get a message about the executable `lecho` not being found, like one gets when replacing `command` with `shell` in the above playbook:
```
PLAY [localhost] ******************************************************************************************************************************************************************************
TASK [shell] **********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {
"changed": false,
"cmd": "lecho hi",
"delta": "0:00:00.002938",
"end": "2020-12-17 10:15:29.944289",
"rc": 127,
"start": "2020-12-17 10:15:29.941351"
}
STDERR:
/bin/sh: 1: lecho: not found
MSG:
non-zero return code
PLAY RECAP ************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
or if one drops the `changed_when` condition that tries to evaluate the registered result, one also gets a more meaningful message even with `command`:
```
PLAY [localhost] ******************************************************************************************************************************************************************************
TASK [command] ********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {
"changed": false,
"cmd": "lecho hi",
"rc": 2
}
MSG:
[Errno 2] No such file or directory: b'lecho': b'lecho'
PLAY RECAP ************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
I get a confusing error message which tries to tell me that my result var is not defined, even though there should always (?!) be a return value:
```
PLAY [localhost] ******************************************************************************************************************************************************************************
TASK [command] ********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {}
MSG:
The conditional check ''hi' in result.stdout' failed. The error was: error while evaluating conditional ('hi' in result.stdout): Unable to look up a name or access an attribute in template st
ring ({% if 'hi' in result.stdout %} True {% else %} False {% endif %}).
Make sure your variable name does not contain invalid characters like '-': argument of type 'AnsibleUndefined' is not iterable
PLAY RECAP ************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
Experimenting with `command` with and without the conditional statement, it seems the whole task fails when we try to evaluate the result, so no result is registered at all; running the same task without the conditional that tries to evaluate the result does produce a result, which can for example be inspected using a `block`-`rescue` debug construction. Very strange. I suspect this is indeed a bug, and not a feature.
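(Illustrative workaround sketch, not part of the original report: guarding the conditional with the `default` filter avoids dereferencing an undefined result, so the module's own failure message should be shown instead. The playbook below reuses the reproduction case above.)
```yaml
- hosts: localhost
  tasks:
    - command: lecho hi
      register: result
      changed_when: "'hi' in (result.stdout | default(''))"
```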
|
https://github.com/ansible/ansible/issues/73004
|
https://github.com/ansible/ansible/pull/73290
|
1934ca9a550b32f16d6fadffe7d8031059fa2526
|
e6da5443101cc815cb479965ab8d0e81c6d23333
| 2020-12-17T09:38:18Z |
python
| 2021-01-22T07:40:53Z |
test/integration/targets/command_nonexisting/aliases
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,004 |
Confusing error message when using command module with non-existent executable and trying to access result
|
##### SUMMARY
`ansible.builtin.command` module does not return as expected when executable not found
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
command
##### ANSIBLE VERSION
```paste below
ansible 2.9.4
config file = /home/valkra/code/ansible-config/ansible.cfg
configured module search path = ['/home/valkra/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/valkra/.local/lib/python3.6/site-packages/ansible
executable location = /home/valkra/.local/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
```
##### CONFIGURATION
```paste below
ANSIBLE_PIPELINING(/home/valkra/code/ansible-config/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/home/valkra/code/ansible-config/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s
ANSIBLE_SSH_RETRIES(/home/valkra/code/ansible-config/ansible.cfg) = 4
CACHE_PLUGIN(/home/valkra/code/ansible-config/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/valkra/code/ansible-config/ansible.cfg) = $HOME/.ansible/
CACHE_PLUGIN_TIMEOUT(/home/valkra/code/ansible-config/ansible.cfg) = 36000
DEFAULT_ASK_VAULT_PASS(/home/valkra/code/ansible-config/ansible.cfg) = True
DEFAULT_BECOME(/home/valkra/code/ansible-config/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/home/valkra/code/ansible-config/ansible.cfg) = False
DEFAULT_FORKS(/home/valkra/code/ansible-config/ansible.cfg) = 50
DEFAULT_GATHERING(/home/valkra/code/ansible-config/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/valkra/code/ansible-config/ansible.cfg) = ['/home/valkra/code/ansible-config/hosts']
DEFAULT_NO_TARGET_SYSLOG(/home/valkra/code/ansible-config/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/valkra/code/ansible-config/ansible.cfg) = debug
DEFAULT_TIMEOUT(/home/valkra/code/ansible-config/ansible.cfg) = 60
HOST_KEY_CHECKING(/home/valkra/code/ansible-config/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/valkra/code/ansible-config/ansible.cfg) = False
```
##### OS / ENVIRONMENT
running locally (controller = target host) on ubuntu xenial. output of `uname -a`:
```
Linux kube-gollum 4.15.0-126-generic #129-Ubuntu SMP Mon Nov 23 18:53:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
```
##### STEPS TO REPRODUCE
Just running the following playbook should illustrate the problem:
```yaml
- hosts: localhost
tasks:
- command: lecho hi
register: result
changed_when: "'hi' in result.stdout"
```
##### EXPECTED RESULTS
I would expect to get a message about the executable `lecho` not being found, like one gets when replacing `command` with `shell` in the above playbook:
```
PLAY [localhost] ******************************************************************************************************************************************************************************
TASK [shell] **********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {
"changed": false,
"cmd": "lecho hi",
"delta": "0:00:00.002938",
"end": "2020-12-17 10:15:29.944289",
"rc": 127,
"start": "2020-12-17 10:15:29.941351"
}
STDERR:
/bin/sh: 1: lecho: not found
MSG:
non-zero return code
PLAY RECAP ************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
or if one drops the `changed_when` condition that tries to evaluate the registered result, one also gets a more meaningful message even with `command`:
```
PLAY [localhost] ******************************************************************************************************************************************************************************
TASK [command] ********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {
"changed": false,
"cmd": "lecho hi",
"rc": 2
}
MSG:
[Errno 2] No such file or directory: b'lecho': b'lecho'
PLAY RECAP ************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
I get a confusing error message which tries to tell me that my result var is not defined, even though there should always (?!) be a return value:
```
PLAY [localhost] ******************************************************************************************************************************************************************************
TASK [command] ********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {}
MSG:
The conditional check ''hi' in result.stdout' failed. The error was: error while evaluating conditional ('hi' in result.stdout): Unable to look up a name or access an attribute in template st
ring ({% if 'hi' in result.stdout %} True {% else %} False {% endif %}).
Make sure your variable name does not contain invalid characters like '-': argument of type 'AnsibleUndefined' is not iterable
PLAY RECAP ************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
Experimenting with `command` with and without the conditional statement, it seems the whole task fails when we try to evaluate the result, so no result is registered at all; running the same task without the conditional that tries to evaluate the result does produce a result, which can for example be inspected using a `block`-`rescue` debug construction. Very strange. I suspect this is indeed a bug, and not a feature.
|
https://github.com/ansible/ansible/issues/73004
|
https://github.com/ansible/ansible/pull/73290
|
1934ca9a550b32f16d6fadffe7d8031059fa2526
|
e6da5443101cc815cb479965ab8d0e81c6d23333
| 2020-12-17T09:38:18Z |
python
| 2021-01-22T07:40:53Z |
test/integration/targets/command_nonexisting/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,004 |
Confusing error message when using command module with non-existent executable and trying to access result
|
##### SUMMARY
`ansible.builtin.command` module does not return as expected when executable not found
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
command
##### ANSIBLE VERSION
```paste below
ansible 2.9.4
config file = /home/valkra/code/ansible-config/ansible.cfg
configured module search path = ['/home/valkra/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/valkra/.local/lib/python3.6/site-packages/ansible
executable location = /home/valkra/.local/bin/ansible
python version = 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0]
```
##### CONFIGURATION
```paste below
ANSIBLE_PIPELINING(/home/valkra/code/ansible-config/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/home/valkra/code/ansible-config/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s
ANSIBLE_SSH_RETRIES(/home/valkra/code/ansible-config/ansible.cfg) = 4
CACHE_PLUGIN(/home/valkra/code/ansible-config/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/valkra/code/ansible-config/ansible.cfg) = $HOME/.ansible/
CACHE_PLUGIN_TIMEOUT(/home/valkra/code/ansible-config/ansible.cfg) = 36000
DEFAULT_ASK_VAULT_PASS(/home/valkra/code/ansible-config/ansible.cfg) = True
DEFAULT_BECOME(/home/valkra/code/ansible-config/ansible.cfg) = True
DEFAULT_BECOME_ASK_PASS(/home/valkra/code/ansible-config/ansible.cfg) = False
DEFAULT_FORKS(/home/valkra/code/ansible-config/ansible.cfg) = 50
DEFAULT_GATHERING(/home/valkra/code/ansible-config/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/valkra/code/ansible-config/ansible.cfg) = ['/home/valkra/code/ansible-config/hosts']
DEFAULT_NO_TARGET_SYSLOG(/home/valkra/code/ansible-config/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/valkra/code/ansible-config/ansible.cfg) = debug
DEFAULT_TIMEOUT(/home/valkra/code/ansible-config/ansible.cfg) = 60
HOST_KEY_CHECKING(/home/valkra/code/ansible-config/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/valkra/code/ansible-config/ansible.cfg) = False
```
##### OS / ENVIRONMENT
running locally (controller = target host) on ubuntu xenial. output of `uname -a`:
```
Linux kube-gollum 4.15.0-126-generic #129-Ubuntu SMP Mon Nov 23 18:53:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
```
##### STEPS TO REPRODUCE
Just running the following playbook should illustrate the problem:
```yaml
- hosts: localhost
tasks:
- command: lecho hi
register: result
changed_when: "'hi' in result.stdout"
```
##### EXPECTED RESULTS
I would expect to get a message about the executable `lecho` not being found, like one gets when replacing `command` with `shell` in the above playbook:
```
PLAY [localhost] ******************************************************************************************************************************************************************************
TASK [shell] **********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {
"changed": false,
"cmd": "lecho hi",
"delta": "0:00:00.002938",
"end": "2020-12-17 10:15:29.944289",
"rc": 127,
"start": "2020-12-17 10:15:29.941351"
}
STDERR:
/bin/sh: 1: lecho: not found
MSG:
non-zero return code
PLAY RECAP ************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
or if one drops the `changed_when` condition that tries to evaluate the registered result, one also gets a more meaningful message even with `command`:
```
PLAY [localhost] ******************************************************************************************************************************************************************************
TASK [command] ********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {
"changed": false,
"cmd": "lecho hi",
"rc": 2
}
MSG:
[Errno 2] No such file or directory: b'lecho': b'lecho'
PLAY RECAP ************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
I get a confusing error message which tries to tell me that my result var is not defined, even though there should always (?!) be a return value:
```
PLAY [localhost] ******************************************************************************************************************************************************************************
TASK [command] ********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {}
MSG:
The conditional check ''hi' in result.stdout' failed. The error was: error while evaluating conditional ('hi' in result.stdout): Unable to look up a name or access an attribute in template st
ring ({% if 'hi' in result.stdout %} True {% else %} False {% endif %}).
Make sure your variable name does not contain invalid characters like '-': argument of type 'AnsibleUndefined' is not iterable
PLAY RECAP ************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
Experimenting with `command` with and without the conditional statement, it seems the whole task fails when we try to evaluate the result, so no result is registered at all; running the same task without the conditional that tries to evaluate the result does produce a result, which can for example be inspected using a `block`-`rescue` debug construction. Very strange. I suspect this is indeed a bug, and not a feature.
|
https://github.com/ansible/ansible/issues/73004
|
https://github.com/ansible/ansible/pull/73290
|
1934ca9a550b32f16d6fadffe7d8031059fa2526
|
e6da5443101cc815cb479965ab8d0e81c6d23333
| 2020-12-17T09:38:18Z |
python
| 2021-01-22T07:40:53Z |
test/units/module_utils/basic/test_command_nonexisting.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,089 |
Please document the hash_behaviour=merge deprecation decision and recommended replacement
|
I only just found out about `hash_behaviour=merge` being deprecated today. And then promptly discovered that any issues talking about it were locked, and I could find no explanation of why it's deprecated, beyond that it is supposedly fragile.
To be blunt, I don't agree with the decision. In fact I've always thought it was rather weird that it isn't the default behavior. I have been successfully relying on it for multiple years with dozens of servers. I've never had any major issues with it. So I cannot see the evidence that it is "fragile".
Since the [combine filter](https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#combining-hashes-dictionaries) is the recommended replacement, I did read those docs in detail.
Frankly, how is that a replacement? It merges two different hashes. That is entirely different than combining hashes with the same names and keys from group_vars, host_vars, role defaults, role vars, playbooks, the command line, and I'm sure other places I'm forgetting...
If there is a way, I'm not seeing it right now. And even then, you trade the complexity of making sure your group and host vars are set up right, for the complexity of using a filter the correct way. You also break the functionality for anyone who uses roles not built for it. (Side note, filter syntax is horrid. It is not easy to read. That is a serious problem when trying to debug things.)
I get that it is far too late to change things, but please respect those of us who are having our floors dropped out from under us by this decision enough to give us an explanation.
I'd also ask that you go unlock comments on the issues you've locked. https://github.com/ansible/ansible/issues/72669 and https://github.com/ansible/ansible/issues/72421 are the ones I found. Or at least leave a comment directing us to a proper location to discuss things. The way things are right now is disrespectful. Not something I expected from this project.
Finally, a detailed post on how to convert from `hash_behaviour=merge` to using the combine filter should be written. [The combine filter docs](https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#combining-hashes-dictionaries) are not enough. If you are going to tell people they need to change how they do something, you need to provide them with the knowledge to make the change. (If you're actually telling us there is no way to do what we want, then say so. We'll go find a better tool.)
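(Illustrative sketch, not part of the original report: for two explicitly named hashes, the documented replacement looks like the following; merging the same variable across group_vars/host_vars precedence levels is exactly what it does not cover.)
```yaml
- hosts: localhost
  vars:
    base: {key1: value1, key2: value2}
    override: {key1: other_value1}
  tasks:
    - debug:
        msg: "{{ base | combine(override, recursive=True) }}"
```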
|
https://github.com/ansible/ansible/issues/73089
|
https://github.com/ansible/ansible/pull/73328
|
dec443e3a5b868a7a580f178a3739ab438e8c0db
|
e0c9f285ff4f53961a391d798670295b70dc37a9
| 2020-12-31T00:58:58Z |
python
| 2021-01-22T20:00:19Z |
changelogs/fragments/undo_hashmerge_depr.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,089 |
Please document the hash_behaviour=merge deprecation decision and recommended replacement
|
I only just found out about `hash_behaviour=merge` being deprecated today. And then promptly discovered that any issues talking about it were locked, and I could find no explanation of why it's deprecated, beyond that it is supposedly fragile.
To be blunt, I don't agree with the decision. In fact I've always thought it was rather weird that it isn't the default behavior. I have been successfully relying on it for multiple years with dozens of servers. I've never had any major issues with it. So I cannot see the evidence that it is "fragile".
Since the [combine filter](https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#combining-hashes-dictionaries) is the recommended replacement, I did read those docs in detail.
Frankly, how is that a replacement? It merges two different hashes. That is entirely different than combining hashes with the same names and keys from group_vars, host_vars, role defaults, role vars, playbooks, the command line, and I'm sure other places I'm forgetting...
If there is a way, I'm not seeing it right now. And even then, you trade the complexity of making sure your group and host vars are set up right, for the complexity of using a filter the correct way. You also break the functionality for anyone who uses roles not built for it. (Side note, filter syntax is horrid. It is not easy to read. That is a serious problem when trying to debug things.)
I get that it is far too late to change things, but please respect those of us who are having our floors dropped out from under us by this decision enough to give us an explanation.
I'd also ask that you go unlock comments on the issues you've locked. https://github.com/ansible/ansible/issues/72669 and https://github.com/ansible/ansible/issues/72421 are the ones I found. Or at least leave a comment directing us to a proper location to discuss things. The way things are right now is disrespectful. Not something I expected from this project.
Finally, a detailed post on how to convert from `hash_behaviour=merge` to using the combine filter should be written. [The combine filter docs](https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#combining-hashes-dictionaries) are not enough. If you are going to tell people they need to change how they do something, you need to provide them with the knowledge to make the change. (If you're actually telling us there is no way to do what we want, then say so. We'll go find a better tool.)
|
https://github.com/ansible/ansible/issues/73089
|
https://github.com/ansible/ansible/pull/73328
|
dec443e3a5b868a7a580f178a3739ab438e8c0db
|
e0c9f285ff4f53961a391d798670295b70dc37a9
| 2020-12-31T00:58:58Z |
python
| 2021-01-22T20:00:19Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ALLOW_WORLD_READABLE_TMPFILES:
name: Allow world-readable temporary files
deprecated:
why: moved to a per plugin approach that is more flexible
version: "2.14"
alternatives: mostly the same config will work, but now controlled from the plugin itself and not using the general constant.
default: False
description:
- This makes the temporary files created on the machine world-readable and will issue a warning instead of failing the task.
- It is useful when becoming an unprivileged user.
env: []
ini:
- {key: allow_world_readable_tmpfiles, section: defaults}
type: boolean
yaml: {key: defaults.allow_world_readable_tmpfiles}
version_added: "2.1"
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
description: Accept list of cowsay templates that are 'safe' to use; set to an empty list if you want to enable all installed templates.
env:
- name: ANSIBLE_COW_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_COW_ACCEPTLIST'
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cow_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'cowsay_enabled_stencils'
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This option forces color mode even when running without a TTY or the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env:
- name: ANSIBLE_NOCOLOR
# this is generic convention for CLI programs
- name: NO_COLOR
version_added: '2.11'
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- This can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This option is disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
env:
- name: ANSIBLE_PIPELINING
- name: ANSIBLE_SSH_PIPELINING
ini:
- section: connection
key: pipelining
- section: ssh_connection
key: pipelining
type: boolean
yaml: {key: plugins.connection.pipelining}
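# Example, as an illustration only (not part of the original file): pipelining
# is typically enabled from ansible.cfg, e.g.
#
#   [connection]
#   pipelining = True
#
# or from the environment with ANSIBLE_PIPELINING=1.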
ANSIBLE_SSH_ARGS:
# TODO: move to ssh plugin
default: -C -o ControlMaster=auto -o ControlPersist=60s
description:
- If set, this will override the Ansible default ssh arguments.
- In particular, users may wish to raise the ControlPersist time to encourage performance. A value of 30 minutes may be appropriate.
- Be aware that if `-o ControlPath` is set in ssh_args, the control path setting is not used.
env: [{name: ANSIBLE_SSH_ARGS}]
ini:
- {key: ssh_args, section: ssh_connection}
yaml: {key: ssh_connection.ssh_args}
ANSIBLE_SSH_CONTROL_PATH:
# TODO: move to ssh plugin
default: null
description:
- This is the location to save ssh's ControlPath sockets, it uses ssh's variable substitution.
- Since 2.3, if null, ansible will generate a unique hash. Use `%(directory)s` to indicate where to use the control dir path setting.
- Before 2.3 it defaulted to `control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r`.
- Be aware that this setting is ignored if `-o ControlPath` is set in ssh args.
env: [{name: ANSIBLE_SSH_CONTROL_PATH}]
ini:
- {key: control_path, section: ssh_connection}
yaml: {key: ssh_connection.control_path}
ANSIBLE_SSH_CONTROL_PATH_DIR:
# TODO: move to ssh plugin
default: ~/.ansible/cp
description:
- This sets the directory to use for ssh control path if the control path setting is null.
- Also, provides the `%(directory)s` variable for the control path setting.
env: [{name: ANSIBLE_SSH_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: ssh_connection}
yaml: {key: ssh_connection.control_path_dir}
ANSIBLE_SSH_EXECUTABLE:
# TODO: move to ssh plugin, note that ssh_utils refs this and needs to be updated if removed
default: ssh
description:
- This defines the location of the ssh binary. It defaults to `ssh` which will use the first ssh binary available in $PATH.
- This option is usually not required, it might be useful when access to system ssh is restricted,
or when using ssh wrappers to connect to remote hosts.
env: [{name: ANSIBLE_SSH_EXECUTABLE}]
ini:
- {key: ssh_executable, section: ssh_connection}
yaml: {key: ssh_connection.ssh_executable}
version_added: "2.2"
ANSIBLE_SSH_RETRIES:
# TODO: move to ssh plugin
default: 0
description: Number of attempts to establish a connection before we give up and report the host as 'UNREACHABLE'
env: [{name: ANSIBLE_SSH_RETRIES}]
ini:
- {key: retries, section: ssh_connection}
type: integer
yaml: {key: ssh_connection.retries}
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
description: Sets the default value for the any_errors_fatal keyword, if True, Task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
description: This setting controls if become is skipped when remote user and become user are the same, for example, root sudo to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
description: Chooses which cache plugin to use; the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
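# Example, as an illustration only (not part of the original file): the three
# fact-cache settings above are usually set together in ansible.cfg, e.g.
#
#   [defaults]
#   fact_caching = jsonfile
#   fact_caching_connection = /tmp/ansible_facts
#   fact_caching_timeout = 86400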
COLLECTIONS_SCAN_SYS_PATH:
name: enable/disable scanning sys.path for installed collections
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: ordered list of root paths for loading installed Ansible collections content
description: >
Colon separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``~/.ansible/collections``,
and you want to add ``my.collection`` to that directory, it must be saved as
``~/.ansible/collections/ansible_collections/my/collection``.
default: ~/.ansible/collections:/usr/share/ansible/collections
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases.
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
- key: collections_path
section: defaults
version_added: '2.10'
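# Illustrative only: adding a custom search root ('/opt/ansible/collections' is a hypothetical path):
#   export ANSIBLE_COLLECTIONS_PATH=~/.ansible/collections:/opt/ansible/collections
# A collection my.collection under the first root must then live at
#   ~/.ansible/collections/ansible_collections/my/collection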
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
- When a collection is loaded that does not support the running Ansible version (via the collection metadata key
`requires_ansible`), the default behavior is to issue a warning and continue anyway. Setting this value to `ignore`
skips the warning entirely, while setting it to `fatal` will immediately halt Ansible execution.
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: [error, warning, ignore]
default: warning
_COLOR_DEFAULTS: &color
name: placeholder for color settings' defaults
choices: ['black', 'bright gray', 'blue', 'white', 'green', 'bright blue', 'cyan', 'bright green', 'red', 'bright cyan', 'purple', 'bright red', 'yellow', 'bright purple', 'dark gray', 'bright yellow', 'magenta', 'bright magenta', 'normal']
COLOR_CHANGED:
<<: *color
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
<<: *color
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
<<: *color
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
COLOR_DEPRECATE:
<<: *color
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
COLOR_DIFF_ADD:
<<: *color
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
<<: *color
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
<<: *color
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
<<: *color
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
<<: *color
name: Color for highlighting
default: white
description: Defines the color to use for highlighting
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
<<: *color
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
<<: *color
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
<<: *color
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
<<: *color
name: Color for verbose messages
default: blue
  description: Defines the color to use when emitting verbose messages, i.e. those that show with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
<<: *color
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONDITIONAL_BARE_VARS:
name: Allow bare variable evaluation in conditionals
default: False
type: boolean
description:
- With this setting on (True), running conditional evaluation 'var' is treated differently than 'var.subkey' as the first is evaluated
directly while the second goes through the Jinja2 parser. But 'false' strings in 'var' get evaluated as booleans.
- With this setting off they both evaluate the same but in cases in which 'var' was 'false' (a string) it won't get evaluated as a boolean anymore.
    - Currently this setting defaults to 'False' and the setting itself will be removed in the future.
- Expect that this setting eventually will be deprecated after 2.12
env: [{name: ANSIBLE_CONDITIONAL_BARE_VARS}]
ini:
- {key: conditional_bare_variables, section: defaults}
version_added: "2.8"
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports to.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
- Only files that match the path glob will have its coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
    - By default Ansible will display warnings received from a task action (module or action plugin)
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
COMMAND_WARNINGS:
name: Command module warnings
default: False
description:
- Ansible can issue a warning when the shell or command module is used and the command appears to be similar to an existing Ansible module.
- These warnings can be silenced by adjusting this setting to False. You can also control this at the task level with the module option ``warn``.
- As of version 2.11, this is disabled by default.
env: [{name: ANSIBLE_COMMAND_WARNINGS}]
ini:
- {key: command_warnings, section: defaults}
type: boolean
version_added: "1.8"
deprecated:
why: the command warnings feature is being removed
version: "2.14"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: ~/.ansible/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments
description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: ~/.ansible/plugins/action:/usr/share/ansible/plugins/action
description: Colon separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
as this could represent a security risk. This option is provided to allow for backwards-compatibility,
however users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data which may be run
      through the templating engine later.
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
      If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ''
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: ~/.ansible/plugins/become:/usr/share/ansible/plugins/become
description: Colon separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: ~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache
description: Colon separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
CALLABLE_ACCEPT_LIST:
name: Template 'callable' accept list
default: []
description: Whitelist of callable methods to be made available to template evaluation
env:
- name: ANSIBLE_CALLABLE_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLABLE_ENABLED'
- name: ANSIBLE_CALLABLE_ENABLED
version_added: '2.11'
ini:
- key: callable_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callable_enabled'
- key: callable_enabled
section: defaults
version_added: '2.11'
type: list
CONTROLLER_PYTHON_WARNING:
name: Running Older than Python 3.8 Warning
default: True
description: Toggle to control showing warnings related to running a Python version
older than Python 3.8 on the controller
env: [{name: ANSIBLE_CONTROLLER_PYTHON_WARNING}]
ini:
- {key: controller_python_warning, section: defaults}
type: boolean
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: ~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback
description: Colon separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACK_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLBACKS_ENABLED'
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callback_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callback_enabled'
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: ~/.ansible/plugins/cliconf:/usr/share/ansible/plugins/cliconf
description: Colon separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: ~/.ansible/plugins/connection:/usr/share/ansible/plugins/connection
description: Colon separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
default: ~
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied M(ansible.builtin.setup) task when using fact gathering."
- "If not set, it will fallback to the default from the M(ansible.builtin.setup) module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the M(ansible.builtin.setup) module."
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
yaml: {key: facts.gathering.fact_path}
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: ~/.ansible/plugins/filter:/usr/share/ansible/plugins/filter
description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
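# Illustrative override (20 is an arbitrary example value):
#   export ANSIBLE_FORKS=20
# or in ansible.cfg:
#   [defaults]
#   forks = 20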
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "When 'implicit' (the default), the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
- "When 'explicit' the inverse is true, facts will not be gathered unless directly requested in the play."
- "The 'smart' value means each new host that has no facts discovered will be scanned,
but if the same host is addressed in multiple plays it will not be contacted again in the playbook run."
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices: ['smart', 'explicit', 'implicit']
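# A hedged example of switching to on-demand fact gathering in ansible.cfg:
#   [defaults]
#   gathering = explicit
# Plays then only gather facts when they themselves set 'gather_facts: true'.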
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
default: ['all']
description:
- Set the `gather_subset` option for the M(ansible.builtin.setup) task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined M(ansible.builtin.setup) tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
default: 10
description:
- Set the timeout in seconds for the implicit fact gathering.
- "It does **not** apply to user defined M(ansible.builtin.setup) tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
yaml: {key: defaults.gather_timeout}
DEFAULT_HANDLER_INCLUDES_STATIC:
name: Make handler M(ansible.builtin.include) static
default: False
description:
- "Since 2.0 M(ansible.builtin.include) can be 'dynamic', this setting (if True) forces that if the include appears in a ``handlers`` section to be 'static'."
env: [{name: ANSIBLE_HANDLER_INCLUDES_STATIC}]
ini:
- {key: handler_includes_static, section: defaults}
type: boolean
deprecated:
why: include itself is deprecated and this setting will not matter in the future
version: "2.12"
alternatives: none as its already built into the decision between include_tasks and import_tasks
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices: ["replace", "merge"]
description:
- This setting controls how variables merge in Ansible.
By default Ansible will override variables in specific precedence orders, as described in Variables.
When a variable of higher precedence wins, it will replace the other value.
- "Some users prefer that variables that are hashes (aka 'dictionaries' in Python terms) are merged.
This setting is called 'merge'. This is not the default behavior and it does not affect variables whose values are scalars
(integers, strings) or arrays. We generally recommend not using this setting unless you think you have an absolute need for it,
and playbooks in the official examples repos do not use this setting"
- In version 2.0 a ``combine`` filter was added to allow doing this for a particular variable (described in Filters).
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
deprecated:
why: this feature is fragile and not portable, leading to continual confusion and misuse
version: "2.13"
alternatives: the ``combine`` filter explicitly
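# Instead of the deprecated global 'merge', the ``combine`` filter merges a specific
# variable; a minimal sketch with hypothetical vars 'base_hash' and 'override_hash':
#   merged_hash: "{{ base_hash | combine(override_hash) }}"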
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
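# Illustrative only (file names are hypothetical): multiple comma separated sources:
#   export ANSIBLE_INVENTORY=./inventory.yml,./dynamic_inventory.py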
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: ~/.ansible/plugins/httpapi:/usr/share/ansible/plugins/httpapi
description: Colon separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios,
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: ~/.ansible/plugins/inventory:/usr/share/ansible/plugins/inventory
description: Colon separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations. This requires Jinja2 >= 2.10.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
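# Sketch of the effect with a hypothetical template: with jinja2_native=True,
#   "{{ [1, 2] + [3] }}"
# returns an actual list object; with the default (False) it returns the string
# representation of that list.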
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
- Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux."
env:
- name: LIBVIRT_LXC_NOSECLABEL
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_LIBVIRT_LXC_NOSECLABEL`` environment variable
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: ~/.ansible/tmp
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
description: File to which Ansible will log on the controller. When empty logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon separated paths in which Ansible will search for Lookup Plugins.
default: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for M(ansible.builtin.template) and M(ansible.windows.win_template) modules. This is only relevant for those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ''
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon separated paths in which Ansible will search for Modules.
default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: ~/.ansible/plugins/module_utils:/usr/share/ansible/plugins/module_utils
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: ~/.ansible/plugins/netconf:/usr/share/ansible/plugins/netconf
description: Colon separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
    - Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will prevent
      newer style PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: none
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
    - Option for connections using a certificate or key file to authenticate, rather than an agent or passwords.
      You can set the default value here to avoid re-specifying --private-key with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections, when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
default:
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
description: Colon separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SCP_IF_SSH:
# TODO: move to ssh plugin
default: smart
description:
- "Preferred method to use when transferring files over ssh."
- When set to smart, Ansible will try them until one succeeds or they all fail.
- If set to True, it will force 'scp', if False it will use 'sftp'.
env: [{name: ANSIBLE_SCP_IF_SSH}]
ini:
- {key: scp_if_ssh, section: ssh_connection}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
      this setting makes Ansible 'tolerate' those in the list without causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_SFTP_BATCH_MODE:
# TODO: move to ssh plugin
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_SFTP_BATCH_MODE}]
ini:
- {key: sftp_batch_mode, section: ssh_connection}
type: boolean
yaml: {key: ssh_connection.sftp_batch_mode}
DEFAULT_SSH_TRANSFER_METHOD:
# TODO: move to ssh plugin
default:
description: 'unused?'
# - "Preferred method to use when transferring files over ssh"
# - Setting to smart will try them until one succeeds or they all fail
#choices: ['sftp', 'scp', 'dd', 'smart']
env: [{name: ANSIBLE_SSH_TRANSFER_METHOD}]
ini:
- {key: transfer_method, section: ssh_connection}
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output, you can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
- Whether or not to enable the task debugger, this previously was done as a strategy plugin.
    - Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
      a task fails or a host is unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon separated paths in which Ansible will search for Strategy Plugins.
default: ~/.ansible/plugins/strategy:/usr/share/ansible/plugins/strategy
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TASK_INCLUDES_STATIC:
name: Task include static
default: False
description:
    - The `include` tasks can be static or dynamic, this toggles the default expected behaviour if autodetection fails and it is not explicitly set in the task.
env: [{name: ANSIBLE_TASK_INCLUDES_STATIC}]
ini:
- {key: task_includes_static, section: defaults}
type: boolean
version_added: "2.1"
deprecated:
why: include itself is deprecated and this setting will not matter in the future
version: "2.12"
alternatives: None, as its already built into the decision between include_tasks and import_tasks
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: ~/.ansible/plugins/terminal:/usr/share/ansible/plugins/terminal
description: Colon separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
default: ~/.ansible/plugins/test:/usr/share/ansible/plugins/test
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
# note that ssh_utils refs this and needs to be updated if removed
name: Connection plugin
default: smart
description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions"
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
- When True, this causes ansible templating to fail steps that reference variable names that are likely typoed.
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: ~/.ansible/plugins/vars:/usr/share/ansible/plugins/vars
description: Colon separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
description: 'The label to use for the default vault id label in cases where a vault id label is not provided'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
default:
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description: 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: How many lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
      made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values.
      See How do I keep secret data in my playbook? for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
env:
- name: DISPLAY_SKIPPED_HOSTS
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_DISPLAY_SKIPPED_HOSTS`` environment variable
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
    - These warnings can be silenced by adjusting this setting to 'ignore'.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: ['warn', 'error', 'ignore']
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
env: [{name: ANSIBLE_CONNECTION_FACTS_MODULES}]
ini:
- {key: connection_facts_modules, section: defaults}
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description: "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
default: False
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_ROLE_SKELETON:
name: Galaxy role or collection skeleton directory
default:
description: Role or collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
    - The order of servers in this list is used as the order in which a collection is resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
GALAXY_TOKEN_PATH:
default: ~/.ansible/galaxy_token
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
- Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
      outputting the stdout to a file.
- This config option controls whether the display wheel is shown or not.
- The default is to show the display wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: ~/.ansible/galaxy_cache
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
HOST_KEY_CHECKING:
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
description: This setting changes the behaviour of mismatched host patterns, it allows you to force a fatal error, a warning or just ignore it
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices: ['warning', 'error', 'ignore']
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto_legacy
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto``, ``auto_silent``, and ``auto_legacy`` (the default). All discovery modes
employ a lookup table to use the included system Python (on distributions known to include one), falling back to a
fixed ordered list of well-known Python interpreter locations if a platform-specific default is not available. The
fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters installed
later may change which one is used). This warning behavior can be disabled by setting ``auto_silent``. The default
value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility with older Ansible releases
that always defaulted to ``/usr/bin/python``, will use that interpreter if present (and issue a warning that the
      default behavior will change to that of ``auto`` in a future Ansible release).
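# Illustrative per-host pin via an inventory variable (host name and path are hypothetical):
#   myhost ansible_python_interpreter=/usr/bin/python3
# or a global discovery mode:
#   export ANSIBLE_PYTHON_INTERPRETER=auto_silent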
INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
centos: &rhelish
'6': /usr/bin/python
'8': /usr/libexec/platform-python
debian:
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
redhat: *rhelish
rhel: *rhelish
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- /usr/bin/python
- python3.7
- python3.6
- python3.5
- python2.7
- python2.6
- /usr/libexec/platform-python
- /usr/bin/python3
- python
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
- If 'never' it will allow for the group name but warn about the issue.
- When 'ignore', it does the same as 'never', without issuing a warning.
    - When 'always' it will replace any invalid characters with '_' (underscore) and warn the user.
- When 'silently', it does the same as 'always', without issuing a warning.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices: ['always', 'never', 'ignore', 'silently']
version_added: '2.8'
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparseable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description: Toggle to turn on inventory caching
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description: The plugin for caching inventory. If INVENTORY_CACHE_PLUGIN is not provided CACHE_PLUGIN can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description: The inventory cache connection. If INVENTORY_CACHE_PLUGIN_CONNECTION is not provided CACHE_PLUGIN_CONNECTION can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description: The table prefix for the cache plugin. If INVENTORY_CACHE_PLUGIN_PREFIX is not provided CACHE_PLUGIN_PREFIX can be used instead.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_facts
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description: Expiration timeout for the inventory cache plugin data. If INVENTORY_CACHE_TIMEOUT is not provided CACHE_TIMEOUT can be used instead.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
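# The inventory cache options above are usually set together; a hedged jsonfile sketch
# (the connection path is an illustrative assumption, and the inventory plugin in use
# must itself support caching):
#   [inventory]
#   cache = True
#   cache_plugin = jsonfile
#   cache_connection = /tmp/ansible_inventory_cache
#   cache_timeout = 3600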
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins, it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
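# Example of narrowing the active plugins (ordering matters; this selection is illustrative):
#   [inventory]
#   enable_plugins = yaml, ini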
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
  description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or if it is optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse, otherwise this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
description: 'TODO: write it'
env:
- name: NETWORK_GROUP_MODULES
deprecated:
why: environment variables without ``ANSIBLE_`` prefix are deprecated
version: "2.12"
alternatives: the ``ANSIBLE_NETWORK_GROUP_MODULES`` environment variable
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
- Facts are available inside the `ansible_facts` variable, this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load
- This is for rejecting script and binary module fallback extensions
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
OLD_PLUGIN_CACHE_CLEARING:
description: Previously Ansible would only clear some of the plugin loading caches when loading new roles; this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows returning to that behaviour.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PARAMIKO_HOST_KEY_AUTO_ADD:
# TODO: move to plugin
default: False
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: ~/.ansible/pc
description: Path to socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
description: This controls the retry timeout for persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
description: This controls the amount of time to wait for a response from the remote device before timing out the persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars
- The ``top`` option follows the traditional behaviour of using the top playbook in the chain to find the root directory.
- The ``bottom`` option follows the 2.4.0 behaviour of using the current playbook to find the root directory.
- The ``all`` option examines from the first parent to the current playbook.
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices: [ top, bottom, all ]
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: default
deprecated:
why: specifying "plugin_filters_cfg" under the "default" section is deprecated
version: "2.12"
alternatives: the "defaults" section instead
- key: plugin_filters_cfg
section: defaults
type: path
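# A sketch of the filter file this setting points at (editorial addition; see
# the plugin_filtering_config docs for the authoritative format):
#   ---
#   filter_version: '1.0'
#   module_blacklist:
#     - docker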
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
- This setting can be used to optimize vars_plugin usage depending on the user's inventory size and play selection.
- Setting to C(demand) will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
- Setting to C(start) will run vars_plugins relative to inventory sources after importing that inventory source.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices: ['demand', 'start']
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables"
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts)
- These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
description: Default list of tags to run in your plays. Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
description: Default list of tags to skip in your plays; it has precedence over Run Tags.
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
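# Hedged example (editorial addition): capping every task at five minutes via
# ansible.cfg; any task that runs longer fails with a timeout error:
#   [defaults]
#   task_timeout = 300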
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
- After this limit is reached any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Whitelist for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
description: Allows changing the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
NETCONF_SSH_CONFIG:
description: This variable is used to enable a bastion/jump host with a netconf connection. If set to True, the bastion/jump
host SSH settings should be present in the ~/.ssh/config file; alternatively, it can be set
to a custom SSH configuration file path from which to read the bastion/jump host settings.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,300 |
[Docs] Selecting JSON data: json_query and jmespath dependency.
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
There is a note where it says that json_query is **built upon** jmespath. To me this does not imply that jmespath is a direct dependency of json_query, and must therefore be installed on the controlling host.
Furthermore, the bit that follows about its syntax being compatible with that of jmespath is somewhat ambiguous, and further diverts from the fact that jmespath is a dependency that must be manually installed.
My suggestion would be something like:
**!Note**
Additionally, you must manually install the **jmespath** dependency.
**json_query** uses **jmespath** compatible syntax. For examples, see [jmespath's documentation](http://jmespath.org/examples.html).
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/73300
|
https://github.com/ansible/ansible/pull/73302
|
e0c9f285ff4f53961a391d798670295b70dc37a9
|
7f6fcc3407c52fbcf226a497eae9b9a371223a28
| 2021-01-19T21:35:55Z |
python
| 2021-01-22T20:41:24Z |
docs/docsite/rst/user_guide/playbooks_filters.rst
|
.. _playbooks_filters:
********************************
Using filters to manipulate data
********************************
Filters let you transform JSON data into YAML data, split a URL to extract the hostname, get the SHA1 hash of a string, add or multiply integers, and much more. You can use the Ansible-specific filters documented here to manipulate your data, or use any of the standard filters shipped with Jinja2 - see the list of :ref:`built-in filters <jinja2:builtin-filters>` in the official Jinja2 template documentation. You can also use :ref:`Python methods <jinja2:python-methods>` to transform data. You can :ref:`create custom Ansible filters as plugins <developing_filter_plugins>`, though we generally welcome new filters into the ansible-base repo so everyone can use them.
Because templating happens on the Ansible controller, **not** on the target host, filters execute on the controller and transform data locally.
.. contents::
:local:
Handling undefined variables
============================
Filters can help you manage missing or undefined variables by providing defaults or making some variables optional. If you configure Ansible to ignore most undefined variables, you can mark some variables as requiring values with the ``mandatory`` filter.
.. _defaulting_undefined_variables:
Providing default values
------------------------
You can provide default values for variables directly in your templates using the Jinja2 'default' filter. This is often a better approach than failing if a variable is not defined::
{{ some_variable | default(5) }}
In the above example, if the variable 'some_variable' is not defined, Ansible uses the default value 5, rather than raising an "undefined variable" error and failing. If you are working within a role, you can also add a ``defaults/main.yml`` to define the default values for variables in your role.
Beginning in version 2.8, attempting to access an attribute of an Undefined value in Jinja will return another Undefined value, rather than throwing an error immediately. This means that you can now simply use
a default with a value in a nested data structure (in other words, :code:`{{ foo.bar.baz | default('DEFAULT') }}`) when you do not know if the intermediate values are defined.
If you want to use the default value when variables evaluate to false or an empty string you have to set the second parameter to ``true``::
{{ lookup('env', 'MY_USER') | default('admin', true) }}
.. _omitting_undefined_variables:
Making variables optional
-------------------------
By default Ansible requires values for all variables in a templated expression. However, you can make specific variables optional. For example, you might want to use a system default for some items and control the value for others. To make a variable optional, set the default value to the special variable ``omit``::
- name: Touch files with an optional mode
ansible.builtin.file:
dest: "{{ item.path }}"
state: touch
mode: "{{ item.mode | default(omit) }}"
loop:
- path: /tmp/foo
- path: /tmp/bar
- path: /tmp/baz
mode: "0444"
In this example, the default mode for the files ``/tmp/foo`` and ``/tmp/bar`` is determined by the umask of the system. Ansible does not send a value for ``mode``. Only the third file, ``/tmp/baz``, receives the `mode=0444` option.
.. note:: If you are "chaining" additional filters after the ``default(omit)`` filter, you should instead do something like this:
``"{{ foo | default(None) | some_filter or omit }}"``. In this example, the default ``None`` (Python null) value will cause the later filters to fail, which will trigger the ``or omit`` portion of the logic. Using ``omit`` in this manner is very specific to the later filters you are chaining though, so be prepared for some trial and error if you do this.
.. _forcing_variables_to_be_defined:
Defining mandatory values
-------------------------
If you configure Ansible to ignore undefined variables, you may want to define some values as mandatory. By default, Ansible fails if a variable in your playbook or command is undefined. You can configure Ansible to allow undefined variables by setting :ref:`DEFAULT_UNDEFINED_VAR_BEHAVIOR` to ``false``. In that case, you may want to require some variables to be defined. You can do this with::
{{ variable | mandatory }}
The variable value will be used as is, but the template evaluation will raise an error if it is undefined.
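As a hedged sketch (``api_key`` is a hypothetical variable used only for illustration), this pattern fails the play early when a required value was never supplied::
- name: Fail early if api_key is undefined
  ansible.builtin.debug:
    msg: "API key is {{ api_key | mandatory }}"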
Defining different values for true/false/null (ternary)
=======================================================
You can create a test, then define one value to use when the test returns true and another when the test returns false (new in version 1.9)::
{{ (status == 'needs_restart') | ternary('restart', 'continue') }}
In addition, you can define one value to use on true, one value on false, and a third value on null (new in version 2.8)::
{{ enabled | ternary('no shutdown', 'shutdown', omit) }}
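As a hedged sketch (``nginx_enabled`` is a hypothetical boolean variable), the ternary filter is often used to pick a single module argument instead of writing two nearly identical tasks::
- name: Start or stop nginx based on one boolean
  ansible.builtin.service:
    name: nginx
    state: "{{ nginx_enabled | bool | ternary('started', 'stopped') }}"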
Managing data types
===================
You might need to know, change, or set the data type on a variable. For example, a registered variable might contain a dictionary when your next task needs a list, or a user :ref:`prompt <playbooks_prompts>` might return a string when your playbook needs a boolean value. Use the ``type_debug``, ``dict2items``, and ``items2dict`` filters to manage data types. You can also use the data type itself to cast a value as a specific data type.
Discovering the data type
-------------------------
.. versionadded:: 2.3
If you are unsure of the underlying Python type of a variable, you can use the ``type_debug`` filter to display it. This is useful in debugging when you need a particular type of variable::
{{ myvar | type_debug }}
.. _dict_filter:
Transforming dictionaries into lists
------------------------------------
.. versionadded:: 2.6
Use the ``dict2items`` filter to transform a dictionary into a list of items suitable for :ref:`looping <playbooks_loops>`::
{{ dict | dict2items }}
Dictionary data (before applying the ``dict2items`` filter)::
tags:
Application: payment
Environment: dev
List data (after applying the ``dict2items`` filter)::
- key: Application
value: payment
- key: Environment
value: dev
.. versionadded:: 2.8
The ``dict2items`` filter is the reverse of the ``items2dict`` filter.
If you want to configure the names of the keys, the ``dict2items`` filter accepts 2 keyword arguments. Pass the ``key_name`` and ``value_name`` arguments to configure the names of the keys in the list output::
{{ files | dict2items(key_name='file', value_name='path') }}
Dictionary data (before applying the ``dict2items`` filter)::
files:
users: /etc/passwd
groups: /etc/group
List data (after applying the ``dict2items`` filter)::
- file: users
path: /etc/passwd
- file: groups
path: /etc/group
Transforming lists into dictionaries
------------------------------------
.. versionadded:: 2.7
Use the ``items2dict`` filter to transform a list into a dictionary, mapping the content into ``key: value`` pairs::
{{ tags | items2dict }}
List data (before applying the ``items2dict`` filter)::
tags:
- key: Application
value: payment
- key: Environment
value: dev
Dictionary data (after applying the ``items2dict`` filter)::
Application: payment
Environment: dev
The ``items2dict`` filter is the reverse of the ``dict2items`` filter.
Not all lists use ``key`` to designate keys and ``value`` to designate values. For example::
fruits:
- fruit: apple
color: red
- fruit: pear
color: yellow
- fruit: grapefruit
color: yellow
In this example, you must pass the ``key_name`` and ``value_name`` arguments to configure the transformation. For example::
{{ tags | items2dict(key_name='fruit', value_name='color') }}
If you do not pass these arguments, or do not pass the correct values for your list, you will see ``KeyError: key`` or ``KeyError: my_typo``.
Forcing the data type
---------------------
You can cast values as certain types. For example, if you expect the input "True" from a :ref:`vars_prompt <playbooks_prompts>` and you want Ansible to recognize it as a boolean value instead of a string::
- debug:
msg: test
when: some_string_value | bool
If you want to perform a mathematical comparison on a fact and you want Ansible to recognize it as an integer instead of a string::
- shell: echo "only on Red Hat 6, derivatives, and later"
when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release'] | int >= 6
.. versionadded:: 1.6
.. _filters_for_formatting_data:
Formatting data: YAML and JSON
==============================
You can switch a data structure in a template from or to JSON or YAML format, with options for formatting, indenting, and loading data. The basic filters are occasionally useful for debugging::
{{ some_variable | to_json }}
{{ some_variable | to_yaml }}
For human readable output, you can use::
{{ some_variable | to_nice_json }}
{{ some_variable | to_nice_yaml }}
You can change the indentation of either format::
{{ some_variable | to_nice_json(indent=2) }}
{{ some_variable | to_nice_yaml(indent=8) }}
The ``to_yaml`` and ``to_nice_yaml`` filters use the `PyYAML library`_ which has a default 80-character line width limit. That causes an unexpected line break after the 80th character (if there is a space after the 80th character)
To avoid such behavior and generate long lines, use the ``width`` option. You must use a hardcoded number to define the width, instead of a construction like ``float("inf")``, because the filter does not support proxying Python functions. For example::
{{ some_variable | to_yaml(indent=8, width=1337) }}
{{ some_variable | to_nice_yaml(indent=8, width=1337) }}
The filter does support passing through other YAML parameters. For a full list, see the `PyYAML documentation`_.
If you are reading in some already formatted data::
{{ some_variable | from_json }}
{{ some_variable | from_yaml }}
for example::
tasks:
- name: Register JSON output as a variable
ansible.builtin.shell: cat /some/path/to/file.json
register: result
- name: Set a variable
ansible.builtin.set_fact:
myvar: "{{ result.stdout | from_json }}"
Filter `to_json` and Unicode support
------------------------------------
By default `to_json` and `to_nice_json` will convert data received to ASCII, so::
{{ 'München'| to_json }}
will return::
'M\u00fcnchen'
To keep Unicode characters, pass the parameter `ensure_ascii=False` to the filter::
{{ 'München'| to_json(ensure_ascii=False) }}
'München'
.. versionadded:: 2.7
To parse multi-document YAML strings, the ``from_yaml_all`` filter is provided.
The ``from_yaml_all`` filter will return a generator of parsed YAML documents.
for example::
tasks:
- name: Register a file content as a variable
ansible.builtin.shell: cat /some/path/to/multidoc-file.yaml
register: result
- name: Print the transformed variable
ansible.builtin.debug:
msg: '{{ item }}'
loop: '{{ result.stdout | from_yaml_all | list }}'
Combining and selecting data
============================
You can combine data from multiple sources and types, and select values from large data structures, giving you precise control over complex data.
.. _zip_filter:
Combining items from multiple lists: zip and zip_longest
--------------------------------------------------------
.. versionadded:: 2.3
To get a list combining the elements of other lists use ``zip``::
- name: Give me list combo of two lists
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5] | zip(['a','b','c','d','e','f']) | list }}"
- name: Give me shortest combo of two lists
ansible.builtin.debug:
msg: "{{ [1,2,3] | zip(['a','b','c','d','e','f']) | list }}"
To always exhaust all lists use ``zip_longest``::
- name: Give me longest combo of three lists , fill with X
ansible.builtin.debug:
msg: "{{ [1,2,3] | zip_longest(['a','b','c','d','e','f'], [21, 22, 23], fillvalue='X') | list }}"
Similarly to the output of the ``items2dict`` filter mentioned above, these filters can be used to construct a ``dict``::
{{ dict(keys_list | zip(values_list)) }}
List data (before applying the ``zip`` filter)::
keys_list:
- one
- two
values_list:
- apple
- orange
Dictionary data (after applying the ``zip`` filter)::
one: apple
two: orange
Combining objects and subelements
---------------------------------
.. versionadded:: 2.7
The ``subelements`` filter produces a product of an object and the subelement values of that object, similar to the ``subelements`` lookup. This lets you specify individual subelements to use in a template. For example, this expression::
{{ users | subelements('groups', skip_missing=True) }}
Data before applying the ``subelements`` filter::
users:
- name: alice
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
groups:
- wheel
- docker
- name: bob
authorized:
- /tmp/bob/id_rsa.pub
groups:
- docker
Data after applying the ``subelements`` filter::
-
- name: alice
groups:
- wheel
- docker
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
- wheel
-
- name: alice
groups:
- wheel
- docker
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
- docker
-
- name: bob
authorized:
- /tmp/bob/id_rsa.pub
groups:
- docker
- docker
You can use the transformed data with ``loop`` to iterate over the same subelement for multiple objects::
- name: Set authorized ssh key, extracting just that data from 'users'
ansible.posix.authorized_key:
user: "{{ item.0.name }}"
key: "{{ lookup('file', item.1) }}"
loop: "{{ users | subelements('authorized') }}"
.. _combine_filter:
Combining hashes/dictionaries
-----------------------------
.. versionadded:: 2.0
The ``combine`` filter allows hashes to be merged. For example, the following would override keys in one hash::
{{ {'a':1, 'b':2} | combine({'b':3}) }}
The resulting hash would be::
{'a':1, 'b':3}
The filter can also take multiple arguments to merge::
{{ a | combine(b, c, d) }}
{{ [a, b, c, d] | combine }}
In this case, keys in ``d`` would override those in ``c``, which would override those in ``b``, and so on.
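A small worked sketch of that precedence (the variables ``a``, ``b`` and ``c`` here are assumptions for illustration only)::
# a: {'x': 1, 'y': 1}
# b: {'x': 2}
# c: {'z': 3}
{{ a | combine(b, c) }}
# => {'x': 2, 'y': 1, 'z': 3}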
The filter also accepts two optional parameters: ``recursive`` and ``list_merge``.
recursive
Is a boolean, defaulting to ``False``.
It controls whether ``combine`` recursively merges nested hashes.
Note: It does **not** depend on the value of the ``hash_behaviour`` setting in ``ansible.cfg``.
list_merge
Is a string, its possible values are ``replace`` (default), ``keep``, ``append``, ``prepend``, ``append_rp`` or ``prepend_rp``.
It modifies the behaviour of ``combine`` when the hashes to merge contain arrays/lists.
.. code-block:: yaml
default:
a:
x: default
y: default
b: default
c: default
patch:
a:
y: patch
z: patch
b: patch
If ``recursive=False`` (the default), nested hashes aren't merged::
{{ default | combine(patch) }}
This would result in::
a:
y: patch
z: patch
b: patch
c: default
If ``recursive=True``, ``combine`` recurses into nested hashes and merges their keys::
{{ default | combine(patch, recursive=True) }}
This would result in::
a:
x: default
y: patch
z: patch
b: patch
c: default
If ``list_merge='replace'`` (the default), arrays from the right hash will "replace" the ones in the left hash::
default:
a:
- default
patch:
a:
- patch
.. code-block:: jinja
{{ default | combine(patch) }}
This would result in::
a:
- patch
If ``list_merge='keep'``, arrays from the left hash will be kept::
{{ default | combine(patch, list_merge='keep') }}
This would result in::
a:
- default
If ``list_merge='append'``, arrays from the right hash will be appended to the ones in the left hash::
{{ default | combine(patch, list_merge='append') }}
This would result in::
a:
- default
- patch
If ``list_merge='prepend'``, arrays from the right hash will be prepended to the ones in the left hash::
{{ default | combine(patch, list_merge='prepend') }}
This would result in::
a:
- patch
- default
If ``list_merge='append_rp'``, arrays from the right hash will be appended to the ones in the left hash. Elements of arrays in the left hash that are also in the corresponding array of the right hash will be removed ("rp" stands for "remove present"). Duplicate elements that aren't in both hashes are kept::
default:
a:
- 1
- 1
- 2
- 3
patch:
a:
- 3
- 4
- 5
- 5
.. code-block:: jinja
{{ default | combine(patch, list_merge='append_rp') }}
This would result in::
a:
- 1
- 1
- 2
- 3
- 4
- 5
- 5
If ``list_merge='prepend_rp'``, the behavior is similar to the one for ``append_rp``, but elements of arrays in the right hash are prepended::
{{ default | combine(patch, list_merge='prepend_rp') }}
This would result in::
a:
- 3
- 4
- 5
- 5
- 1
- 1
- 2
``recursive`` and ``list_merge`` can be used together::
default:
a:
a':
x: default_value
y: default_value
list:
- default_value
b:
- 1
- 1
- 2
- 3
patch:
a:
a':
y: patch_value
z: patch_value
list:
- patch_value
b:
- 3
- 4
- 4
- key: value
.. code-block:: jinja
{{ default | combine(patch, recursive=True, list_merge='append_rp') }}
This would result in::
a:
a':
x: default_value
y: patch_value
z: patch_value
list:
- default_value
- patch_value
b:
- 1
- 1
- 2
- 3
- 4
- 4
- key: value
.. _extract_filter:
Selecting values from arrays or hashtables
-------------------------------------------
.. versionadded:: 2.1
The `extract` filter is used to map from a list of indices to a list of values from a container (hash or array)::
{{ [0,2] | map('extract', ['x','y','z']) | list }}
{{ ['x','y'] | map('extract', {'x': 42, 'y': 31}) | list }}
The results of the above expressions would be::
['x', 'z']
[42, 31]
The filter can take another argument::
{{ groups['x'] | map('extract', hostvars, 'ec2_ip_address') | list }}
This takes the list of hosts in group 'x', looks them up in `hostvars`, and then looks up the `ec2_ip_address` of the result. The final result is a list of IP addresses for the hosts in group 'x'.
The third argument to the filter can also be a list, for a recursive lookup inside the container::
{{ ['a'] | map('extract', b, ['x','y']) | list }}
This would return a list containing the value of `b['a']['x']['y']`.
Combining lists
---------------
This set of filters returns a list of combined lists.
permutations
^^^^^^^^^^^^
To get permutations of a list::
- name: Give me largest permutations (order matters)
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5] | permutations | list }}"
- name: Give me permutations of sets of three
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5] | permutations(3) | list }}"
combinations
^^^^^^^^^^^^
Combinations always require a set size::
- name: Give me combinations for sets of two
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5] | combinations(2) | list }}"
Also see the :ref:`zip_filter`
products
^^^^^^^^
The product filter returns the `cartesian product <https://docs.python.org/3/library/itertools.html#itertools.product>`_ of the input iterables. This is roughly equivalent to nested for-loops in a generator expression.
For example::
- name: Generate multiple hostnames
ansible.builtin.debug:
msg: "{{ ['foo', 'bar'] | product(['com']) | map('join', '.') | join(',') }}"
This would result in::
{ "msg": "foo.com,bar.com" }
.. _json_query_filter:
Selecting JSON data: JSON queries
---------------------------------
To select a single element or a data subset from a complex data structure in JSON format (for example, Ansible facts), use the ``json_query`` filter. The ``json_query`` filter lets you query a complex JSON structure and iterate over it using a loop structure.
.. note::
This filter has migrated to the `community.general <https://galaxy.ansible.com/community/general>`_ collection. Follow the installation instructions to install that collection.
.. note:: You must manually install the **jmespath** dependency on the Ansible controller before using this filter (for example, with ``pip install jmespath``). This filter is built upon **jmespath**, and you can use the same syntax. For examples, see `jmespath examples <http://jmespath.org/examples.html>`_.
Consider this data structure::
{
"domain_definition": {
"domain": {
"cluster": [
{
"name": "cluster1"
},
{
"name": "cluster2"
}
],
"server": [
{
"name": "server11",
"cluster": "cluster1",
"port": "8080"
},
{
"name": "server12",
"cluster": "cluster1",
"port": "8090"
},
{
"name": "server21",
"cluster": "cluster2",
"port": "9080"
},
{
"name": "server22",
"cluster": "cluster2",
"port": "9090"
}
],
"library": [
{
"name": "lib1",
"target": "cluster1"
},
{
"name": "lib2",
"target": "cluster2"
}
]
}
}
}
To extract all clusters from this structure, you can use the following query::
- name: Display all cluster names
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.cluster[*].name') }}"
To extract all server names::
- name: Display all server names
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.server[*].name') }}"
To extract ports from cluster1::
- name: Display all ports from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}"
vars:
server_name_cluster1_query: "domain.server[?cluster=='cluster1'].port"
.. note:: You can use a variable to make the query more readable.
To print out the ports from cluster1 in a comma separated string::
- name: Display all ports from cluster1 as a string
ansible.builtin.debug:
msg: "{{ domain_definition | community.general.json_query('domain.server[?cluster==`cluster1`].port') | join(', ') }}"
.. note:: In the example above, quoting literals using backticks avoids escaping quotes and maintains readability.
You can use YAML `single quote escaping <https://yaml.org/spec/current.html#id2534365>`_::
- name: Display all ports from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.server[?cluster==''cluster1''].port') }}"
.. note:: Escaping single quotes within single quotes in YAML is done by doubling the single quote.
To get a hash map with all ports and names of a cluster::
- name: Display all server ports and names from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}"
vars:
server_name_cluster1_query: "domain.server[?cluster=='cluster1'].{name: name, port: port}"
To extract ports from all clusters with name starting with 'server1'::
- name: Display all ports from cluster1
ansible.builtin.debug:
msg: "{{ domain_definition | to_json | from_json | community.general.json_query(server_name_query) }}"
vars:
server_name_query: "domain.server[?starts_with(name,'server1')].port"
To extract ports from all clusters with name containing 'server1'::
- name: Display all ports from cluster1
ansible.builtin.debug:
msg: "{{ domain_definition | to_json | from_json | community.general.json_query(server_name_query) }}"
vars:
server_name_query: "domain.server[?contains(name,'server1')].port"
.. note:: When using ``starts_with`` and ``contains``, you have to use the ``to_json | from_json`` filter for correct parsing of the data structure.
Randomizing data
================
When you need a randomly generated value, use one of these filters.
.. _random_mac_filter:
Random MAC addresses
--------------------
.. versionadded:: 2.6
This filter can be used to generate a random MAC address from a string prefix.
.. note::
This filter has migrated to the `community.general <https://galaxy.ansible.com/community/general>`_ collection. Follow the installation instructions to install that collection.
To get a random MAC address from a string prefix starting with '52:54:00'::
"{{ '52:54:00' | community.general.random_mac }}"
# => '52:54:00:ef:1c:03'
Note that if anything is wrong with the prefix string, the filter will issue an error.
.. versionadded:: 2.9
As of Ansible version 2.9, you can also initialize the random number generator from a seed to create random-but-idempotent MAC addresses::
"{{ '52:54:00' | community.general.random_mac(seed=inventory_hostname) }}"
.. _random_filter:
Random items or numbers
-----------------------
The ``random`` filter in Ansible is an extension of the default Jinja2 random filter, and can be used to return a random item from a sequence of items or to generate a random number based on a range.
To get a random item from a list::
"{{ ['a','b','c'] | random }}"
# => 'c'
To get a random number between 0 and a specified number::
"{{ 60 | random }} * * * * root /script/from/cron"
# => '21 * * * * root /script/from/cron'
To get a random number from 0 to 100 but in steps of 10::
{{ 101 | random(step=10) }}
# => 70
To get a random number from 1 to 100 but in steps of 10::
{{ 101 | random(1, 10) }}
# => 31
{{ 101 | random(start=1, step=10) }}
# => 51
You can initialize the random number generator from a seed to create random-but-idempotent numbers::
"{{ 60 | random(seed=inventory_hostname) }} * * * * root /script/from/cron"
Shuffling a list
----------------
The ``shuffle`` filter randomizes an existing list, giving a different order on every invocation.
To get a random list from an existing list::
{{ ['a','b','c'] | shuffle }}
# => ['c','a','b']
{{ ['a','b','c'] | shuffle }}
# => ['b','c','a']
You can initialize the shuffle generator from a seed to generate a random-but-idempotent order::
{{ ['a','b','c'] | shuffle(seed=inventory_hostname) }}
# => ['b','a','c']
The shuffle filter returns a list whenever possible. If you use it with a non 'listable' item, the filter does nothing.
.. _list_filters:
Managing list variables
=======================
You can search for the minimum or maximum value in a list, or flatten a multi-level list.
To get the minimum value from list of numbers::
{{ list1 | min }}
.. versionadded:: 2.11
To get the minimum value in a list of objects::
{{ [{'val': 1}, {'val': 2}] | min(attribute='val') }}
To get the maximum value from a list of numbers::
{{ [3, 4, 2] | max }}
.. versionadded:: 2.11
To get the maximum value in a list of objects::
{{ [{'val': 1}, {'val': 2}] | max(attribute='val') }}
.. versionadded:: 2.5
Flatten a list (same thing the `flatten` lookup does)::
{{ [3, [4, 2] ] | flatten }}
Flatten only the first level of a list (akin to the `items` lookup)::
{{ [3, [4, [2]] ] | flatten(levels=1) }}
.. versionadded:: 2.11
Preserve nulls in a list, by default flatten removes them. ::
{{ [3, None, [4, [2]] ] | flatten(levels=1, skip_nulls=False) }}
.. _set_theory_filters:
Selecting from sets or lists (set theory)
=========================================
You can select or combine items from sets or lists.
.. versionadded:: 1.4
To get a unique set from a list::
# list1: [1, 2, 5, 1, 3, 4, 10]
{{ list1 | unique }}
# => [1, 2, 5, 3, 4, 10]
To get a union of two lists::
# list1: [1, 2, 5, 1, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | union(list2) }}
# => [1, 2, 5, 1, 3, 4, 10, 11, 99]
To get the intersection of 2 lists (unique list of all items in both)::
# list1: [1, 2, 5, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | intersect(list2) }}
# => [1, 2, 5, 3, 4]
To get the difference of 2 lists (items in 1 that don't exist in 2)::
# list1: [1, 2, 5, 1, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | difference(list2) }}
# => [10]
To get the symmetric difference of 2 lists (items exclusive to each list)::
# list1: [1, 2, 5, 1, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | symmetric_difference(list2) }}
# => [10, 11, 99]
.. _math_stuff:
Calculating numbers (math)
==========================
.. versionadded:: 1.9
You can calculate logs, powers, and roots of numbers with Ansible filters. Jinja2 provides other mathematical functions like abs() and round().
Get the logarithm (default is e)::
{{ myvar | log }}
Get the base 10 logarithm::
{{ myvar | log(10) }}
Give me the power of 2! (or 5)::
{{ myvar | pow(2) }}
{{ myvar | pow(5) }}
Square root, or the 5th::
{{ myvar | root }}
{{ myvar | root(5) }}
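A few worked values, assuming the standard math semantics described above::
{{ 8 | log(2) }}
# => 3.0
{{ 2 | pow(10) }}
# => 1024.0
{{ 9 | root }}
# => 3.0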
Managing network interactions
=============================
These filters help you with common network tasks.
.. note::
These filters have migrated to the `ansible.netcommon <https://galaxy.ansible.com/ansible/netcommon>`_ collection. Follow the installation instructions to install that collection.
.. _ipaddr_filter:
IP address filters
------------------
.. versionadded:: 1.9
To test if a string is a valid IP address::
{{ myvar | ansible.netcommon.ipaddr }}
You can also require a specific IP protocol version::
{{ myvar | ansible.netcommon.ipv4 }}
{{ myvar | ansible.netcommon.ipv6 }}
IP address filter can also be used to extract specific information from an IP
address. For example, to get the IP address itself from a CIDR, you can use::
{{ '192.0.2.1/24' | ansible.netcommon.ipaddr('address') }}
More information about ``ipaddr`` filter and complete usage guide can be found
in :ref:`playbooks_filters_ipaddr`.
.. _network_filters:
Network CLI filters
-------------------
.. versionadded:: 2.4
To convert the output of a network device CLI command into structured JSON
output, use the ``parse_cli`` filter::
{{ output | ansible.netcommon.parse_cli('path/to/spec') }}
The ``parse_cli`` filter will load the spec file and pass the command output
through it, returning JSON output. The spec file should be valid, formatted YAML;
it defines how to parse the CLI output and return JSON data. Below is an example
of a valid spec file that will parse the output from the ``show vlan`` command.
.. code-block:: yaml
---
vars:
vlan:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
enabled: "{{ item.state != 'act/lshut' }}"
state: "{{ item.state }}"
keys:
vlans:
value: "{{ vlan }}"
items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)"
state_static:
value: present
The spec file above will return a JSON data structure that is a list of hashes
with the parsed VLAN information.
The same command could be parsed into a hash by using the key and values
directives. Here is an example of how to parse the output into a hash
value using the same ``show vlan`` command.
.. code-block:: yaml
---
vars:
vlan:
key: "{{ item.vlan_id }}"
values:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
enabled: "{{ item.state != 'act/lshut' }}"
state: "{{ item.state }}"
keys:
vlans:
value: "{{ vlan }}"
items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)"
state_static:
value: present
Another common use case for parsing CLI commands is to break a large command
into blocks that can be parsed. This can be done using the ``start_block`` and
``end_block`` directives to break the command into blocks that can be parsed.
.. code-block:: yaml
---
vars:
interface:
name: "{{ item[0].match[0] }}"
state: "{{ item[1].state }}"
mode: "{{ item[2].match[0] }}"
keys:
interfaces:
value: "{{ interface }}"
start_block: "^Ethernet.*$"
end_block: "^$"
items:
- "^(?P<name>Ethernet\\d\\/\\d*)"
- "admin state is (?P<state>.+),"
- "Port mode is (.+)"
The example above will parse the output of ``show interface`` into a list of
hashes.
The network filters also support parsing the output of a CLI command using the
TextFSM library. To parse the CLI output with TextFSM use the following
filter::
{{ output.stdout[0] | ansible.netcommon.parse_cli_textfsm('path/to/fsm') }}
Use of the TextFSM filter requires the TextFSM library to be installed.
Network XML filters
-------------------
.. versionadded:: 2.5
To convert the XML output of a network device command into structured JSON
output, use the ``parse_xml`` filter::
{{ output | ansible.netcommon.parse_xml('path/to/spec') }}
The ``parse_xml`` filter will load the spec file and pass the command output
through it, returning JSON output.
The spec file should be valid, formatted YAML; it defines how to parse the XML
output and return JSON data.
Below is an example of a valid spec file that
will parse the output from the ``show vlan | display xml`` command.
.. code-block:: yaml
---
vars:
vlan:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
desc: "{{ item.desc }}"
enabled: "{{ item.state.get('inactive') != 'inactive' }}"
state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}"
keys:
vlans:
value: "{{ vlan }}"
top: configuration/vlans/vlan
items:
vlan_id: vlan-id
name: name
desc: description
state: ".[@inactive='inactive']"
The spec file above will return a JSON data structure that is a list of hashes
with the parsed VLAN information.
The same command could be parsed into a hash by using the key and values
directives. Here is an example of how to parse the output into a hash
value using the same ``show vlan | display xml`` command.
.. code-block:: yaml
---
vars:
vlan:
key: "{{ item.vlan_id }}"
values:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
desc: "{{ item.desc }}"
enabled: "{{ item.state.get('inactive') != 'inactive' }}"
state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}"
keys:
vlans:
value: "{{ vlan }}"
top: configuration/vlans/vlan
items:
vlan_id: vlan-id
name: name
desc: description
state: ".[@inactive='inactive']"
The value of ``top`` is the XPath relative to the XML root node.
In the example XML output given below, the value of ``top`` is ``configuration/vlans/vlan``,
which is an XPath expression relative to the root node (<rpc-reply>).
``configuration`` in the value of ``top`` is the outermost container node, and ``vlan``
is the innermost container node.
``items`` is a dictionary of key-value pairs that map user-defined names to XPath expressions
that select elements. Each XPath expression is relative to the value of ``top``.
For example, ``vlan_id`` in the spec file is a user-defined name and its value ``vlan-id`` is an
XPath expression relative to the value of ``top``.
Attributes of XML tags can be extracted using XPath expressions. The value of ``state`` in the spec
is an XPath expression used to get the attributes of the ``vlan`` tag in the output XML::
<rpc-reply>
<configuration>
<vlans>
<vlan inactive="inactive">
<name>vlan-1</name>
<vlan-id>200</vlan-id>
<description>This is vlan-1</description>
</vlan>
</vlans>
</configuration>
</rpc-reply>
.. note::
For more information on supported XPath expressions, see `XPath Support <https://docs.python.org/2/library/xml.etree.elementtree.html#xpath-support>`_.
Network VLAN filters
--------------------
.. versionadded:: 2.8
Use the ``vlan_parser`` filter to transform an unsorted list of VLAN integers into a
sorted string list of integers according to IOS-like VLAN list rules. This list has the following properties:
* Vlans are listed in ascending order.
* Three or more consecutive VLANs are listed with a dash.
* The first line of the list can be ``first_line_len`` characters long.
* Subsequent list lines can be ``other_line_len`` characters long.
To sort a VLAN list::
{{ [3003, 3004, 3005, 100, 1688, 3002, 3999] | ansible.netcommon.vlan_parser }}
This example renders the following sorted list::
['100,1688,3002-3005,3999']
Another example Jinja template::
{% set parsed_vlans = vlans | ansible.netcommon.vlan_parser %}
switchport trunk allowed vlan {{ parsed_vlans[0] }}
{% for i in range (1, parsed_vlans | count) %}
switchport trunk allowed vlan add {{ parsed_vlans[i] }}
{% endfor %}
This allows for dynamic generation of VLAN lists on a Cisco IOS tagged interface. You can store an exhaustive raw list of the exact VLANs required for an interface and then compare that to the parsed IOS output that would actually be generated for the configuration.
.. _hash_filters:
Encrypting and checksumming strings and passwords
=================================================
.. versionadded:: 1.9
To get the sha1 hash of a string::
{{ 'test1' | hash('sha1') }}
To get the md5 hash of a string::
{{ 'test1' | hash('md5') }}
Get a string checksum::
{{ 'test2' | checksum }}
Other hashes (platform dependent)::
{{ 'test2' | hash('blowfish') }}
To get a sha512 password hash (random salt)::
{{ 'passwordsaresecret' | password_hash('sha512') }}
To get a sha256 password hash with a specific salt::
{{ 'secretpassword' | password_hash('sha256', 'mysecretsalt') }}
An idempotent method to generate unique hashes per system is to use a salt that is consistent between runs::
{{ 'secretpassword' | password_hash('sha512', 65534 | random(seed=inventory_hostname) | string) }}
Hash types available depend on the control system running Ansible, 'hash' depends on hashlib, password_hash depends on passlib (https://passlib.readthedocs.io/en/stable/lib/passlib.hash.html).
.. versionadded:: 2.7
Some hash types allow providing a rounds parameter::
{{ 'secretpassword' | password_hash('sha256', 'mysecretsalt', rounds=10000) }}
.. _other_useful_filters:
Manipulating text
=================
Several filters work with text, including URLs, file names, and path names.
.. _comment_filter:
Adding comments to files
------------------------
The ``comment`` filter lets you create comments in a file from text in a template, with a variety of comment styles. By default Ansible uses ``#`` to start a comment line and adds a blank comment line above and below your comment text. For example the following::
{{ "Plain style (default)" | comment }}
produces this output:
.. code-block:: text
#
# Plain style (default)
#
Ansible offers styles for comments in C (``//...``), C block
(``/*...*/``), Erlang (``%...``) and XML (``<!--...-->``)::
{{ "C style" | comment('c') }}
{{ "C block style" | comment('cblock') }}
{{ "Erlang style" | comment('erlang') }}
{{ "XML style" | comment('xml') }}
You can define a custom comment character. This filter::
{{ "My Special Case" | comment(decoration="! ") }}
produces:
.. code-block:: text
!
! My Special Case
!
You can fully customize the comment style::
{{ "Custom style" | comment('plain', prefix='#######\n#', postfix='#\n#######\n ###\n #') }}
That creates the following output:
.. code-block:: text
#######
#
# Custom style
#
#######
###
#
The filter can also be applied to any Ansible variable. For example to
make the output of the ``ansible_managed`` variable more readable, we can
change the definition in the ``ansible.cfg`` file to this:
.. code-block:: jinja
[defaults]
ansible_managed = This file is managed by Ansible.%n
template: {file}
date: %Y-%m-%d %H:%M:%S
user: {uid}
host: {host}
and then use the variable with the `comment` filter::
{{ ansible_managed | comment }}
which produces this output:
.. code-block:: sh
#
# This file is managed by Ansible.
#
# template: /home/ansible/env/dev/ansible_managed/roles/role1/templates/test.j2
# date: 2015-09-10 11:02:58
# user: ansible
# host: myhost
#
Splitting URLs
--------------
.. versionadded:: 2.4
The ``urlsplit`` filter extracts the fragment, hostname, netloc, password, path, port, query, scheme, and username from a URL. With no arguments, it returns a dictionary of all the fields::
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('hostname') }}
# => 'www.acme.com'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('netloc') }}
# => 'user:[email protected]:9000'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('username') }}
# => 'user'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('password') }}
# => 'password'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('path') }}
# => '/dir/index.html'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('port') }}
# => '9000'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('scheme') }}
# => 'http'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('query') }}
# => 'query=term'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('fragment') }}
# => 'fragment'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit }}
# =>
# {
# "fragment": "fragment",
# "hostname": "www.acme.com",
# "netloc": "user:[email protected]:9000",
# "password": "password",
# "path": "/dir/index.html",
# "port": 9000,
# "query": "query=term",
# "scheme": "http",
# "username": "user"
# }
Searching strings with regular expressions
------------------------------------------
To search a string with a regex, use the "regex_search" filter::
# search for "foo" in "foobar"
{{ 'foobar' | regex_search('(foo)') }}
# will return empty if it cannot find a match
{{ 'ansible' | regex_search('(foobar)') }}
# case insensitive search in multiline mode
{{ 'foo\nBAR' | regex_search("^bar", multiline=True, ignorecase=True) }}
To search for all occurrences of regex matches, use the "regex_findall" filter::
# Return a list of all IPv4 addresses in the string
{{ 'Some DNS servers are 8.8.8.8 and 8.8.4.4' | regex_findall('\\b(?:[0-9]{1,3}\\.){3}[0-9]{1,3}\\b') }}
To replace text in a string with regex, use the "regex_replace" filter::
# convert "ansible" to "able"
{{ 'ansible' | regex_replace('^a.*i(.*)$', 'a\\1') }}
# convert "foobar" to "bar"
{{ 'foobar' | regex_replace('^f.*o(.*)$', '\\1') }}
# convert "localhost:80" to "localhost, 80" using named groups
{{ 'localhost:80' | regex_replace('^(?P<host>.+):(?P<port>\\d+)$', '\\g<host>, \\g<port>') }}
# convert "localhost:80" to "localhost"
{{ 'localhost:80' | regex_replace(':80') }}
# change a multiline string
{{ var | regex_replace('^', '#CommentThis#', multiline=True) }}
.. note::
If you want to match the whole string and you are using ``*`` make sure to always wrap your regular expression in the start/end anchors. For example ``^(.*)$`` will always match only one result, while ``(.*)`` on some Python versions will match the whole string and an empty string at the end, which means it will make two replacements::
# add "https://" prefix to each item in a list
GOOD:
{{ hosts | map('regex_replace', '^(.*)$', 'https://\\1') | list }}
{{ hosts | map('regex_replace', '(.+)', 'https://\\1') | list }}
{{ hosts | map('regex_replace', '^', 'https://') | list }}
BAD:
{{ hosts | map('regex_replace', '(.*)', 'https://\\1') | list }}
# append ':80' to each item in a list
GOOD:
{{ hosts | map('regex_replace', '^(.*)$', '\\1:80') | list }}
{{ hosts | map('regex_replace', '(.+)', '\\1:80') | list }}
{{ hosts | map('regex_replace', '$', ':80') | list }}
BAD:
{{ hosts | map('regex_replace', '(.*)', '\\1:80') | list }}
.. note::
Prior to Ansible 2.0, if the "regex_replace" filter was used with variables inside YAML arguments (as opposed to simpler 'key=value' arguments), then you needed to escape backreferences (for example, ``\\1``) with 4 backslashes (``\\\\``) instead of 2 (``\\``).
.. versionadded:: 2.0
To escape special characters within a standard Python regex, use the "regex_escape" filter (using the default re_type='python' option)::
# convert '^f.*o(.*)$' to '\^f\.\*o\(\.\*\)\$'
{{ '^f.*o(.*)$' | regex_escape() }}
.. versionadded:: 2.8
To escape special characters within a POSIX basic regex, use the "regex_escape" filter with the re_type='posix_basic' option::
# convert '^f.*o(.*)$' to '\^f\.\*o(\.\*)\$'
{{ '^f.*o(.*)$' | regex_escape('posix_basic') }}
Managing file names and path names
----------------------------------
To get the last name of a file path, like 'foo.txt' out of '/etc/asdf/foo.txt'::
{{ path | basename }}
To get the last name of a Windows-style file path (new in version 2.0)::
{{ path | win_basename }}
To separate the windows drive letter from the rest of a file path (new in version 2.0)::
{{ path | win_splitdrive }}
To get only the windows drive letter::
{{ path | win_splitdrive | first }}
To get the rest of the path without the drive letter::
{{ path | win_splitdrive | last }}
To get the directory from a path::
{{ path | dirname }}
To get the directory from a windows path (new in version 2.0)::
{{ path | win_dirname }}
To expand a path containing a tilde (`~`) character (new in version 1.5)::
{{ path | expanduser }}
To expand a path containing environment variables::
{{ path | expandvars }}
.. note:: `expandvars` expands local variables; using it on remote paths can lead to errors.
.. versionadded:: 2.6
To get the real path of a link (new in version 1.8)::
{{ path | realpath }}
To get the relative path of a link, from a start point (new in version 1.7)::
{{ path | relpath('/etc') }}
To get the root and extension of a path or file name (new in version 2.0)::
# with path == 'nginx.conf' the return would be ('nginx', '.conf')
{{ path | splitext }}
The ``splitext`` filter returns a pair of strings. The individual components can be accessed by using the ``first`` and ``last`` filters::
# with path == 'nginx.conf' the return would be 'nginx'
{{ path | splitext | first }}
# with path == 'nginx.conf' the return would be '.conf'
{{ path | splitext | last }}
To join one or more path components::
{{ ('/etc', path, 'subdir', file) | path_join }}
.. versionadded:: 2.10
Manipulating strings
====================
To add quotes for shell usage::
- name: Run a shell command
ansible.builtin.shell: echo {{ string_value | quote }}
To concatenate a list into a string::
{{ list | join(" ") }}
To work with Base64 encoded strings::
{{ encoded | b64decode }}
{{ decoded | string | b64encode }}
As of version 2.6, you can define the type of encoding to use, the default is ``utf-8``::
{{ encoded | b64decode(encoding='utf-16-le') }}
{{ decoded | string | b64encode(encoding='utf-16-le') }}
.. note:: The ``string`` filter is only required for Python 2 and ensures that text to encode is a unicode string. Without that filter before b64encode the wrong value will be encoded.
.. versionadded:: 2.6
Managing UUIDs
==============
To create a namespaced UUIDv5::
{{ string | to_uuid(namespace='11111111-2222-3333-4444-555555555555') }}
.. versionadded:: 2.10
To create a namespaced UUIDv5 using the default Ansible namespace '361E6D51-FAEC-444A-9079-341386DA8E2E'::
{{ string | to_uuid }}
.. versionadded:: 1.9
To make use of one attribute from each item in a list of complex variables, use the :func:`Jinja2 map filter <jinja2:map>`::
# get a comma-separated list of the mount points (for example, "/,/mnt/stuff") on a host
{{ ansible_mounts | map(attribute='mount') | join(',') }}
Handling dates and times
========================
To get a date object from a string use the `to_datetime` filter::
# Get total amount of seconds between two dates. Default date format is %Y-%m-%d %H:%M:%S but you can pass your own format
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).total_seconds() }}
# Get remaining seconds after delta has been calculated. NOTE: This does NOT convert years, days, hours, and so on to seconds. For that, use total_seconds()
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2016-08-14 18:00:00" | to_datetime)).seconds }}
# This expression evaluates to "12" and not "132". Delta is 2 hours, 12 seconds
# get amount of days between two dates. This returns only number of days and discards remaining hours, minutes, and seconds
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).days }}
.. note:: For a full list of format codes for working with python date format strings, see https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior.
.. versionadded:: 2.4
To format a date using a string (like with the shell date command), use the "strftime" filter::
# Display year-month-day
{{ '%Y-%m-%d' | strftime }}
# Display hour:min:sec
{{ '%H:%M:%S' | strftime }}
# Use ansible_date_time.epoch fact
{{ '%Y-%m-%d %H:%M:%S' | strftime(ansible_date_time.epoch) }}
# Use arbitrary epoch value
{{ '%Y-%m-%d' | strftime(0) }} # => 1970-01-01
{{ '%Y-%m-%d' | strftime(1441357287) }} # => 2015-09-04
.. note:: To get all string possibilities, check https://docs.python.org/3/library/time.html#time.strftime
Getting Kubernetes resource names
=================================
.. note::
These filters have migrated to the `community.kubernetes <https://galaxy.ansible.com/community/kubernetes>`_ collection. Follow the installation instructions to install that collection.
Use the "k8s_config_resource_name" filter to obtain the name of a Kubernetes ConfigMap or Secret,
including its hash::
{{ configmap_resource_definition | community.kubernetes.k8s_config_resource_name }}
This can then be used to reference hashes in Pod specifications::
my_secret:
  kind: Secret
  name: my_secret_name

deployment_resource:
  kind: Deployment
  spec:
    template:
      spec:
        containers:
        - envFrom:
          - secretRef:
              name: {{ my_secret | community.kubernetes.k8s_config_resource_name }}
.. versionadded:: 2.8
.. _PyYAML library: https://pyyaml.org/
.. _PyYAML documentation: https://pyyaml.org/wiki/PyYAMLDocumentation
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`playbooks_conditionals`
Conditional statements in playbooks
:ref:`playbooks_variables`
All about variables
:ref:`playbooks_loops`
Looping in playbooks
:ref:`playbooks_reuse_roles`
Playbook organization by roles
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,310 |
Difficult to use file-type modules inside of containers because of how module utils is implemented
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
A task running locally that uses a file-type module can encounter a traceback with an "invalid cross-device link" error when ansible is running inside of a container with mounted volumes. This is due to a common practice where a module writes to `ansible_remote_tmp` as a staging area, and then uses [module_utils](https://github.com/ansible/ansible/blob/42bc03f0f5740f2340fcdbe75557920552622ac3/lib/ansible/module_utils/basic.py#L2328) to move it into place.
The error happens when `ansible_remote_tmp` is outside the mounted volume and the destination is inside it, or vice versa.
This is an example, but it applies to other modules:
```
- name: write something to a file in the volume (this fails)
  copy:
    content: "foo_bar"
    dest: "/elijah/file_for_elijah.txt"
```
Here, `/elijah` is the mounted volume.
What the task wants:
- I have a string "foo_bar"
- Write this string to a new file
- This file is in a mounted volume
- I have permission to write to this file and volume
- No cross-volume writing necessary from user perspective
What ansible does:
- Writes a temporary file to /tmp
  - This is from their Ansible tmp dir setting, which is configurable
- Tries to move this file from the tmp location to the mounted volume
  - The user never told them to do this
- Permission error due to attempted cross-volume move (sketched below)
  - Moves can be allowed by making "unsafe"
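A minimal sketch of the failure mode the list above describes (illustrative; the staging path is shortened):

```python
import os

# The staging file lives on the container's root filesystem, while the
# destination is on the bind-mounted volume. os.rename() cannot cross
# filesystem boundaries, so it raises EXDEV (errno 18).
os.rename("/root/.ansible/tmp/ansible-tmp-XXXX/source",
          "/elijah/file_for_elijah.txt")
# OSError: [Errno 18] Invalid cross-device link
```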
Running inside of a container with mounted volumes is fundamental to the design of execution environments.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/basic.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
Tested with ansible 2.9.15 and devel
Reproducer provided requires containers.podman collection
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
<none>
```
##### OS / ENVIRONMENT
Redhat type linux with podman installed
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Playbook requirements:
- `ansible-galaxy collection install containers.podman`
- Have podman installed locally
Run this playbook locally against localhost inventory:
https://github.com/AlanCoding/utility-playbooks/blob/master/issues/elijah.yml
```
ansible-playbook -i localhost, elijah.yml
```
This playbook:
- Starts a container, with `/elijah` (in the container) mounted to the `playbook_dir` (on host machine)
- Templates a file outside the volume mount (successful)
- Templates a file inside the volume mount (not successful)
- Stops and removes the container
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Able to do basic file operations in execution environments (inside a container), using the host as a staging area
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.copy_payload_ntnxbqce/ansible_ansible.legacy.copy_payload.zip/ansible/module_utils/basic.py", line 2367, in atomic_move
os.rename(b_src, b_dest)
OSError: [Errno 18] Invalid cross-device link: b'/root/.ansible/tmp/ansible-tmp-1611160471.6475317-1020701-11385651364745/source' -> b'/elijah/file_for_elijah.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib64/python3.6/shutil.py", line 550, in move
os.rename(src, real_dst)
OSError: [Errno 18] Invalid cross-device link: b'/root/.ansible/tmp/ansible-tmp-1611160471.6475317-1020701-11385651364745/source' -> b'/elijah/.ansible_tmp7di06c3ffile_for_elijah.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.copy_payload_ntnxbqce/ansible_ansible.legacy.copy_payload.zip/ansible/module_utils/basic.py", line 2409, in atomic_move
shutil.move(b_src, b_tmp_dest_name)
File "/usr/lib64/python3.6/shutil.py", line 564, in move
copy_function(src, real_dst)
File "/usr/lib64/python3.6/shutil.py", line 264, in copy2
copystat(src, dst, follow_symlinks=follow_symlinks)
File "/usr/lib64/python3.6/shutil.py", line 229, in copystat
_copyxattr(src, dst, follow_symlinks=follow)
File "/usr/lib64/python3.6/shutil.py", line 165, in _copyxattr
os.setxattr(dst, name, value, follow_symlinks=follow_symlinks)
PermissionError: [Errno 13] Permission denied: b'/elijah/.ansible_tmp7di06c3ffile_for_elijah.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.copy_payload_ntnxbqce/ansible_ansible.legacy.copy_payload.zip/ansible/module_utils/basic.py", line 2413, in atomic_move
shutil.copy2(b_src, b_tmp_dest_name)
File "/usr/lib64/python3.6/shutil.py", line 264, in copy2
copystat(src, dst, follow_symlinks=follow_symlinks)
File "/usr/lib64/python3.6/shutil.py", line 229, in copystat
_copyxattr(src, dst, follow_symlinks=follow)
File "/usr/lib64/python3.6/shutil.py", line 165, in _copyxattr
os.setxattr(dst, name, value, follow_symlinks=follow_symlinks)
PermissionError: [Errno 13] Permission denied: b'/elijah/.ansible_tmp7di06c3ffile_for_elijah.txt'
```
#### ADDITIONAL DETAILS
Ansible is creating the cross-volume write problem, and then requiring the user to solve it by setting "unsafe_writes". A global setting, as suggested by @bcoca in https://github.com/ansible/ansible/pull/73282, allows them to set it just once, which is better than on each task; perhaps Tower could even set it for them in our use case. However, it is a broad, sweeping setting that also applies to remote hosts, and that may be undesirable.
We would prefer a solution that doesn’t require the user to solve an implementation problem in ansible.
Hypothetically, the module could detect what volume the destination is in, and use a tmp stage file in that volume.
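A hedged sketch of that idea (the function name and flow are hypothetical, not existing Ansible API):

```python
import os
import shutil
import tempfile

def stage_in_dest_filesystem(src, dest):
    # Create the staging file in the destination's own directory so the
    # final os.rename() never crosses a filesystem boundary.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest))
    os.close(fd)
    try:
        # copyfile() copies data only, sidestepping the xattr copy that
        # also fails across mounts in the traceback above.
        shutil.copyfile(src, tmp)
        os.rename(tmp, dest)  # same device: atomic, no EXDEV
    except Exception:
        if os.path.exists(tmp):
            os.unlink(tmp)
        raise
```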
This is a tower_blocker because we need to replace our use of bwrap, and execution environments are in the critical path. We also need users' existing content, which used to work, to run under the new process isolation method, which uses containers. Tower setting a global variable that would affect both in-container and remote tasks is not acceptable, because users' security preferences may not allow them to set this for remote tasks, and, as mentioned, their defined file operations do not explicitly implicate cross-volume mounts. This is purely an ansible-imposed situation.
Technical details:
- This is a problem shared between many modules because the issue lies in Ansible core module utils
- The specific problematic method is: https://github.com/ansible/ansible/blob/42bc03f0f5740f2340fcdbe75557920552622ac3/lib/ansible/module_utils/basic.py#L2328
- The traceback (using Ansible devel) is pasted above
- @AlanCoding @chrismeyersfsu and @shanemcd are good contacts on the awx/tower team
|
https://github.com/ansible/ansible/issues/73310
|
https://github.com/ansible/ansible/pull/73282
|
2b0cd2c13f2021f839600d601f75dea2c0343ed1
|
c7d4acc12f672d1b3a86119940193b3324584ac0
| 2021-01-20T19:05:10Z |
python
| 2021-01-27T19:16:10Z |
changelogs/fragments/unsafe_writes_env.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,310 |
Difficult to use file-type modules inside of containers because of how module utils is implemented
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
A task running locally that uses a file-type module can encounter a traceback with an "invalid cross-device link" error when ansible is running inside of a container with mounted volumes. This is due to a common practice where a module writes to `ansible_remote_tmp` as a staging area, and then uses [module_utils](https://github.com/ansible/ansible/blob/42bc03f0f5740f2340fcdbe75557920552622ac3/lib/ansible/module_utils/basic.py#L2328) to move it into place.
The error happens when `ansible_remote_tmp` is outside the mounted volume and the destination is inside it, or vice versa.
This is an example, but it applies to other modules:
```
- name: write something to a file in the volume (this fails)
  copy:
    content: "foo_bar"
    dest: "/elijah/file_for_elijah.txt"
```
Here, `/elijah` is the mounted volume.
What the task wants:
- I have a string "foo_bar"
- Write this string to a new file
- This file is in a mounted volume
- I have permission to write to this file and volume
- No cross-volume writing necessary from user perspective
What ansible does:
- Writes a temporary file to /tmp
  - This is from their Ansible tmp dir setting, which is configurable
- Tries to move this file from the tmp location to the mounted volume
  - The user never told them to do this
- Permission error due to attempted cross-volume move
  - Moves can be allowed by making "unsafe"
Running inside of a container with mounted volumes is fundamental to the design of execution environments.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/basic.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
Tested with ansible 2.9.15 and devel
Reproducer provided requires containers.podman collection
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
<none>
```
##### OS / ENVIRONMENT
Redhat type linux with podman installed
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Playbook requirements:
- `ansible-galaxy collection install containers.podman`
- Have podman installed locally
Run this playbook locally against localhost inventory:
https://github.com/AlanCoding/utility-playbooks/blob/master/issues/elijah.yml
```
ansible-playbook -i localhost, elijah.yml
```
This playbook:
- Starts a container, with `/elijah` (in the container) mounted to the `playbook_dir` (on host machine)
- Templates a file outside the volume mount (successful)
- Templates a file inside the volume mount (not successful)
- Stops and removes the container
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Able to do basic file operations in execution environments (inside a container), using the host as a staging area
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.copy_payload_ntnxbqce/ansible_ansible.legacy.copy_payload.zip/ansible/module_utils/basic.py", line 2367, in atomic_move
os.rename(b_src, b_dest)
OSError: [Errno 18] Invalid cross-device link: b'/root/.ansible/tmp/ansible-tmp-1611160471.6475317-1020701-11385651364745/source' -> b'/elijah/file_for_elijah.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib64/python3.6/shutil.py", line 550, in move
os.rename(src, real_dst)
OSError: [Errno 18] Invalid cross-device link: b'/root/.ansible/tmp/ansible-tmp-1611160471.6475317-1020701-11385651364745/source' -> b'/elijah/.ansible_tmp7di06c3ffile_for_elijah.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.copy_payload_ntnxbqce/ansible_ansible.legacy.copy_payload.zip/ansible/module_utils/basic.py", line 2409, in atomic_move
shutil.move(b_src, b_tmp_dest_name)
File "/usr/lib64/python3.6/shutil.py", line 564, in move
copy_function(src, real_dst)
File "/usr/lib64/python3.6/shutil.py", line 264, in copy2
copystat(src, dst, follow_symlinks=follow_symlinks)
File "/usr/lib64/python3.6/shutil.py", line 229, in copystat
_copyxattr(src, dst, follow_symlinks=follow)
File "/usr/lib64/python3.6/shutil.py", line 165, in _copyxattr
os.setxattr(dst, name, value, follow_symlinks=follow_symlinks)
PermissionError: [Errno 13] Permission denied: b'/elijah/.ansible_tmp7di06c3ffile_for_elijah.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.copy_payload_ntnxbqce/ansible_ansible.legacy.copy_payload.zip/ansible/module_utils/basic.py", line 2413, in atomic_move
shutil.copy2(b_src, b_tmp_dest_name)
File "/usr/lib64/python3.6/shutil.py", line 264, in copy2
copystat(src, dst, follow_symlinks=follow_symlinks)
File "/usr/lib64/python3.6/shutil.py", line 229, in copystat
_copyxattr(src, dst, follow_symlinks=follow)
File "/usr/lib64/python3.6/shutil.py", line 165, in _copyxattr
os.setxattr(dst, name, value, follow_symlinks=follow_symlinks)
PermissionError: [Errno 13] Permission denied: b'/elijah/.ansible_tmp7di06c3ffile_for_elijah.txt'
```
#### ADDITIONAL DETAILS
Ansible is creating the cross-volume write problem, and then requiring the user to solve it by setting "unsafe_writes". A global setting, as suggested by @bcoca in https://github.com/ansible/ansible/pull/73282, allows them to set it just once, which is better than on each task; perhaps Tower could even set it for them in our use case. However, it is a broad, sweeping setting that also applies to remote hosts, and that may be undesirable.
We would prefer a solution that doesn’t require the user to solve an implementation problem in ansible.
Hypothetically, the module could detect what volume the destination is in, and use a tmp stage file in that volume.
This is a tower_blocker because we need to replace our use of bwrap, and execution environments are in the critical path. We also need users' existing content, which used to work, to run under the new process isolation method, which uses containers. Tower setting a global variable that would affect both in-container and remote tasks is not acceptable, because users' security preferences may not allow them to set this for remote tasks, and, as mentioned, their defined file operations do not explicitly implicate cross-volume mounts. This is purely an ansible-imposed situation.
Technical details:
- This is a problem shared between many modules because the issue lies in Ansible core module utils
- The specific problematic method is: https://github.com/ansible/ansible/blob/42bc03f0f5740f2340fcdbe75557920552622ac3/lib/ansible/module_utils/basic.py#L2328
- The traceback (using Ansible devel) is pasted above
- @AlanCoding @chrismeyersfsu and @shanemcd are good contacts on the awx/tower team
|
https://github.com/ansible/ansible/issues/73310
|
https://github.com/ansible/ansible/pull/73282
|
2b0cd2c13f2021f839600d601f75dea2c0343ed1
|
c7d4acc12f672d1b3a86119940193b3324584ac0
| 2021-01-20T19:05:10Z |
python
| 2021-01-27T19:16:10Z |
lib/ansible/module_utils/basic.py
|
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013
# Copyright (c), Toshio Kuratomi <[email protected]> 2016
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
FILE_ATTRIBUTES = {
'A': 'noatime',
'a': 'append',
'c': 'compressed',
'C': 'nocow',
'd': 'nodump',
'D': 'dirsync',
'e': 'extents',
'E': 'encrypted',
'h': 'blocksize',
'i': 'immutable',
'I': 'indexed',
'j': 'journalled',
'N': 'inline',
's': 'zero',
'S': 'synchronous',
't': 'notail',
'T': 'blockroot',
'u': 'undelete',
'X': 'compressedraw',
'Z': 'compresseddirty',
}
# Ansible modules can be written in any language.
# The functions available here can be used to do many common tasks,
# to simplify development of Python modules.
import __main__
import atexit
import errno
import datetime
import grp
import fcntl
import locale
import os
import pwd
import platform
import re
import select
import shlex
import shutil
import signal
import stat
import subprocess
import sys
import tempfile
import time
import traceback
import types
from collections import deque
from itertools import chain, repeat
try:
import syslog
HAS_SYSLOG = True
except ImportError:
HAS_SYSLOG = False
try:
from systemd import journal
# Makes sure that systemd.journal has method sendv()
# Double check that journal has method sendv (some packages don't)
has_journal = hasattr(journal, 'sendv')
except ImportError:
has_journal = False
HAVE_SELINUX = False
try:
import selinux
HAVE_SELINUX = True
except ImportError:
pass
# Python2 & 3 way to get NoneType
NoneType = type(None)
from ansible.module_utils.compat import selectors
from ._text import to_native, to_bytes, to_text
from ansible.module_utils.common.text.converters import (
jsonify,
container_to_bytes as json_dict_unicode_to_bytes,
container_to_text as json_dict_bytes_to_unicode,
)
from ansible.module_utils.common.text.formatters import (
lenient_lowercase,
bytes_to_human,
human_to_bytes,
SIZE_RANGES,
)
try:
from ansible.module_utils.common._json_compat import json
except ImportError as e:
print('\n{{"msg": "Error: ansible requires the stdlib json: {0}", "failed": true}}'.format(to_native(e)))
sys.exit(1)
AVAILABLE_HASH_ALGORITHMS = dict()
try:
import hashlib
# python 2.7.9+ and 2.7.0+
for attribute in ('available_algorithms', 'algorithms'):
algorithms = getattr(hashlib, attribute, None)
if algorithms:
break
if algorithms is None:
# python 2.5+
algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512')
for algorithm in algorithms:
AVAILABLE_HASH_ALGORITHMS[algorithm] = getattr(hashlib, algorithm)
# we may have been able to import md5 but it could still not be available
try:
hashlib.md5()
except ValueError:
AVAILABLE_HASH_ALGORITHMS.pop('md5', None)
except Exception:
import sha
AVAILABLE_HASH_ALGORITHMS = {'sha1': sha.sha}
try:
import md5
AVAILABLE_HASH_ALGORITHMS['md5'] = md5.md5
except Exception:
pass
from ansible.module_utils.common._collections_compat import (
KeysView,
Mapping, MutableMapping,
Sequence, MutableSequence,
Set, MutableSet,
)
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.file import (
_PERM_BITS as PERM_BITS,
_EXEC_PERM_BITS as EXEC_PERM_BITS,
_DEFAULT_PERM as DEFAULT_PERM,
is_executable,
format_attributes,
get_flags_from_attributes,
)
from ansible.module_utils.common.sys_info import (
get_distribution,
get_distribution_version,
get_platform_subclass,
)
from ansible.module_utils.pycompat24 import get_exception, literal_eval
from ansible.module_utils.common.parameters import (
get_unsupported_parameters,
get_type_validator,
handle_aliases,
list_deprecations,
list_no_log_values,
DEFAULT_TYPE_VALIDATORS,
PASS_VARS,
PASS_BOOLS,
)
from ansible.module_utils.six import (
PY2,
PY3,
b,
binary_type,
integer_types,
iteritems,
string_types,
text_type,
)
from ansible.module_utils.six.moves import map, reduce, shlex_quote
from ansible.module_utils.common.validation import (
check_missing_parameters,
check_mutually_exclusive,
check_required_arguments,
check_required_by,
check_required_if,
check_required_one_of,
check_required_together,
count_terms,
check_type_bool,
check_type_bits,
check_type_bytes,
check_type_float,
check_type_int,
check_type_jsonarg,
check_type_list,
check_type_dict,
check_type_path,
check_type_raw,
check_type_str,
safe_eval,
)
from ansible.module_utils.common._utils import get_all_subclasses as _get_all_subclasses
from ansible.module_utils.parsing.convert_bool import BOOLEANS, BOOLEANS_FALSE, BOOLEANS_TRUE, boolean
from ansible.module_utils.common.warnings import (
deprecate,
get_deprecation_messages,
get_warning_messages,
warn,
)
# Note: When getting Sequence from collections, it matches with strings. If
# this matters, make sure to check for strings before checking for sequencetype
SEQUENCETYPE = frozenset, KeysView, Sequence
PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wrd|wd)?)(?:[-_\s].+)?$', re.I)
imap = map
try:
# Python 2
unicode
except NameError:
# Python 3
unicode = text_type
try:
# Python 2
basestring
except NameError:
# Python 3
basestring = string_types
_literal_eval = literal_eval
# End of deprecated names
# Internal global holding passed in params. This is consulted in case
# multiple AnsibleModules are created. Otherwise each AnsibleModule would
# attempt to read from stdin. Other code should not use this directly as it
# is an internal implementation detail
_ANSIBLE_ARGS = None
FILE_COMMON_ARGUMENTS = dict(
# These are things we want. About setting metadata (mode, ownership, permissions in general) on
# created files (these are used by set_fs_attributes_if_different and included in
# load_file_common_arguments)
mode=dict(type='raw'),
owner=dict(type='str'),
group=dict(type='str'),
seuser=dict(type='str'),
serole=dict(type='str'),
selevel=dict(type='str'),
setype=dict(type='str'),
attributes=dict(type='str', aliases=['attr']),
unsafe_writes=dict(type='bool', default=False), # should be available to any module using atomic_move
)
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
# Used for parsing symbolic file perms
MODE_OPERATOR_RE = re.compile(r'[+=-]')
USERS_RE = re.compile(r'[^ugo]')
PERMS_RE = re.compile(r'[^rwxXstugo]')
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
_PY3_MIN = sys.version_info[:2] >= (3, 5)
_PY2_MIN = (2, 6) <= sys.version_info[:2] < (3,)
_PY_MIN = _PY3_MIN or _PY2_MIN
if not _PY_MIN:
print(
'\n{"failed": true, '
'"msg": "Ansible requires a minimum of Python2 version 2.6 or Python3 version 3.5. Current version: %s"}' % ''.join(sys.version.splitlines())
)
sys.exit(1)
#
# Deprecated functions
#
def get_platform():
'''
**Deprecated** Use :py:func:`platform.system` directly.
:returns: Name of the platform the module is running on in a native string
Returns a native string that labels the platform ("Linux", "Solaris", etc). Currently, this is
the result of calling :py:func:`platform.system`.
'''
return platform.system()
# End deprecated functions
#
# Compat shims
#
def load_platform_subclass(cls, *args, **kwargs):
"""**Deprecated**: Use ansible.module_utils.common.sys_info.get_platform_subclass instead"""
platform_cls = get_platform_subclass(cls)
return super(cls, platform_cls).__new__(platform_cls)
def get_all_subclasses(cls):
"""**Deprecated**: Use ansible.module_utils.common._utils.get_all_subclasses instead"""
return list(_get_all_subclasses(cls))
# End compat shims
def _remove_values_conditions(value, no_log_strings, deferred_removals):
"""
Helper function for :meth:`remove_values`.
:arg value: The value to check for strings that need to be stripped
:arg no_log_strings: set of strings which must be stripped out of any values
:arg deferred_removals: List which holds information about nested
containers that have to be iterated for removals. It is passed into
this function so that more entries can be added to it if value is
a container type. The format of each entry is a 2-tuple where the first
element is the ``value`` parameter and the second value is a new
container to copy the elements of ``value`` into once iterated.
:returns: if ``value`` is a scalar, returns ``value`` with two exceptions:
1. :class:`~datetime.datetime` objects which are changed into a string representation.
2. objects which are in no_log_strings are replaced with a placeholder
so that no sensitive data is leaked.
If ``value`` is a container type, returns a new empty container.
``deferred_removals`` is added to as a side-effect of this function.
.. warning:: It is up to the caller to make sure the order in which value
is passed in is correct. For instance, higher level containers need
to be passed in before lower level containers. For example, given
``{'level1': {'level2': {'level3': [True]}}}`` first pass in the
dictionary for ``level1``, then the dict for ``level2``, and finally
the list for ``level3``.
"""
if isinstance(value, (text_type, binary_type)):
# Need native str type
native_str_value = value
if isinstance(value, text_type):
value_is_text = True
if PY2:
native_str_value = to_bytes(value, errors='surrogate_or_strict')
elif isinstance(value, binary_type):
value_is_text = False
if PY3:
native_str_value = to_text(value, errors='surrogate_or_strict')
if native_str_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
native_str_value = native_str_value.replace(omit_me, '*' * 8)
if value_is_text and isinstance(native_str_value, binary_type):
value = to_text(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
elif not value_is_text and isinstance(native_str_value, text_type):
value = to_bytes(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
else:
value = native_str_value
elif isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
stringy_value = to_native(value, encoding='utf-8', errors='surrogate_or_strict')
if stringy_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
if omit_me in stringy_value:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
elif isinstance(value, (datetime.datetime, datetime.date)):
value = value.isoformat()
else:
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
return value
def remove_values(value, no_log_strings):
""" Remove strings in no_log_strings from value. If value is a container
type, then remove a lot more.
Use of deferred_removals exists, rather than a pure recursive solution,
because of the potential to hit the maximum recursion depth when dealing with
large amounts of data (see issue #24560).
"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _remove_values_conditions(value, no_log_strings, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
new_elem = _remove_values_conditions(old_elem, no_log_strings, deferred_removals)
new_data[old_key] = new_elem
else:
for elem in old_data:
new_elem = _remove_values_conditions(elem, no_log_strings, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from output')
return new_value
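# Illustrative example (not part of the original source):
#   remove_values({'msg': 'password=hunter2'}, {'hunter2'})
#   -> {'msg': 'password=********'}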
def _sanitize_keys_conditions(value, no_log_strings, ignore_keys, deferred_removals):
""" Helper method to sanitize_keys() to build deferred_removals and avoid deep recursion. """
if isinstance(value, (text_type, binary_type)):
return value
if isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
return new_value
if isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
return value
if isinstance(value, (datetime.datetime, datetime.date)):
return value
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
def sanitize_keys(obj, no_log_strings, ignore_keys=frozenset()):
""" Sanitize the keys in a container object by removing no_log values from key names.
This is a companion function to the `remove_values()` function. Similar to that function,
we make use of deferred_removals to avoid hitting maximum recursion depth in cases of
large data structures.
:param obj: The container object to sanitize. Non-container objects are returned unmodified.
:param no_log_strings: A set of string values we do not want logged.
:param ignore_keys: A set of string values of keys to not sanitize.
:returns: An object with sanitized keys.
"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _sanitize_keys_conditions(obj, no_log_strings, ignore_keys, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
if old_key in ignore_keys or old_key.startswith('_ansible'):
new_data[old_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals)
else:
# Sanitize the old key. We take advantage of the sanitizing code in
# _remove_values_conditions() rather than recreating it here.
new_key = _remove_values_conditions(old_key, no_log_strings, None)
new_data[new_key] = _sanitize_keys_conditions(old_elem, no_log_strings, ignore_keys, deferred_removals)
else:
for elem in old_data:
new_elem = _sanitize_keys_conditions(elem, no_log_strings, ignore_keys, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from keys')
return new_value
def heuristic_log_sanitize(data, no_log_values=None):
''' Remove strings that look like passwords from log messages '''
# Currently filters:
# user:pass@foo/whatever and http://username:pass@wherever/foo
# This code has false positives and consumes parts of logs that are
# not passwds
# begin: start of a passwd containing string
# end: end of a passwd containing string
# sep: char between user and passwd
# prev_begin: where in the overall string to start a search for
# a passwd
# sep_search_end: where in the string to end a search for the sep
data = to_native(data)
output = []
begin = len(data)
prev_begin = begin
sep = 1
while sep:
# Find the potential end of a passwd
try:
end = data.rindex('@', 0, begin)
except ValueError:
# No passwd in the rest of the data
output.insert(0, data[0:begin])
break
# Search for the beginning of a passwd
sep = None
sep_search_end = end
while not sep:
# URL-style username+password
try:
begin = data.rindex('://', 0, sep_search_end)
except ValueError:
# No url style in the data, check for ssh style in the
# rest of the string
begin = 0
# Search for separator
try:
sep = data.index(':', begin + 3, end)
except ValueError:
# No separator; choices:
if begin == 0:
# Searched the whole string so there's no password
# here. Return the remaining data
output.insert(0, data[0:begin])
break
# Search for a different beginning of the password field.
sep_search_end = begin
continue
if sep:
# Password was found; remove it.
output.insert(0, data[end:prev_begin])
output.insert(0, '********')
output.insert(0, data[begin:sep + 1])
prev_begin = begin
output = ''.join(output)
if no_log_values:
output = remove_values(output, no_log_values)
return output
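# Illustrative example (not part of the original source):
#   heuristic_log_sanitize('http://user:pass@host/path')
#   -> 'http://user:********@host/path'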
def _load_params():
''' read the modules parameters and store them globally.
This function may be needed for certain very dynamic custom modules which
want to process the parameters that are being handed to the module. Since
this is so closely tied to the implementation of modules we cannot
guarantee API stability for it (it may change between versions) however we
will try not to break it gratuitously. It is certainly more future-proof
to call this function and consume its outputs than to implement the logic
inside it as a copy in your own code.
'''
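# Illustrative use from a dynamic custom module (hypothetical parameter):
#   params = _load_params()
#   mode = params.get('mode', 'default')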
global _ANSIBLE_ARGS
if _ANSIBLE_ARGS is not None:
buffer = _ANSIBLE_ARGS
else:
# debug overrides to read args from file or cmdline
# Avoid tracebacks when locale is non-utf8
# We control the args and we pass them as utf8
if len(sys.argv) > 1:
if os.path.isfile(sys.argv[1]):
fd = open(sys.argv[1], 'rb')
buffer = fd.read()
fd.close()
else:
buffer = sys.argv[1]
if PY3:
buffer = buffer.encode('utf-8', errors='surrogateescape')
# default case, read from stdin
else:
if PY2:
buffer = sys.stdin.read()
else:
buffer = sys.stdin.buffer.read()
_ANSIBLE_ARGS = buffer
try:
params = json.loads(buffer.decode('utf-8'))
except ValueError:
# This helper is used too early for fail_json to work.
print('\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}')
sys.exit(1)
if PY2:
params = json_dict_unicode_to_bytes(params)
try:
return params['ANSIBLE_MODULE_ARGS']
except KeyError:
# This helper does not have access to fail_json so we have to print
# json output on our own.
print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS in json data from stdin. Unable to figure out what parameters were passed", '
'"failed": true}')
sys.exit(1)
def env_fallback(*args, **kwargs):
''' Load value from environment '''
for arg in args:
if arg in os.environ:
return os.environ[arg]
raise AnsibleFallbackNotFound
def missing_required_lib(library, reason=None, url=None):
hostname = platform.node()
msg = "Failed to import the required Python library (%s) on %s's Python %s." % (library, hostname, sys.executable)
if reason:
msg += " This is required %s." % reason
if url:
msg += " See %s for more info." % url
msg += (" Please read the module documentation and install it in the appropriate location."
" If the required library is installed, but Ansible is using the wrong Python interpreter,"
" please consult the documentation on ansible_python_interpreter")
return msg
class AnsibleFallbackNotFound(Exception):
pass
class AnsibleModule(object):
def __init__(self, argument_spec, bypass_checks=False, no_log=False,
mutually_exclusive=None, required_together=None,
required_one_of=None, add_file_common_args=False,
supports_check_mode=False, required_if=None, required_by=None):
'''
Common code for quickly building an ansible module in Python
(although you can write modules with anything that can return JSON).
See :ref:`developing_modules_general` for a general introduction
and :ref:`developing_program_flow_modules` for more detailed explanation.
'''
self._name = os.path.basename(__file__) # initialize name until we can parse from options
self.argument_spec = argument_spec
self.supports_check_mode = supports_check_mode
self.check_mode = False
self.bypass_checks = bypass_checks
self.no_log = no_log
self.mutually_exclusive = mutually_exclusive
self.required_together = required_together
self.required_one_of = required_one_of
self.required_if = required_if
self.required_by = required_by
self.cleanup_files = []
self._debug = False
self._diff = False
self._socket_path = None
self._shell = None
self._syslog_facility = 'LOG_USER'
self._verbosity = 0
# May be used to set modifications to the environment for any
# run_command invocation
self.run_command_environ_update = {}
self._clean = {}
self._string_conversion_action = ''
self.aliases = {}
self._legal_inputs = []
self._options_context = list()
self._tmpdir = None
if add_file_common_args:
for k, v in FILE_COMMON_ARGUMENTS.items():
if k not in self.argument_spec:
self.argument_spec[k] = v
self._load_params()
self._set_fallbacks()
# append to legal_inputs and then possibly check against them
try:
self.aliases = self._handle_aliases()
except (ValueError, TypeError) as e:
# Use exceptions here because it isn't safe to call fail_json until no_log is processed
print('\n{"failed": true, "msg": "Module alias error: %s"}' % to_native(e))
sys.exit(1)
# Save parameter values that should never be logged
self.no_log_values = set()
self._handle_no_log_values()
# check the locale as set by the current environment, and reset to
# a known valid (LANG=C) if it's an invalid/unavailable locale
self._check_locale()
self._set_internal_properties()
self._check_arguments()
# check exclusive early
if not bypass_checks:
self._check_mutually_exclusive(mutually_exclusive)
self._set_defaults(pre=True)
# This is for backwards compatibility only.
self._CHECK_ARGUMENT_TYPES_DISPATCHER = DEFAULT_TYPE_VALIDATORS
if not bypass_checks:
self._check_required_arguments()
self._check_argument_types()
self._check_argument_values()
self._check_required_together(required_together)
self._check_required_one_of(required_one_of)
self._check_required_if(required_if)
self._check_required_by(required_by)
self._set_defaults(pre=False)
# deal with options sub-spec
self._handle_options()
if not self.no_log:
self._log_invocation()
# finally, make sure we're in a sane working dir
self._set_cwd()
@property
def tmpdir(self):
# if _ansible_tmpdir was not set and we have a remote_tmp,
# the module needs to create it and clean it up once finished.
# otherwise we create our own module tmp dir from the system defaults
if self._tmpdir is None:
basedir = None
if self._remote_tmp is not None:
basedir = os.path.expanduser(os.path.expandvars(self._remote_tmp))
if basedir is not None and not os.path.exists(basedir):
try:
os.makedirs(basedir, mode=0o700)
except (OSError, IOError) as e:
self.warn("Unable to use %s as temporary directory, "
"failing back to system: %s" % (basedir, to_native(e)))
basedir = None
else:
self.warn("Module remote_tmp %s did not exist and was "
"created with a mode of 0700, this may cause"
" issues when running as another user. To "
"avoid this, create the remote_tmp dir with "
"the correct permissions manually" % basedir)
basefile = "ansible-moduletmp-%s-" % time.time()
try:
tmpdir = tempfile.mkdtemp(prefix=basefile, dir=basedir)
except (OSError, IOError) as e:
self.fail_json(
msg="Failed to create remote module tmp path at dir %s "
"with prefix %s: %s" % (basedir, basefile, to_native(e))
)
if not self._keep_remote_files:
atexit.register(shutil.rmtree, tmpdir)
self._tmpdir = tmpdir
return self._tmpdir
def warn(self, warning):
warn(warning)
self.log('[WARNING] %s' % warning)
def deprecate(self, msg, version=None, date=None, collection_name=None):
if version is not None and date is not None:
raise AssertionError("implementation error -- version and date must not both be set")
deprecate(msg, version=version, date=date, collection_name=collection_name)
# For compatibility, we accept that neither version nor date is set,
# and treat that the same as if version would have been set
if date is not None:
self.log('[DEPRECATION WARNING] %s %s' % (msg, date))
else:
self.log('[DEPRECATION WARNING] %s %s' % (msg, version))
def load_file_common_arguments(self, params, path=None):
'''
many modules deal with files, this encapsulates common
options that the file module accepts such that it is directly
available to all modules and they can share code.
Allows to overwrite the path/dest module argument by providing path.
'''
if path is None:
path = params.get('path', params.get('dest', None))
if path is None:
return {}
else:
path = os.path.expanduser(os.path.expandvars(path))
b_path = to_bytes(path, errors='surrogate_or_strict')
# if the path is a symlink, and we're following links, get
# the target of the link instead for testing
if params.get('follow', False) and os.path.islink(b_path):
b_path = os.path.realpath(b_path)
path = to_native(b_path)
mode = params.get('mode', None)
owner = params.get('owner', None)
group = params.get('group', None)
# selinux related options
seuser = params.get('seuser', None)
serole = params.get('serole', None)
setype = params.get('setype', None)
selevel = params.get('selevel', None)
secontext = [seuser, serole, setype]
if self.selinux_mls_enabled():
secontext.append(selevel)
default_secontext = self.selinux_default_context(path)
for i in range(len(default_secontext)):
if i is not None and secontext[i] == '_default':
secontext[i] = default_secontext[i]
attributes = params.get('attributes', None)
return dict(
path=path, mode=mode, owner=owner, group=group,
seuser=seuser, serole=serole, setype=setype,
selevel=selevel, secontext=secontext, attributes=attributes,
)
# Detect whether using selinux that is MLS-aware.
# While this means you can set the level/range with
# selinux.lsetfilecon(), it may or may not mean that you
# will get the selevel as part of the context returned
# by selinux.lgetfilecon().
def selinux_mls_enabled(self):
if not HAVE_SELINUX:
return False
if selinux.is_selinux_mls_enabled() == 1:
return True
else:
return False
def selinux_enabled(self):
if not HAVE_SELINUX:
seenabled = self.get_bin_path('selinuxenabled')
if seenabled is not None:
(rc, out, err) = self.run_command(seenabled)
if rc == 0:
self.fail_json(msg="Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!")
return False
if selinux.is_selinux_enabled() == 1:
return True
else:
return False
# Determine whether we need a placeholder for selevel/mls
def selinux_initial_context(self):
context = [None, None, None]
if self.selinux_mls_enabled():
context.append(None)
return context
# If selinux fails to find a default, return an array of None
def selinux_default_context(self, path, mode=0):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.matchpathcon(to_native(path, errors='surrogate_or_strict'), mode)
except OSError:
return context
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def selinux_context(self, path):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict'))
except OSError as e:
if e.errno == errno.ENOENT:
self.fail_json(path=path, msg='path %s does not exist' % path)
else:
self.fail_json(path=path, msg='failed to retrieve selinux context')
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def user_and_group(self, path, expand=True):
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
st = os.lstat(b_path)
uid = st.st_uid
gid = st.st_gid
return (uid, gid)
def find_mount_point(self, path):
path_is_bytes = False
if isinstance(path, binary_type):
path_is_bytes = True
b_path = os.path.realpath(to_bytes(os.path.expanduser(os.path.expandvars(path)), errors='surrogate_or_strict'))
while not os.path.ismount(b_path):
b_path = os.path.dirname(b_path)
if path_is_bytes:
return b_path
return to_text(b_path, errors='surrogate_or_strict')
def is_special_selinux_path(self, path):
"""
Returns a tuple containing (True, selinux_context) if the given path is on a
NFS or other 'special' fs mount point, otherwise the return will be (False, None).
"""
try:
f = open('/proc/mounts', 'r')
mount_data = f.readlines()
f.close()
except Exception:
return (False, None)
path_mount_point = self.find_mount_point(path)
for line in mount_data:
(device, mount_point, fstype, options, rest) = line.split(' ', 4)
if to_bytes(path_mount_point) == to_bytes(mount_point):
for fs in self._selinux_special_fs:
if fs in fstype:
special_context = self.selinux_context(path_mount_point)
return (True, special_context)
return (False, None)
def set_default_selinux_context(self, path, changed):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
context = self.selinux_default_context(path)
return self.set_context_if_different(path, context, False)
def set_context_if_different(self, path, context, changed, diff=None):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
if self.check_file_absent_if_check_mode(path):
return True
cur_context = self.selinux_context(path)
new_context = list(cur_context)
# Iterate over the current context instead of the
# argument context, which may have selevel.
(is_special_se, sp_context) = self.is_special_selinux_path(path)
if is_special_se:
new_context = sp_context
else:
for i in range(len(cur_context)):
if len(context) > i:
if context[i] is not None and context[i] != cur_context[i]:
new_context[i] = context[i]
elif context[i] is None:
new_context[i] = cur_context[i]
if cur_context != new_context:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['secontext'] = cur_context
if 'after' not in diff:
diff['after'] = {}
diff['after']['secontext'] = new_context
try:
if self.check_mode:
return True
rc = selinux.lsetfilecon(to_native(path), ':'.join(new_context))
except OSError as e:
self.fail_json(path=path, msg='invalid selinux context: %s' % to_native(e),
new_context=new_context, cur_context=cur_context, input_was=context)
if rc != 0:
self.fail_json(path=path, msg='set selinux context failed')
changed = True
return changed
def set_owner_if_different(self, path, owner, changed, diff=None, expand=True):
if owner is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
uid = int(owner)
except ValueError:
try:
uid = pwd.getpwnam(owner).pw_uid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner)
if orig_uid != uid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['owner'] = orig_uid
if 'after' not in diff:
diff['after'] = {}
diff['after']['owner'] = uid
if self.check_mode:
return True
try:
os.lchown(b_path, uid, -1)
except (IOError, OSError) as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: %s' % (to_text(e)))
changed = True
return changed
def set_group_if_different(self, path, group, changed, diff=None, expand=True):
if group is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
gid = int(group)
except ValueError:
try:
gid = grp.getgrnam(group).gr_gid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group)
if orig_gid != gid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['group'] = orig_gid
if 'after' not in diff:
diff['after'] = {}
diff['after']['group'] = gid
if self.check_mode:
return True
try:
os.lchown(b_path, -1, gid)
except OSError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed')
changed = True
return changed
def set_mode_if_different(self, path, mode, changed, diff=None, expand=True):
if mode is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
path_stat = os.lstat(b_path)
if self.check_file_absent_if_check_mode(b_path):
return True
if not isinstance(mode, int):
try:
mode = int(mode, 8)
except Exception:
try:
mode = self._symbolic_mode_to_octal(path_stat, mode)
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path,
msg="mode must be in octal or symbolic form",
details=to_native(e))
if mode != stat.S_IMODE(mode):
# prevent mode from having extra info or being an invalid long number
path = to_text(b_path)
self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode)
prev_mode = stat.S_IMODE(path_stat.st_mode)
if prev_mode != mode:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['mode'] = '0%03o' % prev_mode
if 'after' not in diff:
diff['after'] = {}
diff['after']['mode'] = '0%03o' % mode
if self.check_mode:
return True
# FIXME: comparison against string above will cause this to be executed
# every time
try:
if hasattr(os, 'lchmod'):
os.lchmod(b_path, mode)
else:
if not os.path.islink(b_path):
os.chmod(b_path, mode)
else:
# Attempt to set the perms of the symlink but be
# careful not to change the perms of the underlying
# file while trying
underlying_stat = os.stat(b_path)
os.chmod(b_path, mode)
new_underlying_stat = os.stat(b_path)
if underlying_stat.st_mode != new_underlying_stat.st_mode:
os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))
except OSError as e:
if os.path.islink(b_path) and e.errno in (
errno.EACCES, # can't access symlink in sticky directory (stat)
errno.EPERM, # can't set mode on symbolic links (chmod)
errno.EROFS, # can't set mode on read-only filesystem
):
pass
elif e.errno in (errno.ENOENT, errno.ELOOP): # Can't set mode on broken symbolic links
pass
else:
raise
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chmod failed', details=to_native(e),
exception=traceback.format_exc())
path_stat = os.lstat(b_path)
new_mode = stat.S_IMODE(path_stat.st_mode)
if new_mode != prev_mode:
changed = True
return changed
def set_attributes_if_different(self, path, attributes, changed, diff=None, expand=True):
if attributes is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
existing = self.get_file_attributes(b_path, include_version=False)
attr_mod = '='
if attributes.startswith(('-', '+')):
attr_mod = attributes[0]
attributes = attributes[1:]
if existing.get('attr_flags', '') != attributes or attr_mod == '-':
attrcmd = self.get_bin_path('chattr')
if attrcmd:
attrcmd = [attrcmd, '%s%s' % (attr_mod, attributes), b_path]
changed = True
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['attributes'] = existing.get('attr_flags')
if 'after' not in diff:
diff['after'] = {}
diff['after']['attributes'] = '%s%s' % (attr_mod, attributes)
if not self.check_mode:
try:
rc, out, err = self.run_command(attrcmd)
if rc != 0 or err:
raise Exception("Error while setting attributes: %s" % (out + err))
except Exception as e:
self.fail_json(path=to_text(b_path), msg='chattr failed',
details=to_native(e), exception=traceback.format_exc())
return changed
def get_file_attributes(self, path, include_version=True):
output = {}
attrcmd = self.get_bin_path('lsattr', False)
if attrcmd:
flags = '-vd' if include_version else '-d'
attrcmd = [attrcmd, flags, path]
try:
rc, out, err = self.run_command(attrcmd)
if rc == 0:
res = out.split()
attr_flags_idx = 0
if include_version:
attr_flags_idx = 1
output['version'] = res[0].strip()
output['attr_flags'] = res[attr_flags_idx].replace('-', '').strip()
output['attributes'] = format_attributes(output['attr_flags'])
except Exception:
pass
return output
@classmethod
def _symbolic_mode_to_octal(cls, path_stat, symbolic_mode):
"""
This enables symbolic chmod string parsing as stated in the chmod man-page
This includes things like: "u=rw-x+X,g=r-x+X,o=r-x+X"
"""
new_mode = stat.S_IMODE(path_stat.st_mode)
# Now parse all symbolic modes
for mode in symbolic_mode.split(','):
# Per single mode. This always contains a '+', '-' or '='
# Split it on that
permlist = MODE_OPERATOR_RE.split(mode)
# And find all the operators
opers = MODE_OPERATOR_RE.findall(mode)
# The user(s) the mode applies to is the first element in the
# 'permlist' list. Take that and remove it from the list.
# An empty user or 'a' means 'all'.
users = permlist.pop(0)
use_umask = (users == '')
if users == 'a' or users == '':
users = 'ugo'
# Check if there are illegal characters in the user list
# They can end up in 'users' because they are not split
if USERS_RE.match(users):
raise ValueError("bad symbolic permission for mode: %s" % mode)
# Now we have two lists of equal length: one contains the requested
# permissions and the other the corresponding operators.
for idx, perms in enumerate(permlist):
# Check if there are illegal characters in the permissions
if PERMS_RE.match(perms):
raise ValueError("bad symbolic permission for mode: %s" % mode)
for user in users:
mode_to_apply = cls._get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask)
new_mode = cls._apply_operation_to_mode(user, opers[idx], mode_to_apply, new_mode)
return new_mode
@staticmethod
def _apply_operation_to_mode(user, operator, mode_to_apply, current_mode):
if operator == '=':
if user == 'u':
mask = stat.S_IRWXU | stat.S_ISUID
elif user == 'g':
mask = stat.S_IRWXG | stat.S_ISGID
elif user == 'o':
mask = stat.S_IRWXO | stat.S_ISVTX
# mask out u, g, or o permissions from current_mode and apply new permissions
inverse_mask = mask ^ PERM_BITS
new_mode = (current_mode & inverse_mask) | mode_to_apply
elif operator == '+':
new_mode = current_mode | mode_to_apply
elif operator == '-':
new_mode = current_mode - (current_mode & mode_to_apply)
return new_mode
@staticmethod
def _get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask):
prev_mode = stat.S_IMODE(path_stat.st_mode)
is_directory = stat.S_ISDIR(path_stat.st_mode)
has_x_permissions = (prev_mode & EXEC_PERM_BITS) > 0
apply_X_permission = is_directory or has_x_permissions
# Get the umask, if the 'user' part is empty, the effect is as if (a) were
# given, but bits that are set in the umask are not affected.
# We also need the "reversed umask" for masking
umask = os.umask(0)
os.umask(umask)
rev_umask = umask ^ PERM_BITS
# Permission bits constants documented at:
# http://docs.python.org/2/library/stat.html#stat.S_ISUID
if apply_X_permission:
X_perms = {
'u': {'X': stat.S_IXUSR},
'g': {'X': stat.S_IXGRP},
'o': {'X': stat.S_IXOTH},
}
else:
X_perms = {
'u': {'X': 0},
'g': {'X': 0},
'o': {'X': 0},
}
user_perms_to_modes = {
'u': {
'r': rev_umask & stat.S_IRUSR if use_umask else stat.S_IRUSR,
'w': rev_umask & stat.S_IWUSR if use_umask else stat.S_IWUSR,
'x': rev_umask & stat.S_IXUSR if use_umask else stat.S_IXUSR,
's': stat.S_ISUID,
't': 0,
'u': prev_mode & stat.S_IRWXU,
'g': (prev_mode & stat.S_IRWXG) << 3,
'o': (prev_mode & stat.S_IRWXO) << 6},
'g': {
'r': rev_umask & stat.S_IRGRP if use_umask else stat.S_IRGRP,
'w': rev_umask & stat.S_IWGRP if use_umask else stat.S_IWGRP,
'x': rev_umask & stat.S_IXGRP if use_umask else stat.S_IXGRP,
's': stat.S_ISGID,
't': 0,
'u': (prev_mode & stat.S_IRWXU) >> 3,
'g': prev_mode & stat.S_IRWXG,
'o': (prev_mode & stat.S_IRWXO) << 3},
'o': {
'r': rev_umask & stat.S_IROTH if use_umask else stat.S_IROTH,
'w': rev_umask & stat.S_IWOTH if use_umask else stat.S_IWOTH,
'x': rev_umask & stat.S_IXOTH if use_umask else stat.S_IXOTH,
's': 0,
't': stat.S_ISVTX,
'u': (prev_mode & stat.S_IRWXU) >> 6,
'g': (prev_mode & stat.S_IRWXG) >> 3,
'o': prev_mode & stat.S_IRWXO},
}
# Insert X_perms into user_perms_to_modes
for key, value in X_perms.items():
user_perms_to_modes[key].update(value)
def or_reduce(mode, perm):
return mode | user_perms_to_modes[user][perm]
return reduce(or_reduce, perms, 0)
def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True):
# set modes owners and context as needed
changed = self.set_context_if_different(
file_args['path'], file_args['secontext'], changed, diff
)
changed = self.set_owner_if_different(
file_args['path'], file_args['owner'], changed, diff, expand
)
changed = self.set_group_if_different(
file_args['path'], file_args['group'], changed, diff, expand
)
changed = self.set_mode_if_different(
file_args['path'], file_args['mode'], changed, diff, expand
)
changed = self.set_attributes_if_different(
file_args['path'], file_args['attributes'], changed, diff, expand
)
return changed
def check_file_absent_if_check_mode(self, file_path):
return self.check_mode and not os.path.exists(file_path)
def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def set_file_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def add_path_info(self, kwargs):
'''
for results that are files, supplement the info about the file
in the return path with stats about the file path.
'''
path = kwargs.get('path', kwargs.get('dest', None))
if path is None:
return kwargs
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.exists(b_path):
(uid, gid) = self.user_and_group(path)
kwargs['uid'] = uid
kwargs['gid'] = gid
try:
user = pwd.getpwuid(uid)[0]
except KeyError:
user = str(uid)
try:
group = grp.getgrgid(gid)[0]
except KeyError:
group = str(gid)
kwargs['owner'] = user
kwargs['group'] = group
st = os.lstat(b_path)
kwargs['mode'] = '0%03o' % stat.S_IMODE(st[stat.ST_MODE])
# secontext not yet supported
if os.path.islink(b_path):
kwargs['state'] = 'link'
elif os.path.isdir(b_path):
kwargs['state'] = 'directory'
elif os.stat(b_path).st_nlink > 1:
kwargs['state'] = 'hard'
else:
kwargs['state'] = 'file'
if HAVE_SELINUX and self.selinux_enabled():
kwargs['secontext'] = ':'.join(self.selinux_context(path))
kwargs['size'] = st[stat.ST_SIZE]
return kwargs
def _check_locale(self):
'''
Uses the locale module to test the currently set locale
(per the LANG and LC_CTYPE environment settings)
'''
try:
# setting the locale to '' uses the default locale
# as it would be returned by locale.getdefaultlocale()
locale.setlocale(locale.LC_ALL, '')
except locale.Error:
# fallback to the 'C' locale, which may cause unicode
# issues but is preferable to simply failing because
# of an unknown locale
locale.setlocale(locale.LC_ALL, 'C')
os.environ['LANG'] = 'C'
os.environ['LC_ALL'] = 'C'
os.environ['LC_MESSAGES'] = 'C'
except Exception as e:
self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" %
to_native(e), exception=traceback.format_exc())
def _handle_aliases(self, spec=None, param=None, option_prefix=''):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
# this uses exceptions as it happens before we can safely call fail_json
alias_warnings = []
alias_results, self._legal_inputs = handle_aliases(spec, param, alias_warnings=alias_warnings)
for option, alias in alias_warnings:
warn('Both option %s and its alias %s are set.' % (option_prefix + option, option_prefix + alias))
deprecated_aliases = []
for i in spec.keys():
if 'deprecated_aliases' in spec[i].keys():
for alias in spec[i]['deprecated_aliases']:
deprecated_aliases.append(alias)
for deprecation in deprecated_aliases:
if deprecation['name'] in param.keys():
deprecate("Alias '%s' is deprecated. See the module docs for more information" % deprecation['name'],
version=deprecation.get('version'), date=deprecation.get('date'),
collection_name=deprecation.get('collection_name'))
return alias_results
def _handle_no_log_values(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
self.no_log_values.update(list_no_log_values(spec, param))
except TypeError as te:
self.fail_json(msg="Failure when processing no_log parameters. Module invocation will be hidden. "
"%s" % to_native(te), invocation={'module_args': 'HIDDEN DUE TO FAILURE'})
for message in list_deprecations(spec, param):
deprecate(message['msg'], version=message.get('version'), date=message.get('date'),
collection_name=message.get('collection_name'))
def _set_internal_properties(self, argument_spec=None, module_parameters=None):
if argument_spec is None:
argument_spec = self.argument_spec
if module_parameters is None:
module_parameters = self.params
for k in PASS_VARS:
# handle setting internal properties from internal ansible vars
param_key = '_ansible_%s' % k
if param_key in module_parameters:
if k in PASS_BOOLS:
setattr(self, PASS_VARS[k][0], self.boolean(module_parameters[param_key]))
else:
setattr(self, PASS_VARS[k][0], module_parameters[param_key])
# clean up internal top level params:
if param_key in self.params:
del self.params[param_key]
else:
# use defaults if not already set
if not hasattr(self, PASS_VARS[k][0]):
setattr(self, PASS_VARS[k][0], PASS_VARS[k][1])
def _check_arguments(self, spec=None, param=None, legal_inputs=None):
unsupported_parameters = set()
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
if legal_inputs is None:
legal_inputs = self._legal_inputs
unsupported_parameters = get_unsupported_parameters(spec, param, legal_inputs)
if unsupported_parameters:
msg = "Unsupported parameters for (%s) module: %s" % (self._name, ', '.join(sorted(list(unsupported_parameters))))
if self._options_context:
msg += " found in %s." % " -> ".join(self._options_context)
supported_parameters = list()
for key in sorted(spec.keys()):
if 'aliases' in spec[key] and spec[key]['aliases']:
supported_parameters.append("%s (%s)" % (key, ', '.join(sorted(spec[key]['aliases']))))
else:
supported_parameters.append(key)
msg += " Supported parameters include: %s" % (', '.join(supported_parameters))
self.fail_json(msg=msg)
if self.check_mode and not self.supports_check_mode:
self.exit_json(skipped=True, msg="remote module (%s) does not support check mode" % self._name)
def _count_terms(self, check, param=None):
if param is None:
param = self.params
return count_terms(check, param)
def _check_mutually_exclusive(self, spec, param=None):
if param is None:
param = self.params
try:
check_mutually_exclusive(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_one_of(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_one_of(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_together(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_together(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_by(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_by(spec, param)
except TypeError as e:
self.fail_json(msg=to_native(e))
def _check_required_arguments(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
check_required_arguments(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_if(self, spec, param=None):
''' ensure that parameters which are conditionally required are present '''
if spec is None:
return
if param is None:
param = self.params
try:
check_required_if(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_argument_values(self, spec=None, param=None):
''' ensure all arguments have the requested values, and there are no stray arguments '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
choices = v.get('choices', None)
if choices is None:
continue
if isinstance(choices, SEQUENCETYPE) and not isinstance(choices, (binary_type, text_type)):
if k in param:
# Allow one or more when type='list' param with choices
if isinstance(param[k], list):
diff_list = ", ".join([item for item in param[k] if item not in choices])
if diff_list:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one or more of: %s. Got no match for: %s" % (k, choices_str, diff_list)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
elif param[k] not in choices:
# PyYaml converts certain strings to bools. If we can unambiguously convert back, do so before checking
# the value. If we can't figure this out, module author is responsible.
lowered_choices = None
if param[k] == 'False':
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_FALSE.intersection(choices)
if len(overlap) == 1:
# Extract from a set
(param[k],) = overlap
if param[k] == 'True':
if lowered_choices is None:
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_TRUE.intersection(choices)
if len(overlap) == 1:
(param[k],) = overlap
if param[k] not in choices:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one of: %s, got: %s" % (k, choices_str, param[k])
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
else:
msg = "internal error: choices for argument %s are not iterable: %s" % (k, choices)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def safe_eval(self, value, locals=None, include_exceptions=False):
return safe_eval(value, locals, include_exceptions)
def _check_type_str(self, value, param=None, prefix=''):
opts = {
'error': False,
'warn': False,
'ignore': True
}
# Ignore, warn, or error when converting to a string.
allow_conversion = opts.get(self._string_conversion_action, True)
try:
return check_type_str(value, allow_conversion)
except TypeError:
common_msg = 'quote the entire value to ensure it does not change.'
from_msg = '{0!r}'.format(value)
to_msg = '{0!r}'.format(to_text(value))
if param is not None:
if prefix:
param = '{0}{1}'.format(prefix, param)
from_msg = '{0}: {1!r}'.format(param, value)
to_msg = '{0}: {1!r}'.format(param, to_text(value))
if self._string_conversion_action == 'error':
msg = common_msg.capitalize()
raise TypeError(to_native(msg))
elif self._string_conversion_action == 'warn':
msg = ('The value "{0}" (type {1.__class__.__name__}) was converted to "{2}" (type string). '
'If this does not look like what you expect, {3}').format(from_msg, value, to_msg, common_msg)
self.warn(to_native(msg))
return to_native(value, errors='surrogate_or_strict')
def _check_type_list(self, value):
return check_type_list(value)
def _check_type_dict(self, value):
return check_type_dict(value)
def _check_type_bool(self, value):
return check_type_bool(value)
def _check_type_int(self, value):
return check_type_int(value)
def _check_type_float(self, value):
return check_type_float(value)
def _check_type_path(self, value):
return check_type_path(value)
def _check_type_jsonarg(self, value):
return check_type_jsonarg(value)
def _check_type_raw(self, value):
return check_type_raw(value)
def _check_type_bytes(self, value):
return check_type_bytes(value)
def _check_type_bits(self, value):
return check_type_bits(value)
def _handle_options(self, argument_spec=None, params=None, prefix=''):
''' deal with options to create sub spec '''
if argument_spec is None:
argument_spec = self.argument_spec
if params is None:
params = self.params
for (k, v) in argument_spec.items():
wanted = v.get('type', None)
if wanted == 'dict' or (wanted == 'list' and v.get('elements', '') == 'dict'):
spec = v.get('options', None)
if v.get('apply_defaults', False):
if spec is not None:
if params.get(k) is None:
params[k] = {}
else:
continue
elif spec is None or k not in params or params[k] is None:
continue
self._options_context.append(k)
if isinstance(params[k], dict):
elements = [params[k]]
else:
elements = params[k]
for idx, param in enumerate(elements):
if not isinstance(param, dict):
self.fail_json(msg="value of %s must be of type dict or list of dict" % k)
new_prefix = prefix + k
if wanted == 'list':
new_prefix += '[%d]' % idx
new_prefix += '.'
self._set_fallbacks(spec, param)
options_aliases = self._handle_aliases(spec, param, option_prefix=new_prefix)
options_legal_inputs = list(spec.keys()) + list(options_aliases.keys())
self._set_internal_properties(spec, param)
self._check_arguments(spec, param, options_legal_inputs)
# check exclusive early
if not self.bypass_checks:
self._check_mutually_exclusive(v.get('mutually_exclusive', None), param)
self._set_defaults(pre=True, spec=spec, param=param)
if not self.bypass_checks:
self._check_required_arguments(spec, param)
self._check_argument_types(spec, param, new_prefix)
self._check_argument_values(spec, param)
self._check_required_together(v.get('required_together', None), param)
self._check_required_one_of(v.get('required_one_of', None), param)
self._check_required_if(v.get('required_if', None), param)
self._check_required_by(v.get('required_by', None), param)
self._set_defaults(pre=False, spec=spec, param=param)
# handle multi level options (sub argspec)
self._handle_options(spec, param, new_prefix)
self._options_context.pop()
def _get_wanted_type(self, wanted, k):
# Use the private method for 'str' type to handle the string conversion warning.
if wanted == 'str':
type_checker, wanted = self._check_type_str, 'str'
else:
type_checker, wanted = get_type_validator(wanted)
if type_checker is None:
self.fail_json(msg="implementation error: unknown type %s requested for %s" % (wanted, k))
return type_checker, wanted
def _handle_elements(self, wanted, param, values):
type_checker, wanted_name = self._get_wanted_type(wanted, param)
validated_params = []
# Get param name for strings so we can later display this value in a useful error message if needed
# Only pass 'kwargs' to our checkers and ignore custom callable checkers
kwargs = {}
if wanted_name == 'str' and isinstance(wanted, string_types):
if isinstance(param, string_types):
kwargs['param'] = param
elif isinstance(param, dict):
kwargs['param'] = list(param.keys())[0]
for value in values:
try:
validated_params.append(type_checker(value, **kwargs))
except (TypeError, ValueError) as e:
msg = "Elements value for option %s" % param
if self._options_context:
msg += " found in '%s'" % " -> ".join(self._options_context)
msg += " is of type %s and we were unable to convert to %s: %s" % (type(value), wanted_name, to_native(e))
self.fail_json(msg=msg)
return validated_params
def _check_argument_types(self, spec=None, param=None, prefix=''):
''' ensure all arguments have the requested type '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
wanted = v.get('type', None)
if k not in param:
continue
value = param[k]
if value is None:
continue
type_checker, wanted_name = self._get_wanted_type(wanted, k)
# Get param name for strings so we can later display this value in a useful error message if needed
# Only pass 'kwargs' to our checkers and ignore custom callable checkers
kwargs = {}
if wanted_name == 'str' and isinstance(wanted, string_types):
kwargs['param'] = list(param.keys())[0]
# Get the name of the parent key if this is a nested option
if prefix:
kwargs['prefix'] = prefix
try:
param[k] = type_checker(value, **kwargs)
wanted_elements = v.get('elements', None)
if wanted_elements:
if wanted != 'list' or not isinstance(param[k], list):
msg = "Invalid type %s for option '%s'" % (wanted_name, param)
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += ", elements value check is supported only with 'list' type"
self.fail_json(msg=msg)
param[k] = self._handle_elements(wanted_elements, k, param[k])
except (TypeError, ValueError) as e:
msg = "argument %s is of type %s" % (k, type(value))
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += " and we were unable to convert to %s: %s" % (wanted_name, to_native(e))
self.fail_json(msg=msg)
def _set_defaults(self, pre=True, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
default = v.get('default', None)
if pre is True:
# this prevents setting defaults on required items
if default is not None and k not in param:
param[k] = default
else:
# make sure things without a default still get set None
if k not in param:
param[k] = default
def _set_fallbacks(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
fallback = v.get('fallback', (None,))
fallback_strategy = fallback[0]
fallback_args = []
fallback_kwargs = {}
if k not in param and fallback_strategy is not None:
for item in fallback[1:]:
if isinstance(item, dict):
fallback_kwargs = item
else:
fallback_args = item
try:
param[k] = fallback_strategy(*fallback_args, **fallback_kwargs)
except AnsibleFallbackNotFound:
continue
def _load_params(self):
''' read the input and set the params attribute.
This method is for backwards compatibility. The guts of the function
were moved out in 2.1 so that custom modules could read the parameters.
'''
# debug overrides to read args from file or cmdline
self.params = _load_params()
def _log_to_syslog(self, msg):
if HAS_SYSLOG:
try:
module = 'ansible-%s' % self._name
facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER)
syslog.openlog(str(module), 0, facility)
syslog.syslog(syslog.LOG_INFO, msg)
except TypeError as e:
self.fail_json(
msg='Failed to log to syslog (%s). To proceed anyway, '
'disable syslog logging by setting no_target_syslog '
'to True in your Ansible config.' % to_native(e),
exception=traceback.format_exc(),
msg_to_log=msg,
)
def debug(self, msg):
if self._debug:
self.log('[debug] %s' % msg)
def log(self, msg, log_args=None):
if not self.no_log:
if log_args is None:
log_args = dict()
module = 'ansible-%s' % self._name
if isinstance(module, binary_type):
module = module.decode('utf-8', 'replace')
# 6655 - allow for accented characters
if not isinstance(msg, (binary_type, text_type)):
raise TypeError("msg should be a string (got %s)" % type(msg))
# We want journal to always take text type
# syslog takes bytes on py2, text type on py3
if isinstance(msg, binary_type):
journal_msg = remove_values(msg.decode('utf-8', 'replace'), self.no_log_values)
else:
# TODO: surrogateescape is a danger here on Py3
journal_msg = remove_values(msg, self.no_log_values)
if PY3:
syslog_msg = journal_msg
else:
syslog_msg = journal_msg.encode('utf-8', 'replace')
if has_journal:
journal_args = [("MODULE", os.path.basename(__file__))]
for arg in log_args:
journal_args.append((arg.upper(), str(log_args[arg])))
try:
if HAS_SYSLOG:
# If syslog_facility specified, it needs to convert
# from the facility name to the facility code, and
# set it as SYSLOG_FACILITY argument of journal.send()
facility = getattr(syslog,
self._syslog_facility,
syslog.LOG_USER) >> 3
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
SYSLOG_FACILITY=facility,
**dict(journal_args))
else:
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
**dict(journal_args))
except IOError:
# fall back to syslog since logging to journal failed
self._log_to_syslog(syslog_msg)
else:
self._log_to_syslog(syslog_msg)
def _log_invocation(self):
''' log that ansible ran the module '''
# TODO: generalize a separate log function and make log_invocation use it
# Sanitize possible password argument when logging.
log_args = dict()
for param in self.params:
canon = self.aliases.get(param, param)
arg_opts = self.argument_spec.get(canon, {})
no_log = arg_opts.get('no_log', None)
# try to proactively capture password/passphrase fields
if no_log is None and PASSWORD_MATCH.search(param):
log_args[param] = 'NOT_LOGGING_PASSWORD'
self.warn('Module did not set no_log for %s' % param)
elif self.boolean(no_log):
log_args[param] = 'NOT_LOGGING_PARAMETER'
else:
param_val = self.params[param]
if not isinstance(param_val, (text_type, binary_type)):
param_val = str(param_val)
elif isinstance(param_val, text_type):
param_val = param_val.encode('utf-8')
log_args[param] = heuristic_log_sanitize(param_val, self.no_log_values)
msg = ['%s=%s' % (to_native(arg), to_native(val)) for arg, val in log_args.items()]
if msg:
msg = 'Invoked with %s' % ' '.join(msg)
else:
msg = 'Invoked'
self.log(msg, log_args=log_args)
def _set_cwd(self):
try:
cwd = os.getcwd()
if not os.access(cwd, os.F_OK | os.R_OK):
raise Exception()
return cwd
except Exception:
# we don't have access to the cwd, probably because of sudo.
# Try and move to a neutral location to prevent errors
for cwd in [self.tmpdir, os.path.expandvars('$HOME'), tempfile.gettempdir()]:
try:
if os.access(cwd, os.F_OK | os.R_OK):
os.chdir(cwd)
return cwd
except Exception:
pass
# we won't error here, as it may *not* be a problem,
# and we don't want to break modules unnecessarily
return None
def get_bin_path(self, arg, required=False, opt_dirs=None):
'''
Find system executable in PATH.
:param arg: The executable to find.
:param required: if executable is not found and required is ``True``, fail_json
:param opt_dirs: optional list of directories to search in addition to ``PATH``
:returns: if found return full path; otherwise return None
'''
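# Hedged usage sketch (editor's note, not in the original source):
#   git_path = module.get_bin_path('git', required=True, opt_dirs=['/usr/local/bin'])
# fails the module via fail_json when 'git' cannot be located.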
bin_path = None
try:
bin_path = get_bin_path(arg=arg, opt_dirs=opt_dirs)
except ValueError as e:
if required:
self.fail_json(msg=to_text(e))
else:
return bin_path
return bin_path
def boolean(self, arg):
'''Convert the argument to a boolean'''
if arg is None:
return arg
try:
return boolean(arg)
except TypeError as e:
self.fail_json(msg=to_native(e))
def jsonify(self, data):
try:
return jsonify(data)
except UnicodeError as e:
self.fail_json(msg=to_text(e))
def from_json(self, data):
return json.loads(data)
def add_cleanup_file(self, path):
if path not in self.cleanup_files:
self.cleanup_files.append(path)
def do_cleanup_files(self):
for path in self.cleanup_files:
self.cleanup(path)
def _return_formatted(self, kwargs):
self.add_path_info(kwargs)
if 'invocation' not in kwargs:
kwargs['invocation'] = {'module_args': self.params}
if 'warnings' in kwargs:
if isinstance(kwargs['warnings'], list):
for w in kwargs['warnings']:
self.warn(w)
else:
self.warn(kwargs['warnings'])
warnings = get_warning_messages()
if warnings:
kwargs['warnings'] = warnings
if 'deprecations' in kwargs:
if isinstance(kwargs['deprecations'], list):
for d in kwargs['deprecations']:
if isinstance(d, SEQUENCETYPE) and len(d) == 2:
self.deprecate(d[0], version=d[1])
elif isinstance(d, Mapping):
self.deprecate(d['msg'], version=d.get('version'), date=d.get('date'),
collection_name=d.get('collection_name'))
else:
self.deprecate(d) # pylint: disable=ansible-deprecated-no-version
else:
self.deprecate(kwargs['deprecations']) # pylint: disable=ansible-deprecated-no-version
deprecations = get_deprecation_messages()
if deprecations:
kwargs['deprecations'] = deprecations
kwargs = remove_values(kwargs, self.no_log_values)
print('\n%s' % self.jsonify(kwargs))
def exit_json(self, **kwargs):
''' return from the module, without error '''
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(0)
def fail_json(self, msg, **kwargs):
''' return from the module, with an error message '''
kwargs['failed'] = True
kwargs['msg'] = msg
# Add traceback if debug or high verbosity and it is missing
# NOTE: Badly named as exception, it really always has been a traceback
if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3):
if PY2:
# On Python 2 this is the last (stack frame) exception and as such may be unrelated to the failure
kwargs['exception'] = 'WARNING: The below traceback may *not* be related to the actual failure.\n' +\
''.join(traceback.format_tb(sys.exc_info()[2]))
else:
kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2]))
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(1)
def fail_on_missing_params(self, required_params=None):
if not required_params:
return
try:
check_missing_parameters(self.params, required_params)
except TypeError as e:
self.fail_json(msg=to_native(e))
def digest_from_file(self, filename, algorithm):
''' Return hex digest of local file for a digest_method specified by name, or None if file is not present. '''
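# Editor's illustration (not in the original source):
#   module.digest_from_file('/etc/hosts', 'sha256')
# returns the hex digest as a string, or None when the file does not exist.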
b_filename = to_bytes(filename, errors='surrogate_or_strict')
if not os.path.exists(b_filename):
return None
if os.path.isdir(b_filename):
self.fail_json(msg="attempted to take checksum of directory: %s" % filename)
# preserve old behaviour where the third parameter was a hash algorithm object
if hasattr(algorithm, 'hexdigest'):
digest_method = algorithm
else:
try:
digest_method = AVAILABLE_HASH_ALGORITHMS[algorithm]()
except KeyError:
self.fail_json(msg="Could not hash file '%s' with algorithm '%s'. Available algorithms: %s" %
(filename, algorithm, ', '.join(AVAILABLE_HASH_ALGORITHMS)))
blocksize = 64 * 1024
infile = open(os.path.realpath(b_filename), 'rb')
block = infile.read(blocksize)
while block:
digest_method.update(block)
block = infile.read(blocksize)
infile.close()
return digest_method.hexdigest()
def md5(self, filename):
''' Return MD5 hex digest of local file using digest_from_file().
Do not use this function unless you have no other choice for:
1) Optional backwards compatibility
2) Compatibility with a third party protocol
This function will not work on systems complying with FIPS-140-2.
Most uses of this function can use the module.sha1 function instead.
'''
if 'md5' not in AVAILABLE_HASH_ALGORITHMS:
raise ValueError('MD5 not available. Possibly running in FIPS mode')
return self.digest_from_file(filename, 'md5')
def sha1(self, filename):
''' Return SHA1 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha1')
def sha256(self, filename):
''' Return SHA-256 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha256')
def backup_local(self, fn):
'''make a date-marked backup of the specified file, return True or False on success or failure'''
backupdest = ''
if os.path.exists(fn):
# backups named basename.PID.YYYY-MM-DD@HH:MM:SS~
ext = time.strftime("%Y-%m-%d@%H:%M:%S~", time.localtime(time.time()))
backupdest = '%s.%s.%s' % (fn, os.getpid(), ext)
try:
self.preserved_copy(fn, backupdest)
except (shutil.Error, IOError) as e:
self.fail_json(msg='Could not make backup of %s to %s: %s' % (fn, backupdest, to_native(e)))
return backupdest
def cleanup(self, tmpfile):
if os.path.exists(tmpfile):
try:
os.unlink(tmpfile)
except OSError as e:
sys.stderr.write("could not cleanup %s: %s" % (tmpfile, to_native(e)))
def preserved_copy(self, src, dest):
"""Copy a file with preserved ownership, permissions and context"""
# shutil.copy2(src, dst)
# Similar to shutil.copy(), but metadata is copied as well - in fact,
# this is just shutil.copy() followed by copystat(). This is similar
# to the Unix command cp -p.
#
# shutil.copystat(src, dst)
# Copy the permission bits, last access time, last modification time,
# and flags from src to dst. The file contents, owner, and group are
# unaffected. src and dst are path names given as strings.
shutil.copy2(src, dest)
# Set the context
if self.selinux_enabled():
context = self.selinux_context(src)
self.set_context_if_different(dest, context, False)
# chown it
try:
dest_stat = os.stat(src)
tmp_stat = os.stat(dest)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(dest, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
# Set the attributes
current_attribs = self.get_file_attributes(src, include_version=False)
current_attribs = current_attribs.get('attr_flags', '')
self.set_attributes_if_different(dest, current_attribs, True)
def atomic_move(self, src, dest, unsafe_writes=False):
'''atomically move src to dest, copying attributes from dest, returns true on success
it uses os.rename to ensure this as it is an atomic operation, rest of the function is
to work around limitations, corner cases and ensure selinux context is saved if possible'''
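# Editor's illustration (not in the original source): a module that staged its
# output in a temp file would typically finish with something like
#   module.atomic_move(tmp_path, dest_path, unsafe_writes=module.params.get('unsafe_writes', False))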
context = None
dest_stat = None
b_src = to_bytes(src, errors='surrogate_or_strict')
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
try:
dest_stat = os.stat(b_dest)
# copy mode and ownership
os.chmod(b_src, dest_stat.st_mode & PERM_BITS)
os.chown(b_src, dest_stat.st_uid, dest_stat.st_gid)
# try to copy flags if possible
if hasattr(os, 'chflags') and hasattr(dest_stat, 'st_flags'):
try:
os.chflags(b_src, dest_stat.st_flags)
except OSError as e:
for err in 'EOPNOTSUPP', 'ENOTSUP':
if hasattr(errno, err) and e.errno == getattr(errno, err):
break
else:
raise
except OSError as e:
if e.errno != errno.EPERM:
raise
if self.selinux_enabled():
context = self.selinux_context(dest)
else:
if self.selinux_enabled():
context = self.selinux_default_context(dest)
creating = not os.path.exists(b_dest)
try:
# Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
os.rename(b_src, b_dest)
except (IOError, OSError) as e:
if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY, errno.EBUSY]:
# only try workarounds for errno 18 (cross device), 1 (not permitted), 13 (permission denied),
# 16 (device or resource busy) and 26 (text file busy) which happens on vagrant synced folders and other 'exotic' non posix file systems
self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, to_native(e)), exception=traceback.format_exc())
else:
# Use bytes here. In the shippable CI, this fails with
# a UnicodeError with surrogateescape'd strings for an unknown
# reason (doesn't happen in a local Ubuntu16.04 VM)
b_dest_dir = os.path.dirname(b_dest)
b_suffix = os.path.basename(b_dest)
error_msg = None
tmp_dest_name = None
try:
tmp_dest_fd, tmp_dest_name = tempfile.mkstemp(prefix=b'.ansible_tmp', dir=b_dest_dir, suffix=b_suffix)
except (OSError, IOError) as e:
error_msg = 'The destination directory (%s) is not writable by the current user. Error was: %s' % (os.path.dirname(dest), to_native(e))
except TypeError:
# We expect that this is happening because python3.4.x and
# below can't handle byte strings in mkstemp().
# Traceback would end in something like:
# file = _os.path.join(dir, pre + name + suf)
# TypeError: can't concat bytes to str
error_msg = ('Failed creating tmp file for atomic move. This usually happens when using Python3 less than Python3.5. '
'Please use Python2.x or Python3.5 or greater.')
finally:
if error_msg:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg=error_msg, exception=traceback.format_exc())
if tmp_dest_name:
b_tmp_dest_name = to_bytes(tmp_dest_name, errors='surrogate_or_strict')
try:
try:
# close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host)
os.close(tmp_dest_fd)
# leaves tmp file behind when sudo and not root
try:
shutil.move(b_src, b_tmp_dest_name)
except OSError:
# cleanup will happen by 'rm' of tmpdir
# copy2 will preserve some metadata
shutil.copy2(b_src, b_tmp_dest_name)
if self.selinux_enabled():
self.set_context_if_different(
b_tmp_dest_name, context, False)
try:
tmp_stat = os.stat(b_tmp_dest_name)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(b_tmp_dest_name, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
try:
os.rename(b_tmp_dest_name, b_dest)
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes and e.errno == errno.EBUSY:
self._unsafe_writes(b_tmp_dest_name, b_dest)
else:
self.fail_json(msg='Unable to make %s into %s, failed final rename from %s: %s' %
(src, dest, b_tmp_dest_name, to_native(e)), exception=traceback.format_exc())
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg='Failed to replace file: %s to %s: %s' % (src, dest, to_native(e)), exception=traceback.format_exc())
finally:
self.cleanup(b_tmp_dest_name)
if creating:
# make sure the file has the correct permissions
# based on the current value of umask
umask = os.umask(0)
os.umask(umask)
os.chmod(b_dest, DEFAULT_PERM & ~umask)
try:
os.chown(b_dest, os.geteuid(), os.getegid())
except OSError:
# We're okay with trying our best here. If the user is not
# root (or old Unices) they won't be able to chown.
pass
if self.selinux_enabled():
# rename might not preserve context
self.set_context_if_different(dest, context, False)
def _unsafe_writes(self, src, dest):
# sadly there are some situations where we cannot ensure atomicity, but only if
# the user insists and we get the appropriate error we update the file unsafely
try:
out_dest = in_src = None
try:
out_dest = open(dest, 'wb')
in_src = open(src, 'rb')
shutil.copyfileobj(in_src, out_dest)
finally: # assuring closed files in 2.4 compatible way
if out_dest:
out_dest.close()
if in_src:
in_src.close()
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Could not write data to file (%s) from (%s): %s' % (dest, src, to_native(e)),
exception=traceback.format_exc())
def _clean_args(self, args):
if not self._clean:
# create a printable version of the command for use in reporting later,
# which strips out things like passwords from the args list
to_clean_args = args
if PY2:
if isinstance(args, text_type):
to_clean_args = to_bytes(args)
else:
if isinstance(args, binary_type):
to_clean_args = to_text(args)
if isinstance(args, (text_type, binary_type)):
to_clean_args = shlex.split(to_clean_args)
clean_args = []
is_passwd = False
for arg in (to_native(a) for a in to_clean_args):
if is_passwd:
is_passwd = False
clean_args.append('********')
continue
if PASSWD_ARG_RE.match(arg):
sep_idx = arg.find('=')
if sep_idx > -1:
clean_args.append('%s=********' % arg[:sep_idx])
continue
else:
is_passwd = True
arg = heuristic_log_sanitize(arg, self.no_log_values)
clean_args.append(arg)
self._clean = ' '.join(shlex_quote(arg) for arg in clean_args)
return self._clean
def _restore_signal_handlers(self):
# Reset SIGPIPE to SIG_DFL, otherwise in Python2.7 it gets ignored in subprocesses.
if PY2 and sys.platform != 'win32':
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None,
use_unsafe_shell=False, prompt_regex=None, environ_update=None, umask=None, encoding='utf-8', errors='surrogate_or_strict',
expand_user_and_vars=True, pass_fds=None, before_communicate_callback=None, ignore_invalid_cwd=True):
'''
Execute a command, returns rc, stdout, and stderr.
:arg args: is the command to run
* If args is a list, the command will be run with shell=False.
* If args is a string and use_unsafe_shell=False it will split args to a list and run with shell=False
* If args is a string and use_unsafe_shell=True it runs with shell=True.
:kw check_rc: Whether to call fail_json in case of non zero RC.
Default False
:kw close_fds: See documentation for subprocess.Popen(). Default True
:kw executable: See documentation for subprocess.Popen(). Default None
:kw data: If given, information to write to the stdin of the command
:kw binary_data: If False, append a newline to the data. Default False
:kw path_prefix: If given, additional path to find the command in.
This adds to the PATH environment variable so helper commands in
the same directory can also be found
:kw cwd: If given, working directory to run the command inside
:kw use_unsafe_shell: See `args` parameter. Default False
:kw prompt_regex: Regex string (not a compiled regex) which can be
used to detect prompts in the stdout which would otherwise cause
the execution to hang (especially if no input data is specified)
:kw environ_update: dictionary to *update* os.environ with
:kw umask: Umask to be used when running the command. Default None
:kw encoding: Since we return native strings, on python3 we need to
know the encoding to use to transform from bytes to text. If you
want to always get bytes back, use encoding=None. The default is
"utf-8". This does not affect transformation of strings given as
args.
:kw errors: Since we return native strings, on python3 we need to
transform stdout and stderr from bytes to text. If the bytes are
undecodable in the ``encoding`` specified, then use this error
handler to deal with them. The default is ``surrogate_or_strict``
which means that the bytes will be decoded using the
surrogateescape error handler if available (available on all
python3 versions we support) otherwise a UnicodeError traceback
will be raised. This does not affect transformations of strings
given as args.
:kw expand_user_and_vars: When ``use_unsafe_shell=False`` this argument
dictates whether ``~`` is expanded in paths and environment variables
are expanded before running the command. When ``True`` a string such as
``$SHELL`` will be expanded regardless of escaping. When ``False`` and
``use_unsafe_shell=False`` no path or variable expansion will be done.
:kw pass_fds: When running on Python 3 this argument
dictates which file descriptors should be passed
to an underlying ``Popen`` constructor. On Python 2, this will
set ``close_fds`` to False.
:kw before_communicate_callback: This function will be called
after ``Popen`` object will be created
but before communicating to the process.
(``Popen`` object will be passed to callback as a first argument)
:kw ignore_invalid_cwd: This flag indicates whether an invalid ``cwd``
(non-existent or not a directory) should be ignored or should raise
an exception.
:returns: A 3-tuple of return code (integer), stdout (native string),
and stderr (native string). On python2, stdout and stderr are both
byte strings. On python3, stdout and stderr are text strings converted
according to the encoding and errors parameters. If you want byte
strings on python3, use encoding=None to turn decoding to text off.
'''
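# Hedged usage sketch (editor's note, not in the original source):
#   rc, out, err = module.run_command(['ls', '-l', '/tmp'], check_rc=True)
# runs with shell=False and calls fail_json automatically on a non-zero rc.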
# used by clean args later on
self._clean = None
if not isinstance(args, (list, binary_type, text_type)):
msg = "Argument 'args' to run_command must be list or string"
self.fail_json(rc=257, cmd=args, msg=msg)
shell = False
if use_unsafe_shell:
# stringify args for unsafe/direct shell usage
if isinstance(args, list):
args = b" ".join([to_bytes(shlex_quote(x), errors='surrogate_or_strict') for x in args])
else:
args = to_bytes(args, errors='surrogate_or_strict')
# not set explicitly, check if set by controller
if executable:
executable = to_bytes(executable, errors='surrogate_or_strict')
args = [executable, b'-c', args]
elif self._shell not in (None, '/bin/sh'):
args = [to_bytes(self._shell, errors='surrogate_or_strict'), b'-c', args]
else:
shell = True
else:
# ensure args are a list
if isinstance(args, (binary_type, text_type)):
# On python2.6 and below, shlex has problems with text type
# On python3, shlex needs a text type.
if PY2:
args = to_bytes(args, errors='surrogate_or_strict')
elif PY3:
args = to_text(args, errors='surrogateescape')
args = shlex.split(args)
# expand ``~`` in paths, and all environment vars
if expand_user_and_vars:
args = [to_bytes(os.path.expanduser(os.path.expandvars(x)), errors='surrogate_or_strict') for x in args if x is not None]
else:
args = [to_bytes(x, errors='surrogate_or_strict') for x in args if x is not None]
prompt_re = None
if prompt_regex:
if isinstance(prompt_regex, text_type):
if PY3:
prompt_regex = to_bytes(prompt_regex, errors='surrogateescape')
elif PY2:
prompt_regex = to_bytes(prompt_regex, errors='surrogate_or_strict')
try:
prompt_re = re.compile(prompt_regex, re.MULTILINE)
except re.error:
self.fail_json(msg="invalid prompt regular expression given to run_command")
rc = 0
msg = None
st_in = None
# Manipulate the environ we'll send to the new process
old_env_vals = {}
# We can set this from both an attribute and per call
for key, val in self.run_command_environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if environ_update:
for key, val in environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if path_prefix:
path = os.environ.get('PATH', '')
old_env_vals['PATH'] = path
if path:
os.environ['PATH'] = "%s:%s" % (path_prefix, path)
else:
os.environ['PATH'] = path_prefix
# If using test-module.py and explode, the remote lib path will resemble:
# /tmp/test_module_scratch/debug_dir/ansible/module_utils/basic.py
# If using ansible or ansible-playbook with a remote system:
# /tmp/ansible_vmweLQ/ansible_modlib.zip/ansible/module_utils/basic.py
# Clean out python paths set by ansiballz
if 'PYTHONPATH' in os.environ:
pypaths = os.environ['PYTHONPATH'].split(':')
pypaths = [x for x in pypaths
if not x.endswith('/ansible_modlib.zip') and
not x.endswith('/debug_dir')]
os.environ['PYTHONPATH'] = ':'.join(pypaths)
if not os.environ['PYTHONPATH']:
del os.environ['PYTHONPATH']
if data:
st_in = subprocess.PIPE
kwargs = dict(
executable=executable,
shell=shell,
close_fds=close_fds,
stdin=st_in,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=self._restore_signal_handlers,
)
if PY3 and pass_fds:
kwargs["pass_fds"] = pass_fds
elif PY2 and pass_fds:
kwargs['close_fds'] = False
# store the pwd
prev_dir = os.getcwd()
# make sure we're in the right working directory
if cwd:
if os.path.isdir(cwd):
cwd = to_bytes(os.path.abspath(os.path.expanduser(cwd)), errors='surrogate_or_strict')
kwargs['cwd'] = cwd
try:
os.chdir(cwd)
except (OSError, IOError) as e:
self.fail_json(rc=e.errno, msg="Could not chdir to %s, %s" % (cwd, to_native(e)),
exception=traceback.format_exc())
elif not ignore_invalid_cwd:
self.fail_json(msg="Provided cwd is not a valid directory: %s" % cwd)
old_umask = None
if umask:
old_umask = os.umask(umask)
try:
if self._debug:
self.log('Executing: ' + self._clean_args(args))
cmd = subprocess.Popen(args, **kwargs)
if before_communicate_callback:
before_communicate_callback(cmd)
# the communication logic here is essentially taken from that
# of the _communicate() function in ssh.py
stdout = b''
stderr = b''
try:
selector = selectors.DefaultSelector()
except (IOError, OSError):
# Failed to detect default selector for the given platform
# Select PollSelector which is supported by major platforms
selector = selectors.PollSelector()
selector.register(cmd.stdout, selectors.EVENT_READ)
selector.register(cmd.stderr, selectors.EVENT_READ)
if os.name == 'posix':
fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stdout.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_SETFL, fcntl.fcntl(cmd.stderr.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
if data:
if not binary_data:
data += '\n'
if isinstance(data, text_type):
data = to_bytes(data)
cmd.stdin.write(data)
cmd.stdin.close()
while True:
events = selector.select(1)
for key, event in events:
b_chunk = key.fileobj.read()
if b_chunk == b(''):
selector.unregister(key.fileobj)
if key.fileobj == cmd.stdout:
stdout += b_chunk
elif key.fileobj == cmd.stderr:
stderr += b_chunk
# if we're checking for prompts, do it now
if prompt_re:
if prompt_re.search(stdout) and not data:
if encoding:
stdout = to_native(stdout, encoding=encoding, errors=errors)
return (257, stdout, "A prompt was encountered while running a command, but no input data was specified")
# only break out if no pipes are left to read or
# the pipes are completely read and
# the process is terminated
if (not events or not selector.get_map()) and cmd.poll() is not None:
break
# No pipes are left to read but process is not yet terminated
# Only then it is safe to wait for the process to be finished
# NOTE: Actually cmd.poll() is always None here if no selectors are left
elif not selector.get_map() and cmd.poll() is None:
cmd.wait()
# The process is terminated. Since no pipes to read from are
# left, there is no need to call select() again.
break
cmd.stdout.close()
cmd.stderr.close()
selector.close()
rc = cmd.returncode
except (OSError, IOError) as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(e)))
self.fail_json(rc=e.errno, stdout=b'', stderr=b'', msg=to_native(e), cmd=self._clean_args(args))
except Exception as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(traceback.format_exc())))
self.fail_json(rc=257, stdout=b'', stderr=b'', msg=to_native(e), exception=traceback.format_exc(), cmd=self._clean_args(args))
# Restore env settings
for key, val in old_env_vals.items():
if val is None:
del os.environ[key]
else:
os.environ[key] = val
if old_umask:
os.umask(old_umask)
if rc != 0 and check_rc:
msg = heuristic_log_sanitize(stderr.rstrip(), self.no_log_values)
self.fail_json(cmd=self._clean_args(args), rc=rc, stdout=stdout, stderr=stderr, msg=msg)
# reset the pwd
os.chdir(prev_dir)
if encoding is not None:
return (rc, to_native(stdout, encoding=encoding, errors=errors),
to_native(stderr, encoding=encoding, errors=errors))
return (rc, stdout, stderr)
def append_to_file(self, filename, str):
filename = os.path.expandvars(os.path.expanduser(filename))
fh = open(filename, 'a')
fh.write(str)
fh.close()
def bytes_to_human(self, size):
return bytes_to_human(size)
# for backwards compatibility
pretty_bytes = bytes_to_human
def human_to_bytes(self, number, isbits=False):
return human_to_bytes(number, isbits)
#
# Backwards compat
#
# In 2.0, moved from inside the module to the toplevel
is_executable = is_executable
@staticmethod
def get_buffer_size(fd):
try:
# 1032 == F_GETPIPE_SZ
buffer_size = fcntl.fcntl(fd, 1032)
except Exception:
try:
# not as exact as above, but should be good enough for most platforms that fail the previous call
buffer_size = select.PIPE_BUF
except Exception:
buffer_size = 9000 # use sane default JIC
return buffer_size
def get_module_path():
return os.path.dirname(os.path.realpath(__file__))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,310 |
Difficult to use file-type modules inside of containers because of how module utils is implemented
|
##### SUMMARY
A task running locally that uses a file-type module can encounter a traceback due to an “invalid cross-device link” error when ansible is running inside of a container with mounted volumes. This is due to a common practice where a module writes to `ansible_remote_tmp` as a staging area, and then uses [module_utils](https://github.com/ansible/ansible/blob/42bc03f0f5740f2340fcdbe75557920552622ac3/lib/ansible/module_utils/basic.py#L2328) to copy it over.
The error happens when either `ansible_remote_tmp` is outside the volume and the destination is inside the volume, or vice versa.
This is an example, but it applies to other modules:
```
- name: write something to a file in the volume (this fails)
copy:
content: "foo_bar"
dest: "/elijah/file_for_elijah.txt"
```
Here, `/elijah` is the mounted volume.
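To make the failure mode concrete, here is a minimal sketch (an editor's illustration, not from the issue; paths are simplified from the traceback below, and `src` is assumed to exist): `os.rename()` is only atomic within a single filesystem, and raises errno 18 (`EXDEV`) across mount boundaries:
```
import errno
import os

src = "/root/.ansible/tmp/source"      # staging file outside the volume
dest = "/elijah/file_for_elijah.txt"   # destination inside the mounted volume

try:
    # rename() is atomic, but only within one filesystem/device
    os.rename(src, dest)
except OSError as e:
    if e.errno == errno.EXDEV:
        # Errno 18 -- the "Invalid cross-device link" from the traceback,
        # which forces atomic_move into its non-atomic copy fallback
        print("cannot rename across devices: %s -> %s" % (src, dest))
    else:
        raise
```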
What the task wants:
- I have a string “foo_bar”
- Write this string to a new file
- This file is in a mounted volume
- I have permission to write to this file and volume
- No cross-volume writing necessary from user perspective
What ansible does:
- Writes a temporary file to /tmp
  - This is from their Ansible tmp dir setting, which is configurable
- Tries to move this file from the tmp location to the mounted volume
  - The user never told them to do this
- Permission error due to attempted cross-volume move
  - Moves can be allowed by making “unsafe”
Running inside of a container with mounted volumes is fundamental to the design of execution environments.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/basic.py
##### ANSIBLE VERSION
Tested with ansible 2.9.15 and devel
Reproducer provided requires containers.podman collection
##### CONFIGURATION
```paste below
<none>
```
##### OS / ENVIRONMENT
Redhat type linux with podman installed
##### STEPS TO REPRODUCE
Playbook requirements:
- `ansible-galaxy collection install containers.podman`
- Have podman installed locally
Run this playbook locally against localhost inventory:
https://github.com/AlanCoding/utility-playbooks/blob/master/issues/elijah.yml
```
ansible-playbook -i localhost, elijah.yml
```
This:
- Starts a container, with `/elijah` (in the container) mounted to the `playbook_dir` (on host machine)
- Templates a file outside the volume mount (successful)
- Templates a file inside the volume mount (not successful)
- Stops and removes the container
##### EXPECTED RESULTS
Able to do basic file operations in execution environments (inside a container), using the host as a staging area
##### ACTUAL RESULTS
```
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.copy_payload_ntnxbqce/ansible_ansible.legacy.copy_payload.zip/ansible/module_utils/basic.py", line 2367, in atomic_move
os.rename(b_src, b_dest)
OSError: [Errno 18] Invalid cross-device link: b'/root/.ansible/tmp/ansible-tmp-1611160471.6475317-1020701-11385651364745/source' -> b'/elijah/file_for_elijah.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib64/python3.6/shutil.py", line 550, in move
os.rename(src, real_dst)
OSError: [Errno 18] Invalid cross-device link: b'/root/.ansible/tmp/ansible-tmp-1611160471.6475317-1020701-11385651364745/source' -> b'/elijah/.ansible_tmp7di06c3ffile_for_elijah.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.copy_payload_ntnxbqce/ansible_ansible.legacy.copy_payload.zip/ansible/module_utils/basic.py", line 2409, in atomic_move
shutil.move(b_src, b_tmp_dest_name)
File "/usr/lib64/python3.6/shutil.py", line 564, in move
copy_function(src, real_dst)
File "/usr/lib64/python3.6/shutil.py", line 264, in copy2
copystat(src, dst, follow_symlinks=follow_symlinks)
File "/usr/lib64/python3.6/shutil.py", line 229, in copystat
_copyxattr(src, dst, follow_symlinks=follow)
File "/usr/lib64/python3.6/shutil.py", line 165, in _copyxattr
os.setxattr(dst, name, value, follow_symlinks=follow_symlinks)
PermissionError: [Errno 13] Permission denied: b'/elijah/.ansible_tmp7di06c3ffile_for_elijah.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.copy_payload_ntnxbqce/ansible_ansible.legacy.copy_payload.zip/ansible/module_utils/basic.py", line 2413, in atomic_move
shutil.copy2(b_src, b_tmp_dest_name)
File "/usr/lib64/python3.6/shutil.py", line 264, in copy2
copystat(src, dst, follow_symlinks=follow_symlinks)
File "/usr/lib64/python3.6/shutil.py", line 229, in copystat
_copyxattr(src, dst, follow_symlinks=follow)
File "/usr/lib64/python3.6/shutil.py", line 165, in _copyxattr
os.setxattr(dst, name, value, follow_symlinks=follow_symlinks)
PermissionError: [Errno 13] Permission denied: b'/elijah/.ansible_tmp7di06c3ffile_for_elijah.txt'
```
#### ADDITIONAL DETAILS
Ansible creates the cross-volume write problem, and then requires the user to solve it by setting “unsafe_writes”. A global setting as suggested by @bcoca in https://github.com/ansible/ansible/pull/73282 lets them set it just once, which is better than on each task, or perhaps Tower could set it for them in our use case, but it is a broad, sweeping setting that also applies to remote hosts, and that may be undesirable.
We would prefer a solution that doesn’t require the user to solve an implementation problem in ansible.
Hypothetically, the module could detect what volume the destination is in, and use a tmp stage file in that volume.
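A minimal sketch of that idea (not Ansible's actual implementation): stage the temporary file in the destination's own directory, so the final rename never crosses a device boundary.
```python
import os
import tempfile

def atomic_write(dest, data):
    """Sketch: write data to dest atomically without crossing filesystems."""
    dest_dir = os.path.dirname(os.path.abspath(dest))
    # mkstemp in the destination directory guarantees src and dest share
    # the same st_dev, so the os.rename() below cannot raise EXDEV.
    fd, tmp_path = tempfile.mkstemp(prefix='.ansible_tmp', dir=dest_dir)
    try:
        with os.fdopen(fd, 'wb') as f:
            f.write(data)
        os.rename(tmp_path, dest)  # atomic on POSIX: same filesystem by construction
    except Exception:
        os.unlink(tmp_path)
        raise
```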
This is a tower_blocker because we need to replace our use of bwrap, and execution environments are in the critical path. We also need users to be able to run content that used to work under the new process isolation method, which uses containers. Tower setting a global variable that affects both in-container and remote tasks is not acceptable, because users' security policies may not allow this for remote tasks, and, as mentioned, their file operations do not explicitly involve cross-volume mounts. This is purely an Ansible-imposed situation.
Technical details:
- This is a problem shared between many modules, because the issue lies in Ansible core module utils
- The specific problematic method is: https://github.com/ansible/ansible/blob/42bc03f0f5740f2340fcdbe75557920552622ac3/lib/ansible/module_utils/basic.py#L2328
- The traceback (using Ansible devel) is pasted above
- @AlanCoding, @chrismeyersfsu, and @shanemcd are good contacts on the awx/tower team
|
https://github.com/ansible/ansible/issues/73310
|
https://github.com/ansible/ansible/pull/73282
|
2b0cd2c13f2021f839600d601f75dea2c0343ed1
|
c7d4acc12f672d1b3a86119940193b3324584ac0
| 2021-01-20T19:05:10Z |
python
| 2021-01-27T19:16:10Z |
test/integration/targets/unsafe_writes/basic.yml
|
- hosts: testhost
gather_facts: false
vars:
testudir: '{{output_dir}}/unsafe_writes_test'
testufile: '{{testudir}}/unreplacablefile.txt'
tasks:
- name: test unsafe_writes on immutable dir (file cannot be atomically replaced)
block:
- name: create target dir
file: path={{testudir}} state=directory
- name: setup test file
copy: content=ORIGINAL dest={{testufile}}
- name: make target dir immutable (cannot write to file w/o unsafe_writes)
file: path={{testudir}} state=directory attributes="+i"
become: yes
ignore_errors: true
register: madeimmutable
- name: only run if immutable dir command worked, some of our test systems don't allow for it
when: madeimmutable is success
block:
- name: test this is actually immutable and working as we expect
file: path={{testufile}} state=absent
register: breakimmutable
ignore_errors: True
- name: only run if really immutable dir
when: breakimmutable is failed
block:
- name: test overwriting file w/o unsafe
copy: content=NEW dest={{testufile}} unsafe_writes=False
ignore_errors: true
register: copy_without
- name: ensure we properly failed
assert:
that:
- copy_without is failed
- name: test overwriting file with unsafe
copy: content=NEW dest={{testufile}} unsafe_writes=True
register: copy_with
- name: ensure we properly changed
assert:
that:
- copy_with is changed
always:
- name: remove immutable flag from dir to prevent issues with cleanup
file: path={{testudir}} state=directory attributes="-i"
ignore_errors: true
become: yes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,310 |
Difficult to use file-type modules inside of containers because of how module utils is implemented
|
##### SUMMARY
A task running locally that uses a file-type module can encounter a traceback due to an “invalid cross-device link” when Ansible is running inside a container with mounted volumes. This stems from a common pattern where a module writes to `ansible_remote_tmp` as a staging area, and then uses [module_utils](https://github.com/ansible/ansible/blob/42bc03f0f5740f2340fcdbe75557920552622ac3/lib/ansible/module_utils/basic.py#L2328) to copy it over.
The error happens when either `ansible_remote_tmp` is outside the volume and the destination is inside the volume, or vice versa.
This is an example, but it applies to other modules:
```
- name: write something to a file in the volume (this fails)
copy:
content: "foo_bar"
dest: "/elijah/file_for_elijah.txt"
```
Here, `/elijah` is the mounted volume.
What the task wants:
- I have a string “foo_bar”
- Write this string to a new file
- This file is in a mounted volume
- I have permission to write to this file and volume
- No cross-volume writing is necessary from the user's perspective

What ansible does (a minimal reproduction follows below):
- Writes a temporary file to /tmp (this comes from the Ansible tmp dir setting, which is configurable)
- Tries to move this file from the tmp location to the mounted volume (the user never asked for this)
- Hits a permission error due to the attempted cross-volume move (such moves can be allowed by marking writes “unsafe”)
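A minimal sketch of the failure mode, assuming `/tmp` and `/elijah` sit on different filesystems (paths are illustrative):
```python
import errno
import os

try:
    os.rename('/tmp/ansible-tmp/source', '/elijah/file_for_elijah.txt')
except OSError as e:
    if e.errno == errno.EXDEV:
        print('Invalid cross-device link: rename cannot cross filesystems')
        # shutil.move() then falls back to copy2(), whose copystat()/
        # os.setxattr() step is what raises PermissionError on this mount
```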
Running inside of a container with mounted volumes is fundamental to the design of execution environments.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/basic.py
##### ANSIBLE VERSION
Tested with ansible 2.9.15 and devel
Reproducer provided requires containers.podman collection
##### CONFIGURATION
```paste below
<none>
```
##### OS / ENVIRONMENT
Red Hat-type Linux with podman installed
##### STEPS TO REPRODUCE
Playbook requirements:
- `ansible-galaxy collection install containers.podman`
- Have podman installed locally
Run this playbook locally against localhost inventory:
https://github.com/AlanCoding/utility-playbooks/blob/master/issues/elijah.yml
```
ansible-playbook -i localhost, elijah.yml
```
This playbook:
- Starts a container, with `/elijah` (in the container) mounted to the `playbook_dir` (on host machine)
- Templates a file outside the volume mount (successful)
- Templates a file inside the volume mount (not successful)
- Stops and removes the container
##### EXPECTED RESULTS
Able to do basic file operations in execution environments (inside a container), using the host as a staging area
##### ACTUAL RESULTS
```
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.copy_payload_ntnxbqce/ansible_ansible.legacy.copy_payload.zip/ansible/module_utils/basic.py", line 2367, in atomic_move
os.rename(b_src, b_dest)
OSError: [Errno 18] Invalid cross-device link: b'/root/.ansible/tmp/ansible-tmp-1611160471.6475317-1020701-11385651364745/source' -> b'/elijah/file_for_elijah.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib64/python3.6/shutil.py", line 550, in move
os.rename(src, real_dst)
OSError: [Errno 18] Invalid cross-device link: b'/root/.ansible/tmp/ansible-tmp-1611160471.6475317-1020701-11385651364745/source' -> b'/elijah/.ansible_tmp7di06c3ffile_for_elijah.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.copy_payload_ntnxbqce/ansible_ansible.legacy.copy_payload.zip/ansible/module_utils/basic.py", line 2409, in atomic_move
shutil.move(b_src, b_tmp_dest_name)
File "/usr/lib64/python3.6/shutil.py", line 564, in move
copy_function(src, real_dst)
File "/usr/lib64/python3.6/shutil.py", line 264, in copy2
copystat(src, dst, follow_symlinks=follow_symlinks)
File "/usr/lib64/python3.6/shutil.py", line 229, in copystat
_copyxattr(src, dst, follow_symlinks=follow)
File "/usr/lib64/python3.6/shutil.py", line 165, in _copyxattr
os.setxattr(dst, name, value, follow_symlinks=follow_symlinks)
PermissionError: [Errno 13] Permission denied: b'/elijah/.ansible_tmp7di06c3ffile_for_elijah.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/ansible_ansible.legacy.copy_payload_ntnxbqce/ansible_ansible.legacy.copy_payload.zip/ansible/module_utils/basic.py", line 2413, in atomic_move
shutil.copy2(b_src, b_tmp_dest_name)
File "/usr/lib64/python3.6/shutil.py", line 264, in copy2
copystat(src, dst, follow_symlinks=follow_symlinks)
File "/usr/lib64/python3.6/shutil.py", line 229, in copystat
_copyxattr(src, dst, follow_symlinks=follow)
File "/usr/lib64/python3.6/shutil.py", line 165, in _copyxattr
os.setxattr(dst, name, value, follow_symlinks=follow_symlinks)
PermissionError: [Errno 13] Permission denied: b'/elijah/.ansible_tmp7di06c3ffile_for_elijah.txt'
```
#### ADDITIONAL DETAILS
Ansible creates the cross-volume write problem, and then requires the user to solve it by setting “unsafe_writes”. A global setting as suggested by @bcoca in https://github.com/ansible/ansible/pull/73282 lets them set it just once, which is better than on each task, or perhaps Tower could set it for them in our use case, but it is a broad, sweeping setting that also applies to remote hosts, and that may be undesirable.
We would prefer a solution that doesn’t require the user to solve an implementation problem in ansible.
Hypothetically, the module could detect what volume the destination is in, and use a tmp stage file in that volume.
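For completeness, a sketch of the per-task escape hatch that exists today; `unsafe_writes` is the existing parameter on file-backed modules such as copy, which the global setting in the linked PR would toggle by default:
```yaml
- name: write into the mounted volume, tolerating a non-atomic write
  copy:
    content: "foo_bar"
    dest: /elijah/file_for_elijah.txt
    unsafe_writes: true
```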
This is a tower_blocker because we need to replace our use of bwrap, and execution environments are in the critical path. We also need users to be able to run content that used to work under the new process isolation method, which uses containers. Tower setting a global variable that affects both in-container and remote tasks is not acceptable, because users' security policies may not allow this for remote tasks, and, as mentioned, their file operations do not explicitly involve cross-volume mounts. This is purely an Ansible-imposed situation.
Technical details:
- This is a problem shared between many modules, because the issue lies in Ansible core module utils
- The specific problematic method is: https://github.com/ansible/ansible/blob/42bc03f0f5740f2340fcdbe75557920552622ac3/lib/ansible/module_utils/basic.py#L2328
- The traceback (using Ansible devel) is pasted above
- @AlanCoding, @chrismeyersfsu, and @shanemcd are good contacts on the awx/tower team
|
https://github.com/ansible/ansible/issues/73310
|
https://github.com/ansible/ansible/pull/73282
|
2b0cd2c13f2021f839600d601f75dea2c0343ed1
|
c7d4acc12f672d1b3a86119940193b3324584ac0
| 2021-01-20T19:05:10Z |
python
| 2021-01-27T19:16:10Z |
test/integration/targets/unsafe_writes/runme.sh
|
#!/usr/bin/env bash
set -eux
ansible-playbook basic.yml -i ../../inventory -e "output_dir=${OUTPUT_DIR}" "$@"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,385 |
user module: how-to create password hash link is broken
|
##### SUMMARY
In the user module, in the docs for the password field, there is a link to instructions on how to create a password hash for the module to consume. Said link is broken, and gives a 404.
Cf. [the corresponding line in the user module](https://github.com/ansible/ansible/blob/2b0cd2c13f2021f839600d601f75dea2c0343ed1/lib/ansible/modules/user.py#L92)
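For anyone landing here from the 404, a minimal sketch of one way to produce a value the module accepts, using the built-in `password_hash` filter (some hash types may require passlib on the controller; the user name and password are illustrative):
```yaml
- name: set a crypted password generated on the controller
  ansible.builtin.user:
    name: jsmith
    password: "{{ 'Secr3t!' | password_hash('sha512') }}"
```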
> HINT: Did you know the documentation has an "Edit on GitHub" link on every page?
this one doesn't, or I just could not find it :/
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
user module
##### ANSIBLE VERSION
```paste below
latest
```
|
https://github.com/ansible/ansible/issues/73385
|
https://github.com/ansible/ansible/pull/73353
|
c7d4acc12f672d1b3a86119940193b3324584ac0
|
11398aac09efbd3d039854ab6c3816b7b266479e
| 2021-01-27T10:19:20Z |
python
| 2021-01-27T19:58:23Z |
lib/ansible/modules/user.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Stephen Fromm <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
module: user
version_added: "0.2"
short_description: Manage user accounts
description:
- Manage user accounts and user attributes.
- For Windows targets, use the M(ansible.windows.win_user) module instead.
options:
name:
description:
- Name of the user to create, remove or modify.
type: str
required: true
aliases: [ user ]
uid:
description:
- Optionally sets the I(UID) of the user.
type: int
comment:
description:
- Optionally sets the description (aka I(GECOS)) of user account.
type: str
hidden:
description:
- macOS only, optionally hide the user from the login window and system preferences.
- The default will be C(yes) if the I(system) option is used.
type: bool
version_added: "2.6"
non_unique:
description:
- Optionally, when used with the -u option, allows changing the user ID to a non-unique value.
type: bool
default: no
version_added: "1.1"
seuser:
description:
- Optionally sets the seuser type (user_u) on selinux enabled systems.
type: str
version_added: "2.1"
group:
description:
- Optionally sets the user's primary group (takes a group name).
type: str
groups:
description:
- List of groups user will be added to. When set to an empty string C(''),
the user is removed from all groups except the primary group.
- Before Ansible 2.3, the only input format allowed was a comma separated string.
type: list
elements: str
append:
description:
- If C(yes), add the user to the groups specified in C(groups).
- If C(no), user will only be added to the groups specified in C(groups),
removing them from all other groups.
type: bool
default: no
shell:
description:
- Optionally set the user's shell.
- On macOS, before Ansible 2.5, the default shell for non-system users was C(/usr/bin/false).
Since Ansible 2.5, the default shell for non-system users on macOS is C(/bin/bash).
- See notes for details on how other operating systems determine the default shell by
the underlying tool.
type: str
home:
description:
- Optionally set the user's home directory.
type: path
skeleton:
description:
- Optionally set a home skeleton directory.
- Requires C(create_home) option!
type: str
version_added: "2.0"
password:
description:
- Optionally set the user's password to this crypted value.
- On macOS systems, this value has to be cleartext. Beware of security issues.
- To create a disabled account on Linux systems, set this to C('!') or C('*').
- To create a disabled account on OpenBSD, set this to C('*************').
- See U(https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#how-do-i-generate-encrypted-passwords-for-the-user-module)
for details on various ways to generate these password values.
type: str
state:
description:
- Whether the account should exist or not, taking action if the state is different from what is stated.
type: str
choices: [ absent, present ]
default: present
create_home:
description:
- Unless set to C(no), a home directory will be made for the user
when the account is created or if the home directory does not exist.
- Changed from C(createhome) to C(create_home) in Ansible 2.5.
type: bool
default: yes
aliases: [ createhome ]
move_home:
description:
- "If set to C(yes) when used with C(home: ), attempt to move the user's old home
directory to the specified directory if it isn't there already and the old home exists."
type: bool
default: no
system:
description:
- When creating an account C(state=present), setting this to C(yes) makes the user a system account.
- This setting cannot be changed on existing users.
type: bool
default: no
force:
description:
- This only affects C(state=absent), it forces removal of the user and associated directories on supported platforms.
- The behavior is the same as C(userdel --force), check the man page for C(userdel) on your system for details and support.
- When used with C(generate_ssh_key=yes) this forces an existing key to be overwritten.
type: bool
default: no
remove:
description:
- This only affects C(state=absent), it attempts to remove directories associated with the user.
- The behavior is the same as C(userdel --remove), check the man page for details and support.
type: bool
default: no
login_class:
description:
- Optionally sets the user's login class, a feature of most BSD OSs.
type: str
generate_ssh_key:
description:
- Whether to generate a SSH key for the user in question.
- This will B(not) overwrite an existing SSH key unless used with C(force=yes).
type: bool
default: no
version_added: "0.9"
ssh_key_bits:
description:
- Optionally specify number of bits in SSH key to create.
type: int
default: default set by ssh-keygen
version_added: "0.9"
ssh_key_type:
description:
- Optionally specify the type of SSH key to generate.
- Available SSH key types will depend on implementation
present on target host.
type: str
default: rsa
version_added: "0.9"
ssh_key_file:
description:
- Optionally specify the SSH key filename.
- If this is a relative filename then it will be relative to the user's home directory.
- This parameter defaults to I(.ssh/id_rsa).
type: path
version_added: "0.9"
ssh_key_comment:
description:
- Optionally define the comment for the SSH key.
type: str
default: ansible-generated on $HOSTNAME
version_added: "0.9"
ssh_key_passphrase:
description:
- Set a passphrase for the SSH key.
- If no passphrase is provided, the SSH key will default to having no passphrase.
type: str
version_added: "0.9"
update_password:
description:
- C(always) will update passwords if they differ.
- C(on_create) will only set the password for newly created users.
type: str
choices: [ always, on_create ]
default: always
version_added: "1.3"
expires:
description:
- An expiry time for the user in epoch, it will be ignored on platforms that do not support this.
- Currently supported on GNU/Linux, FreeBSD, and DragonFlyBSD.
- Since Ansible 2.6 you can remove the expiry time by specifying a negative value.
Currently supported on GNU/Linux and FreeBSD.
type: float
version_added: "1.9"
password_lock:
description:
- Lock the password (C(usermod -L), C(usermod -U), C(pw lock)).
- Implementation differs by platform. This option does not always mean the user cannot login using other methods.
- This option does not disable the user, only lock the password.
- This must be set to C(False) in order to unlock a currently locked password. The absence of this parameter will not unlock a password.
- Currently supported on Linux, FreeBSD, DragonFlyBSD, NetBSD, OpenBSD.
type: bool
version_added: "2.6"
local:
description:
- Forces the use of "local" command alternatives on platforms that implement it.
- This is useful in environments that use centralized authentication when you want to manipulate the local users
(in other words, it uses C(luseradd) instead of C(useradd)).
- This will check C(/etc/passwd) for an existing account before invoking commands. If the local account database
exists somewhere other than C(/etc/passwd), this setting will not work properly.
- This requires that the above commands as well as C(/etc/passwd) must exist on the target host, otherwise it will be a fatal error.
type: bool
default: no
version_added: "2.4"
profile:
description:
- Sets the profile of the user.
- Does nothing when used with other platforms.
- Can set multiple profiles using comma separation.
- To delete all the profiles, use C(profile='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
authorization:
description:
- Sets the authorization of the user.
- Does nothing when used with other platforms.
- Can set multiple authorizations using comma separation.
- To delete all authorizations, use C(authorization='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
role:
description:
- Sets the role of the user.
- Does nothing when used with other platforms.
- Can set multiple roles using comma separation.
- To delete all roles, use C(role='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
notes:
- There are specific requirements per platform on user management utilities. However
they generally come pre-installed with the system and Ansible will require they
are present at runtime. If they are not, a descriptive error message will be shown.
- On SunOS platforms, the shadow file is backed up automatically since this module edits it directly.
On other platforms, the shadow file is backed up by the underlying tools used by this module.
- On macOS, this module uses C(dscl) to create, modify, and delete accounts. C(dseditgroup) is used to
modify group membership. Accounts are hidden from the login window by modifying
C(/Library/Preferences/com.apple.loginwindow.plist).
- On FreeBSD, this module uses C(pw useradd) and C(chpass) to create, C(pw usermod) and C(chpass) to modify,
C(pw userdel) remove, C(pw lock) to lock, and C(pw unlock) to unlock accounts.
- On all other platforms, this module uses C(useradd) to create, C(usermod) to modify, and
C(userdel) to remove accounts.
- Supports C(check_mode).
seealso:
- module: ansible.posix.authorized_key
- module: ansible.builtin.group
- module: ansible.windows.win_user
author:
- Stephen Fromm (@sfromm)
'''
EXAMPLES = r'''
- name: Add the user 'johnd' with a specific uid and a primary group of 'admin'
ansible.builtin.user:
name: johnd
comment: John Doe
uid: 1040
group: admin
- name: Add the user 'james' with a bash shell, appending the group 'admins' and 'developers' to the user's groups
ansible.builtin.user:
name: james
shell: /bin/bash
groups: admins,developers
append: yes
- name: Remove the user 'johnd'
ansible.builtin.user:
name: johnd
state: absent
remove: yes
- name: Create a 2048-bit SSH key for user jsmith in ~jsmith/.ssh/id_rsa
ansible.builtin.user:
name: jsmith
generate_ssh_key: yes
ssh_key_bits: 2048
ssh_key_file: .ssh/id_rsa
- name: Added a consultant whose account you want to expire
ansible.builtin.user:
name: james18
shell: /bin/zsh
groups: developers
expires: 1422403387
- name: Starting at Ansible 2.6, modify user, remove expiry time
ansible.builtin.user:
name: james18
expires: -1
'''
RETURN = r'''
append:
description: Whether or not to append the user to groups.
returned: When state is C(present) and the user exists
type: bool
sample: True
comment:
description: Comment section from passwd file, usually the user name.
returned: When user exists
type: str
sample: Agent Smith
create_home:
description: Whether or not to create the home directory.
returned: When user does not exist and not check mode
type: bool
sample: True
force:
description: Whether or not a user account was forcibly deleted.
returned: When I(state) is C(absent) and user exists
type: bool
sample: False
group:
description: Primary user group ID
returned: When user exists
type: int
sample: 1001
groups:
description: List of groups of which the user is a member.
returned: When I(groups) is not empty and I(state) is C(present)
type: str
sample: 'chrony,apache'
home:
description: "Path to user's home directory."
returned: When I(state) is C(present)
type: str
sample: '/home/asmith'
move_home:
description: Whether or not to move an existing home directory.
returned: When I(state) is C(present) and user exists
type: bool
sample: False
name:
description: User account name.
returned: always
type: str
sample: asmith
password:
description: Masked value of the password.
returned: When I(state) is C(present) and I(password) is not empty
type: str
sample: 'NOT_LOGGING_PASSWORD'
remove:
description: Whether or not to remove the user account.
returned: When I(state) is C(absent) and user exists
type: bool
sample: True
shell:
description: User login shell.
returned: When I(state) is C(present)
type: str
sample: '/bin/bash'
ssh_fingerprint:
description: Fingerprint of generated SSH key.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: '2048 SHA256:aYNHYcyVm87Igh0IMEDMbvW0QDlRQfE0aJugp684ko8 ansible-generated on host (RSA)'
ssh_key_file:
description: Path to generated SSH private key file.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: /home/asmith/.ssh/id_rsa
ssh_public_key:
description: Generated SSH public key file.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: >
'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC95opt4SPEC06tOYsJQJIuN23BbLMGmYo8ysVZQc4h2DZE9ugbjWWGS1/pweUGjVstgzMkBEeBCByaEf/RJKNecKRPeGd2Bw9DCj/bn5Z6rGfNENKBmo
618mUJBvdlEgea96QGjOwSB7/gmonduC7gsWDMNcOdSE3wJMTim4lddiBx4RgC9yXsJ6Tkz9BHD73MXPpT5ETnse+A3fw3IGVSjaueVnlUyUmOBf7fzmZbhlFVXf2Zi2rFTXqvbdGHKkzpw1U8eB8xFPP7y
d5u1u0e6Acju/8aZ/l17IDFiLke5IzlqIMRTEbDwLNeO84YQKWTm9fODHzhYe0yvxqLiK07 ansible-generated on host'
stderr:
description: Standard error from running commands.
returned: When stderr is returned by a command that is run
type: str
sample: Group wheels does not exist
stdout:
description: Standard output from running commands.
returned: When standard output is returned by the command that is run
type: str
sample:
system:
description: Whether or not the account is a system account.
returned: When I(system) is passed to the module and the account does not exist
type: bool
sample: True
uid:
description: User ID of the user account.
returned: When I(uid) is passed to the module
type: int
sample: 1044
'''
import errno
import grp
import calendar
import os
import re
import pty
import pwd
import select
import shutil
import socket
import subprocess
import time
import math
from ansible.module_utils import distro
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.sys_info import get_platform_subclass
try:
import spwd
HAVE_SPWD = True
except ImportError:
HAVE_SPWD = False
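# A crypt(3)-style hash looks like $id$salt$hash; its final field only uses
# characters from [a-zA-Z0-9./] (plus '=' in some schemes), so any other
# character suggests the value is cleartext rather than a valid hash.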
_HASH_RE = re.compile(r'[^a-zA-Z0-9./=]')
class User(object):
"""
This is a generic User manipulation class that is subclassed
based on platform.
A subclass may wish to override the following action methods:-
- create_user()
- remove_user()
- modify_user()
- ssh_key_gen()
- ssh_key_fingerprint()
- user_exists()
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None
PASSWORDFILE = '/etc/passwd'
SHADOWFILE = '/etc/shadow'
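# Index of the account-expiration field (field 8) in /etc/shadow entries,
# expressed as days since the Epoch.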
SHADOWFILE_EXPIRE_INDEX = 7
LOGIN_DEFS = '/etc/login.defs'
DATE_FORMAT = '%Y-%m-%d'
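# __new__ dispatches instantiation to the platform-specific subclass
# (e.g. FreeBsdUser, OpenBSDUser) selected by get_platform_subclass().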
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(User)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.name = module.params['name']
self.uid = module.params['uid']
self.hidden = module.params['hidden']
self.non_unique = module.params['non_unique']
self.seuser = module.params['seuser']
self.group = module.params['group']
self.comment = module.params['comment']
self.shell = module.params['shell']
self.password = module.params['password']
self.force = module.params['force']
self.remove = module.params['remove']
self.create_home = module.params['create_home']
self.move_home = module.params['move_home']
self.skeleton = module.params['skeleton']
self.system = module.params['system']
self.login_class = module.params['login_class']
self.append = module.params['append']
self.sshkeygen = module.params['generate_ssh_key']
self.ssh_bits = module.params['ssh_key_bits']
self.ssh_type = module.params['ssh_key_type']
self.ssh_comment = module.params['ssh_key_comment']
self.ssh_passphrase = module.params['ssh_key_passphrase']
self.update_password = module.params['update_password']
self.home = module.params['home']
self.expires = None
self.password_lock = module.params['password_lock']
self.groups = None
self.local = module.params['local']
self.profile = module.params['profile']
self.authorization = module.params['authorization']
self.role = module.params['role']
if module.params['groups'] is not None:
self.groups = ','.join(module.params['groups'])
if module.params['expires'] is not None:
try:
self.expires = time.gmtime(module.params['expires'])
except Exception as e:
module.fail_json(msg="Invalid value for 'expires' %s: %s" % (self.expires, to_native(e)))
if module.params['ssh_key_file'] is not None:
self.ssh_file = module.params['ssh_key_file']
else:
self.ssh_file = os.path.join('.ssh', 'id_%s' % self.ssh_type)
if self.groups is None and self.append:
# Change the argument_spec in 2.14 and remove this warning
# required_by={'append': ['groups']}
module.warn("'append' is set, but no 'groups' are specified. Use 'groups' for appending new groups."
"This will change to an error in Ansible 2.14.")
def check_password_encrypted(self):
# Darwin needs cleartext password, so skip validation
if self.module.params['password'] and self.platform != 'Darwin':
maybe_invalid = False
# Allow setting certain passwords in order to disable the account
if self.module.params['password'] in set(['*', '!', '*************']):
maybe_invalid = False
else:
# : for delimiter, * for disable user, ! for lock user
# these characters are invalid in the password
if any(char in self.module.params['password'] for char in ':*!'):
maybe_invalid = True
if '$' not in self.module.params['password']:
maybe_invalid = True
else:
fields = self.module.params['password'].split("$")
if len(fields) >= 3:
# contains characters outside the crypt alphabet
if bool(_HASH_RE.search(fields[-1])):
maybe_invalid = True
# md5
if fields[1] == '1' and len(fields[-1]) != 22:
maybe_invalid = True
# sha256
if fields[1] == '5' and len(fields[-1]) != 43:
maybe_invalid = True
# sha512
if fields[1] == '6' and len(fields[-1]) != 86:
maybe_invalid = True
else:
maybe_invalid = True
if maybe_invalid:
self.module.warn("The input password appears not to have been hashed. "
"The 'password' argument must be encrypted for this module to work properly.")
def execute_command(self, cmd, use_unsafe_shell=False, data=None, obey_checkmode=True):
if self.module.check_mode and obey_checkmode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
else:
# cast all args to strings ansible-modules-core/issues/4397
cmd = [str(x) for x in cmd]
return self.module.run_command(cmd, use_unsafe_shell=use_unsafe_shell, data=data)
def backup_shadow(self):
if not self.module.check_mode and self.SHADOWFILE:
return self.module.backup_local(self.SHADOWFILE)
def remove_user_userdel(self):
if self.local:
command_name = 'luserdel'
else:
command_name = 'userdel'
cmd = [self.module.get_bin_path(command_name, True)]
if self.force and not self.local:
cmd.append('-f')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self):
if self.local:
command_name = 'luseradd'
lgroupmod_cmd = self.module.get_bin_path('lgroupmod', True)
lchage_cmd = self.module.get_bin_path('lchage', True)
else:
command_name = 'useradd'
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.seuser is not None:
cmd.append('-Z')
cmd.append(self.seuser)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
elif self.group_exists(self.name):
# use the -N option (no user group) if a group already
# exists with the same name as the user to prevent
# errors from useradd trying to create a group when
# USERGROUPS_ENAB is set in /etc/login.defs.
if os.path.exists('/etc/redhat-release'):
dist = distro.linux_distribution(full_distribution_name=False)
major_release = int(dist[1].split('.')[0])
if major_release <= 5 or self.local:
cmd.append('-n')
else:
cmd.append('-N')
elif os.path.exists('/etc/SuSE-release'):
# -N did not exist in useradd before SLE 11 and did not
# automatically create a group
dist = distro.linux_distribution(full_distribution_name=False)
major_release = int(dist[1].split('.')[0])
if major_release >= 12:
cmd.append('-N')
else:
cmd.append('-N')
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
if not self.local:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
# If the specified path to the user home contains parent directories that
# do not exist and create_home is True first create the parent directory
# since useradd cannot create it.
if self.create_home:
parent = os.path.dirname(self.home)
if not os.path.isdir(parent):
self.create_homedir(self.home)
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None and not self.local:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('')
else:
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
if self.password is not None:
cmd.append('-p')
if self.password_lock:
cmd.append('!%s' % self.password)
else:
cmd.append(self.password)
if self.create_home:
if not self.local:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if not self.local or rc != 0:
return (rc, out, err)
if self.expires is not None:
if self.expires < time.gmtime(0):
lexpires = -1
else:
# Convert seconds since Epoch to days since Epoch
lexpires = int(math.floor(self.module.params['expires'])) // 86400
(rc, _out, _err) = self.execute_command([lchage_cmd, '-E', to_native(lexpires), self.name])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
if self.groups is None or len(self.groups) == 0:
return (rc, out, err)
for add_group in groups:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-M', self.name, add_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
return (rc, out, err)
def _check_usermod_append(self):
# check if this version of usermod can append groups
if self.local:
command_name = 'lusermod'
else:
command_name = 'usermod'
usermod_path = self.module.get_bin_path(command_name, True)
# for some reason, usermod --help cannot be used by non root
# on RH/Fedora, due to lack of execute bit for others
if not os.access(usermod_path, os.X_OK):
return False
cmd = [usermod_path, '--help']
(rc, data1, data2) = self.execute_command(cmd, obey_checkmode=False)
helpout = data1 + data2
# check if --append exists
lines = to_native(helpout).split('\n')
for line in lines:
if line.strip().startswith('-a, --append'):
return True
return False
def modify_user_usermod(self):
if self.local:
command_name = 'lusermod'
lgroupmod_cmd = self.module.get_bin_path('lgroupmod', True)
lgroupmod_add = set()
lgroupmod_del = set()
lchage_cmd = self.module.get_bin_path('lchage', True)
lexpires = None
else:
command_name = 'usermod'
cmd = [self.module.get_bin_path(command_name, True)]
info = self.user_info()
has_append = self._check_usermod_append()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
# get a list of all groups for the user, including the primary
current_groups = self.user_group_membership(exclude_primary=False)
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
if has_append:
cmd.append('-a')
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if self.local:
if self.append:
lgroupmod_add = set(groups).difference(current_groups)
lgroupmod_del = set()
else:
lgroupmod_add = set(groups).difference(current_groups)
lgroupmod_del = set(current_groups).difference(groups)
else:
if self.append and not has_append:
cmd.append('-A')
cmd.append(','.join(group_diff))
else:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None:
current_expires = int(self.user_password()[1])
if self.expires < time.gmtime(0):
if current_expires >= 0:
if self.local:
lexpires = -1
else:
cmd.append('-e')
cmd.append('')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires * 86400)
# Current expires is negative or we compare year, month, and day only
if current_expires < 0 or current_expire_date[:3] != self.expires[:3]:
if self.local:
# Convert seconds since Epoch to days since Epoch
lexpires = int(math.floor(self.module.params['expires'])) // 86400
else:
cmd.append('-e')
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
# Lock if no password or unlocked, unlock only if locked
if self.password_lock and not info[1].startswith('!'):
cmd.append('-L')
elif self.password_lock is False and info[1].startswith('!'):
# usermod will refuse to unlock a user with no password, module shows 'changed' regardless
cmd.append('-U')
if self.update_password == 'always' and self.password is not None and info[1].lstrip('!') != self.password.lstrip('!'):
# Remove options that are mutually exclusive with -p
cmd = [c for c in cmd if c not in ['-U', '-L']]
cmd.append('-p')
if self.password_lock:
# Lock the account and set the hash in a single command
cmd.append('!%s' % self.password)
else:
cmd.append(self.password)
(rc, out, err) = (None, '', '')
# skip if no usermod changes to be made
if len(cmd) > 1:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if not self.local or not (rc is None or rc == 0):
return (rc, out, err)
if lexpires is not None:
(rc, _out, _err) = self.execute_command([lchage_cmd, '-E', to_native(lexpires), self.name])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
if len(lgroupmod_add) == 0 and len(lgroupmod_del) == 0:
return (rc, out, err)
for add_group in lgroupmod_add:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-M', self.name, add_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
for del_group in lgroupmod_del:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-m', self.name, del_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
return (rc, out, err)
def group_exists(self, group):
try:
# Try group as a gid first
grp.getgrgid(int(group))
return True
except (ValueError, KeyError):
try:
grp.getgrnam(group)
return True
except KeyError:
return False
def group_info(self, group):
if not self.group_exists(group):
return False
try:
# Try group as a gid first
return list(grp.getgrgid(int(group)))
except (ValueError, KeyError):
return list(grp.getgrnam(group))
def get_groups_set(self, remove_existing=True):
if self.groups is None:
return None
info = self.user_info()
groups = set(x.strip() for x in self.groups.split(',') if x)
for g in groups.copy():
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
if info and remove_existing and self.group_info(g)[2] == info[3]:
groups.remove(g)
return groups
def user_group_membership(self, exclude_primary=True):
''' Return a list of groups the user belongs to '''
groups = []
info = self.get_pwd_info()
for group in grp.getgrall():
if self.name in group.gr_mem:
# Exclude the user's primary group by default
if not exclude_primary:
groups.append(group[0])
else:
if info[3] != group.gr_gid:
groups.append(group[0])
return groups
def user_exists(self):
# The pwd module does not distinguish between local and directory accounts.
# Its output cannot be used to determine whether or not an account exists locally.
# It returns True if the account exists locally or in the directory, so instead
# look in the local PASSWORD file for an existing account.
if self.local:
if not os.path.exists(self.PASSWORDFILE):
self.module.fail_json(msg="'local: true' specified but unable to find local account file {0} to parse.".format(self.PASSWORDFILE))
exists = False
name_test = '{0}:'.format(self.name)
with open(self.PASSWORDFILE, 'rb') as f:
reversed_lines = f.readlines()[::-1]
for line in reversed_lines:
if line.startswith(to_bytes(name_test)):
exists = True
break
if not exists:
self.module.warn(
"'local: true' specified and user '{name}' was not found in {file}. "
"The local user account may already exist if the local account database exists "
"somewhere other than {file}.".format(file=self.PASSWORDFILE, name=self.name))
return exists
else:
try:
if pwd.getpwnam(self.name):
return True
except KeyError:
return False
def get_pwd_info(self):
if not self.user_exists():
return False
return list(pwd.getpwnam(self.name))
def user_info(self):
if not self.user_exists():
return False
info = self.get_pwd_info()
if len(info[1]) == 1 or len(info[1]) == 0:
info[1] = self.user_password()[0]
return info
def user_password(self):
passwd = ''
expires = ''
if HAVE_SPWD:
try:
passwd = spwd.getspnam(self.name)[1]
expires = spwd.getspnam(self.name)[7]
return passwd, expires
except KeyError:
return passwd, expires
except OSError as e:
# Python 3.6 raises PermissionError instead of KeyError
# Due to absence of PermissionError in python2.7 need to check
# errno
if e.errno in (errno.EACCES, errno.EPERM, errno.ENOENT):
return passwd, expires
raise
if not self.user_exists():
return passwd, expires
elif self.SHADOWFILE:
passwd, expires = self.parse_shadow_file()
return passwd, expires
def parse_shadow_file(self):
passwd = ''
expires = ''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
passwd = line.split(':')[1]
expires = line.split(':')[self.SHADOWFILE_EXPIRE_INDEX] or -1
return passwd, expires
def get_ssh_key_path(self):
info = self.user_info()
if os.path.isabs(self.ssh_file):
ssh_key_file = self.ssh_file
else:
if not os.path.exists(info[5]) and not self.module.check_mode:
raise Exception('User %s home directory does not exist' % self.name)
ssh_key_file = os.path.join(info[5], self.ssh_file)
return ssh_key_file
def ssh_key_gen(self):
info = self.user_info()
overwrite = None
try:
ssh_key_file = self.get_ssh_key_path()
except Exception as e:
return (1, '', to_native(e))
ssh_dir = os.path.dirname(ssh_key_file)
if not os.path.exists(ssh_dir):
if self.module.check_mode:
return (0, '', '')
try:
os.mkdir(ssh_dir, int('0700', 8))
os.chown(ssh_dir, info[2], info[3])
except OSError as e:
return (1, '', 'Failed to create %s: %s' % (ssh_dir, to_native(e)))
if os.path.exists(ssh_key_file):
if self.force:
# ssh-keygen doesn't support overwriting the key interactively, so send 'y' to confirm
overwrite = 'y'
else:
return (None, 'Key already exists, use "force: yes" to overwrite', '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-t')
cmd.append(self.ssh_type)
if self.ssh_bits > 0:
cmd.append('-b')
cmd.append(self.ssh_bits)
cmd.append('-C')
cmd.append(self.ssh_comment)
cmd.append('-f')
cmd.append(ssh_key_file)
if self.ssh_passphrase is not None:
if self.module.check_mode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
master_in_fd, slave_in_fd = pty.openpty()
master_out_fd, slave_out_fd = pty.openpty()
master_err_fd, slave_err_fd = pty.openpty()
env = os.environ.copy()
env['LC_ALL'] = 'C'
try:
p = subprocess.Popen([to_bytes(c) for c in cmd],
stdin=slave_in_fd,
stdout=slave_out_fd,
stderr=slave_err_fd,
preexec_fn=os.setsid,
env=env)
out_buffer = b''
err_buffer = b''
while p.poll() is None:
r, w, e = select.select([master_out_fd, master_err_fd], [], [], 1)
first_prompt = b'Enter passphrase (empty for no passphrase):'
second_prompt = b'Enter same passphrase again'
prompt = first_prompt
for fd in r:
if fd == master_out_fd:
chunk = os.read(master_out_fd, 10240)
out_buffer += chunk
if prompt in out_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
else:
chunk = os.read(master_err_fd, 10240)
err_buffer += chunk
if prompt in err_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
if b'Overwrite (y/n)?' in out_buffer or b'Overwrite (y/n)?' in err_buffer:
# The key was created between us checking for existence and now
return (None, 'Key already exists', '')
rc = p.returncode
out = to_native(out_buffer)
err = to_native(err_buffer)
except OSError as e:
return (1, '', to_native(e))
else:
cmd.append('-N')
cmd.append('')
(rc, out, err) = self.execute_command(cmd, data=overwrite)
if rc == 0 and not self.module.check_mode:
# If the keys were successfully created, we should be able
# to tweak ownership.
os.chown(ssh_key_file, info[2], info[3])
os.chown('%s.pub' % ssh_key_file, info[2], info[3])
return (rc, out, err)
def ssh_key_fingerprint(self):
ssh_key_file = self.get_ssh_key_path()
if not os.path.exists(ssh_key_file):
return (1, 'SSH Key file %s does not exist' % ssh_key_file, '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-l')
cmd.append('-f')
cmd.append(ssh_key_file)
return self.execute_command(cmd, obey_checkmode=False)
def get_ssh_public_key(self):
ssh_public_key_file = '%s.pub' % self.get_ssh_key_path()
try:
with open(ssh_public_key_file, 'r') as f:
ssh_public_key = f.read().strip()
except IOError:
return None
return ssh_public_key
def create_user(self):
# by default we use the create_user_useradd method
return self.create_user_useradd()
def remove_user(self):
# by default we use the remove_user_userdel method
return self.remove_user_userdel()
def modify_user(self):
# by default we use the modify_user_usermod method
return self.modify_user_usermod()
def create_homedir(self, path):
if not os.path.exists(path):
if self.skeleton is not None:
skeleton = self.skeleton
else:
skeleton = '/etc/skel'
if os.path.exists(skeleton):
try:
shutil.copytree(skeleton, path, symlinks=True)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
else:
try:
os.makedirs(path)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# get umask from /etc/login.defs and set correct home mode
if os.path.exists(self.LOGIN_DEFS):
with open(self.LOGIN_DEFS, 'r') as f:
for line in f:
m = re.match(r'^UMASK\s+(\d+)$', line)
if m:
umask = int(m.group(1), 8)
mode = 0o777 & ~umask
try:
os.chmod(path, mode)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
def chown_homedir(self, uid, gid, path):
try:
os.chown(path, uid, gid)
for root, dirs, files in os.walk(path):
for d in dirs:
os.chown(os.path.join(root, d), uid, gid)
for f in files:
os.chown(os.path.join(root, f), uid, gid)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# ===========================================
class FreeBsdUser(User):
"""
This is a FreeBSD User manipulation class - it uses the pw command
to manipulate the user database, followed by the chpass command
to change the password.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'FreeBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
SHADOWFILE_EXPIRE_INDEX = 6
DATE_FORMAT = '%d-%b-%Y'
def _handle_lock(self):
info = self.user_info()
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'lock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'unlock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
return (None, '', '')
def remove_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'userdel',
'-n',
self.name
]
if self.remove:
cmd.append('-r')
return self.execute_command(cmd)
def create_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'useradd',
'-n',
self.name,
]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.expires is not None:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('0')
else:
cmd.append(str(calendar.timegm(self.expires)))
# system cannot be handled currently - should we error if it's requested?
# create the user
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.password is not None:
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
_rc, _out, _err = self.execute_command(cmd)
if rc is None:
rc = _rc
out += _out
err += _err
# we have to lock/unlock the password in a distinct command
_rc, _out, _err = self._handle_lock()
if rc is None:
rc = _rc
out += _out
err += _err
return (rc, out, err)
def modify_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'usermod',
'-n',
self.name
]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
if (info[5] != self.home and self.move_home) or (not os.path.exists(self.home) and self.create_home):
cmd.append('-m')
if info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
user_login_class = line.split(':')[4]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.expires is not None:
current_expires = int(self.user_password()[1])
# If expiration is negative or zero and the current expiration is greater than zero, disable expiration.
# In OpenBSD, setting expiration to zero disables expiration. It does not expire the account.
if self.expires <= time.gmtime(0):
if current_expires > 0:
cmd.append('-e')
cmd.append('0')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires)
# Current expires is negative or we compare year, month, and day only
if current_expires <= 0 or current_expire_date[:3] != self.expires[:3]:
cmd.append('-e')
cmd.append(str(calendar.timegm(self.expires)))
(rc, out, err) = (None, '', '')
# modify the user if cmd will do anything
if cmd_len != len(cmd):
(rc, _out, _err) = self.execute_command(cmd)
out += _out
err += _err
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.update_password == 'always' and self.password is not None and info[1].lstrip('*LOCKED*') != self.password.lstrip('*LOCKED*'):
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
_rc, _out, _err = self.execute_command(cmd)
if rc is None:
rc = _rc
out += _out
err += _err
# we have to lock/unlock the password in a distinct command
_rc, _out, _err = self._handle_lock()
if rc is None:
rc = _rc
out += _out
err += _err
return (rc, out, err)
class DragonFlyBsdUser(FreeBsdUser):
"""
This is a DragonFlyBSD User manipulation class - it inherits the
FreeBsdUser class behaviors, such as using the pw command to
manipulate the user database, followed by the chpass command
to change the password.
"""
platform = 'DragonFly'
class OpenBSDUser(User):
"""
This is a OpenBSD User manipulation class.
Main differences are that OpenBSD:-
- has no concept of "system" account.
- has no force delete user
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'OpenBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None and self.password != '*':
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups_option = '-S'
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_option = '-G'
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append(groups_option)
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
userinfo_cmd = [self.module.get_bin_path('userinfo', True), self.name]
(rc, out, err) = self.execute_command(userinfo_cmd, obey_checkmode=False)
for line in out.splitlines():
tokens = line.split()
if tokens[0] == 'class' and len(tokens) == 2:
user_login_class = tokens[1]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.password_lock and not info[1].startswith('*'):
cmd.append('-Z')
elif self.password_lock is False and info[1].startswith('*'):
cmd.append('-U')
if self.update_password == 'always' and self.password is not None \
and self.password != '*' and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class NetBSDUser(User):
"""
This is a NetBSD User manipulation class.
Main differences are that NetBSD:-
- has no concept of "system" account.
- has no force delete user
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'NetBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups = set(current_groups).union(groups)
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd.append('-C yes')
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd.append('-C no')
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class SunOS(User):
"""
This is a SunOS User manipulation class - The main difference between
this class and the generic user class is that Solaris-type distros
don't support the concept of a "system" account and we need to
edit the /etc/shadow file manually to set a password. (Ugh)
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- user_info()
"""
platform = 'SunOS'
distribution = None
SHADOWFILE = '/etc/shadow'
USER_ATTR = '/etc/user_attr'
def get_password_defaults(self):
# Read password aging defaults
try:
minweeks = ''
maxweeks = ''
warnweeks = ''
with open("/etc/default/passwd", 'r') as f:
for line in f:
line = line.strip()
if (line.startswith('#') or line == ''):
continue
m = re.match(r'^([^#]*)#(.*)$', line)
if m: # The line contains a hash / comment
line = m.group(1)
key, value = line.split('=')
if key == "MINWEEKS":
minweeks = value.rstrip('\n')
elif key == "MAXWEEKS":
maxweeks = value.rstrip('\n')
elif key == "WARNWEEKS":
warnweeks = value.rstrip('\n')
except Exception as err:
self.module.fail_json(msg="failed to read /etc/default/passwd: %s" % to_native(err))
return (minweeks, maxweeks, warnweeks)
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.profile is not None:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None:
cmd.append('-R')
cmd.append(self.role)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if not self.module.check_mode:
# we have to set the password by editing the /etc/shadow file
if self.password is not None:
self.backup_shadow()
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
try:
fields[3] = str(int(minweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if maxweeks:
try:
fields[4] = str(int(maxweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if warnweeks:
try:
fields[5] = str(int(warnweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups.update(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.profile is not None and info[7] != self.profile:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None and info[8] != self.authorization:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None and info[9] != self.role:
cmd.append('-R')
cmd.append(self.role)
# modify the user if cmd will do anything
if cmd_len != len(cmd):
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
else:
(rc, out, err) = (None, '', '')
# we have to set the password by editing the /etc/shadow file
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
self.backup_shadow()
(rc, out, err) = (0, '', '')
if not self.module.check_mode:
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
fields[3] = str(int(minweeks) * 7)
if maxweeks:
fields[4] = str(int(maxweeks) * 7)
if warnweeks:
fields[5] = str(int(warnweeks) * 7)
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
rc = 0
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def user_info(self):
info = super(SunOS, self).user_info()
if info:
info += self._user_attr_info()
return info
def _user_attr_info(self):
info = [''] * 3
with open(self.USER_ATTR, 'r') as file_handler:
for line in file_handler:
lines = line.strip().split('::::')
if lines[0] == self.name:
tmp = dict(x.split('=') for x in lines[1].split(';'))
info[0] = tmp.get('profiles', '')
info[1] = tmp.get('auths', '')
info[2] = tmp.get('roles', '')
return info
class DarwinUser(User):
"""
This is a Darwin macOS User manipulation class.
Main differences are that Darwin:-
- Handles accounts in a database managed by dscl(1)
- Has no useradd/groupadd
- Does not create home directories
- User password must be cleartext
- UID must be given
- System users must be under 500
This overrides the following methods from the generic class:-
- user_exists()
- create_user()
- remove_user()
- modify_user()
"""
platform = 'Darwin'
distribution = None
SHADOWFILE = None
dscl_directory = '.'
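# Each tuple below maps an Ansible option name to the dscl(1) attribute key
# that create_user() and modify_user() write via `dscl . -create`.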
fields = [
('comment', 'RealName'),
('home', 'NFSHomeDirectory'),
('shell', 'UserShell'),
('uid', 'UniqueID'),
('group', 'PrimaryGroupID'),
('hidden', 'IsHidden'),
]
def __init__(self, module):
super(DarwinUser, self).__init__(module)
# make the user hidden if the option is set, or defer to the system option
if self.hidden is None:
if self.system:
self.hidden = 1
elif self.hidden:
self.hidden = 1
else:
self.hidden = 0
# add hidden to processing if set and not already tracked in fields
if self.hidden is not None and ('hidden', 'IsHidden') not in self.fields:
self.fields.append(('hidden', 'IsHidden'))
def _get_dscl(self):
return [self.module.get_bin_path('dscl', True), self.dscl_directory]
def _list_user_groups(self):
cmd = self._get_dscl()
cmd += ['-search', '/Groups', 'GroupMembership', self.name]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
groups = []
for line in out.splitlines():
if line.startswith(' ') or line.startswith(')'):
continue
groups.append(line.split()[0])
return groups
def _get_user_property(self, property):
'''Return user PROPERTY as given by dscl(1) read or None if not found.'''
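# Illustrative (not in the original): `dscl . -read /Users/alice UserShell`
# prints "UserShell: /bin/bash", from which this returns '/bin/bash'.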
cmd = self._get_dscl()
cmd += ['-read', '/Users/%s' % self.name, property]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
return None
# from dscl(1)
# if property contains embedded spaces, the list will instead be
# displayed one entry per line, starting on the line after the key.
lines = out.splitlines()
# sys.stderr.write('*** |%s| %s -> %s\n' % (property, out, lines))
if len(lines) == 1:
return lines[0].split(': ')[1]
else:
if len(lines) > 2:
return '\n'.join([lines[1].strip()] + lines[2:])
else:
if len(lines) == 2:
return lines[1].strip()
else:
return None
def _get_next_uid(self, system=None):
'''
Return the next available uid. If system=True, then
uid should be below 500, if possible.
'''
cmd = self._get_dscl()
cmd += ['-list', '/Users', 'UniqueID']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
self.module.fail_json(
msg="Unable to get the next available uid",
rc=rc,
out=out,
err=err
)
max_uid = 0
max_system_uid = 0
for line in out.splitlines():
current_uid = int(line.split(' ')[-1])
if max_uid < current_uid:
max_uid = current_uid
if max_system_uid < current_uid and current_uid < 500:
max_system_uid = current_uid
if system and (0 < max_system_uid < 499):
return max_system_uid + 1
return max_uid + 1
def _change_user_password(self):
'''Change password for SELF.NAME against SELF.PASSWORD.
Please note that password must be cleartext.
'''
# some documentation on how is stored passwords on OSX:
# http://blog.lostpassword.com/2012/07/cracking-mac-os-x-lion-accounts-passwords/
# http://null-byte.wonderhowto.com/how-to/hack-mac-os-x-lion-passwords-0130036/
# http://pastebin.com/RYqxi7Ca
# on OSX 10.8+ hash is SALTED-SHA512-PBKDF2
# https://pythonhosted.org/passlib/lib/passlib.hash.pbkdf2_digest.html
# https://gist.github.com/nueh/8252572
cmd = self._get_dscl()
if self.password:
cmd += ['-passwd', '/Users/%s' % self.name, self.password]
else:
cmd += ['-create', '/Users/%s' % self.name, 'Password', '*']
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Error when changing password', err=err, out=out, rc=rc)
return (rc, out, err)
def _make_group_numerical(self):
'''Convert SELF.GROUP to its string numerical value suitable for dscl.'''
if self.group is None:
self.group = 'nogroup'
try:
self.group = grp.getgrnam(self.group).gr_gid
except KeyError:
self.module.fail_json(msg='Group "%s" not found. Try to create it first using "group" module.' % self.group)
# We need to pass a string to dscl
self.group = str(self.group)
def __modify_group(self, group, action):
'''Add or remove SELF.NAME to or from GROUP depending on ACTION.
ACTION can be 'add' or 'remove' otherwise 'remove' is assumed. '''
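# Illustrative: __modify_group('staff', 'add') runs
# `dseditgroup -o edit -a <user> -t user staff`.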
if action == 'add':
option = '-a'
else:
option = '-d'
cmd = ['dseditgroup', '-o', 'edit', option, self.name, '-t', 'user', group]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot %s user "%s" to group "%s".'
% (action, self.name, group), err=err, out=out, rc=rc)
return (rc, out, err)
def _modify_group(self):
'''Synchronize SELF.NAME's group membership with SELF.GROUPS,
adding and removing groups as needed. Returns (rc, out, err, changed).'''
rc = 0
out = ''
err = ''
changed = False
current = set(self._list_user_groups())
if self.groups is not None:
target = set(self.groups.split(','))
else:
target = set([])
if self.append is False:
for remove in current - target:
(_rc, _out, _err) = self.__modify_group(remove, 'remove')
rc += _rc
out += _out
err += _err
changed = True
for add in target - current:
(_rc, _out, _err) = self.__modify_group(add, 'add')
rc += _rc
out += _out
err += _err
changed = True
return (rc, out, err, changed)
def _update_system_user(self):
'''Hide or show user on the login window according to SELF.SYSTEM.
Returns 0 if a change has been made, None otherwise.'''
plist_file = '/Library/Preferences/com.apple.loginwindow.plist'
# http://support.apple.com/kb/HT5017?viewlocale=en_US
cmd = ['defaults', 'read', plist_file, 'HiddenUsersList']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
# returned value is
# (
# "_userA",
# "_UserB",
# userc
# )
hidden_users = []
for x in out.splitlines()[1:-1]:
try:
x = x.split('"')[1]
except IndexError:
x = x.strip()
hidden_users.append(x)
if self.system:
if self.name not in hidden_users:
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array-add', self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot add user "%s" to hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
else:
if self.name in hidden_users:
del (hidden_users[hidden_users.index(self.name)])
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array'] + hidden_users
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot remove user "%s" from hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
def user_exists(self):
'''Check whether SELF.NAME is a known user on the system.'''
cmd = self._get_dscl()
cmd += ['-list', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
return rc == 0
def remove_user(self):
'''Delete SELF.NAME. If SELF.FORCE is true, remove its home directory.'''
info = self.user_info()
cmd = self._get_dscl()
cmd += ['-delete', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot delete user "%s".' % self.name, err=err, out=out, rc=rc)
if self.force:
if os.path.exists(info[5]):
shutil.rmtree(info[5])
out += "Removed %s" % info[5]
return (rc, out, err)
def create_user(self, command_name='dscl'):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot create user "%s".' % self.name, err=err, out=out, rc=rc)
self._make_group_numerical()
if self.uid is None:
self.uid = str(self._get_next_uid(self.system))
# Homedir is not created by default
if self.create_home:
if self.home is None:
self.home = '/Users/%s' % self.name
if not self.module.check_mode:
if not os.path.exists(self.home):
os.makedirs(self.home)
self.chown_homedir(int(self.uid), int(self.group), self.home)
# dscl sets shell to /usr/bin/false when UserShell is not specified
# so set the shell to /bin/bash when the user is not a system user
if not self.system and self.shell is None:
self.shell = '/bin/bash'
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot add property "%s" to user "%s".' % (field[0], self.name), err=err, out=out, rc=rc)
out += _out
err += _err
if rc != 0:
return (rc, _out, _err)
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
self._update_system_user()
# here we don't care about change status since it is a creation,
# thus changed is always true.
if self.groups:
(rc, _out, _err, changed) = self._modify_group()
out += _out
err += _err
return (rc, out, err)
def modify_user(self):
changed = None
out = ''
err = ''
if self.group:
self._make_group_numerical()
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
current = self._get_user_property(field[1])
if current is None or current != to_text(self.__dict__[field[0]]):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(
msg='Cannot update property "%s" for user "%s".'
% (field[0], self.name), err=err, out=out, rc=rc)
changed = rc
out += _out
err += _err
if self.update_password == 'always' and self.password is not None:
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
changed = rc
if self.groups:
(rc, _out, _err, _changed) = self._modify_group()
out += _out
err += _err
if _changed is True:
changed = rc
rc = self._update_system_user()
if rc == 0:
changed = rc
return (changed, out, err)
class AIX(User):
"""
This is an AIX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- parse_shadow_file()
"""
platform = 'AIX'
distribution = None
SHADOWFILE = '/etc/security/passwd'
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self, command_name='useradd'):
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.password is not None:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
# skip if no changes to be made
if len(cmd) == 1:
(rc, out, err) = (None, '', '')
else:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
(rc2, out2, err2) = self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
else:
(rc2, out2, err2) = (None, '', '')
if rc is not None:
return (rc, out + out2, err + err2)
else:
return (rc2, out + out2, err + err2)
def parse_shadow_file(self):
"""Example AIX shadowfile data:
nobody:
password = *
operator1:
password = {ssha512}06$xxxxxxxxxxxx....
lastupdate = 1549558094
test1:
password = *
lastupdate = 1553695126
"""
b_name = to_bytes(self.name)
b_passwd = b''
b_expires = b''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'rb') as bf:
b_lines = bf.readlines()
b_passwd_line = b''
b_expires_line = b''
try:
for index, b_line in enumerate(b_lines):
# Get password and lastupdate lines which come after the username
if b_line.startswith(b'%s:' % b_name):
b_passwd_line = b_lines[index + 1]
b_expires_line = b_lines[index + 2]
break
# Sanity check the lines because sometimes both are not present
if b' = ' in b_passwd_line:
b_passwd = b_passwd_line.split(b' = ', 1)[-1].strip()
if b' = ' in b_expires_line:
b_expires = b_expires_line.split(b' = ', 1)[-1].strip()
except IndexError:
self.module.fail_json(msg='Failed to parse shadow file %s' % self.SHADOWFILE)
passwd = to_native(b_passwd)
expires = to_native(b_expires) or -1
return passwd, expires
class HPUX(User):
"""
This is an HP-UX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'HP-UX'
distribution = None
SHADOWFILE = '/etc/shadow'
def create_user(self):
cmd = ['/usr/sam/lbin/useradd.sam']
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user(self):
cmd = ['/usr/sam/lbin/userdel.sam']
if self.force:
cmd.append('-F')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = ['/usr/sam/lbin/usermod.sam']
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-F')
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class BusyBox(User):
"""
This is the BusyBox class for use on systems that have adduser, deluser,
and delgroup commands. It overrides the following methods:
- create_user()
- remove_user()
- modify_user()
"""
def create_user(self):
cmd = [self.module.get_bin_path('adduser', True)]
cmd.append('-D')
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg='Group {0} does not exist'.format(self.group))
cmd.append('-G')
cmd.append(self.group)
if self.comment is not None:
cmd.append('-g')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-h')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if not self.create_home:
cmd.append('-H')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.system:
cmd.append('-S')
cmd.append(self.name)
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if self.password is not None:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Add to additional groups
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
add_cmd_bin = self.module.get_bin_path('adduser', True)
for group in groups:
cmd = [add_cmd_bin, self.name, group]
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
def remove_user(self):
cmd = [
self.module.get_bin_path('deluser', True),
self.name
]
if self.remove:
cmd.append('--remove-home')
return self.execute_command(cmd)
def modify_user(self):
current_groups = self.user_group_membership()
groups = []
rc = None
out = ''
err = ''
info = self.user_info()
add_cmd_bin = self.module.get_bin_path('adduser', True)
remove_cmd_bin = self.module.get_bin_path('delgroup', True)
# Manage group membership
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
for g in groups:
if g in group_diff:
add_cmd = [add_cmd_bin, self.name, g]
rc, out, err = self.execute_command(add_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
for g in group_diff:
if g not in groups and not self.append:
remove_cmd = [remove_cmd_bin, self.name, g]
rc, out, err = self.execute_command(remove_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Manage password
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
class Alpine(BusyBox):
"""
This is the Alpine User manipulation class. It inherits the BusyBox class
behaviors such as using adduser and deluser commands.
"""
platform = 'Linux'
distribution = 'Alpine'
def main():
ssh_defaults = dict(
bits=0,
type='rsa',
passphrase=None,
comment='ansible-generated on %s' % socket.gethostname()
)
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'present']),
name=dict(type='str', required=True, aliases=['user']),
uid=dict(type='int'),
non_unique=dict(type='bool', default=False),
group=dict(type='str'),
groups=dict(type='list', elements='str'),
comment=dict(type='str'),
home=dict(type='path'),
shell=dict(type='str'),
password=dict(type='str', no_log=True),
login_class=dict(type='str'),
# following options are specific to macOS
hidden=dict(type='bool'),
# following options are specific to selinux
seuser=dict(type='str'),
# following options are specific to userdel
force=dict(type='bool', default=False),
remove=dict(type='bool', default=False),
# following options are specific to useradd
create_home=dict(type='bool', default=True, aliases=['createhome']),
skeleton=dict(type='str'),
system=dict(type='bool', default=False),
# following options are specific to usermod
move_home=dict(type='bool', default=False),
append=dict(type='bool', default=False),
# following are specific to ssh key generation
generate_ssh_key=dict(type='bool'),
ssh_key_bits=dict(type='int', default=ssh_defaults['bits']),
ssh_key_type=dict(type='str', default=ssh_defaults['type']),
ssh_key_file=dict(type='path'),
ssh_key_comment=dict(type='str', default=ssh_defaults['comment']),
ssh_key_passphrase=dict(type='str', no_log=True),
update_password=dict(type='str', default='always', choices=['always', 'on_create'], no_log=False),
expires=dict(type='float'),
password_lock=dict(type='bool', no_log=False),
local=dict(type='bool'),
profile=dict(type='str'),
authorization=dict(type='str'),
role=dict(type='str'),
),
supports_check_mode=True,
)
user = User(module)
user.check_password_encrypted()
module.debug('User instantiated - platform %s' % user.platform)
if user.distribution:
module.debug('User instantiated - distribution %s' % user.distribution)
rc = None
out = ''
err = ''
result = {}
result['name'] = user.name
result['state'] = user.state
if user.state == 'absent':
if user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = user.remove_user()
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
result['force'] = user.force
result['remove'] = user.remove
elif user.state == 'present':
if not user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
# Check to see if the provided home path contains parent directories
# that do not exist.
path_needs_parents = False
if user.home and user.create_home:
parent = os.path.dirname(user.home)
if not os.path.isdir(parent):
path_needs_parents = True
(rc, out, err) = user.create_user()
# If the home path had parent directories that needed to be created,
# make sure file permissions are correct in the created home directory.
if path_needs_parents:
info = user.user_info()
if info is not False:
user.chown_homedir(info[2], info[3], user.home)
if module.check_mode:
result['system'] = user.name
else:
result['system'] = user.system
result['create_home'] = user.create_home
else:
# modify user (note: this function is check mode aware)
(rc, out, err) = user.modify_user()
result['append'] = user.append
result['move_home'] = user.move_home
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if user.password is not None:
result['password'] = 'NOT_LOGGING_PASSWORD'
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
if user.user_exists() and user.state == 'present':
info = user.user_info()
if info is False:
result['msg'] = "failed to look up user name: %s" % user.name
result['failed'] = True
result['uid'] = info[2]
result['group'] = info[3]
result['comment'] = info[4]
result['home'] = info[5]
result['shell'] = info[6]
if user.groups is not None:
result['groups'] = user.groups
# handle missing homedirs
info = user.user_info()
if user.home is None:
user.home = info[5]
if not os.path.exists(user.home) and user.create_home:
if not module.check_mode:
user.create_homedir(user.home)
user.chown_homedir(info[2], info[3], user.home)
result['changed'] = True
# deal with ssh key
if user.sshkeygen:
# generate ssh key (note: this function is check mode aware)
(rc, out, err) = user.ssh_key_gen()
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if rc == 0:
result['changed'] = True
(rc, out, err) = user.ssh_key_fingerprint()
if rc == 0:
result['ssh_fingerprint'] = out.strip()
else:
result['ssh_fingerprint'] = err.strip()
result['ssh_key_file'] = user.get_ssh_key_path()
result['ssh_public_key'] = user.get_ssh_public_key()
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,612 |
iptables module broken
|
##### SUMMARY
The module generates a command with syntax that the iptables executable does not support.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
iptables
##### ANSIBLE VERSION
```
ansible 2.9.6
config file = -SNIP-/ansible.cfg
configured module search path = ['-SNIP-/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.6_1/libexec/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.2 (default, Mar 11 2020, 15:23:03) [Clang 11.0.0 (clang-1100.0.33.17)]
```
##### CONFIGURATION
```
DEFAULT_BECOME = True
HOST_KEY_CHECKING = False
INTERPRETER_PYTHON = auto_silent
```
##### OS / ENVIRONMENT
OS Ansible is executed in: MacOSX
Destination OS: SUSE Linux Enterprise Server 12 SP5
##### STEPS TO REPRODUCE
Play:
```yaml
- name: create ajp port whitelist
iptables:
action: insert
chain: INPUT
comment: See Ticket123
destination_port: '8009'
policy: ACCEPT
protocol: tcp
source: '172.18.0.2'
```
##### EXPECTED RESULTS
Not to receive an error.
##### ACTUAL RESULTS
When executed, this doesn't work:
```
failed: [-SNIP-] (item=172.18.0.2) => {"ansible_loop_var": "item", "changed": false, "cmd": "/usr/sbin/iptables -t filter -L INPUT -p tcp -s 172.18.0.2 --destination-port 8009 -m comment --comment 'See Ticket123'", "item": "172.18.0.2", "msg": "iptables v1.4.21: Illegal option `-s' with this command\n\nTry `iptables -h' or 'iptables --help' for more information.", "rc": 2, "stderr": "iptables v1.4.21: Illegal option `-s' with this command\n\nTry `iptables -h' or 'iptables --help' for more information.\n", "stderr_lines": ["iptables v1.4.21: Illegal option `-s' with this command", "", "Try `iptables -h' or 'iptables --help' for more information."], "stdout": "", "stdout_lines": []}
```
|
https://github.com/ansible/ansible/issues/68612
|
https://github.com/ansible/ansible/pull/69152
|
11398aac09efbd3d039854ab6c3816b7b266479e
|
82b74f7fd770027ea5c903f57056f4742cac90e3
| 2020-04-01T11:34:34Z |
python
| 2021-01-27T20:24:53Z |
changelogs/fragments/68612_iptables.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,612 |
iptables module broken
|
##### SUMMARY
The module generates a command with syntax that the iptables executable does not support.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
iptables
##### ANSIBLE VERSION
```
ansible 2.9.6
config file = -SNIP-/ansible.cfg
configured module search path = ['-SNIP-/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.6_1/libexec/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.2 (default, Mar 11 2020, 15:23:03) [Clang 11.0.0 (clang-1100.0.33.17)]
```
##### CONFIGURATION
```
DEFAULT_BECOME = True
HOST_KEY_CHECKING = False
INTERPRETER_PYTHON = auto_silent
```
##### OS / ENVIRONMENT
OS Ansible is executed in: MacOSX
Destination OS: SUSE Linux Enterprise Server 12 SP5
##### STEPS TO REPRODUCE
Play:
```yaml
- name: create ajp port whitelist
iptables:
action: insert
chain: INPUT
comment: See Ticket123
destination_port: '8009'
policy: ACCEPT
protocol: tcp
source: '172.18.0.2'
```
##### EXPECTED RESULTS
Not to receive an error.
##### ACTUAL RESULTS
When executed, this doesn't work:
```
failed: [-SNIP-] (item=172.18.0.2) => {"ansible_loop_var": "item", "changed": false, "cmd": "/usr/sbin/iptables -t filter -L INPUT -p tcp -s 172.18.0.2 --destination-port 8009 -m comment --comment 'See Ticket123'", "item": "172.18.0.2", "msg": "iptables v1.4.21: Illegal option `-s' with this command\n\nTry `iptables -h' or 'iptables --help' for more information.", "rc": 2, "stderr": "iptables v1.4.21: Illegal option `-s' with this command\n\nTry `iptables -h' or 'iptables --help' for more information.\n", "stderr_lines": ["iptables v1.4.21: Illegal option `-s' with this command", "", "Try `iptables -h' or 'iptables --help' for more information."], "stdout": "", "stdout_lines": []}
```
|
https://github.com/ansible/ansible/issues/68612
|
https://github.com/ansible/ansible/pull/69152
|
11398aac09efbd3d039854ab6c3816b7b266479e
|
82b74f7fd770027ea5c903f57056f4742cac90e3
| 2020-04-01T11:34:34Z |
python
| 2021-01-27T20:24:53Z |
lib/ansible/modules/iptables.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Linus Unnebäck <[email protected]>
# Copyright: (c) 2017, Sébastien DA ROCHA <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: iptables
short_description: Modify iptables rules
version_added: "2.0"
author:
- Linus Unnebäck (@LinusU) <[email protected]>
- Sébastien DA ROCHA (@sebastiendarocha)
description:
- C(iptables) is used to set up, maintain, and inspect the tables of IP packet
filter rules in the Linux kernel.
- This module does not handle the saving and/or loading of rules, but rather
only manipulates the current rules that are present in memory. This is the
same as the behaviour of the C(iptables) and C(ip6tables) command which
this module uses internally.
notes:
- This module just deals with individual rules. If you need advanced
chaining of rules the recommended way is to template the iptables restore
file.
options:
table:
description:
- This option specifies the packet matching table which the command should operate on.
- If the kernel is configured with automatic module loading, an attempt will be made
to load the appropriate module for that table if it is not already there.
type: str
choices: [ filter, nat, mangle, raw, security ]
default: filter
state:
description:
- Whether the rule should be absent or present.
type: str
choices: [ absent, present ]
default: present
action:
description:
- Whether the rule should be appended at the bottom or inserted at the top.
- If the rule already exists the chain will not be modified.
type: str
choices: [ append, insert ]
default: append
version_added: "2.2"
rule_num:
description:
- Insert the rule as the given rule number.
- This works only with C(action=insert).
type: str
version_added: "2.5"
ip_version:
description:
- Which version of the IP protocol this rule should apply to.
type: str
choices: [ ipv4, ipv6 ]
default: ipv4
chain:
description:
- Specify the iptables chain to modify.
- This could be a user-defined chain or one of the standard iptables chains, like
C(INPUT), C(FORWARD), C(OUTPUT), C(PREROUTING), C(POSTROUTING), C(SECMARK) or C(CONNSECMARK).
type: str
protocol:
description:
- The protocol of the rule or of the packet to check.
- The specified protocol can be one of C(tcp), C(udp), C(udplite), C(icmp), C(ipv6-icmp) or C(icmpv6),
C(esp), C(ah), C(sctp) or the special keyword C(all), or it can be a numeric value,
representing one of these protocols or a different one.
- A protocol name from I(/etc/protocols) is also allowed.
- A C(!) argument before the protocol inverts the test.
- The number zero is equivalent to all.
- C(all) will match with all protocols and is taken as default when this option is omitted.
type: str
source:
description:
- Source specification.
- Address can be either a network name, a hostname, a network IP address
(with /mask), or a plain IP address.
- Hostnames will be resolved once only, before the rule is submitted to
the kernel. Please note that specifying any name to be resolved with
a remote query such as DNS is a really bad idea.
- The mask can be either a network mask or a plain number, specifying
the number of 1's at the left side of the network mask. Thus, a mask
of 24 is equivalent to 255.255.255.0. A C(!) argument before the
address specification inverts the sense of the address.
type: str
destination:
description:
- Destination specification.
- Address can be either a network name, a hostname, a network IP address
(with /mask), or a plain IP address.
- Hostnames will be resolved once only, before the rule is submitted to
the kernel. Please note that specifying any name to be resolved with
a remote query such as DNS is a really bad idea.
- The mask can be either a network mask or a plain number, specifying
the number of 1's at the left side of the network mask. Thus, a mask
of 24 is equivalent to 255.255.255.0. A C(!) argument before the
address specification inverts the sense of the address.
type: str
tcp_flags:
description:
- TCP flags specification.
- C(tcp_flags) expects a dict with the two keys C(flags) and C(flags_set).
type: dict
default: {}
version_added: "2.4"
suboptions:
flags:
description:
- List of flags you want to examine.
type: list
elements: str
flags_set:
description:
- Flags to be set.
type: list
elements: str
match:
description:
- Specifies a match to use, that is, an extension module that tests for
a specific property.
- The set of matches make up the condition under which a target is invoked.
- Matches are evaluated first to last if specified as an array and work in short-circuit
fashion, i.e. if one extension yields false, evaluation will stop.
type: list
elements: str
default: []
jump:
description:
- This specifies the target of the rule; i.e., what to do if the packet matches it.
- The target can be a user-defined chain (other than the one
this rule is in), one of the special builtin targets which decide the
fate of the packet immediately, or an extension (see EXTENSIONS
below).
- If this option is omitted in a rule (and the goto parameter
is not used), then matching the rule will have no effect on the
packet's fate, but the counters on the rule will be incremented.
type: str
gateway:
description:
- This specifies the IP address of host to send the cloned packets.
- This option is only valid when C(jump) is set to C(TEE).
type: str
version_added: "2.8"
log_prefix:
description:
- Specifies a log text for the rule. Only makes sense with a LOG jump.
type: str
version_added: "2.5"
log_level:
description:
- Logging level according to the syslogd-defined priorities.
- The value can be strings or numbers from 1-8.
- This parameter is only applicable if C(jump) is set to C(LOG).
type: str
version_added: "2.8"
choices: [ '0', '1', '2', '3', '4', '5', '6', '7', 'emerg', 'alert', 'crit', 'error', 'warning', 'notice', 'info', 'debug' ]
goto:
description:
- This specifies that the processing should continue in a user specified chain.
- Unlike the jump argument, return will not continue processing in
this chain but instead in the chain that called us via jump.
type: str
in_interface:
description:
- Name of an interface via which a packet was received (only for packets
entering the C(INPUT), C(FORWARD) and C(PREROUTING) chains).
- When the C(!) argument is used before the interface name, the sense is inverted.
- If the interface name ends in a C(+), then any interface which begins with
this name will match.
- If this option is omitted, any interface name will match.
type: str
out_interface:
description:
- Name of an interface via which a packet is going to be sent (for
packets entering the C(FORWARD), C(OUTPUT) and C(POSTROUTING) chains).
- When the C(!) argument is used before the interface name, the sense is inverted.
- If the interface name ends in a C(+), then any interface which begins
with this name will match.
- If this option is omitted, any interface name will match.
type: str
fragment:
description:
- This means that the rule only refers to second and further fragments
of fragmented packets.
- Since there is no way to tell the source or destination ports of such
a packet (or ICMP type), such a packet will not match any rules which specify them.
- When the "!" argument precedes fragment argument, the rule will only match head fragments,
or unfragmented packets.
type: str
set_counters:
description:
- This enables the administrator to initialize the packet and byte
counters of a rule (during C(INSERT), C(APPEND), C(REPLACE) operations).
type: str
source_port:
description:
- Source port or port range specification.
- This can either be a service name or a port number.
- An inclusive range can also be specified, using the format C(first:last).
- If the first port is omitted, C(0) is assumed; if the last is omitted, C(65535) is assumed.
- If the first port is greater than the second one they will be swapped.
type: str
destination_port:
description:
- "Destination port or port range specification. This can either be
a service name or a port number. An inclusive range can also be
specified, using the format first:last. If the first port is omitted,
'0' is assumed; if the last is omitted, '65535' is assumed. If the
first port is greater than the second one they will be swapped.
This is only valid if the rule also specifies one of the following
protocols: tcp, udp, dccp or sctp."
type: str
destination_ports:
description:
- This specifies multiple destination port numbers or port ranges to match in the multiport module.
- It can only be used in conjunction with the protocols tcp, udp, udplite, dccp and sctp.
type: list
elements: str
version_added: "2.11"
to_ports:
description:
- This specifies a destination port or range of ports to use; without
this, the destination port is never altered.
- This is only valid if the rule also specifies one of the protocol
C(tcp), C(udp), C(dccp) or C(sctp).
type: str
to_destination:
description:
- This specifies a destination address to use with C(DNAT).
- Without this, the destination address is never altered.
type: str
version_added: "2.1"
to_source:
description:
- This specifies a source address to use with C(SNAT).
- Without this, the source address is never altered.
type: str
version_added: "2.2"
syn:
description:
- This allows matching packets that have the SYN bit set and the ACK
and RST bits unset.
- When negated, this matches all packets with the RST or the ACK bits set.
type: str
choices: [ ignore, match, negate ]
default: ignore
version_added: "2.5"
set_dscp_mark:
description:
- This allows specifying a DSCP mark to be added to packets.
It takes either an integer or hex value.
- Mutually exclusive with C(set_dscp_mark_class).
type: str
version_added: "2.1"
set_dscp_mark_class:
description:
- This allows specifying a predefined DiffServ class which will be
translated to the corresponding DSCP mark.
- Mutually exclusive with C(set_dscp_mark).
type: str
version_added: "2.1"
comment:
description:
- This specifies a comment that will be added to the rule.
type: str
ctstate:
description:
- A list of the connection states to match in the conntrack module.
- Possible values are C(INVALID), C(NEW), C(ESTABLISHED), C(RELATED), C(UNTRACKED), C(SNAT), C(DNAT).
type: list
elements: str
default: []
src_range:
description:
- Specifies the source IP range to match in the iprange module.
type: str
version_added: "2.8"
dst_range:
description:
- Specifies the destination IP range to match in the iprange module.
type: str
version_added: "2.8"
match_set:
description:
- Specifies a set name which can be defined by ipset.
- Must be used together with the match_set_flags parameter.
- When the C(!) argument is prepended then it inverts the rule.
- Uses the iptables set extension.
type: str
version_added: "2.11"
match_set_flags:
description:
- Specifies the necessary flags for the match_set parameter.
- Must be used together with the match_set parameter.
- Uses the iptables set extension.
type: str
choices: [ "src", "dst", "src,dst", "dst,src" ]
version_added: "2.11"
limit:
description:
- Specifies the maximum average number of matches to allow per second.
- The number can specify units explicitly, using `/second', `/minute',
`/hour' or `/day', or parts of them (so `5/second' is the same as
`5/s').
type: str
limit_burst:
description:
- Specifies the maximum burst before the above limit kicks in.
type: str
version_added: "2.1"
uid_owner:
description:
- Specifies the UID or username to use in match by owner rule.
- From Ansible 2.6, when the C(!) argument is prepended, it inverts
the rule to apply instead to all users except the one specified.
type: str
version_added: "2.1"
gid_owner:
description:
- Specifies the GID or group to use in match by owner rule.
type: str
version_added: "2.9"
reject_with:
description:
- 'Specifies the error packet type to return while rejecting. It implies
"jump: REJECT".'
type: str
version_added: "2.1"
icmp_type:
description:
- This allows specification of the ICMP type, which can be a numeric
ICMP type, type/code pair, or one of the ICMP type names shown by the
command 'iptables -p icmp -h'
type: str
version_added: "2.2"
flush:
description:
- Flushes the specified table and chain of all rules.
- If no chain is specified then the entire table is purged.
- Ignores all other parameters.
type: bool
default: false
version_added: "2.2"
policy:
description:
- Set the policy for the chain to the given target.
- Only built-in chains can have policies.
- This parameter requires the C(chain) parameter.
- Ignores all other parameters.
type: str
choices: [ ACCEPT, DROP, QUEUE, RETURN ]
version_added: "2.2"
wait:
description:
- Wait N seconds for the xtables lock to prevent multiple instances of
the program from running concurrently.
type: str
version_added: "2.10"
'''
EXAMPLES = r'''
- name: Block specific IP
ansible.builtin.iptables:
chain: INPUT
source: 8.8.8.8
jump: DROP
become: yes
- name: Forward port 80 to 8600
ansible.builtin.iptables:
table: nat
chain: PREROUTING
in_interface: eth0
protocol: tcp
match: tcp
destination_port: 80
jump: REDIRECT
to_ports: 8600
comment: Redirect web traffic to port 8600
become: yes
- name: Allow related and established connections
ansible.builtin.iptables:
chain: INPUT
ctstate: ESTABLISHED,RELATED
jump: ACCEPT
become: yes
- name: Allow new incoming SYN packets on TCP port 22 (SSH)
ansible.builtin.iptables:
chain: INPUT
protocol: tcp
destination_port: 22
ctstate: NEW
syn: match
jump: ACCEPT
comment: Accept new SSH connections.
- name: Match on IP ranges
ansible.builtin.iptables:
chain: FORWARD
src_range: 192.168.1.100-192.168.1.199
dst_range: 10.0.0.1-10.0.0.50
jump: ACCEPT
- name: Allow source IPs defined in ipset "admin_hosts" on port 22
ansible.builtin.iptables:
chain: INPUT
match_set: admin_hosts
match_set_flags: src
destination_port: 22
jump: ACCEPT
- name: Tag all outbound tcp packets with DSCP mark 8
ansible.builtin.iptables:
chain: OUTPUT
jump: DSCP
table: mangle
set_dscp_mark: 8
protocol: tcp
- name: Tag all outbound tcp packets with DSCP DiffServ class CS1
ansible.builtin.iptables:
chain: OUTPUT
jump: DSCP
table: mangle
set_dscp_mark_class: CS1
protocol: tcp
- name: Insert a rule on line 5
ansible.builtin.iptables:
chain: INPUT
protocol: tcp
destination_port: 8080
jump: ACCEPT
action: insert
rule_num: 5
- name: Set the policy for the INPUT chain to DROP
ansible.builtin.iptables:
chain: INPUT
policy: DROP
- name: Reject tcp with tcp-reset
ansible.builtin.iptables:
chain: INPUT
protocol: tcp
reject_with: tcp-reset
ip_version: ipv4
- name: Set tcp flags
ansible.builtin.iptables:
chain: OUTPUT
jump: DROP
protocol: tcp
tcp_flags:
flags: ALL
flags_set:
- ACK
- RST
- SYN
- FIN
- name: Iptables flush filter
ansible.builtin.iptables:
chain: "{{ item }}"
flush: yes
with_items: [ 'INPUT', 'FORWARD', 'OUTPUT' ]
- name: Iptables flush nat
ansible.builtin.iptables:
table: nat
chain: '{{ item }}'
flush: yes
with_items: [ 'INPUT', 'OUTPUT', 'PREROUTING', 'POSTROUTING' ]
- name: Log packets arriving into a user-defined chain
ansible.builtin.iptables:
chain: LOGGING
action: append
state: present
limit: 2/second
limit_burst: 20
log_prefix: "IPTABLES:INFO: "
log_level: info
- name: Allow connections on multiple ports
ansible.builtin.iptables:
chain: INPUT
protocol: tcp
destination_ports:
- "80"
- "443"
- "8081:8083"
jump: ACCEPT
'''
import re
from distutils.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
IPTABLES_WAIT_SUPPORT_ADDED = '1.4.20'
IPTABLES_WAIT_WITH_SECONDS_SUPPORT_ADDED = '1.6.0'
BINS = dict(
ipv4='iptables',
ipv6='ip6tables',
)
ICMP_TYPE_OPTIONS = dict(
ipv4='--icmp-type',
ipv6='--icmpv6-type',
)
def append_param(rule, param, flag, is_list):
if is_list:
for item in param:
append_param(rule, item, flag, False)
else:
if param is not None:
if param[0] == '!':
rule.extend(['!', flag, param[1:]])
else:
rule.extend([flag, param])
def append_tcp_flags(rule, param, flag):
if param:
if 'flags' in param and 'flags_set' in param:
rule.extend([flag, ','.join(param['flags']), ','.join(param['flags_set'])])
def append_match_flag(rule, param, flag, negatable):
if param == 'match':
rule.extend([flag])
elif negatable and param == 'negate':
rule.extend(['!', flag])
def append_csv(rule, param, flag):
if param:
rule.extend([flag, ','.join(param)])
def append_match(rule, param, match):
if param:
rule.extend(['-m', match])
def append_jump(rule, param, jump):
if param:
rule.extend(['-j', jump])
def append_wait(rule, param, flag):
if param:
rule.extend([flag, param])
def construct_rule(params):
rule = []
append_wait(rule, params['wait'], '-w')
append_param(rule, params['protocol'], '-p', False)
append_param(rule, params['source'], '-s', False)
append_param(rule, params['destination'], '-d', False)
append_param(rule, params['match'], '-m', True)
append_tcp_flags(rule, params['tcp_flags'], '--tcp-flags')
append_param(rule, params['jump'], '-j', False)
if params.get('jump') and params['jump'].lower() == 'tee':
append_param(rule, params['gateway'], '--gateway', False)
append_param(rule, params['log_prefix'], '--log-prefix', False)
append_param(rule, params['log_level'], '--log-level', False)
append_param(rule, params['to_destination'], '--to-destination', False)
append_match(rule, params['destination_ports'], 'multiport')
append_csv(rule, params['destination_ports'], '--dports')
append_param(rule, params['to_source'], '--to-source', False)
append_param(rule, params['goto'], '-g', False)
append_param(rule, params['in_interface'], '-i', False)
append_param(rule, params['out_interface'], '-o', False)
append_param(rule, params['fragment'], '-f', False)
append_param(rule, params['set_counters'], '-c', False)
append_param(rule, params['source_port'], '--source-port', False)
append_param(rule, params['destination_port'], '--destination-port', False)
append_param(rule, params['to_ports'], '--to-ports', False)
append_param(rule, params['set_dscp_mark'], '--set-dscp', False)
append_param(
rule,
params['set_dscp_mark_class'],
'--set-dscp-class',
False)
append_match_flag(rule, params['syn'], '--syn', True)
if 'conntrack' in params['match']:
append_csv(rule, params['ctstate'], '--ctstate')
elif 'state' in params['match']:
append_csv(rule, params['ctstate'], '--state')
elif params['ctstate']:
append_match(rule, params['ctstate'], 'conntrack')
append_csv(rule, params['ctstate'], '--ctstate')
if 'iprange' in params['match']:
append_param(rule, params['src_range'], '--src-range', False)
append_param(rule, params['dst_range'], '--dst-range', False)
elif params['src_range'] or params['dst_range']:
append_match(rule, params['src_range'] or params['dst_range'], 'iprange')
append_param(rule, params['src_range'], '--src-range', False)
append_param(rule, params['dst_range'], '--dst-range', False)
if 'set' in params['match']:
append_param(rule, params['match_set'], '--match-set', False)
append_match_flag(rule, 'match', params['match_set_flags'], False)
elif params['match_set']:
append_match(rule, params['match_set'], 'set')
append_param(rule, params['match_set'], '--match-set', False)
append_match_flag(rule, 'match', params['match_set_flags'], False)
append_match(rule, params['limit'] or params['limit_burst'], 'limit')
append_param(rule, params['limit'], '--limit', False)
append_param(rule, params['limit_burst'], '--limit-burst', False)
append_match(rule, params['uid_owner'], 'owner')
append_match_flag(rule, params['uid_owner'], '--uid-owner', True)
append_param(rule, params['uid_owner'], '--uid-owner', False)
append_match(rule, params['gid_owner'], 'owner')
append_match_flag(rule, params['gid_owner'], '--gid-owner', True)
append_param(rule, params['gid_owner'], '--gid-owner', False)
if params['jump'] is None:
append_jump(rule, params['reject_with'], 'REJECT')
append_param(rule, params['reject_with'], '--reject-with', False)
append_param(
rule,
params['icmp_type'],
ICMP_TYPE_OPTIONS[params['ip_version']],
False)
append_match(rule, params['comment'], 'comment')
append_param(rule, params['comment'], '--comment', False)
return rule
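# Example: with protocol='tcp', jump='ACCEPT' and destination_port='22' set
# (all other params None/empty and syn left at its default of 'ignore'),
# construct_rule() returns:
#     ['-p', 'tcp', '-j', 'ACCEPT', '--destination-port', '22']
# Flags appear in the fixed order of the append_* calls above, not in the
# order iptables prints rules back.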
def push_arguments(iptables_path, action, params, make_rule=True):
cmd = [iptables_path]
cmd.extend(['-t', params['table']])
cmd.extend([action, params['chain']])
if action == '-I' and params['rule_num']:
cmd.extend([params['rule_num']])
if make_rule:
cmd.extend(construct_rule(params))
return cmd
def check_present(iptables_path, module, params):
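# `iptables -C` (--check) exits 0 when an identical rule already exists in
# the chain; main() keys its changed/no-op decision off this probe.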
cmd = push_arguments(iptables_path, '-C', params)
rc, _, __ = module.run_command(cmd, check_rc=False)
return (rc == 0)
def append_rule(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-A', params)
module.run_command(cmd, check_rc=True)
def insert_rule(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-I', params)
module.run_command(cmd, check_rc=True)
def remove_rule(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-D', params)
module.run_command(cmd, check_rc=True)
def flush_table(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-F', params, make_rule=False)
module.run_command(cmd, check_rc=True)
def set_chain_policy(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-P', params, make_rule=False)
cmd.append(params['policy'])
module.run_command(cmd, check_rc=True)
def get_chain_policy(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-L', params)
rc, out, _ = module.run_command(cmd, check_rc=True)
chain_header = out.split("\n")[0]
result = re.search(r'\(policy ([A-Z]+)\)', chain_header)
if result:
return result.group(1)
return None
def get_iptables_version(iptables_path, module):
cmd = [iptables_path, '--version']
rc, out, _ = module.run_command(cmd, check_rc=True)
return out.split('v')[1].rstrip('\n')
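# e.g. 'iptables v1.4.21\n' -> '1.4.21'; nf_tables builds return strings
# like '1.8.4 (nf_tables)', whose leading numerals still compare as expected
# under LooseVersion.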
def main():
module = AnsibleModule(
supports_check_mode=True,
argument_spec=dict(
table=dict(type='str', default='filter', choices=['filter', 'nat', 'mangle', 'raw', 'security']),
state=dict(type='str', default='present', choices=['absent', 'present']),
action=dict(type='str', default='append', choices=['append', 'insert']),
ip_version=dict(type='str', default='ipv4', choices=['ipv4', 'ipv6']),
chain=dict(type='str'),
rule_num=dict(type='str'),
protocol=dict(type='str'),
wait=dict(type='str'),
source=dict(type='str'),
to_source=dict(type='str'),
destination=dict(type='str'),
to_destination=dict(type='str'),
match=dict(type='list', elements='str', default=[]),
tcp_flags=dict(type='dict',
options=dict(
flags=dict(type='list', elements='str'),
flags_set=dict(type='list', elements='str'))
),
jump=dict(type='str'),
gateway=dict(type='str'),
log_prefix=dict(type='str'),
log_level=dict(type='str',
choices=['0', '1', '2', '3', '4', '5', '6', '7',
'emerg', 'alert', 'crit', 'error',
'warning', 'notice', 'info', 'debug'],
default=None,
),
goto=dict(type='str'),
in_interface=dict(type='str'),
out_interface=dict(type='str'),
fragment=dict(type='str'),
set_counters=dict(type='str'),
source_port=dict(type='str'),
destination_port=dict(type='str'),
destination_ports=dict(type='list', elements='str', default=[]),
to_ports=dict(type='str'),
set_dscp_mark=dict(type='str'),
set_dscp_mark_class=dict(type='str'),
comment=dict(type='str'),
ctstate=dict(type='list', elements='str', default=[]),
src_range=dict(type='str'),
dst_range=dict(type='str'),
match_set=dict(type='str'),
match_set_flags=dict(type='str', choices=['src', 'dst', 'src,dst', 'dst,src']),
limit=dict(type='str'),
limit_burst=dict(type='str'),
uid_owner=dict(type='str'),
gid_owner=dict(type='str'),
reject_with=dict(type='str'),
icmp_type=dict(type='str'),
syn=dict(type='str', default='ignore', choices=['ignore', 'match', 'negate']),
flush=dict(type='bool', default=False),
policy=dict(type='str', choices=['ACCEPT', 'DROP', 'QUEUE', 'RETURN']),
),
mutually_exclusive=(
['set_dscp_mark', 'set_dscp_mark_class'],
['flush', 'policy'],
),
required_if=[
['jump', 'TEE', ['gateway']],
['jump', 'tee', ['gateway']],
]
)
args = dict(
changed=False,
failed=False,
ip_version=module.params['ip_version'],
table=module.params['table'],
chain=module.params['chain'],
flush=module.params['flush'],
rule=' '.join(construct_rule(module.params)),
state=module.params['state'],
)
ip_version = module.params['ip_version']
iptables_path = module.get_bin_path(BINS[ip_version], True)
# Check if chain option is required
if args['flush'] is False and args['chain'] is None:
module.fail_json(msg="Either chain or flush parameter must be specified.")
if module.params.get('log_prefix', None) or module.params.get('log_level', None):
if module.params['jump'] is None:
module.params['jump'] = 'LOG'
elif module.params['jump'] != 'LOG':
module.fail_json(msg="Logging options can only be used with the LOG jump target.")
# Check if wait option is supported
iptables_version = LooseVersion(get_iptables_version(iptables_path, module))
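# -w (wait for the xtables lock) exists from iptables 1.4.20; only 1.6.0+
# accepts a seconds argument, so any user-supplied wait value is blanked for
# 1.4.20 <= version < 1.6.0 and dropped entirely on older releases.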
if iptables_version >= LooseVersion(IPTABLES_WAIT_SUPPORT_ADDED):
if iptables_version < LooseVersion(IPTABLES_WAIT_WITH_SECONDS_SUPPORT_ADDED):
module.params['wait'] = ''
else:
module.params['wait'] = None
# Flush the table
if args['flush'] is True:
args['changed'] = True
if not module.check_mode:
flush_table(iptables_path, module, module.params)
# Set the policy
elif module.params['policy']:
current_policy = get_chain_policy(iptables_path, module, module.params)
if not current_policy:
module.fail_json(msg='Can\'t detect current policy')
changed = current_policy != module.params['policy']
args['changed'] = changed
if changed and not module.check_mode:
set_chain_policy(iptables_path, module, module.params)
else:
insert = (module.params['action'] == 'insert')
rule_is_present = check_present(iptables_path, module, module.params)
should_be_present = (args['state'] == 'present')
# Check if target is up to date
args['changed'] = (rule_is_present != should_be_present)
if args['changed'] is False:
# Target is already up to date
module.exit_json(**args)
# Check only; don't modify
if not module.check_mode:
if should_be_present:
if insert:
insert_rule(iptables_path, module, module.params)
else:
append_rule(iptables_path, module, module.params)
else:
remove_rule(iptables_path, module, module.params)
module.exit_json(**args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,998 |
apt_key module fails to download the key from the URL
|
##### SUMMARY
The apt_key module fails to download keys from a username/password-protected address if the response from that address redirects to a different URL.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
apt_key
##### ANSIBLE VERSION
```
ansible 2.9.10
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tadej/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.3 (default, May 29 2020, 00:00:00) [GCC 10.1.1 20200507 (Red Hat 10.1.1-1)]
```
##### CONFIGURATION
```paste below
```
##### OS / ENVIRONMENT
Ansible is running on Fedora 32, target system was Debian Buster.
##### STEPS TO REPRODUCE
On apt system run
$ ansible-playbook playbook.yaml
where *playbook.yaml* contains:
```yaml
- hosts: localhost
become: true
tasks:
- apt_key:
url: https://read-token:@packagecloud.io/repo/channel/gpgkey
```
##### EXPECTED RESULTS
The apt_key module should download the key and add it to the system.
##### ACTUAL RESULTS
The key file is not downloaded.
```paste below
TASK [sensu.sensu_go.install : Add apt key] *************************************************************************
task path: /vagrant/ansible_collections/sensu/sensu_go/roles/install/tasks/apt/prepare.yml:17
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: vagrant
<127.0.0.1> EXEC /bin/sh -c 'echo ~vagrant && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/vagrant/.ansible/tmp `"&& mkdir /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496 && echo ansible-tmp-1596123703.15-7698-147352890428496="` echo /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496 `" ) && sleep 0'
Using module file /home/vagrant/venv/local/lib/python2.7/site-packages/ansible/modules/packaging/os/apt_key.py
<127.0.0.1> PUT /home/vagrant/.ansible/tmp/ansible-local-73130sQlx2/tmpTka0Og TO /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/AnsiballZ_apt_key.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/ /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/AnsiballZ_apt_key.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-arcvvjrkaonnmhuagbribxaggrraarss ; /home/vagrant/venv/bin/python2 /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/AnsiballZ_apt_key.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"data": null,
"file": null,
"id": null,
"key": null,
"keyring": null,
"keyserver": null,
"state": "present",
"url": "https://<redacted>:@packagecloud.io/sensu/release-candidate/gpgkey",
"validate_certs": true
}
},
"msg": "Failed to download key at https://<redacted>:@packagecloud.io/sensu/release-candidate/gpgkey: HTTP Error 401: Unauthorized"
}
```
##### ADDITIONAL INFO
Curling the URL prints:
```
$ curl -v https://<redacted>:@packagecloud.io/sensu/release-candidate/gpgkey
...
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 302
< date: Thu, 30 Jul 2020 15:49:16 GMT
< content-type: text/html;charset=utf-8
< content-length: 0
< location: https://d18dy6y14fq314.cloudfront.net/188/13711/gpg/sensu-release-candidate-44E6C781E1E7FF69.pub.gpg?t=<redacted>
...
```
If we add the `-L` switch to `curl`, the file is downloaded just fine. I can also get the module to download the file if I force basic authentication in the `ansible.module_utils.urls.Request.open(...)` call, but the apt_key module does not have an argument that would control this behavior.
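A minimal sketch of that workaround using the existing `force_basic_auth` knob in `ansible.module_utils.urls.open_url` (the helper name below is made up for illustration; apt_key itself goes through `fetch_url`, which would need the same options plumbed through):

```python
from ansible.module_utils.urls import open_url


def fetch_key_with_forced_basic_auth(url, read_token):
    # force_basic_auth sends the Authorization header on the *first* request
    # instead of waiting for a 401 challenge, so the 302 redirect target is
    # fetched by an already-authorized client.
    rsp = open_url(url,
                   url_username=read_token,
                   url_password='',
                   force_basic_auth=True,
                   follow_redirects='all')
    return rsp.read()
```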
|
https://github.com/ansible/ansible/issues/70998
|
https://github.com/ansible/ansible/pull/73334
|
595413d11346b6f26bb3d9df2d8e05f2747508a3
|
5aa4295d74f5fc1dd7bbc1a311af85ad963e38d8
| 2020-07-30T15:53:28Z |
python
| 2021-01-27T23:40:58Z |
changelogs/fragments/apt_key_fixes.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,998 |
apt_key module fails to download the key from the URL
|
##### SUMMARY
The apt_key module fails to download keys from a username/password-protected address if the response from that address redirects to a different URL.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
apt_key
##### ANSIBLE VERSION
```
ansible 2.9.10
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tadej/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.3 (default, May 29 2020, 00:00:00) [GCC 10.1.1 20200507 (Red Hat 10.1.1-1)]
```
##### CONFIGURATION
```paste below
```
##### OS / ENVIRONMENT
Ansible is running on Fedora 32, target system was Debian Buster.
##### STEPS TO REPRODUCE
On apt system run
$ ansible-playbook playbook.yaml
where *playbook.yaml* contains:
```yaml
- hosts: localhost
become: true
tasks:
- apt_key:
url: https://read-token:@packagecloud.io/repo/channel/gpgkey
```
##### EXPECTED RESULTS
The apt_key module should download the key and add it to the system.
##### ACTUAL RESULTS
The key file is not downloaded.
```paste below
TASK [sensu.sensu_go.install : Add apt key] *************************************************************************
task path: /vagrant/ansible_collections/sensu/sensu_go/roles/install/tasks/apt/prepare.yml:17
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: vagrant
<127.0.0.1> EXEC /bin/sh -c 'echo ~vagrant && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/vagrant/.ansible/tmp `"&& mkdir /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496 && echo ansible-tmp-1596123703.15-7698-147352890428496="` echo /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496 `" ) && sleep 0'
Using module file /home/vagrant/venv/local/lib/python2.7/site-packages/ansible/modules/packaging/os/apt_key.py
<127.0.0.1> PUT /home/vagrant/.ansible/tmp/ansible-local-73130sQlx2/tmpTka0Og TO /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/AnsiballZ_apt_key.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/ /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/AnsiballZ_apt_key.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-arcvvjrkaonnmhuagbribxaggrraarss ; /home/vagrant/venv/bin/python2 /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/AnsiballZ_apt_key.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"data": null,
"file": null,
"id": null,
"key": null,
"keyring": null,
"keyserver": null,
"state": "present",
"url": "https://<redacted>:@packagecloud.io/sensu/release-candidate/gpgkey",
"validate_certs": true
}
},
"msg": "Failed to download key at https://<redacted>:@packagecloud.io/sensu/release-candidate/gpgkey: HTTP Error 401: Unauthorized"
}
```
##### ADDITIONAL INFO
Curling the URL prints:
```
$ curl -v https://<redacted>:@packagecloud.io/sensu/release-candidate/gpgkey
...
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 302
< date: Thu, 30 Jul 2020 15:49:16 GMT
< content-type: text/html;charset=utf-8
< content-length: 0
< location: https://d18dy6y14fq314.cloudfront.net/188/13711/gpg/sensu-release-candidate-44E6C781E1E7FF69.pub.gpg?t=<redacted>
...
```
If we add the `-L` switch to `curl`, the file is downloaded just fine. I can also get the module to download the file if I force basic authentication in the `ansible.module_utils.urls.Request.open(...)` call, but the apt_key module does not have an argument that would control this behavior.
|
https://github.com/ansible/ansible/issues/70998
|
https://github.com/ansible/ansible/pull/73334
|
595413d11346b6f26bb3d9df2d8e05f2747508a3
|
5aa4295d74f5fc1dd7bbc1a311af85ad963e38d8
| 2020-07-30T15:53:28Z |
python
| 2021-01-27T23:40:58Z |
lib/ansible/modules/apt_key.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2012, Jayson Vantuyl <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: apt_key
author:
- Jayson Vantuyl (@jvantuyl)
version_added: "1.0"
short_description: Add or remove an apt key
description:
- Add or remove an I(apt) key, optionally downloading it.
notes:
- Doesn't download the key unless it really needs it.
- As a sanity check, downloaded key id must match the one specified.
- "Use full fingerprint (40 characters) key ids to avoid key collisions.
To generate a full-fingerprint imported key: C(apt-key adv --list-public-keys --with-fingerprint --with-colons)."
- If you specify both the key id and the URL with C(state=present), the task can verify or add the key as needed.
- Adding a new key requires an apt cache update (e.g. using the M(ansible.builtin.apt) module's update_cache option).
- Supports C(check_mode).
requirements:
- gpg
options:
id:
description:
- The identifier of the key.
- Including this allows check mode to correctly report the changed state.
- If specifying a subkey's id, be aware that apt-key does not understand how to remove keys via a subkey id. Specify the primary key's id instead.
- This parameter is required when C(state) is set to C(absent).
type: str
data:
description:
- The keyfile contents to add to the keyring.
type: str
file:
description:
- The path to a keyfile on the remote server to add to the keyring.
type: path
keyring:
description:
- The full path to specific keyring file in C(/etc/apt/trusted.gpg.d/).
type: path
version_added: "1.3"
url:
description:
- The URL to retrieve key from.
type: str
keyserver:
description:
- The keyserver to retrieve key from.
type: str
version_added: "1.6"
state:
description:
- Ensures that the key is present (added) or absent (revoked).
type: str
choices: [ absent, present ]
default: present
validate_certs:
description:
- If C(no), SSL certificates for the target url will not be validated. This should only be used
on personally controlled sites using self-signed certificates.
type: bool
default: 'yes'
'''
EXAMPLES = '''
- name: Add an apt key by id from a keyserver
ansible.builtin.apt_key:
keyserver: keyserver.ubuntu.com
id: 36A1D7869245C8950F966E92D8576A8BA88D21E9
- name: Add an Apt signing key, uses whichever key is at the URL
ansible.builtin.apt_key:
url: https://ftp-master.debian.org/keys/archive-key-6.0.asc
state: present
- name: Add an Apt signing key, will not download if present
ansible.builtin.apt_key:
id: 9FED2BCBDCD29CDF762678CBAED4B06F473041FA
url: https://ftp-master.debian.org/keys/archive-key-6.0.asc
state: present
- name: Remove an Apt-specific signing key, leading 0x is valid
ansible.builtin.apt_key:
id: 0x9FED2BCBDCD29CDF762678CBAED4B06F473041FA
state: absent
# Use an armored file, since a utf-8 string is expected. Must be of "PGP PUBLIC KEY BLOCK" type.
- name: Add a key from a file on the Ansible server
ansible.builtin.apt_key:
data: "{{ lookup('file', 'apt.asc') }}"
state: present
- name: Add an Apt signing key to a specific keyring file
ansible.builtin.apt_key:
id: 9FED2BCBDCD29CDF762678CBAED4B06F473041FA
url: https://ftp-master.debian.org/keys/archive-key-6.0.asc
keyring: /etc/apt/trusted.gpg.d/debian.gpg
- name: Add Apt signing key on remote server to keyring
ansible.builtin.apt_key:
id: 9FED2BCBDCD29CDF762678CBAED4B06F473041FA
file: /tmp/apt.gpg
state: present
'''
RETURN = '''#'''
# FIXME: standardize into module_common
from traceback import format_exc
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
from ansible.module_utils.urls import fetch_url
apt_key_bin = None
def find_needed_binaries(module):
global apt_key_bin
apt_key_bin = module.get_bin_path('apt-key', required=True)
# FIXME: Is there a reason that gpg and grep are checked? Is it just
# cruft or does the apt .deb package not require them (and if they're not
# installed, /usr/bin/apt-key fails?)
module.get_bin_path('gpg', required=True)
module.get_bin_path('grep', required=True)
def parse_key_id(key_id):
"""validate the key_id and break it into segments
:arg key_id: The key_id as supplied by the user. A valid key_id will be
8, 16, or more hexadecimal chars with an optional leading ``0x``.
:returns: The portion of key_id suitable for apt-key del, the portion
suitable for comparisons with --list-public-keys, and the portion that
can be used with --recv-key. If key_id is long enough, these will be
the last 8 characters of key_id, the last 16 characters, and all of
key_id. If key_id is not long enough, some of the values will be the
same.
* apt-key del <= 1.10 has a bug with key_id != 8 chars
* apt-key adv --list-public-keys prints 16 chars
* apt-key adv --recv-key can take more chars
"""
# Make sure the key_id is valid hexadecimal
int(key_id, 16)
key_id = key_id.upper()
if key_id.startswith('0X'):
key_id = key_id[2:]
key_id_len = len(key_id)
if (key_id_len != 8 and key_id_len != 16) and key_id_len <= 16:
raise ValueError('key_id must be 8, 16, or 16+ hexadecimal characters in length')
short_key_id = key_id[-8:]
fingerprint = key_id
if key_id_len > 16:
fingerprint = key_id[-16:]
return short_key_id, fingerprint, key_id
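# e.g. parse_key_id('0x9FED2BCBDCD29CDF762678CBAED4B06F473041FA') returns
#     ('473041FA',                                   # last 8: apt-key del
#      'AED4B06F473041FA',                           # last 16: key listing
#      '9FED2BCBDCD29CDF762678CBAED4B06F473041FA')   # full id: --recv-key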
def all_keys(module, keyring, short_format):
if keyring:
cmd = "%s --keyring %s adv --list-public-keys --keyid-format=long" % (apt_key_bin, keyring)
else:
cmd = "%s adv --list-public-keys --keyid-format=long" % apt_key_bin
(rc, out, err) = module.run_command(cmd)
results = []
lines = to_native(out).split('\n')
for line in lines:
if (line.startswith("pub") or line.startswith("sub")) and "expired" not in line:
tokens = line.split()
code = tokens[1]
(len_type, real_code) = code.split("/")
results.append(real_code)
if short_format:
results = shorten_key_ids(results)
return results
def shorten_key_ids(key_id_list):
"""
Takes a list of key ids, and converts them to the 'short' format,
by reducing them to their last 8 characters.
"""
short = []
for key in key_id_list:
short.append(key[-8:])
return short
def download_key(module, url):
# FIXME: move get_url code to common, allow for in-memory D/L, support proxies
# and reuse here
if url is None:
module.fail_json(msg="needed a URL but was not specified")
try:
rsp, info = fetch_url(module, url)
if info['status'] != 200:
module.fail_json(msg="Failed to download key at %s: %s" % (url, info['msg']))
return rsp.read()
except Exception:
module.fail_json(msg="error getting key id from url: %s" % url, traceback=format_exc())
def import_key(module, keyring, keyserver, key_id):
if keyring:
cmd = "%s --keyring %s adv --no-tty --keyserver %s --recv %s" % (apt_key_bin, keyring, keyserver, key_id)
else:
cmd = "%s adv --no-tty --keyserver %s --recv %s" % (apt_key_bin, keyserver, key_id)
for retry in range(5):
lang_env = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C')
(rc, out, err) = module.run_command(cmd, environ_update=lang_env)
if rc == 0:
break
else:
# Out of retries
if rc == 2 and 'not found on keyserver' in out:
msg = 'Key %s not found on keyserver %s' % (key_id, keyserver)
module.fail_json(cmd=cmd, msg=msg)
else:
msg = "Error fetching key %s from keyserver: %s" % (key_id, keyserver)
module.fail_json(cmd=cmd, msg=msg, rc=rc, stdout=out, stderr=err)
return True
def add_key(module, keyfile, keyring, data=None):
if data is not None:
if keyring:
cmd = "%s --keyring %s add -" % (apt_key_bin, keyring)
else:
cmd = "%s add -" % apt_key_bin
(rc, out, err) = module.run_command(cmd, data=data, check_rc=True, binary_data=True)
else:
if keyring:
cmd = "%s --keyring %s add %s" % (apt_key_bin, keyring, keyfile)
else:
cmd = "%s add %s" % (apt_key_bin, keyfile)
(rc, out, err) = module.run_command(cmd, check_rc=True)
return True
def remove_key(module, key_id, keyring):
# FIXME: use module.run_command, fail at point of error and don't discard useful stdin/stdout
if keyring:
cmd = '%s --keyring %s del %s' % (apt_key_bin, keyring, key_id)
else:
cmd = '%s del %s' % (apt_key_bin, key_id)
(rc, out, err) = module.run_command(cmd, check_rc=True)
return True
def main():
module = AnsibleModule(
argument_spec=dict(
id=dict(type='str'),
url=dict(type='str'),
data=dict(type='str'),
file=dict(type='path'),
key=dict(type='str', removed_in_version='2.14', removed_from_collection='ansible.builtin'),
keyring=dict(type='path'),
validate_certs=dict(type='bool', default=True),
keyserver=dict(type='str'),
state=dict(type='str', default='present', choices=['absent', 'present']),
),
supports_check_mode=True,
mutually_exclusive=(('data', 'file', 'keyserver', 'url'),),
)
key_id = module.params['id']
url = module.params['url']
data = module.params['data']
filename = module.params['file']
keyring = module.params['keyring']
state = module.params['state']
keyserver = module.params['keyserver']
changed = False
fingerprint = short_key_id = key_id
short_format = False
if key_id:
try:
short_key_id, fingerprint, key_id = parse_key_id(key_id)
except ValueError:
module.fail_json(msg='Invalid key_id', id=key_id)
if len(fingerprint) == 8:
short_format = True
find_needed_binaries(module)
keys = all_keys(module, keyring, short_format)
return_values = {}
if state == 'present':
if fingerprint and fingerprint in keys:
module.exit_json(changed=False)
elif fingerprint and fingerprint not in keys and module.check_mode:
# TODO: Someday we could go further -- write keys out to
# a temporary file and then extract the key id from there via gpg
# to decide if the key is installed or not.
module.exit_json(changed=True)
else:
if not filename and not data and not keyserver:
data = download_key(module, url)
if filename:
add_key(module, filename, keyring)
elif keyserver:
import_key(module, keyring, keyserver, key_id)
else:
add_key(module, "-", keyring, data)
changed = False
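# changed is inferred from the key count before/after the add; note that a
# re-import which replaces an existing key without altering the count would
# be reported as unchanged.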
keys2 = all_keys(module, keyring, short_format)
if len(keys) != len(keys2):
changed = True
if fingerprint and fingerprint not in keys2:
module.fail_json(msg="key does not seem to have been added", id=key_id)
module.exit_json(changed=changed)
elif state == 'absent':
if not key_id:
module.fail_json(msg="key is required")
if fingerprint in keys:
if module.check_mode:
module.exit_json(changed=True)
# we use the "short" id: key_id[-8:], short_format=True
# it's a workaround for https://bugs.launchpad.net/ubuntu/+source/apt/+bug/1481871
if remove_key(module, short_key_id, keyring):
keys = all_keys(module, keyring, short_format)
if fingerprint in keys:
module.fail_json(msg="apt-key del did not return an error but the key was not removed (check that the id is correct and *not* a subkey)",
id=key_id)
changed = True
else:
# FIXME: module.fail_json or exit-json immediately at point of failure
module.fail_json(msg="error removing key_id", **return_values)
module.exit_json(changed=changed, **return_values)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,998 |
apt_key module fails to download the key from the URL
|
##### SUMMARY
The apt_key module fails to download keys from a username/password-protected address if the response from that address redirects to a different URL.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
apt_key
##### ANSIBLE VERSION
```
ansible 2.9.10
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tadej/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.3 (default, May 29 2020, 00:00:00) [GCC 10.1.1 20200507 (Red Hat 10.1.1-1)]
```
##### CONFIGURATION
```paste below
```
##### OS / ENVIRONMENT
Ansible is running on Fedora 32, target system was Debian Buster.
##### STEPS TO REPRODUCE
On apt system run
$ ansible-playbook playbook.yaml
where *playbook.yaml* contains:
```yaml
- hosts: localhost
become: true
tasks:
- apt_key:
url: https://read-token:@packagecloud.io/repo/channel/gpgkey
```
##### EXPECTED RESULTS
The apt_key module should download the key and add it to the system.
##### ACTUAL RESULTS
The key file is not downloaded.
```paste below
TASK [sensu.sensu_go.install : Add apt key] *************************************************************************
task path: /vagrant/ansible_collections/sensu/sensu_go/roles/install/tasks/apt/prepare.yml:17
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: vagrant
<127.0.0.1> EXEC /bin/sh -c 'echo ~vagrant && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/vagrant/.ansible/tmp `"&& mkdir /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496 && echo ansible-tmp-1596123703.15-7698-147352890428496="` echo /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496 `" ) && sleep 0'
Using module file /home/vagrant/venv/local/lib/python2.7/site-packages/ansible/modules/packaging/os/apt_key.py
<127.0.0.1> PUT /home/vagrant/.ansible/tmp/ansible-local-73130sQlx2/tmpTka0Og TO /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/AnsiballZ_apt_key.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/ /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/AnsiballZ_apt_key.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-arcvvjrkaonnmhuagbribxaggrraarss ; /home/vagrant/venv/bin/python2 /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/AnsiballZ_apt_key.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"data": null,
"file": null,
"id": null,
"key": null,
"keyring": null,
"keyserver": null,
"state": "present",
"url": "https://<redacted>:@packagecloud.io/sensu/release-candidate/gpgkey",
"validate_certs": true
}
},
"msg": "Failed to download key at https://<redacted>:@packagecloud.io/sensu/release-candidate/gpgkey: HTTP Error 401: Unauthorized"
}
```
##### ADDITIONAL INFO
Curling the URL prints:
```
$ curl -v https://<redacted>:@packagecloud.io/sensu/release-candidate/gpgkey
...
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 302
< date: Thu, 30 Jul 2020 15:49:16 GMT
< content-type: text/html;charset=utf-8
< content-length: 0
< location: https://d18dy6y14fq314.cloudfront.net/188/13711/gpg/sensu-release-candidate-44E6C781E1E7FF69.pub.gpg?t=<redacted>
...
```
If we add the `-L` switch to `curl`, the file is downloaded just fine. I can also get the module to download the file if I force basic authentication in the `ansible.module_utils.urls.Request.open(...)` call, but the apt_key module does not have an argument that would control this behavior.
|
https://github.com/ansible/ansible/issues/70998
|
https://github.com/ansible/ansible/pull/73334
|
595413d11346b6f26bb3d9df2d8e05f2747508a3
|
5aa4295d74f5fc1dd7bbc1a311af85ad963e38d8
| 2020-07-30T15:53:28Z |
python
| 2021-01-27T23:40:58Z |
test/integration/targets/apt_key/tasks/apt_key.yml
|
- name: run first docs example
apt_key:
keyserver: keyserver.ubuntu.com
id: 36A1D7869245C8950F966E92D8576A8BA88D21E9
register: apt_key_test0
- debug: var=apt_key_test0
- name: re-run first docs example
apt_key:
keyserver: keyserver.ubuntu.com
id: 36A1D7869245C8950F966E92D8576A8BA88D21E9
register: apt_key_test1
- name: validate results
assert:
that:
- 'apt_key_test0.changed is defined'
- 'apt_key_test0.changed'
- 'not apt_key_test1.changed'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,998 |
apt_key module fails to download the key from the URL
|
##### SUMMARY
The apt_key module fails to download keys from a username/password-protected address if the response from that address redirects to a different URL.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
apt_key
##### ANSIBLE VERSION
```
ansible 2.9.10
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tadej/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.3 (default, May 29 2020, 00:00:00) [GCC 10.1.1 20200507 (Red Hat 10.1.1-1)]
```
##### CONFIGURATION
```paste below
```
##### OS / ENVIRONMENT
Ansible is running on Fedora 32, target system was Debian Buster.
##### STEPS TO REPRODUCE
On apt system run
$ ansible-playbook playbook.yaml
where *playbook.yaml* contains:
```yaml
- hosts: localhost
become: true
tasks:
- apt_key:
url: https://read-token:@packagecloud.io/repo/channel/gpgkey
```
##### EXPECTED RESULTS
The apt_key module should download the key and add it to the system.
##### ACTUAL RESULTS
The key file is not downloaded.
```paste below
TASK [sensu.sensu_go.install : Add apt key] *************************************************************************
task path: /vagrant/ansible_collections/sensu/sensu_go/roles/install/tasks/apt/prepare.yml:17
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: vagrant
<127.0.0.1> EXEC /bin/sh -c 'echo ~vagrant && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/vagrant/.ansible/tmp `"&& mkdir /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496 && echo ansible-tmp-1596123703.15-7698-147352890428496="` echo /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496 `" ) && sleep 0'
Using module file /home/vagrant/venv/local/lib/python2.7/site-packages/ansible/modules/packaging/os/apt_key.py
<127.0.0.1> PUT /home/vagrant/.ansible/tmp/ansible-local-73130sQlx2/tmpTka0Og TO /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/AnsiballZ_apt_key.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/ /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/AnsiballZ_apt_key.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-arcvvjrkaonnmhuagbribxaggrraarss ; /home/vagrant/venv/bin/python2 /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/AnsiballZ_apt_key.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/vagrant/.ansible/tmp/ansible-tmp-1596123703.15-7698-147352890428496/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"data": null,
"file": null,
"id": null,
"key": null,
"keyring": null,
"keyserver": null,
"state": "present",
"url": "https://<redacted>:@packagecloud.io/sensu/release-candidate/gpgkey",
"validate_certs": true
}
},
"msg": "Failed to download key at https://<redacted>:@packagecloud.io/sensu/release-candidate/gpgkey: HTTP Error 401: Unauthorized"
}
```
##### ADDITIONAL INFO
Curling the URL prints:
```
$ curl -v https://<redacted>:@packagecloud.io/sensu/release-candidate/gpgkey
...
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 302
< date: Thu, 30 Jul 2020 15:49:16 GMT
< content-type: text/html;charset=utf-8
< content-length: 0
< location: https://d18dy6y14fq314.cloudfront.net/188/13711/gpg/sensu-release-candidate-44E6C781E1E7FF69.pub.gpg?t=<redacted>
...
```
If we add the `-L` switch to `curl`, the file is downloaded just fine. I can also get the module to download the file if I force basic authentication in the `ansible.module_utils.urls.Request.open(...)` call, but the apt_key module does not have an argument that would control this behavior.
|
https://github.com/ansible/ansible/issues/70998
|
https://github.com/ansible/ansible/pull/73334
|
595413d11346b6f26bb3d9df2d8e05f2747508a3
|
5aa4295d74f5fc1dd7bbc1a311af85ad963e38d8
| 2020-07-30T15:53:28Z |
python
| 2021-01-27T23:40:58Z |
test/integration/targets/apt_key/tasks/file.yml
|
- name: Get Fedora GPG Key
get_url:
url: https://getfedora.org/static/fedora.gpg
dest: /tmp/fedora.gpg
- name: Run apt_key with both file and keyserver
apt_key:
file: /tmp/fedora.gpg
keyserver: keys.gnupg.net
id: 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0
register: both_file_keyserver
ignore_errors: true
- name: Run apt_key with file only
apt_key:
file: /tmp/fedora.gpg
register: only_file
- name: Run apt_key with keyserver only
apt_key:
keyserver: keys.gnupg.net
id: 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0
register: only_keyserver
- name: validate results
assert:
that:
- 'both_file_keyserver is failed'
- 'only_file.changed'
- 'not only_keyserver.changed'
- name: remove fedora.gpg
apt_key:
id: 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0
state: absent
register: remove_fedora
- name: add key from url
apt_key:
url: https://getfedora.org/static/fedora.gpg
register: apt_key_url
- name: verify key from url
assert:
that:
- remove_fedora is changed
- apt_key_url is changed
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,966 |
Galaxy Fetch Fails When Tilde (~) Is Part of Dependency Filename
|
Bug Report
---------------
**SUMMARY**
When a file in a role has a tilde (~) in its name, the ansible-galaxy install fails.
**COMPONENT NAME**
lib/ansible/cli/galaxy.py
**ANSIBLE VERSION**
~~~
ansible-galaxy 2.11.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible-galaxy
python version = 3.6.9 (default, Nov 11 2019, 11:24:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
~~~
**STEPS TO REPRODUCE**
When the filename does not contain a tilde (~):
```
[root@0b37f81e0ac0 galaxy]# ansible-galaxy install -r requirements.yml -vvv
ansible-galaxy 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-galaxy
python version = 2.7.5 (default, Apr 2 2020, 13:16:51) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Reading requirement file at 'requirements.yml'
found role {'scm': 'git', 'src': 'http://git.example.com/ansible/tester.git', 'version': '', 'name': 'tester'} in yaml file
Processing role tester
archiving ['/usr/bin/git', 'archive', '--prefix=tester/', u'--output=/root/.ansible/tmp/ansible-local-8920yXDh66/tmp7Myv3l.tar', 'HEAD']
- extracting tester to /root/.ansible/roles/tester
- tester was installed successfully
[WARNING]: Meta file /root/.ansible/roles/tester is empty. Skipping dependencies.
```
When the filename contains a tilde (~):
```
ansible-galaxy 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-galaxy
python version = 2.7.5 (default, Apr 2 2020, 13:16:51) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Reading requirement file at 'requirements.yml'
found role {'scm': 'git', 'src': 'http://git.example.com/ansible/tester.git', 'version': '', 'name': 'tester'} in yaml file
Processing role tester
archiving ['/usr/bin/git', 'archive', '--prefix=tester/', u'--output=/root/.ansible/tmp/ansible-local-8984bfuVM3/tmp_wXxbO.tar', 'HEAD']
- extracting tester to /root/.ansible/roles/tester
ERROR! Unexpected Exception, this is probably a bug: [Errno 21] Is a directory: u'/root/.ansible/roles/tester/files'
the full traceback was:
Traceback (most recent call last):
File "/usr/bin/ansible-galaxy", line 123, in <module>
exit_code = cli.run()
File "/usr/lib/python2.7/site-packages/ansible/cli/galaxy.py", line 375, in run
context.CLIARGS['func']()
File "/usr/lib/python2.7/site-packages/ansible/cli/galaxy.py", line 891, in execute_install
installed = role.install()
File "/usr/lib/python2.7/site-packages/ansible/galaxy/role.py", line 330, in install
role_tar_file.extract(member, self.path)
File "/usr/lib64/python2.7/tarfile.py", line 2084, in extract
self._extract_member(tarinfo, os.path.join(path, tarinfo.name))
File "/usr/lib64/python2.7/tarfile.py", line 2160, in _extract_member
self.makefile(tarinfo, targetpath)
File "/usr/lib64/python2.7/tarfile.py", line 2200, in makefile
with bltn_open(targetpath, "wb") as target:
IOError: [Errno 21] Is a directory: u'/root/.ansible/roles/tester/files'
```
**EXPECTED RESULTS**
I would think roles should be able to accommodate tildes in the filenames.
See: ansible/galaxy#704
See: https://github.com/ansible/ansible/issues/71624
Here's the structure of a local Git clone which causes the problem:
~~~
(galaxy) machine@machine1:~/Downloads/galaxy/arole$ tree
.
├── files
│ ├── XXX-account.sh
│ └── group-XXXXXX~XXX-XXXXX-XX
├── meta
│ └── main.yml
└── README.md
2 directories, 4 files
~~~
Here's the ansible-galaxy output when a ~ is present in a dependency filename:
~~~
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
ansible-galaxy 2.11.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible-galaxy
python version = 3.6.9 (default, Nov 11 2019, 11:24:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Initial connection to galaxy_server: https://galaxy.ansible.com
Opened /root/.ansible/galaxy_token
Calling Galaxy at https://galaxy.ansible.com/api/
Found API version 'v1, v2' with Galaxy server default (https://galaxy.ansible.com/api/)
Starting galaxy role install process
Processing role arole
archiving ['/usr/bin/git', 'archive', '--prefix=arole/', '--output=/root/.ansible/tmp/ansible-local-912do8enx7f/tmpp4ii2gwc.tar', 'HEAD']
- extracting arole to /root/.ansible/roles/arole
[WARNING]: - arole was NOT installed successfully: Could not update files in /root/.ansible/roles/arole: [Errno 21] Is a directory: '/root/.ansible/roles/arole/files'
ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.
~~~
Here's the ansible-galaxy output when a ~ is not in a dependency filename:
~~~
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
ansible-galaxy 2.11.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible-galaxy
python version = 3.6.9 (default, Nov 11 2019, 11:24:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Initial connection to galaxy_server: https://galaxy.ansible.com
Opened /root/.ansible/galaxy_token
Calling Galaxy at https://galaxy.ansible.com/api/
Found API version 'v1, v2' with Galaxy server default (https://galaxy.ansible.com/api/)
Starting galaxy role install process
Processing role arole
archiving ['/usr/bin/git', 'archive', '--prefix=arole/', '--output=/root/.ansible/tmp/ansible-local-927j9gf3cer/tmpsrn81ct7.tar', 'HEAD']
- extracting arole to /root/.ansible/roles/arole
- arole was installed successfully
[WARNING]: Meta file /root/.ansible/roles/arole is empty. Skipping dependencies.
~~~
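The error is consistent with the path-sanitising loop in `GalaxyRole.install()` dropping every archive member path component that contains `~` (or `$`), so that `files/group-XXXXXX~XXX-XXXXX-XX` collapses to just `files` and tarfile then tries to write a regular file over an existing directory. A hedged reconstruction of that pre-fix filter (not the verbatim source):

```python
import os

# For each regular-file member of the role tarball:
final_parts = []
for part in member.name.split(os.sep)[1:]:
    # '..' is rejected for path safety, but so is any component
    # containing '~' or '$' -- which silently drops real filenames.
    if part != '..' and '~' not in part and '$' not in part:
        final_parts.append(part)
member.name = os.path.join(*final_parts)   # 'arole/files/group-a~b' -> 'files'
role_tar_file.extract(member, self.path)   # -> IOError: Is a directory
```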
|
https://github.com/ansible/ansible/issues/72966
|
https://github.com/ansible/ansible/pull/73372
|
a9b5bebab34722ddbaed71944bc593857c15a712
|
1c83672532330d0d26b63f8a97ee703e7fc697df
| 2020-12-14T16:09:33Z |
python
| 2021-02-02T17:10:05Z |
changelogs/fragments/72966-allow-tilde-inside-galaxy-roles.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,966 |
Galaxy Fetch Fails When Tilde (~) Is Part of Dependency Filename
|
Bug Report
---------------
**SUMMARY**
When a file in a role has a tilde (~) in its name, the ansible-galaxy install fails.
**COMPONENT NAME**
lib/ansible/cli/galaxy.py
**ANSIBLE VERSION**
~~~
ansible-galaxy 2.11.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible-galaxy
python version = 3.6.9 (default, Nov 11 2019, 11:24:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
~~~
**STEPS TO REPRODUCE**
When the filename does not contain a tilde (~):
```
[root@0b37f81e0ac0 galaxy]# ansible-galaxy install -r requirements.yml -vvv
ansible-galaxy 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-galaxy
python version = 2.7.5 (default, Apr 2 2020, 13:16:51) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Reading requirement file at 'requirements.yml'
found role {'scm': 'git', 'src': 'http://git.example.com/ansible/tester.git', 'version': '', 'name': 'tester'} in yaml file
Processing role tester
archiving ['/usr/bin/git', 'archive', '--prefix=tester/', u'--output=/root/.ansible/tmp/ansible-local-8920yXDh66/tmp7Myv3l.tar', 'HEAD']
- extracting tester to /root/.ansible/roles/tester
- tester was installed successfully
[WARNING]: Meta file /root/.ansible/roles/tester is empty. Skipping dependencies.
```
When the filename contains a tilde (~):
```
ansible-galaxy 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-galaxy
python version = 2.7.5 (default, Apr 2 2020, 13:16:51) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Reading requirement file at 'requirements.yml'
found role {'scm': 'git', 'src': 'http://git.example.com/ansible/tester.git', 'version': '', 'name': 'tester'} in yaml file
Processing role tester
archiving ['/usr/bin/git', 'archive', '--prefix=tester/', u'--output=/root/.ansible/tmp/ansible-local-8984bfuVM3/tmp_wXxbO.tar', 'HEAD']
- extracting tester to /root/.ansible/roles/tester
ERROR! Unexpected Exception, this is probably a bug: [Errno 21] Is a directory: u'/root/.ansible/roles/tester/files'
the full traceback was:
Traceback (most recent call last):
File "/usr/bin/ansible-galaxy", line 123, in <module>
exit_code = cli.run()
File "/usr/lib/python2.7/site-packages/ansible/cli/galaxy.py", line 375, in run
context.CLIARGS['func']()
File "/usr/lib/python2.7/site-packages/ansible/cli/galaxy.py", line 891, in execute_install
installed = role.install()
File "/usr/lib/python2.7/site-packages/ansible/galaxy/role.py", line 330, in install
role_tar_file.extract(member, self.path)
File "/usr/lib64/python2.7/tarfile.py", line 2084, in extract
self._extract_member(tarinfo, os.path.join(path, tarinfo.name))
File "/usr/lib64/python2.7/tarfile.py", line 2160, in _extract_member
self.makefile(tarinfo, targetpath)
File "/usr/lib64/python2.7/tarfile.py", line 2200, in makefile
with bltn_open(targetpath, "wb") as target:
IOError: [Errno 21] Is a directory: u'/root/.ansible/roles/tester/files'
```
**EXPECTED RESULTS**
I would think roles should be able to accommodate tildes in the filenames.
See: ansible/galaxy#704
See: https://github.com/ansible/ansible/issues/71624
Here's the structure of a local Git clone which causes the problem:
~~~
(galaxy) machine@machine1:~/Downloads/galaxy/arole$ tree
.
├── files
│ ├── XXX-account.sh
│ └── group-XXXXXX~XXX-XXXXX-XX
├── meta
│ └── main.yml
└── README.md
2 directories, 4 files
~~~
Here's the ansible-galaxy output when a ~ is present in a dependency filename:
~~~
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
ansible-galaxy 2.11.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible-galaxy
python version = 3.6.9 (default, Nov 11 2019, 11:24:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Initial connection to galaxy_server: https://galaxy.ansible.com
Opened /root/.ansible/galaxy_token
Calling Galaxy at https://galaxy.ansible.com/api/
Found API version 'v1, v2' with Galaxy server default (https://galaxy.ansible.com/api/)
Starting galaxy role install process
Processing role arole
archiving ['/usr/bin/git', 'archive', '--prefix=arole/', '--output=/root/.ansible/tmp/ansible-local-912do8enx7f/tmpp4ii2gwc.tar', 'HEAD']
- extracting arole to /root/.ansible/roles/arole
[WARNING]: - arole was NOT installed successfully: Could not update files in /root/.ansible/roles/arole: [Errno 21] Is a directory: '/root/.ansible/roles/arole/files'
ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.
~~~
Here's the ansible-galaxy output when a ~ is not in a dependency filename:
~~~
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
ansible-galaxy 2.11.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible-galaxy
python version = 3.6.9 (default, Nov 11 2019, 11:24:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Initial connection to galaxy_server: https://galaxy.ansible.com
Opened /root/.ansible/galaxy_token
Calling Galaxy at https://galaxy.ansible.com/api/
Found API version 'v1, v2' with Galaxy server default (https://galaxy.ansible.com/api/)
Starting galaxy role install process
Processing role arole
archiving ['/usr/bin/git', 'archive', '--prefix=arole/', '--output=/root/.ansible/tmp/ansible-local-927j9gf3cer/tmpsrn81ct7.tar', 'HEAD']
- extracting arole to /root/.ansible/roles/arole
- arole was installed successfully
[WARNING]: Meta file /root/.ansible/roles/arole is empty. Skipping dependencies.
~~~
|
https://github.com/ansible/ansible/issues/72966
|
https://github.com/ansible/ansible/pull/73372
|
a9b5bebab34722ddbaed71944bc593857c15a712
|
1c83672532330d0d26b63f8a97ee703e7fc697df
| 2020-12-14T16:09:33Z |
python
| 2021-02-02T17:10:05Z |
lib/ansible/galaxy/role.py
|
########################################################################
#
# (C) 2015, Brian Coca <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
########################################################################
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import errno
import datetime
import os
import tarfile
import tempfile
import yaml
from distutils.version import LooseVersion
from shutil import rmtree
from ansible import context
from ansible.errors import AnsibleError
from ansible.galaxy.user_agent import user_agent
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.urls import open_url
from ansible.playbook.role.requirement import RoleRequirement
from ansible.utils.display import Display
display = Display()
class GalaxyRole(object):
SUPPORTED_SCMS = set(['git', 'hg'])
META_MAIN = (os.path.join('meta', 'main.yml'), os.path.join('meta', 'main.yaml'))
META_INSTALL = os.path.join('meta', '.galaxy_install_info')
META_REQUIREMENTS = (os.path.join('meta', 'requirements.yml'), os.path.join('meta', 'requirements.yaml'))
ROLE_DIRS = ('defaults', 'files', 'handlers', 'meta', 'tasks', 'templates', 'vars', 'tests')
def __init__(self, galaxy, api, name, src=None, version=None, scm=None, path=None):
self._metadata = None
self._requirements = None
self._install_info = None
self._validate_certs = not context.CLIARGS['ignore_certs']
display.debug('Validate TLS certificates: %s' % self._validate_certs)
self.galaxy = galaxy
self.api = api
self.name = name
self.version = version
self.src = src or name
self.scm = scm
self.paths = [os.path.join(x, self.name) for x in galaxy.roles_paths]
if path is not None:
if not path.endswith(os.path.join(os.path.sep, self.name)):
path = os.path.join(path, self.name)
else:
# Look for a meta/main.ya?ml inside the potential role dir in case
# the role name is the same as parent directory of the role.
#
# Example:
# ./roles/testing/testing/meta/main.yml
for meta_main in self.META_MAIN:
if os.path.exists(os.path.join(path, name, meta_main)):
path = os.path.join(path, self.name)
break
self.path = path
else:
# use the first path by default
self.path = os.path.join(galaxy.roles_paths[0], self.name)
def __repr__(self):
"""
Returns "rolename (version)" if version is not null
Returns "rolename" otherwise
"""
if self.version:
return "%s (%s)" % (self.name, self.version)
else:
return self.name
def __eq__(self, other):
return self.name == other.name
@property
def metadata(self):
"""
Returns role metadata
"""
if self._metadata is None:
for path in self.paths:
for meta_main in self.META_MAIN:
meta_path = os.path.join(path, meta_main)
if os.path.isfile(meta_path):
try:
with open(meta_path, 'r') as f:
self._metadata = yaml.safe_load(f)
except Exception:
display.vvvvv("Unable to load metadata for %s" % self.name)
return False
break
return self._metadata
@property
def install_info(self):
"""
Returns role install info
"""
if self._install_info is None:
info_path = os.path.join(self.path, self.META_INSTALL)
if os.path.isfile(info_path):
try:
with open(info_path, 'r') as f:
self._install_info = yaml.safe_load(f)
except Exception:
display.vvvvv("Unable to load Galaxy install info for %s" % self.name)
return False
return self._install_info
@property
def _exists(self):
for path in self.paths:
if os.path.isdir(path):
return True
return False
def _write_galaxy_install_info(self):
"""
Writes a YAML-formatted file to the role's meta/ directory
(named .galaxy_install_info) which contains some information
we can use later for commands like 'list' and 'info'.
"""
info = dict(
version=self.version,
install_date=datetime.datetime.utcnow().strftime("%c"),
)
if not os.path.exists(os.path.join(self.path, 'meta')):
os.makedirs(os.path.join(self.path, 'meta'))
info_path = os.path.join(self.path, self.META_INSTALL)
with open(info_path, 'w+') as f:
try:
yaml.safe_dump(info, f)
self._install_info = info
except Exception:
return False
return True
def remove(self):
"""
Removes the specified role from the roles path.
There is a sanity check to make sure there's a meta/main.yml file at this
path so the user doesn't blow away random directories.
"""
if self.metadata:
try:
rmtree(self.path)
return True
except Exception:
pass
return False
def fetch(self, role_data):
"""
Downloads the archived role to a temp location based on role data
"""
if role_data:
# first grab the file and save it to a temp location
if "github_user" in role_data and "github_repo" in role_data:
archive_url = 'https://github.com/%s/%s/archive/%s.tar.gz' % (role_data["github_user"], role_data["github_repo"], self.version)
else:
archive_url = self.src
display.display("- downloading role from %s" % archive_url)
try:
url_file = open_url(archive_url, validate_certs=self._validate_certs, http_agent=user_agent())
temp_file = tempfile.NamedTemporaryFile(delete=False)
data = url_file.read()
while data:
temp_file.write(data)
data = url_file.read()
temp_file.close()
return temp_file.name
except Exception as e:
display.error(u"failed to download the file: %s" % to_text(e))
return False
def install(self):
if self.scm:
# create tar file from scm url
tmp_file = RoleRequirement.scm_archive_role(keep_scm_meta=context.CLIARGS['keep_scm_meta'], **self.spec)
elif self.src:
if os.path.isfile(self.src):
tmp_file = self.src
elif '://' in self.src:
role_data = self.src
tmp_file = self.fetch(role_data)
else:
role_data = self.api.lookup_role_by_name(self.src)
if not role_data:
raise AnsibleError("- sorry, %s was not found on %s." % (self.src, self.api.api_server))
if role_data.get('role_type') == 'APP':
# Container Role
display.warning("%s is a Container App role, and should only be installed using Ansible "
"Container" % self.name)
role_versions = self.api.fetch_role_related('versions', role_data['id'])
if not self.version:
# convert the version names to LooseVersion objects
# and sort them to get the latest version. If there
# are no versions in the list, we'll grab the head
# of the master branch
if len(role_versions) > 0:
loose_versions = [LooseVersion(a.get('name', None)) for a in role_versions]
try:
loose_versions.sort()
except TypeError:
raise AnsibleError(
'Unable to compare role versions (%s) to determine the most recent version due to incompatible version formats. '
'Please contact the role author to resolve versioning conflicts, or specify an explicit role version to '
'install.' % ', '.join([v.vstring for v in loose_versions])
)
self.version = to_text(loose_versions[-1])
elif role_data.get('github_branch', None):
self.version = role_data['github_branch']
else:
self.version = 'master'
elif self.version != 'master':
if role_versions and to_text(self.version) not in [a.get('name', None) for a in role_versions]:
raise AnsibleError("- the specified version (%s) of %s was not found in the list of available versions (%s)." % (self.version,
self.name,
role_versions))
# check if there's a source link for our role_version
for role_version in role_versions:
if role_version['name'] == self.version and 'source' in role_version:
self.src = role_version['source']
tmp_file = self.fetch(role_data)
else:
raise AnsibleError("No valid role data found")
if tmp_file:
display.debug("installing from %s" % tmp_file)
if not tarfile.is_tarfile(tmp_file):
raise AnsibleError("the downloaded file does not appear to be a valid tar archive.")
else:
role_tar_file = tarfile.open(tmp_file, "r")
# verify the role's meta file
meta_file = None
members = role_tar_file.getmembers()
# next find the metadata file
for member in members:
for meta_main in self.META_MAIN:
if meta_main in member.name:
# Look for parent of meta/main.yml
# Due to possibility of sub roles each containing meta/main.yml
# look for shortest length parent
meta_parent_dir = os.path.dirname(os.path.dirname(member.name))
if not meta_file:
archive_parent_dir = meta_parent_dir
meta_file = member
else:
if len(meta_parent_dir) < len(archive_parent_dir):
archive_parent_dir = meta_parent_dir
meta_file = member
if not meta_file:
raise AnsibleError("this role does not appear to have a meta/main.yml file.")
else:
try:
self._metadata = yaml.safe_load(role_tar_file.extractfile(meta_file))
except Exception:
raise AnsibleError("this role does not appear to have a valid meta/main.yml file.")
# we strip off any higher-level directories for all of the files contained within
# the tar file here. The default is 'github_repo-target'. Gerrit instances, on the other
# hand, do not have a parent directory at all.
installed = False
while not installed:
display.display("- extracting %s to %s" % (self.name, self.path))
try:
if os.path.exists(self.path):
if not os.path.isdir(self.path):
raise AnsibleError("the specified roles path exists and is not a directory.")
elif not context.CLIARGS.get("force", False):
raise AnsibleError("the specified role %s appears to already exist. Use --force to replace it." % self.name)
else:
# using --force, remove the old path
if not self.remove():
raise AnsibleError("%s doesn't appear to contain a role.\n please remove this directory manually if you really "
"want to put the role here." % self.path)
else:
os.makedirs(self.path)
# now we do the actual extraction to the path
for member in members:
# we only extract files, and remove any relative path
# bits that might be in the file for security purposes
# and drop any containing directory, as mentioned above
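# NOTE: the per-part filter below also drops any component containing
# '~' or '$', so a legitimate filename such as 'files/group-foo~bar' is
# silently skipped and extraction can then fail with "[Errno 21] Is a
# directory" (this is the behavior reported in issue #72966).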
if member.isreg() or member.issym():
n_member_name = to_native(member.name)
n_archive_parent_dir = to_native(archive_parent_dir)
n_parts = n_member_name.replace(n_archive_parent_dir, "", 1).split(os.sep)
n_final_parts = []
for n_part in n_parts:
if n_part != '..' and '~' not in n_part and '$' not in n_part:
n_final_parts.append(n_part)
member.name = os.path.join(*n_final_parts)
role_tar_file.extract(member, to_native(self.path))
# write out the install info file for later use
self._write_galaxy_install_info()
installed = True
except OSError as e:
error = True
if e.errno == errno.EACCES and len(self.paths) > 1:
current = self.paths.index(self.path)
if len(self.paths) > current + 1:
self.path = self.paths[current + 1]
error = False
if error:
raise AnsibleError("Could not update files in %s: %s" % (self.path, to_native(e)))
# return the parsed yaml metadata
display.display("- %s was installed successfully" % str(self))
if not (self.src and os.path.isfile(self.src)):
try:
os.unlink(tmp_file)
except (OSError, IOError) as e:
display.warning(u"Unable to remove tmp file (%s): %s" % (tmp_file, to_text(e)))
return True
return False
@property
def spec(self):
"""
Returns role spec info
{
'scm': 'git',
'src': 'http://git.example.com/repos/repo.git',
'version': 'v1.0',
'name': 'repo'
}
"""
return dict(scm=self.scm, src=self.src, version=self.version, name=self.name)
@property
def requirements(self):
"""
Returns role requirements
"""
if self._requirements is None:
self._requirements = []
for meta_requirements in self.META_REQUIREMENTS:
meta_path = os.path.join(self.path, meta_requirements)
if os.path.isfile(meta_path):
try:
with open(meta_path, 'r') as f:
self._requirements = yaml.safe_load(f)
except Exception:
display.vvvvv("Unable to load requirements for %s" % self.name)
break
return self._requirements
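For reference, a minimal standalone sketch of the path-sanitization loop in install() above (the sample member name is illustrative): it shows how a tilde anywhere in a component silently drops that component, which is exactly what produces the "[Errno 21] Is a directory" failure from the report.

```python
import os

# Illustrative values; in role.py these come from the tar archive members.
archive_parent_dir = 'arole'
member_name = 'arole/files/group-foo~bar'

n_parts = member_name.replace(archive_parent_dir, '', 1).split(os.sep)
n_final_parts = [p for p in n_parts
                 if p != '..' and '~' not in p and '$' not in p]

# 'group-foo~bar' is filtered out, so the member path collapses to just
# 'files'; tarfile then tries to open the existing 'files' directory as a
# regular file and fails with IOError/OSError (EISDIR).
print(os.path.join(*n_final_parts))  # -> 'files'
```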
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,966 |
Galaxy Fetch Fails When Tilde (~) Is Part of Dependency Filename
|
Bug Report
---------------
**SUMMARY**
When a file in a role has a tilde (~) in the name, the ansible-galaxy pull fails.
**COMPONENT NAME**
lib/ansible/cli/galaxy.py
**ANSIBLE VERSION**
~~~
ansible-galaxy 2.11.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible-galaxy
python version = 3.6.9 (default, Nov 11 2019, 11:24:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
~~~
**STEPS TO REPRODUCE**
When filename does not contain a tilde (~):
```
[root@0b37f81e0ac0 galaxy]# ansible-galaxy install -r requirements.yml -vvv
ansible-galaxy 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-galaxy
python version = 2.7.5 (default, Apr 2 2020, 13:16:51) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Reading requirement file at 'requirements.yml'
found role {'scm': 'git', 'src': 'http://git.example.com/ansible/tester.git', 'version': '', 'name': 'tester'} in yaml file
Processing role tester
archiving ['/usr/bin/git', 'archive', '--prefix=tester/', u'--output=/root/.ansible/tmp/ansible-local-8920yXDh66/tmp7Myv3l.tar', 'HEAD']
- extracting tester to /root/.ansible/roles/tester
- tester was installed successfully
[WARNING]: Meta file /root/.ansible/roles/tester is empty. Skipping dependencies.
```
When filename contains a tilde (~):
```
ansible-galaxy 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-galaxy
python version = 2.7.5 (default, Apr 2 2020, 13:16:51) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Reading requirement file at 'requirements.yml'
found role {'scm': 'git', 'src': 'http://git.example.com/ansible/tester.git', 'version': '', 'name': 'tester'} in yaml file
Processing role tester
archiving ['/usr/bin/git', 'archive', '--prefix=tester/', u'--output=/root/.ansible/tmp/ansible-local-8984bfuVM3/tmp_wXxbO.tar', 'HEAD']
- extracting tester to /root/.ansible/roles/tester
ERROR! Unexpected Exception, this is probably a bug: [Errno 21] Is a directory: u'/root/.ansible/roles/tester/files'
the full traceback was:
Traceback (most recent call last):
File "/usr/bin/ansible-galaxy", line 123, in <module>
exit_code = cli.run()
File "/usr/lib/python2.7/site-packages/ansible/cli/galaxy.py", line 375, in run
context.CLIARGS['func']()
File "/usr/lib/python2.7/site-packages/ansible/cli/galaxy.py", line 891, in execute_install
installed = role.install()
File "/usr/lib/python2.7/site-packages/ansible/galaxy/role.py", line 330, in install
role_tar_file.extract(member, self.path)
File "/usr/lib64/python2.7/tarfile.py", line 2084, in extract
self._extract_member(tarinfo, os.path.join(path, tarinfo.name))
File "/usr/lib64/python2.7/tarfile.py", line 2160, in _extract_member
self.makefile(tarinfo, targetpath)
File "/usr/lib64/python2.7/tarfile.py", line 2200, in makefile
with bltn_open(targetpath, "wb") as target:
IOError: [Errno 21] Is a directory: u'/root/.ansible/roles/tester/files'
```
**EXPECTED RESULTS**
I would think roles should be able to accommodate tildes in the filenames.
See: ansible/galaxy#704
See: https://github.com/ansible/ansible/issues/71624
Here's the structure of a local Git clone which causes the problem:
~~~
(galaxy) machine@machine1:~/Downloads/galaxy/arole$ tree
.
├── files
│ ├── XXX-account.sh
│ └── group-XXXXXX~XXX-XXXXX-XX
├── meta
│ └── main.yml
└── README.md
2 directories, 4 files
~~~
Here's the ansible-galaxy output when a ~ is present in a dependency filename:
~~~
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
ansible-galaxy 2.11.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible-galaxy
python version = 3.6.9 (default, Nov 11 2019, 11:24:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Initial connection to galaxy_server: https://galaxy.ansible.com
Opened /root/.ansible/galaxy_token
Calling Galaxy at https://galaxy.ansible.com/api/
Found API version 'v1, v2' with Galaxy server default (https://galaxy.ansible.com/api/)
Starting galaxy role install process
Processing role arole
archiving ['/usr/bin/git', 'archive', '--prefix=arole/', '--output=/root/.ansible/tmp/ansible-local-912do8enx7f/tmpp4ii2gwc.tar', 'HEAD']
- extracting arole to /root/.ansible/roles/arole
[WARNING]: - arole was NOT installed successfully: Could not update files in /root/.ansible/roles/arole: [Errno 21] Is a directory: '/root/.ansible/roles/arole/files'
ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.
~~~
Here's the ansible-galaxy output when a ~ is not in a dependency filename:
~~~
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
ansible-galaxy 2.11.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible-galaxy
python version = 3.6.9 (default, Nov 11 2019, 11:24:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Initial connection to galaxy_server: https://galaxy.ansible.com
Opened /root/.ansible/galaxy_token
Calling Galaxy at https://galaxy.ansible.com/api/
Found API version 'v1, v2' with Galaxy server default (https://galaxy.ansible.com/api/)
Starting galaxy role install process
Processing role arole
archiving ['/usr/bin/git', 'archive', '--prefix=arole/', '--output=/root/.ansible/tmp/ansible-local-927j9gf3cer/tmpsrn81ct7.tar', 'HEAD']
- extracting arole to /root/.ansible/roles/arole
- arole was installed successfully
[WARNING]: Meta file /root/.ansible/roles/arole is empty. Skipping dependencies.
~~~
|
https://github.com/ansible/ansible/issues/72966
|
https://github.com/ansible/ansible/pull/73372
|
a9b5bebab34722ddbaed71944bc593857c15a712
|
1c83672532330d0d26b63f8a97ee703e7fc697df
| 2020-12-14T16:09:33Z |
python
| 2021-02-02T17:10:05Z |
test/integration/targets/ansible-galaxy-role/aliases
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,966 |
Galaxy Fetch Fails When Tilde (~) Is Part of Dependency Filename
|
Bug Report
---------------
**SUMMARY**
When a file in a role has a tilde (~) in the name, the ansible-galaxy pull fails.
**COMPONENT NAME**
lib/ansible/cli/galaxy.py
**ANSIBLE VERSION**
~~~
ansible-galaxy 2.11.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible-galaxy
python version = 3.6.9 (default, Nov 11 2019, 11:24:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
~~~
**STEPS TO REPRODUCE**
When filename does not contain a tilde (~):
```
[root@0b37f81e0ac0 galaxy]# ansible-galaxy install -r requirements.yml -vvv
ansible-galaxy 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-galaxy
python version = 2.7.5 (default, Apr 2 2020, 13:16:51) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Reading requirement file at 'requirements.yml'
found role {'scm': 'git', 'src': 'http://git.example.com/ansible/tester.git', 'version': '', 'name': 'tester'} in yaml file
Processing role tester
archiving ['/usr/bin/git', 'archive', '--prefix=tester/', u'--output=/root/.ansible/tmp/ansible-local-8920yXDh66/tmp7Myv3l.tar', 'HEAD']
- extracting tester to /root/.ansible/roles/tester
- tester was installed successfully
[WARNING]: Meta file /root/.ansible/roles/tester is empty. Skipping dependencies.
```
When filename contains a tilde (~):
```
ansible-galaxy 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-galaxy
python version = 2.7.5 (default, Apr 2 2020, 13:16:51) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Reading requirement file at 'requirements.yml'
found role {'scm': 'git', 'src': 'http://git.example.com/ansible/tester.git', 'version': '', 'name': 'tester'} in yaml file
Processing role tester
archiving ['/usr/bin/git', 'archive', '--prefix=tester/', u'--output=/root/.ansible/tmp/ansible-local-8984bfuVM3/tmp_wXxbO.tar', 'HEAD']
- extracting tester to /root/.ansible/roles/tester
ERROR! Unexpected Exception, this is probably a bug: [Errno 21] Is a directory: u'/root/.ansible/roles/tester/files'
the full traceback was:
Traceback (most recent call last):
File "/usr/bin/ansible-galaxy", line 123, in <module>
exit_code = cli.run()
File "/usr/lib/python2.7/site-packages/ansible/cli/galaxy.py", line 375, in run
context.CLIARGS['func']()
File "/usr/lib/python2.7/site-packages/ansible/cli/galaxy.py", line 891, in execute_install
installed = role.install()
File "/usr/lib/python2.7/site-packages/ansible/galaxy/role.py", line 330, in install
role_tar_file.extract(member, self.path)
File "/usr/lib64/python2.7/tarfile.py", line 2084, in extract
self._extract_member(tarinfo, os.path.join(path, tarinfo.name))
File "/usr/lib64/python2.7/tarfile.py", line 2160, in _extract_member
self.makefile(tarinfo, targetpath)
File "/usr/lib64/python2.7/tarfile.py", line 2200, in makefile
with bltn_open(targetpath, "wb") as target:
IOError: [Errno 21] Is a directory: u'/root/.ansible/roles/tester/files'
```
**EXPECTED RESULTS**
I would think roles should be able to accommodate tildes in the filenames.
See: ansible/galaxy#704
See: https://github.com/ansible/ansible/issues/71624
Here's the structure of a local Git clone which causes the problem:
~~~
(galaxy) machine@machine1:~/Downloads/galaxy/arole$ tree
.
├── files
│ ├── XXX-account.sh
│ └── group-XXXXXX~XXX-XXXXX-XX
├── meta
│ └── main.yml
└── README.md
2 directories, 4 files
~~~
Here's the ansible-galaxy output when a ~ is present in a dependency filename:
~~~
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
ansible-galaxy 2.11.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible-galaxy
python version = 3.6.9 (default, Nov 11 2019, 11:24:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Initial connection to galaxy_server: https://galaxy.ansible.com
Opened /root/.ansible/galaxy_token
Calling Galaxy at https://galaxy.ansible.com/api/
Found API version 'v1, v2' with Galaxy server default (https://galaxy.ansible.com/api/)
Starting galaxy role install process
Processing role arole
archiving ['/usr/bin/git', 'archive', '--prefix=arole/', '--output=/root/.ansible/tmp/ansible-local-912do8enx7f/tmpp4ii2gwc.tar', 'HEAD']
- extracting arole to /root/.ansible/roles/arole
[WARNING]: - arole was NOT installed successfully: Could not update files in /root/.ansible/roles/arole: [Errno 21] Is a directory: '/root/.ansible/roles/arole/files'
ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.
~~~
Here's the ansible-galaxy output when a ~ is not in a dependency filename:
~~~
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
ansible-galaxy 2.11.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible-galaxy
python version = 3.6.9 (default, Nov 11 2019, 11:24:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Initial connection to galaxy_server: https://galaxy.ansible.com
Opened /root/.ansible/galaxy_token
Calling Galaxy at https://galaxy.ansible.com/api/
Found API version 'v1, v2' with Galaxy server default (https://galaxy.ansible.com/api/)
Starting galaxy role install process
Processing role arole
archiving ['/usr/bin/git', 'archive', '--prefix=arole/', '--output=/root/.ansible/tmp/ansible-local-927j9gf3cer/tmpsrn81ct7.tar', 'HEAD']
- extracting arole to /root/.ansible/roles/arole
- arole was installed successfully
[WARNING]: Meta file /root/.ansible/roles/arole is empty. Skipping dependencies.
~~~
|
https://github.com/ansible/ansible/issues/72966
|
https://github.com/ansible/ansible/pull/73372
|
a9b5bebab34722ddbaed71944bc593857c15a712
|
1c83672532330d0d26b63f8a97ee703e7fc697df
| 2020-12-14T16:09:33Z |
python
| 2021-02-02T17:10:05Z |
test/integration/targets/ansible-galaxy-role/meta/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,966 |
Galaxy Fetch Fails When Tilde (~) Is Part of Dependency Filename
|
Bug Report
---------------
**SUMMARY**
When a file in a role has a tilde (~) in the name, the ansible-galaxy pull fails.
**COMPONENT NAME**
lib/ansible/cli/galaxy.py
**ANSIBLE VERSION**
~~~
ansible-galaxy 2.11.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible-galaxy
python version = 3.6.9 (default, Nov 11 2019, 11:24:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
~~~
**STEPS TO REPRODUCE**
When filename does not contain a tilde (~):
```
[root@0b37f81e0ac0 galaxy]# ansible-galaxy install -r requirements.yml -vvv
ansible-galaxy 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-galaxy
python version = 2.7.5 (default, Apr 2 2020, 13:16:51) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Reading requirement file at 'requirements.yml'
found role {'scm': 'git', 'src': 'http://git.example.com/ansible/tester.git', 'version': '', 'name': 'tester'} in yaml file
Processing role tester
archiving ['/usr/bin/git', 'archive', '--prefix=tester/', u'--output=/root/.ansible/tmp/ansible-local-8920yXDh66/tmp7Myv3l.tar', 'HEAD']
- extracting tester to /root/.ansible/roles/tester
- tester was installed successfully
[WARNING]: Meta file /root/.ansible/roles/tester is empty. Skipping dependencies.
```
When filename contains a tilde (~):
```
ansible-galaxy 2.9.7
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-galaxy
python version = 2.7.5 (default, Apr 2 2020, 13:16:51) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Reading requirement file at 'requirements.yml'
found role {'scm': 'git', 'src': 'http://git.example.com/ansible/tester.git', 'version': '', 'name': 'tester'} in yaml file
Processing role tester
archiving ['/usr/bin/git', 'archive', '--prefix=tester/', u'--output=/root/.ansible/tmp/ansible-local-8984bfuVM3/tmp_wXxbO.tar', 'HEAD']
- extracting tester to /root/.ansible/roles/tester
ERROR! Unexpected Exception, this is probably a bug: [Errno 21] Is a directory: u'/root/.ansible/roles/tester/files'
the full traceback was:
Traceback (most recent call last):
File "/usr/bin/ansible-galaxy", line 123, in <module>
exit_code = cli.run()
File "/usr/lib/python2.7/site-packages/ansible/cli/galaxy.py", line 375, in run
context.CLIARGS['func']()
File "/usr/lib/python2.7/site-packages/ansible/cli/galaxy.py", line 891, in execute_install
installed = role.install()
File "/usr/lib/python2.7/site-packages/ansible/galaxy/role.py", line 330, in install
role_tar_file.extract(member, self.path)
File "/usr/lib64/python2.7/tarfile.py", line 2084, in extract
self._extract_member(tarinfo, os.path.join(path, tarinfo.name))
File "/usr/lib64/python2.7/tarfile.py", line 2160, in _extract_member
self.makefile(tarinfo, targetpath)
File "/usr/lib64/python2.7/tarfile.py", line 2200, in makefile
with bltn_open(targetpath, "wb") as target:
IOError: [Errno 21] Is a directory: u'/root/.ansible/roles/tester/files'
```
**EXPECTED RESULTS**
I would think roles should be able to accommodate tildes in the filenames.
See: ansible/galaxy#704
See: https://github.com/ansible/ansible/issues/71624
Here's the structure of a local Git clone which causes the problem:
~~~
(galaxy) machine@machine1:~/Downloads/galaxy/arole$ tree
.
├── files
│ ├── XXX-account.sh
│ └── group-XXXXXX~XXX-XXXXX-XX
├── meta
│ └── main.yml
└── README.md
2 directories, 4 files
~~~
Here's the ansible-galaxy output when a ~ is present in a dependency filename:
~~~
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
ansible-galaxy 2.11.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible-galaxy
python version = 3.6.9 (default, Nov 11 2019, 11:24:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Initial connection to galaxy_server: https://galaxy.ansible.com
Opened /root/.ansible/galaxy_token
Calling Galaxy at https://galaxy.ansible.com/api/
Found API version 'v1, v2' with Galaxy server default (https://galaxy.ansible.com/api/)
Starting galaxy role install process
Processing role arole
archiving ['/usr/bin/git', 'archive', '--prefix=arole/', '--output=/root/.ansible/tmp/ansible-local-912do8enx7f/tmpp4ii2gwc.tar', 'HEAD']
- extracting arole to /root/.ansible/roles/arole
[WARNING]: - arole was NOT installed successfully: Could not update files in /root/.ansible/roles/arole: [Errno 21] Is a directory: '/root/.ansible/roles/arole/files'
ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.
~~~
Here's the ansible-galaxy output when a ~ is not in a dependency filename:
~~~
[WARNING]: You are running the development version of Ansible. You should only run Ansible from "devel" if you are modifying the Ansible engine, or trying out features under development. This is a rapidly
changing source of code and can become unstable at any point.
ansible-galaxy 2.11.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/lib/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/ansible/bin/ansible-galaxy
python version = 3.6.9 (default, Nov 11 2019, 11:24:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Initial connection to galaxy_server: https://galaxy.ansible.com
Opened /root/.ansible/galaxy_token
Calling Galaxy at https://galaxy.ansible.com/api/
Found API version 'v1, v2' with Galaxy server default (https://galaxy.ansible.com/api/)
Starting galaxy role install process
Processing role arole
archiving ['/usr/bin/git', 'archive', '--prefix=arole/', '--output=/root/.ansible/tmp/ansible-local-927j9gf3cer/tmpsrn81ct7.tar', 'HEAD']
- extracting arole to /root/.ansible/roles/arole
- arole was installed successfully
[WARNING]: Meta file /root/.ansible/roles/arole is empty. Skipping dependencies.
~~~
|
https://github.com/ansible/ansible/issues/72966
|
https://github.com/ansible/ansible/pull/73372
|
a9b5bebab34722ddbaed71944bc593857c15a712
|
1c83672532330d0d26b63f8a97ee703e7fc697df
| 2020-12-14T16:09:33Z |
python
| 2021-02-02T17:10:05Z |
test/integration/targets/ansible-galaxy-role/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,084 |
RHEL 8.3 Edge identifies wrong package manager
|
##### SUMMARY
I tried automating stuff on RHEL 8.3 Edge OS. It is similar to Fedora CoreOS or Fedora-IoT: it uses rpm-ostree as its package manager, not dnf or yum like regular RHEL.
However, it identifies ```"ansible_pkg_mgr": "dnf"```,
whereas it should be ```atomic_container```, as on Fedora CoreOS derivatives.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible_facts.pkg_mgr
##### ANSIBLE VERSION
This is the latest ansible on my rhel 8.3 host:
```paste below
ansible --version
ansible 2.9.16
config file = /home/itengval/src/ansible-test/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /bin/ansible
python version = 3.6.8 (default, Aug 18 2020, 08:33:21) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
##### CONFIGURATION
```paste below
ANSIBLE_PIPELINING(/home/itengval/src/ansible-test/ansible.cfg) = True
DEFAULT_CALLBACK_WHITELIST(/home/itengval/src/ansible-test/ansible.cfg) = ['timer']
DEFAULT_ROLES_PATH(/home/itengval/src/ansible-test/ansible.cfg) = ['/home/itengval/src/ansible-test/roles']
HOST_KEY_CHECKING(/home/itengval/src/ansible-test/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Target os is a RHEL 8.3 Edge VM I created, with minimal set of packages:
```
name = "cockpit-podman"
name = "podman"
name = "openssh-server"
name = "cockpit"
name = "cockpit-packagekit"
name = "cockpit-pcp"
name = "cockpit-system"
name = "cockpit-storaged"
```
##### STEPS TO REPRODUCE
```yaml
- name: ensure firewalld is installed
tags: firewall
package: name=firewalld state=present
when: ansible_pkg_mgr != "atomic_container"
- name: ensure firewalld is installed (on fedora-iot)
tags: firewall
command: >-
rpm-ostree install --idempotent --unchanged-exit-77
--allow-inactive firewalld
register: ostree
failed_when: not ( ostree.rc == 77 or ostree.rc == 0 )
changed_when: ostree.rc != 77
when: ansible_pkg_mgr == "atomic_container"
```
##### EXPECTED RESULTS
It should go like this:
```
TASK [ikke_t.podman_container_systemd : ensure firewalld is installed (on fedora-iot)] **********************
changed: [edge]
```
##### ACTUAL RESULTS
```paste below
TASK [ikke_t.podman_container_systemd : ensure firewalld is installed] **************************************
fatal: [edge]: FAILED! => {"changed": false, "cmd": "dnf install -y python3-dnf", "msg": "[Errno 2] No such file or directory: b'dnf': b'dnf'", "rc": 2}
```
It works if I start the playbook with fixed package manager:
```
ansible-playbook -i edge, -u cloud-user -b -e container_state=running -e ansible_pkg_mgr=atomic_container run-container-grafana-podman.yml
```
|
https://github.com/ansible/ansible/issues/73084
|
https://github.com/ansible/ansible/pull/73445
|
1c83672532330d0d26b63f8a97ee703e7fc697df
|
9a9272305a7b09f84861c7061f57945ae9ad7090
| 2020-12-30T13:22:11Z |
python
| 2021-02-02T20:09:30Z |
changelogs/fragments/73084-rhel-for-edge-pkg_mgr-fact-fix.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,084 |
RHEL 8.3 Edge identifies wrong package manager
|
##### SUMMARY
I tried automating stuff on RHEL 8.3 Edge OS. It is similar to Fedora CoreOS or Fedora-IoT: it uses rpm-ostree as its package manager, not dnf or yum like regular RHEL.
However, it identifies ```"ansible_pkg_mgr": "dnf"```,
whereas it should be ```atomic_container```, as on Fedora CoreOS derivatives.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible_facts.pkg_mgr
##### ANSIBLE VERSION
This is the latest ansible on my rhel 8.3 host:
```paste below
ansible --version
ansible 2.9.16
config file = /home/itengval/src/ansible-test/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /bin/ansible
python version = 3.6.8 (default, Aug 18 2020, 08:33:21) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
##### CONFIGURATION
```paste below
ANSIBLE_PIPELINING(/home/itengval/src/ansible-test/ansible.cfg) = True
DEFAULT_CALLBACK_WHITELIST(/home/itengval/src/ansible-test/ansible.cfg) = ['timer']
DEFAULT_ROLES_PATH(/home/itengval/src/ansible-test/ansible.cfg) = ['/home/itengval/src/ansible-test/roles']
HOST_KEY_CHECKING(/home/itengval/src/ansible-test/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Target os is a RHEL 8.3 Edge VM I created, with minimal set of packages:
```
name = "cockpit-podman"
name = "podman"
name = "openssh-server"
name = "cockpit"
name = "cockpit-packagekit"
name = "cockpit-pcp"
name = "cockpit-system"
name = "cockpit-storaged"
```
##### STEPS TO REPRODUCE
```yaml
- name: ensure firewalld is installed
tags: firewall
package: name=firewalld state=present
when: ansible_pkg_mgr != "atomic_container"
- name: ensure firewalld is installed (on fedora-iot)
tags: firewall
command: >-
rpm-ostree install --idempotent --unchanged-exit-77
--allow-inactive firewalld
register: ostree
failed_when: not ( ostree.rc == 77 or ostree.rc == 0 )
changed_when: ostree.rc != 77
when: ansible_pkg_mgr == "atomic_container"
```
##### EXPECTED RESULTS
It should go like this:
```
TASK [ikke_t.podman_container_systemd : ensure firewalld is installed (on fedora-iot)] **********************
changed: [edge]
```
##### ACTUAL RESULTS
```paste below
TASK [ikke_t.podman_container_systemd : ensure firewalld is installed] **************************************
fatal: [edge]: FAILED! => {"changed": false, "cmd": "dnf install -y python3-dnf", "msg": "[Errno 2] No such file or directory: b'dnf': b'dnf'", "rc": 2}
```
It works if I start the playbook with fixed package manager:
```
ansible-playbook -i edge, -u cloud-user -b -e container_state=running -e ansible_pkg_mgr=atomic_container run-container-grafana-podman.yml
```
|
https://github.com/ansible/ansible/issues/73084
|
https://github.com/ansible/ansible/pull/73445
|
1c83672532330d0d26b63f8a97ee703e7fc697df
|
9a9272305a7b09f84861c7061f57945ae9ad7090
| 2020-12-30T13:22:11Z |
python
| 2021-02-02T20:09:30Z |
lib/ansible/module_utils/facts/system/pkg_mgr.py
|
# Collect facts related to the system package manager
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import subprocess
from ansible.module_utils.facts.collector import BaseFactCollector
# A list of dicts. If there is a platform with more than one
# package manager, put the preferred one last. If there is an
# ansible module, use that as the value for the 'name' key.
PKG_MGRS = [{'path': '/usr/bin/yum', 'name': 'yum'},
{'path': '/usr/bin/dnf', 'name': 'dnf'},
{'path': '/usr/bin/apt-get', 'name': 'apt'},
{'path': '/usr/bin/zypper', 'name': 'zypper'},
{'path': '/usr/sbin/urpmi', 'name': 'urpmi'},
{'path': '/usr/bin/pacman', 'name': 'pacman'},
{'path': '/bin/opkg', 'name': 'opkg'},
{'path': '/usr/pkg/bin/pkgin', 'name': 'pkgin'},
{'path': '/opt/local/bin/pkgin', 'name': 'pkgin'},
{'path': '/opt/tools/bin/pkgin', 'name': 'pkgin'},
{'path': '/opt/local/bin/port', 'name': 'macports'},
{'path': '/usr/local/bin/brew', 'name': 'homebrew'},
{'path': '/sbin/apk', 'name': 'apk'},
{'path': '/usr/sbin/pkg', 'name': 'pkgng'},
{'path': '/usr/sbin/swlist', 'name': 'swdepot'},
{'path': '/usr/bin/emerge', 'name': 'portage'},
{'path': '/usr/sbin/pkgadd', 'name': 'svr4pkg'},
{'path': '/usr/bin/pkg', 'name': 'pkg5'},
{'path': '/usr/bin/xbps-install', 'name': 'xbps'},
{'path': '/usr/local/sbin/pkg', 'name': 'pkgng'},
{'path': '/usr/bin/swupd', 'name': 'swupd'},
{'path': '/usr/sbin/sorcery', 'name': 'sorcery'},
{'path': '/usr/bin/rpm-ostree', 'name': 'atomic_container'},
{'path': '/usr/bin/installp', 'name': 'installp'},
{'path': '/QOpenSys/pkgs/bin/yum', 'name': 'yum'},
]
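# NOTE: collect() below walks this list in order and keeps the last match,
# which is why the preferred manager for a platform must come last.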
class OpenBSDPkgMgrFactCollector(BaseFactCollector):
name = 'pkg_mgr'
_fact_ids = set()
_platform = 'OpenBSD'
def collect(self, module=None, collected_facts=None):
facts_dict = {}
facts_dict['pkg_mgr'] = 'openbsd_pkg'
return facts_dict
# the fact ends up being 'pkg_mgr' so stick with that naming/spelling
class PkgMgrFactCollector(BaseFactCollector):
name = 'pkg_mgr'
_fact_ids = set()
_platform = 'Generic'
required_facts = set(['distribution'])
def _check_rh_versions(self, pkg_mgr_name, collected_facts):
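# NOTE: the /run/ostree-booted check below is scoped to Fedora only, so
# ostree-based RHEL variants such as RHEL for Edge fall through to the
# yum/dnf branching and report 'dnf' (the behavior reported in issue #73084).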
if collected_facts['ansible_distribution'] == 'Fedora':
if os.path.exists('/run/ostree-booted'):
return "atomic_container"
try:
if int(collected_facts['ansible_distribution_major_version']) < 23:
for yum in [pkg_mgr for pkg_mgr in PKG_MGRS if pkg_mgr['name'] == 'yum']:
if os.path.exists(yum['path']):
pkg_mgr_name = 'yum'
break
else:
for dnf in [pkg_mgr for pkg_mgr in PKG_MGRS if pkg_mgr['name'] == 'dnf']:
if os.path.exists(dnf['path']):
pkg_mgr_name = 'dnf'
break
except ValueError:
# If there's some new magical Fedora version in the future,
# just default to dnf
pkg_mgr_name = 'dnf'
elif collected_facts['ansible_distribution'] == 'Amazon':
pkg_mgr_name = 'yum'
else:
# If it's not one of the above and it's Red Hat family of distros, assume
# RHEL or a clone. For versions of RHEL < 8 that Ansible supports, the
# vendor supported official package manager is 'yum' and in RHEL 8+
# (as far as we know at the time of this writing) it is 'dnf'.
# If anyone wants to force a non-official package manager then they
# can define a provider to either the package or yum action plugins.
if int(collected_facts['ansible_distribution_major_version']) < 8:
pkg_mgr_name = 'yum'
else:
pkg_mgr_name = 'dnf'
return pkg_mgr_name
def _check_apt_flavor(self, pkg_mgr_name):
# Check if '/usr/bin/apt' is APT-RPM or an ordinary (dpkg-based) APT.
# There's rpm package on Debian, so checking if /usr/bin/rpm exists
# is not enough. Instead ask RPM if /usr/bin/apt-get belongs to some
# RPM package.
rpm_query = '/usr/bin/rpm -q --whatprovides /usr/bin/apt-get'.split()
if os.path.exists('/usr/bin/rpm'):
with open(os.devnull, 'w') as null:
try:
subprocess.check_call(rpm_query, stdout=null, stderr=null)
pkg_mgr_name = 'apt_rpm'
except subprocess.CalledProcessError:
# No apt-get in RPM database. Looks like Debian/Ubuntu
# with rpm package installed
pkg_mgr_name = 'apt'
return pkg_mgr_name
def collect(self, module=None, collected_facts=None):
facts_dict = {}
collected_facts = collected_facts or {}
pkg_mgr_name = 'unknown'
for pkg in PKG_MGRS:
if os.path.exists(pkg['path']):
pkg_mgr_name = pkg['name']
# Handle distro family defaults when more than one package manager is
# installed or available to the distro, the ansible_fact entry should be
# the default package manager officially supported by the distro.
if collected_facts['ansible_os_family'] == "RedHat":
pkg_mgr_name = self._check_rh_versions(pkg_mgr_name, collected_facts)
elif collected_facts['ansible_os_family'] == 'Debian' and pkg_mgr_name != 'apt':
# It's possible to install yum, dnf, zypper, rpm, etc inside of
# Debian. Doing so does not mean the system wants to use them.
pkg_mgr_name = 'apt'
elif collected_facts['ansible_os_family'] == 'Altlinux':
if pkg_mgr_name == 'apt':
pkg_mgr_name = 'apt_rpm'
# Check if /usr/bin/apt-get is ordinary (dpkg-based) APT or APT-RPM
if pkg_mgr_name == 'apt':
pkg_mgr_name = self._check_apt_flavor(pkg_mgr_name)
facts_dict['pkg_mgr'] = pkg_mgr_name
return facts_dict
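A minimal sketch of one possible fix, hedged (this is not necessarily how PR #73445 implements it, and the Fedora < 23 yum logic is omitted): check the /run/ostree-booted marker before any distribution-specific branching, on the assumption that every rpm-ostree based system (Fedora CoreOS/IoT, RHEL for Edge) exposes that file. The function name and signature here are illustrative:

```python
import os

def detect_rh_pkg_mgr(distribution, major_version):
    # rpm-ostree systems expose /run/ostree-booted whether they identify
    # as Fedora or RHEL, so test for it before branching on distribution.
    if os.path.exists('/run/ostree-booted'):
        return 'atomic_container'
    if distribution == 'Amazon':
        return 'yum'
    try:
        # RHEL < 8 ships yum as the supported manager; RHEL 8+ ships dnf.
        return 'yum' if int(major_version) < 8 else 'dnf'
    except ValueError:
        return 'dnf'
```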
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,084 |
RHEL 8.3 Edge identifies wrong package manager
|
##### SUMMARY
I tried automating stuff on RHEL 8.3 Edge OS. It is similar to Fedora CoreOS or Fedora-IoT: it uses rpm-ostree as its package manager, not dnf or yum like regular RHEL.
However, it identifies ```"ansible_pkg_mgr": "dnf"```,
whereas it should be ```atomic_container```, as on Fedora CoreOS derivatives.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible_facts.pkg_mgr
##### ANSIBLE VERSION
This is the latest ansible on my rhel 8.3 host:
```paste below
ansible --version
ansible 2.9.16
config file = /home/itengval/src/ansible-test/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /bin/ansible
python version = 3.6.8 (default, Aug 18 2020, 08:33:21) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
```
##### CONFIGURATION
```paste below
ANSIBLE_PIPELINING(/home/itengval/src/ansible-test/ansible.cfg) = True
DEFAULT_CALLBACK_WHITELIST(/home/itengval/src/ansible-test/ansible.cfg) = ['timer']
DEFAULT_ROLES_PATH(/home/itengval/src/ansible-test/ansible.cfg) = ['/home/itengval/src/ansible-test/roles']
HOST_KEY_CHECKING(/home/itengval/src/ansible-test/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Target os is a RHEL 8.3 Edge VM I created, with minimal set of packages:
```
name = "cockpit-podman"
name = "podman"
name = "openssh-server"
name = "cockpit"
name = "cockpit-packagekit"
name = "cockpit-pcp"
name = "cockpit-system"
name = "cockpit-storaged"
```
##### STEPS TO REPRODUCE
```yaml
- name: ensure firewalld is installed
tags: firewall
package: name=firewalld state=present
when: ansible_pkg_mgr != "atomic_container"
- name: ensure firewalld is installed (on fedora-iot)
tags: firewall
command: >-
rpm-ostree install --idempotent --unchanged-exit-77
--allow-inactive firewalld
register: ostree
failed_when: not ( ostree.rc == 77 or ostree.rc == 0 )
changed_when: ostree.rc != 77
when: ansible_pkg_mgr == "atomic_container"
```
##### EXPECTED RESULTS
It should go like this:
```
TASK [ikke_t.podman_container_systemd : ensure firewalld is installed (on fedora-iot)] **********************
changed: [edge]
```
##### ACTUAL RESULTS
```paste below
TASK [ikke_t.podman_container_systemd : ensure firewalld is installed] **************************************
fatal: [edge]: FAILED! => {"changed": false, "cmd": "dnf install -y python3-dnf", "msg": "[Errno 2] No such file or directory: b'dnf': b'dnf'", "rc": 2}
```
It works if I start the playbook with fixed package manager:
```
ansible-playbook -i edge, -u cloud-user -b -e container_state=running -e ansible_pkg_mgr=atomic_container run-container-grafana-podman.yml
```
|
https://github.com/ansible/ansible/issues/73084
|
https://github.com/ansible/ansible/pull/73445
|
1c83672532330d0d26b63f8a97ee703e7fc697df
|
9a9272305a7b09f84861c7061f57945ae9ad7090
| 2020-12-30T13:22:11Z |
python
| 2021-02-02T20:09:30Z |
test/units/module_utils/facts/test_ansible_collector.py
|
# -*- coding: utf-8 -*-
#
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# for testing
from units.compat import unittest
from units.compat.mock import Mock, patch
from ansible.module_utils.facts import collector
from ansible.module_utils.facts import ansible_collector
from ansible.module_utils.facts import namespace
from ansible.module_utils.facts.other.facter import FacterFactCollector
from ansible.module_utils.facts.other.ohai import OhaiFactCollector
from ansible.module_utils.facts.system.apparmor import ApparmorFactCollector
from ansible.module_utils.facts.system.caps import SystemCapabilitiesFactCollector
from ansible.module_utils.facts.system.date_time import DateTimeFactCollector
from ansible.module_utils.facts.system.env import EnvFactCollector
from ansible.module_utils.facts.system.distribution import DistributionFactCollector
from ansible.module_utils.facts.system.dns import DnsFactCollector
from ansible.module_utils.facts.system.fips import FipsFactCollector
from ansible.module_utils.facts.system.local import LocalFactCollector
from ansible.module_utils.facts.system.lsb import LSBFactCollector
from ansible.module_utils.facts.system.pkg_mgr import PkgMgrFactCollector, OpenBSDPkgMgrFactCollector
from ansible.module_utils.facts.system.platform import PlatformFactCollector
from ansible.module_utils.facts.system.python import PythonFactCollector
from ansible.module_utils.facts.system.selinux import SelinuxFactCollector
from ansible.module_utils.facts.system.service_mgr import ServiceMgrFactCollector
from ansible.module_utils.facts.system.user import UserFactCollector
# from ansible.module_utils.facts.hardware.base import HardwareCollector
from ansible.module_utils.facts.network.base import NetworkCollector
from ansible.module_utils.facts.virtual.base import VirtualCollector
ALL_COLLECTOR_CLASSES = \
[PlatformFactCollector,
DistributionFactCollector,
SelinuxFactCollector,
ApparmorFactCollector,
SystemCapabilitiesFactCollector,
FipsFactCollector,
PkgMgrFactCollector,
OpenBSDPkgMgrFactCollector,
ServiceMgrFactCollector,
LSBFactCollector,
DateTimeFactCollector,
UserFactCollector,
LocalFactCollector,
EnvFactCollector,
DnsFactCollector,
PythonFactCollector,
# FIXME: re-enable when Hardware() doesn't munge self.facts
# HardwareCollector
NetworkCollector,
VirtualCollector,
OhaiFactCollector,
FacterFactCollector]
def mock_module(gather_subset=None,
filter=None):
if gather_subset is None:
gather_subset = ['all', '!facter', '!ohai']
if filter is None:
filter = '*'
mock_module = Mock()
mock_module.params = {'gather_subset': gather_subset,
'gather_timeout': 5,
'filter': filter}
mock_module.get_bin_path = Mock(return_value=None)
return mock_module
def _collectors(module,
all_collector_classes=None,
minimal_gather_subset=None):
gather_subset = module.params.get('gather_subset')
if all_collector_classes is None:
all_collector_classes = ALL_COLLECTOR_CLASSES
if minimal_gather_subset is None:
minimal_gather_subset = frozenset([])
collector_classes = \
collector.collector_classes_from_gather_subset(all_collector_classes=all_collector_classes,
minimal_gather_subset=minimal_gather_subset,
gather_subset=gather_subset)
collectors = []
for collector_class in collector_classes:
collector_obj = collector_class()
collectors.append(collector_obj)
# Add a collector that knows what gather_subset we used so it can provide a fact
collector_meta_data_collector = \
ansible_collector.CollectorMetaDataCollector(gather_subset=gather_subset,
module_setup=True)
collectors.append(collector_meta_data_collector)
return collectors
ns = namespace.PrefixFactNamespace('ansible_facts', 'ansible_')
# FIXME: this is brute force, but hopefully enough to get some refactoring to make facts testable
class TestInPlace(unittest.TestCase):
def _mock_module(self, gather_subset=None):
return mock_module(gather_subset=gather_subset)
def _collectors(self, module,
all_collector_classes=None,
minimal_gather_subset=None):
return _collectors(module=module,
all_collector_classes=all_collector_classes,
minimal_gather_subset=minimal_gather_subset)
def test(self):
gather_subset = ['all']
mock_module = self._mock_module(gather_subset=gather_subset)
all_collector_classes = [EnvFactCollector]
collectors = self._collectors(mock_module,
all_collector_classes=all_collector_classes)
fact_collector = \
ansible_collector.AnsibleFactCollector(collectors=collectors,
namespace=ns)
res = fact_collector.collect(module=mock_module)
self.assertIsInstance(res, dict)
self.assertIn('env', res)
self.assertIn('gather_subset', res)
self.assertEqual(res['gather_subset'], ['all'])
def test1(self):
gather_subset = ['all']
mock_module = self._mock_module(gather_subset=gather_subset)
collectors = self._collectors(mock_module)
fact_collector = \
ansible_collector.AnsibleFactCollector(collectors=collectors,
namespace=ns)
res = fact_collector.collect(module=mock_module)
self.assertIsInstance(res, dict)
# just assert it's not almost empty
# with run_command and get_file_content mock, many facts are empty, like network
self.assertGreater(len(res), 20)
def test_empty_all_collector_classes(self):
mock_module = self._mock_module()
all_collector_classes = []
collectors = self._collectors(mock_module,
all_collector_classes=all_collector_classes)
fact_collector = \
ansible_collector.AnsibleFactCollector(collectors=collectors,
namespace=ns)
res = fact_collector.collect()
self.assertIsInstance(res, dict)
# just assert it's not almost empty
self.assertLess(len(res), 3)
# def test_facts_class(self):
# mock_module = self._mock_module()
# Facts(mock_module)
# def test_facts_class_load_on_init_false(self):
# mock_module = self._mock_module()
# Facts(mock_module, load_on_init=False)
# # FIXME: assert something
class TestCollectedFacts(unittest.TestCase):
gather_subset = ['all', '!facter', '!ohai']
min_fact_count = 30
max_fact_count = 1000
# TODO: add ansible_cmdline, ansible_*_pubkey* back when TempFactCollector goes away
expected_facts = ['date_time',
'user_id', 'distribution',
'gather_subset', 'module_setup',
'env']
not_expected_facts = ['facter', 'ohai']
collected_facts = {}
def _mock_module(self, gather_subset=None):
return mock_module(gather_subset=self.gather_subset)
@patch('platform.system', return_value='Linux')
@patch('ansible.module_utils.facts.system.service_mgr.get_file_content', return_value='systemd')
def setUp(self, mock_gfc, mock_ps):
mock_module = self._mock_module()
collectors = self._collectors(mock_module)
fact_collector = \
ansible_collector.AnsibleFactCollector(collectors=collectors,
namespace=ns)
self.facts = fact_collector.collect(module=mock_module,
collected_facts=self.collected_facts)
def _collectors(self, module,
all_collector_classes=None,
minimal_gather_subset=None):
return _collectors(module=module,
all_collector_classes=all_collector_classes,
minimal_gather_subset=minimal_gather_subset)
def test_basics(self):
self._assert_basics(self.facts)
def test_expected_facts(self):
self._assert_expected_facts(self.facts)
def test_not_expected_facts(self):
self._assert_not_expected_facts(self.facts)
def _assert_basics(self, facts):
self.assertIsInstance(facts, dict)
# just assert it's not almost empty
self.assertGreaterEqual(len(facts), self.min_fact_count)
# and that is not huge number of keys
self.assertLess(len(facts), self.max_fact_count)
# everything starts with ansible_ namespace
def _assert_ansible_namespace(self, facts):
# FIXME: kluge for non-namespace fact
facts.pop('module_setup', None)
facts.pop('gather_subset', None)
for fact_key in facts:
self.assertTrue(fact_key.startswith('ansible_'),
'The fact name "%s" does not start with "ansible_"' % fact_key)
def _assert_expected_facts(self, facts):
facts_keys = sorted(facts.keys())
for expected_fact in self.expected_facts:
self.assertIn(expected_fact, facts_keys)
def _assert_not_expected_facts(self, facts):
facts_keys = sorted(facts.keys())
for not_expected_fact in self.not_expected_facts:
self.assertNotIn(not_expected_fact, facts_keys)
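# The next three collectors exercise inter-collector dependencies: the first
# provides a fact, the second consumes it from collected_facts, and the third
# concatenates every fact value gathered so far.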
class ProvidesOtherFactCollector(collector.BaseFactCollector):
name = 'provides_something'
_fact_ids = set(['needed_fact'])
def collect(self, module=None, collected_facts=None):
return {'needed_fact': 'THE_NEEDED_FACT_VALUE'}
class RequiresOtherFactCollector(collector.BaseFactCollector):
name = 'requires_something'
def collect(self, module=None, collected_facts=None):
collected_facts = collected_facts or {}
fact_dict = {}
fact_dict['needed_fact'] = collected_facts['needed_fact']
fact_dict['compound_fact'] = "compound-%s" % collected_facts['needed_fact']
return fact_dict
class ConCatFactCollector(collector.BaseFactCollector):
name = 'concat_collected'
def collect(self, module=None, collected_facts=None):
collected_facts = collected_facts or {}
fact_dict = {}
con_cat_list = []
for key, value in collected_facts.items():
con_cat_list.append(value)
fact_dict['concat_fact'] = '-'.join(con_cat_list)
return fact_dict
class TestCollectorDepsWithFilter(unittest.TestCase):
gather_subset = ['all', '!facter', '!ohai']
def _mock_module(self, gather_subset=None, filter=None):
return mock_module(gather_subset=self.gather_subset,
filter=filter)
def setUp(self):
self.mock_module = self._mock_module()
self.collectors = self._collectors(self.mock_module)
def _collectors(self, module,
all_collector_classes=None,
minimal_gather_subset=None):
return [ProvidesOtherFactCollector(),
RequiresOtherFactCollector()]
def test_no_filter(self):
_mock_module = mock_module(gather_subset=['all', '!facter', '!ohai'])
facts_dict = self._collect(_mock_module)
expected = {'needed_fact': 'THE_NEEDED_FACT_VALUE',
'compound_fact': 'compound-THE_NEEDED_FACT_VALUE'}
self.assertEqual(expected, facts_dict)
def test_with_filter_on_compound_fact(self):
_mock_module = mock_module(gather_subset=['all', '!facter', '!ohai'],
filter='compound_fact')
facts_dict = self._collect(_mock_module)
expected = {'compound_fact': 'compound-THE_NEEDED_FACT_VALUE'}
self.assertEqual(expected, facts_dict)
def test_with_filter_on_needed_fact(self):
_mock_module = mock_module(gather_subset=['all', '!facter', '!ohai'],
filter='needed_fact')
facts_dict = self._collect(_mock_module)
expected = {'needed_fact': 'THE_NEEDED_FACT_VALUE'}
self.assertEqual(expected, facts_dict)
def test_with_filter_on_compound_gather_compound(self):
_mock_module = mock_module(gather_subset=['!all', '!any', 'compound_fact'],
filter='compound_fact')
facts_dict = self._collect(_mock_module)
expected = {'compound_fact': 'compound-THE_NEEDED_FACT_VALUE'}
self.assertEqual(expected, facts_dict)
def test_with_filter_no_match(self):
_mock_module = mock_module(gather_subset=['all', '!facter', '!ohai'],
filter='ansible_this_doesnt_exist')
facts_dict = self._collect(_mock_module)
expected = {}
self.assertEqual(expected, facts_dict)
def test_concat_collector(self):
_mock_module = mock_module(gather_subset=['all', '!facter', '!ohai'])
_collectors = self._collectors(_mock_module)
_collectors.append(ConCatFactCollector())
fact_collector = \
ansible_collector.AnsibleFactCollector(collectors=_collectors,
namespace=ns,
filter_spec=_mock_module.params['filter'])
collected_facts = {}
facts_dict = fact_collector.collect(module=_mock_module,
collected_facts=collected_facts)
self.assertIn('concat_fact', facts_dict)
self.assertTrue('THE_NEEDED_FACT_VALUE' in facts_dict['concat_fact'])
def test_concat_collector_with_filter_on_concat(self):
_mock_module = mock_module(gather_subset=['all', '!facter', '!ohai'],
filter='concat_fact')
_collectors = self._collectors(_mock_module)
_collectors.append(ConCatFactCollector())
fact_collector = \
ansible_collector.AnsibleFactCollector(collectors=_collectors,
namespace=ns,
filter_spec=_mock_module.params['filter'])
collected_facts = {}
facts_dict = fact_collector.collect(module=_mock_module,
collected_facts=collected_facts)
self.assertIn('concat_fact', facts_dict)
self.assertTrue('THE_NEEDED_FACT_VALUE' in facts_dict['concat_fact'])
self.assertTrue('compound' in facts_dict['concat_fact'])
def _collect(self, _mock_module, collected_facts=None):
_collectors = self._collectors(_mock_module)
fact_collector = \
ansible_collector.AnsibleFactCollector(collectors=_collectors,
namespace=ns,
filter_spec=_mock_module.params['filter'])
facts_dict = fact_collector.collect(module=_mock_module,
collected_facts=collected_facts)
return facts_dict
class ExceptionThrowingCollector(collector.BaseFactCollector):
def collect(self, module=None, collected_facts=None):
raise Exception('A collector failed')
class TestExceptionCollectedFacts(TestCollectedFacts):
def _collectors(self, module,
all_collector_classes=None,
minimal_gather_subset=None):
collectors = _collectors(module=module,
all_collector_classes=all_collector_classes,
minimal_gather_subset=minimal_gather_subset)
c = [ExceptionThrowingCollector()] + collectors
return c
class TestOnlyExceptionCollector(TestCollectedFacts):
expected_facts = []
min_fact_count = 0
def _collectors(self, module,
all_collector_classes=None,
minimal_gather_subset=None):
return [ExceptionThrowingCollector()]
class TestMinimalCollectedFacts(TestCollectedFacts):
gather_subset = ['!all']
min_fact_count = 1
max_fact_count = 10
expected_facts = ['gather_subset',
'module_setup']
not_expected_facts = ['lsb']
class TestFacterCollectedFacts(TestCollectedFacts):
gather_subset = ['!all', 'facter']
min_fact_count = 1
max_fact_count = 10
expected_facts = ['gather_subset',
'module_setup']
not_expected_facts = ['lsb']
class TestOhaiCollectedFacts(TestCollectedFacts):
gather_subset = ['!all', 'ohai']
min_fact_count = 1
max_fact_count = 10
expected_facts = ['gather_subset',
'module_setup']
not_expected_facts = ['lsb']
class TestPkgMgrFacts(TestCollectedFacts):
gather_subset = ['pkg_mgr']
min_fact_count = 1
max_fact_count = 20
expected_facts = ['gather_subset',
'module_setup',
'pkg_mgr']
collected_facts = {
"ansible_distribution": "Fedora",
"ansible_distribution_major_version": "28",
"ansible_os_family": "RedHat"
}
class TestOpenBSDPkgMgrFacts(TestPkgMgrFacts):
def test_is_openbsd_pkg(self):
self.assertIn('pkg_mgr', self.facts)
self.assertEqual(self.facts['pkg_mgr'], 'openbsd_pkg')
def setUp(self):
self.patcher = patch('platform.system')
mock_platform = self.patcher.start()
mock_platform.return_value = 'OpenBSD'
mock_module = self._mock_module()
collectors = self._collectors(mock_module)
fact_collector = \
ansible_collector.AnsibleFactCollector(collectors=collectors,
namespace=ns)
self.facts = fact_collector.collect(module=mock_module)
def tearDown(self):
self.patcher.stop()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,470 |
RFE: lineinfile should support a pure (non-regex) line
|
---
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
I want to be able to use a pure (non-regex) line with lineinfile. Without accidentally escaping or not escaping characters.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
lineinfile
##### ADDITIONAL INFORMATION
I have a line in a file which has lots of special characters. I want to use Ansible's lineinfile module.
Ansible's lineinfile module requires a regular expression, which means lots of escaping for me, and a role or playbook that is more difficult to read.
Could the module support a non-regex line too?
Example of problem regex (have fun escaping this one! - real world example):
```yaml
lineinfile:
path: /etc/apt/apt.conf.d/50unattended-upgrades
regexp: '# "${distro_id}:${distro_codename}-updates";'
line: ' "${distro_id}:${distro_codename}-updates";'
```
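For illustration, here is a minimal sketch of the desired usage, assuming a hypothetical literal-match parameter named `search_string` (the name follows the changelog fragment referenced elsewhere in this document; the exact name and semantics are assumptions, not the module's current API):
```yaml
lineinfile:
  path: /etc/apt/apt.conf.d/50unattended-upgrades
  # matched literally, so none of the regex metacharacters need escaping
  search_string: '# "${distro_id}:${distro_codename}-updates";'
  line: '        "${distro_id}:${distro_codename}-updates";'
```
Until something like this exists, the value has to be escaped by hand or with the `regex_escape` filter.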
|
https://github.com/ansible/ansible/issues/70470
|
https://github.com/ansible/ansible/pull/70647
|
9a9272305a7b09f84861c7061f57945ae9ad7090
|
69631da889e21a5513916b62d72c115064b7660b
| 2020-07-06T10:37:09Z |
python
| 2021-02-02T20:37:06Z |
changelogs/fragments/lineinfile-add-search_string-parameter-for-non-regexp-searching.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,470 |
RFE: lineinfile should support a pure (non-regex) line
|
---
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
I want to be able to use a pure (non-regex) line with lineinfile. Without accidentally escaping or not escaping characters.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
lineinfile
##### ADDITIONAL INFORMATION
I have a line in a file which has lots of special characters. I want to use Ansible's lineinfile module.
Ansible's lineinfile module requires a regular expression, which means lots of escaping for me, and a role or playbook that is more difficult to read.
Could the module support a non-regex line too?
Example of problem regex (have fun escaping this one! - real world example):
```yaml
lineinfile:
path: /etc/apt/apt.conf.d/50unattended-upgrades
regexp: '# "${distro_id}:${distro_codename}-updates";'
line: ' "${distro_id}:${distro_codename}-updates";'
```
|
https://github.com/ansible/ansible/issues/70470
|
https://github.com/ansible/ansible/pull/70647
|
9a9272305a7b09f84861c7061f57945ae9ad7090
|
69631da889e21a5513916b62d72c115064b7660b
| 2020-07-06T10:37:09Z |
python
| 2021-02-02T20:37:06Z |
lib/ansible/modules/lineinfile.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Daniel Hokka Zakrisson <[email protected]>
# Copyright: (c) 2014, Ahti Kitsik <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: lineinfile
short_description: Manage lines in text files
description:
- This module ensures a particular line is in a file, or replaces an
existing line using a back-referenced regular expression.
- This is primarily useful when you want to change a single line in a file only.
- See the M(ansible.builtin.replace) module if you want to change multiple, similar lines
or check M(ansible.builtin.blockinfile) if you want to insert/update/remove a block of lines in a file.
For other cases, see the M(ansible.builtin.copy) or M(ansible.builtin.template) modules.
version_added: "0.7"
options:
path:
description:
- The file to modify.
- Before Ansible 2.3 this option was only usable as I(dest), I(destfile) and I(name).
type: path
required: true
aliases: [ dest, destfile, name ]
regexp:
description:
- The regular expression to look for in every line of the file.
- For C(state=present), the pattern to replace if found. Only the last line found will be replaced.
- For C(state=absent), the pattern of the line(s) to remove.
- If the regular expression is not matched, the line will be
added to the file in keeping with C(insertbefore) or C(insertafter)
settings.
- When modifying a line the regexp should typically match both the initial state of
the line as well as its state after replacement by C(line) to ensure idempotence.
- Uses Python regular expressions. See U(https://docs.python.org/3/library/re.html).
type: str
aliases: [ regex ]
version_added: '1.7'
state:
description:
- Whether the line should be there or not.
type: str
choices: [ absent, present ]
default: present
line:
description:
- The line to insert/replace into the file.
- Required for C(state=present).
- If C(backrefs) is set, may contain backreferences that will get
expanded with the C(regexp) capture groups if the regexp matches.
type: str
aliases: [ value ]
backrefs:
description:
- Used with C(state=present).
- If set, C(line) can contain backreferences (both positional and named)
that will get populated if the C(regexp) matches.
- This parameter changes the operation of the module slightly;
C(insertbefore) and C(insertafter) will be ignored, and if the C(regexp)
does not match anywhere in the file, the file will be left unchanged.
- If the C(regexp) does match, the last matching line will be replaced by
the expanded line parameter.
type: bool
default: no
version_added: "1.1"
insertafter:
description:
- Used with C(state=present).
- If specified, the line will be inserted after the last match of the specified regular expression.
- If the first match is required, use C(firstmatch=yes).
- A special value is available; C(EOF) for inserting the line at the end of the file.
- If the specified regular expression has no matches, EOF will be used instead.
- If C(insertbefore) is set, the default value C(EOF) will be ignored.
- If regular expressions are passed to both C(regexp) and C(insertafter), C(insertafter) is only honored if no match for C(regexp) is found.
- May not be used with C(backrefs) or C(insertbefore).
type: str
choices: [ EOF, '*regex*' ]
default: EOF
insertbefore:
description:
- Used with C(state=present).
- If specified, the line will be inserted before the last match of the specified regular expression.
- If the first match is required, use C(firstmatch=yes).
- A special value is available; C(BOF) for inserting the line at the beginning of the file.
- If the specified regular expression has no matches, the line will be inserted at the end of the file.
- If regular expressions are passed to both C(regexp) and C(insertbefore), C(insertbefore) is only honored if no match for C(regexp) is found.
- May not be used with C(backrefs) or C(insertafter).
type: str
choices: [ BOF, '*regex*' ]
version_added: "1.1"
create:
description:
- Used with C(state=present).
- If specified, the file will be created if it does not already exist.
- By default it will fail if the file is missing.
type: bool
default: no
backup:
description:
- Create a backup file including the timestamp information so you can
get the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
firstmatch:
description:
- Used with C(insertafter) or C(insertbefore).
- If set, C(insertafter) and C(insertbefore) will work with the first line that matches the given regular expression.
type: bool
default: no
version_added: "2.5"
others:
description:
- All arguments accepted by the M(ansible.builtin.file) module also work here.
type: str
extends_documentation_fragment:
- files
- validate
notes:
- As of Ansible 2.3, the I(dest) option has been changed to I(path) as default, but I(dest) still works as well.
- Supports C(check_mode).
seealso:
- module: ansible.builtin.blockinfile
- module: ansible.builtin.copy
- module: ansible.builtin.file
- module: ansible.builtin.replace
- module: ansible.builtin.template
- module: community.windows.win_lineinfile
author:
- Daniel Hokka Zakrissoni (@dhozac)
- Ahti Kitsik (@ahtik)
'''
EXAMPLES = r'''
# NOTE: Before 2.3, option 'dest', 'destfile' or 'name' was used instead of 'path'
- name: Ensure SELinux is set to enforcing mode
ansible.builtin.lineinfile:
path: /etc/selinux/config
regexp: '^SELINUX='
line: SELINUX=enforcing
- name: Make sure group wheel is not in the sudoers configuration
ansible.builtin.lineinfile:
path: /etc/sudoers
state: absent
regexp: '^%wheel'
- name: Replace a localhost entry with our own
ansible.builtin.lineinfile:
path: /etc/hosts
regexp: '^127\.0\.0\.1'
line: 127.0.0.1 localhost
owner: root
group: root
mode: '0644'
- name: Ensure the default Apache port is 8080
ansible.builtin.lineinfile:
path: /etc/httpd/conf/httpd.conf
regexp: '^Listen '
insertafter: '^#Listen '
line: Listen 8080
- name: Ensure we have our own comment added to /etc/services
ansible.builtin.lineinfile:
path: /etc/services
regexp: '^# port for http'
insertbefore: '^www.*80/tcp'
line: '# port for http by default'
- name: Add a line to a file if the file does not exist, without passing regexp
ansible.builtin.lineinfile:
path: /tmp/testfile
line: 192.168.1.99 foo.lab.net foo
create: yes
# NOTE: Yaml requires escaping backslashes in double quotes but not in single quotes
- name: Ensure the JBoss memory settings are exactly as needed
ansible.builtin.lineinfile:
path: /opt/jboss-as/bin/standalone.conf
regexp: '^(.*)Xms(\d+)m(.*)$'
line: '\1Xms${xms}m\3'
backrefs: yes
# NOTE: Fully quoted because of the ': ' on the line. See the Gotchas in the YAML docs.
- name: Validate the sudoers file before saving
ansible.builtin.lineinfile:
path: /etc/sudoers
state: present
regexp: '^%ADMIN ALL='
line: '%ADMIN ALL=(ALL) NOPASSWD: ALL'
validate: /usr/sbin/visudo -cf %s
# See https://docs.python.org/3/library/re.html for further details on syntax
- name: Use backrefs with alternative group syntax to avoid conflicts with variable values
ansible.builtin.lineinfile:
path: /tmp/config
regexp: ^(host=).*
line: \g<1>{{ hostname }}
backrefs: yes
'''
RETURN = r'''#'''
import os
import re
import tempfile
# import module snippets
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_bytes, to_native, to_text
def write_changes(module, b_lines, dest):
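# Write the new contents to a temp file, run the optional user-supplied
# 'validate' command against it, and atomically move it over the destination
# only if validation passed (or no validate command was given).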
tmpfd, tmpfile = tempfile.mkstemp(dir=module.tmpdir)
with os.fdopen(tmpfd, 'wb') as f:
f.writelines(b_lines)
validate = module.params.get('validate', None)
valid = not validate
if validate:
if "%s" not in validate:
module.fail_json(msg="validate must contain %%s: %s" % (validate))
(rc, out, err) = module.run_command(to_bytes(validate % tmpfile, errors='surrogate_or_strict'))
valid = rc == 0
if rc != 0:
module.fail_json(msg='failed to validate: '
'rc:%s error:%s' % (rc, err))
if valid:
module.atomic_move(tmpfile,
to_native(os.path.realpath(to_bytes(dest, errors='surrogate_or_strict')), errors='surrogate_or_strict'),
unsafe_writes=module.params['unsafe_writes'])
def check_file_attrs(module, changed, message, diff):
file_args = module.load_file_common_arguments(module.params)
if module.set_fs_attributes_if_different(file_args, False, diff=diff):
if changed:
message += " and "
changed = True
message += "ownership, perms or SE linux context changed"
return message, changed
def present(module, dest, regexp, line, insertafter, insertbefore, create,
backup, backrefs, firstmatch):
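# Ensure 'line' is present in 'dest': replace the line matched by 'regexp'
# if one exists, otherwise insert relative to insertafter/insertbefore,
# honoring backrefs, firstmatch, create and backup along the way.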
diff = {'before': '',
'after': '',
'before_header': '%s (content)' % dest,
'after_header': '%s (content)' % dest}
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if not os.path.exists(b_dest):
if not create:
module.fail_json(rc=257, msg='Destination %s does not exist !' % dest)
b_destpath = os.path.dirname(b_dest)
if b_destpath and not os.path.exists(b_destpath) and not module.check_mode:
try:
os.makedirs(b_destpath)
except Exception as e:
module.fail_json(msg='Error creating %s (%s)' % (to_text(b_destpath), to_text(e)))
b_lines = []
else:
with open(b_dest, 'rb') as f:
b_lines = f.readlines()
if module._diff:
diff['before'] = to_native(b''.join(b_lines))
if regexp is not None:
bre_m = re.compile(to_bytes(regexp, errors='surrogate_or_strict'))
if insertafter not in (None, 'BOF', 'EOF'):
bre_ins = re.compile(to_bytes(insertafter, errors='surrogate_or_strict'))
elif insertbefore not in (None, 'BOF'):
bre_ins = re.compile(to_bytes(insertbefore, errors='surrogate_or_strict'))
else:
bre_ins = None
# index[0] is the line num where regexp has been found
# index[1] is the line num where insertafter/insertbefore has been found
index = [-1, -1]
match = None
exact_line_match = False
b_line = to_bytes(line, errors='surrogate_or_strict')
# The module's doc says
# "If regular expressions are passed to both regexp and
# insertafter, insertafter is only honored if no match for regexp is found."
# Therefore:
# 1. regexp was found -> ignore insertafter, replace the found line
# 2. regexp was not found -> insert the line after 'insertafter' or 'insertbefore' line
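# For example (mirroring the Listen example in DOCUMENTATION): with
# regexp='^Listen ' and insertafter='^#Listen ', an existing 'Listen 80'
# line is replaced in place; only when no line matches '^Listen ' is the
# new line inserted after the '#Listen' line.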
# Given the above:
# 1. First check that there is no match for regexp:
if regexp is not None:
for lineno, b_cur_line in enumerate(b_lines):
match_found = bre_m.search(b_cur_line)
if match_found:
index[0] = lineno
match = match_found
if firstmatch:
break
# 2. When no match found on the previous step,
# parse for searching insertafter/insertbefore:
if not match:
for lineno, b_cur_line in enumerate(b_lines):
if b_line == b_cur_line.rstrip(b'\r\n'):
index[0] = lineno
exact_line_match = True
elif bre_ins is not None and bre_ins.search(b_cur_line):
if insertafter:
# + 1 for the next line
index[1] = lineno + 1
if firstmatch:
break
if insertbefore:
# index[1] for the previous line
index[1] = lineno
if firstmatch:
break
msg = ''
changed = False
b_linesep = to_bytes(os.linesep, errors='surrogate_or_strict')
# Exact line or Regexp matched a line in the file
if index[0] != -1:
if backrefs and match:
b_new_line = match.expand(b_line)
else:
# Don't do backref expansion if not asked.
b_new_line = b_line
if not b_new_line.endswith(b_linesep):
b_new_line += b_linesep
# If no regexp was given and no line match is found anywhere in the file,
# insert the line appropriately if using insertbefore or insertafter
if regexp is None and match is None and not exact_line_match:
# Insert lines
if insertafter and insertafter != 'EOF':
# Ensure there is a line separator after the found string
# at the end of the file.
if b_lines and b_lines[-1][-1:] not in (b'\n', b'\r'):
b_lines[-1] = b_lines[-1] + b_linesep
# If the line to insert after is at the end of the file
# use the appropriate index value.
if len(b_lines) == index[1]:
if b_lines[index[1] - 1].rstrip(b'\r\n') != b_line:
b_lines.append(b_line + b_linesep)
msg = 'line added'
changed = True
elif b_lines[index[1]].rstrip(b'\r\n') != b_line:
b_lines.insert(index[1], b_line + b_linesep)
msg = 'line added'
changed = True
elif insertbefore and insertbefore != 'BOF':
# If the line to insert before is at the beginning of the file
# use the appropriate index value.
if index[1] <= 0:
if b_lines[index[1]].rstrip(b'\r\n') != b_line:
b_lines.insert(index[1], b_line + b_linesep)
msg = 'line added'
changed = True
elif b_lines[index[1] - 1].rstrip(b'\r\n') != b_line:
b_lines.insert(index[1], b_line + b_linesep)
msg = 'line added'
changed = True
elif b_lines[index[0]] != b_new_line:
b_lines[index[0]] = b_new_line
msg = 'line replaced'
changed = True
elif backrefs:
# Do absolutely nothing, since it's not safe to generate the line
# without the regexp match to populate the backrefs.
pass
# Add it to the beginning of the file
elif insertbefore == 'BOF' or insertafter == 'BOF':
b_lines.insert(0, b_line + b_linesep)
msg = 'line added'
changed = True
# Add it to the end of the file if requested or
# if insertafter/insertbefore didn't match anything
# (so default behaviour is to add at the end)
elif insertafter == 'EOF' or index[1] == -1:
# If the file is not empty then ensure there's a newline before the added line
if b_lines and b_lines[-1][-1:] not in (b'\n', b'\r'):
b_lines.append(b_linesep)
b_lines.append(b_line + b_linesep)
msg = 'line added'
changed = True
elif insertafter and index[1] != -1:
# Don't insert the line if it already matches at the index.
# If the line to insert after is at the end of the file use the appropriate index value.
if len(b_lines) == index[1]:
if b_lines[index[1] - 1].rstrip(b'\r\n') != b_line:
b_lines.append(b_line + b_linesep)
msg = 'line added'
changed = True
elif b_line != b_lines[index[1]].rstrip(b'\n\r'):
b_lines.insert(index[1], b_line + b_linesep)
msg = 'line added'
changed = True
# insert matched, but not the regexp
else:
b_lines.insert(index[1], b_line + b_linesep)
msg = 'line added'
changed = True
if module._diff:
diff['after'] = to_native(b''.join(b_lines))
backupdest = ""
if changed and not module.check_mode:
if backup and os.path.exists(b_dest):
backupdest = module.backup_local(dest)
write_changes(module, b_lines, dest)
if module.check_mode and not os.path.exists(b_dest):
module.exit_json(changed=changed, msg=msg, backup=backupdest, diff=diff)
attr_diff = {}
msg, changed = check_file_attrs(module, changed, msg, attr_diff)
attr_diff['before_header'] = '%s (file attributes)' % dest
attr_diff['after_header'] = '%s (file attributes)' % dest
difflist = [diff, attr_diff]
module.exit_json(changed=changed, msg=msg, backup=backupdest, diff=difflist)
def absent(module, dest, regexp, line, backup):
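# Remove every line matching 'regexp' (or, when no regexp is given, every
# line exactly equal to 'line') from 'dest'.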
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if not os.path.exists(b_dest):
module.exit_json(changed=False, msg="file not present")
msg = ''
diff = {'before': '',
'after': '',
'before_header': '%s (content)' % dest,
'after_header': '%s (content)' % dest}
with open(b_dest, 'rb') as f:
b_lines = f.readlines()
if module._diff:
diff['before'] = to_native(b''.join(b_lines))
if regexp is not None:
bre_c = re.compile(to_bytes(regexp, errors='surrogate_or_strict'))
found = []
b_line = to_bytes(line, errors='surrogate_or_strict')
def matcher(b_cur_line):
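# Return False for lines that should be removed; each removed line is also
# recorded in 'found' so the module can report a count afterwards.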
if regexp is not None:
match_found = bre_c.search(b_cur_line)
else:
match_found = b_line == b_cur_line.rstrip(b'\r\n')
if match_found:
found.append(b_cur_line)
return not match_found
b_lines = [l for l in b_lines if matcher(l)]
changed = len(found) > 0
if module._diff:
diff['after'] = to_native(b''.join(b_lines))
backupdest = ""
if changed and not module.check_mode:
if backup:
backupdest = module.backup_local(dest)
write_changes(module, b_lines, dest)
if changed:
msg = "%s line(s) removed" % len(found)
attr_diff = {}
msg, changed = check_file_attrs(module, changed, msg, attr_diff)
attr_diff['before_header'] = '%s (file attributes)' % dest
attr_diff['after_header'] = '%s (file attributes)' % dest
difflist = [diff, attr_diff]
module.exit_json(changed=changed, found=len(found), msg=msg, backup=backupdest, diff=difflist)
def main():
module = AnsibleModule(
argument_spec=dict(
path=dict(type='path', required=True, aliases=['dest', 'destfile', 'name']),
state=dict(type='str', default='present', choices=['absent', 'present']),
regexp=dict(type='str', aliases=['regex']),
line=dict(type='str', aliases=['value']),
insertafter=dict(type='str'),
insertbefore=dict(type='str'),
backrefs=dict(type='bool', default=False),
create=dict(type='bool', default=False),
backup=dict(type='bool', default=False),
firstmatch=dict(type='bool', default=False),
validate=dict(type='str'),
),
mutually_exclusive=[['insertbefore', 'insertafter']],
add_file_common_args=True,
supports_check_mode=True,
)
params = module.params
create = params['create']
backup = params['backup']
backrefs = params['backrefs']
path = params['path']
firstmatch = params['firstmatch']
regexp = params['regexp']
line = params['line']
if regexp == '':
module.warn(
"The regular expression is an empty string, which will match every line in the file. "
"This may have unintended consequences, such as replacing the last line in the file rather than appending. "
"If this is desired, use '^' to match every line in the file and avoid this warning.")
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.isdir(b_path):
module.fail_json(rc=256, msg='Path %s is a directory !' % path)
if params['state'] == 'present':
if backrefs and regexp is None:
module.fail_json(msg='regexp is required with backrefs=true')
if line is None:
module.fail_json(msg='line is required with state=present')
# Deal with the insertafter default value manually, to avoid errors
# because of the mutually_exclusive mechanism.
ins_bef, ins_aft = params['insertbefore'], params['insertafter']
if ins_bef is None and ins_aft is None:
ins_aft = 'EOF'
present(module, path, regexp, line,
ins_aft, ins_bef, create, backup, backrefs, firstmatch)
else:
if regexp is None and line is None:
module.fail_json(msg='one of line or regexp is required with state=absent')
absent(module, path, regexp, line, backup)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,470 |
RFE: lineinfile should support a pure (non-regex) line
|
---
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
I want to be able to use a pure (non-regex) line with lineinfile. Without accidentally escaping or not escaping characters.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
lineinfile
##### ADDITIONAL INFORMATION
I have a line in a file which has lots of special characters. I want to use Ansible's lineinfile module.
Ansible's lineinfile module requires a regular expression, which means lots of escaping for me, and a role or playbook that is more difficult to read.
Could the module support a non-regex line too?
Example of problem regex (have fun escaping this one! - real world example):
```yaml
lineinfile:
path: /etc/apt/apt.conf.d/50unattended-upgrades
regexp: '# "${distro_id}:${distro_codename}-updates";'
line: ' "${distro_id}:${distro_codename}-updates";'
```
|
https://github.com/ansible/ansible/issues/70470
|
https://github.com/ansible/ansible/pull/70647
|
9a9272305a7b09f84861c7061f57945ae9ad7090
|
69631da889e21a5513916b62d72c115064b7660b
| 2020-07-06T10:37:09Z |
python
| 2021-02-02T20:37:06Z |
test/integration/targets/lineinfile/files/teststring.conf
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,470 |
RFE: lineinfile should support a pure (non-regex) line
|
---
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
I want to be able to use a pure (non-regex) line with lineinfile. Without accidentally escaping or not escaping characters.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
lineinfile
##### ADDITIONAL INFORMATION
I have a line in a file which has lots of special characters. I want to use Ansible's lineinfile module.
Ansible's lineinfile module requires a regular expression, which means lots of escaping for me, and a role or playbook that is more difficult to read.
Could the module support a non-regex line too?
Example of problem regex (have fun escaping this one! - real world example):
```yaml
lineinfile:
path: /etc/apt/apt.conf.d/50unattended-upgrades
regexp: '# "${distro_id}:${distro_codename}-updates";'
line: ' "${distro_id}:${distro_codename}-updates";'
```
|
https://github.com/ansible/ansible/issues/70470
|
https://github.com/ansible/ansible/pull/70647
|
9a9272305a7b09f84861c7061f57945ae9ad7090
|
69631da889e21a5513916b62d72c115064b7660b
| 2020-07-06T10:37:09Z |
python
| 2021-02-02T20:37:06Z |
test/integration/targets/lineinfile/files/teststring.txt
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,470 |
RFE: lineinfile should support a pure (non-regex) line
|
---
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
I want to be able to use a pure (non-regex) line with lineinfile. Without accidentally escaping or not escaping characters.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
lineinfile
##### ADDITIONAL INFORMATION
I have a line in a file which has lots of special characters. I want to use Ansible's lineinfile module.
Ansible's lineinfile module requires a regular expression, which means lots of escaping for me, and a role or playbook that is more difficult to read.
Could the module support a non-regex line too?
Example of problem regex (have fun escaping this one! - real world example):
```yaml
lineinfile:
path: /etc/apt/apt.conf.d/50unattended-upgrades
regexp: '# "${distro_id}:${distro_codename}-updates";'
line: ' "${distro_id}:${distro_codename}-updates";'
```
|
https://github.com/ansible/ansible/issues/70470
|
https://github.com/ansible/ansible/pull/70647
|
9a9272305a7b09f84861c7061f57945ae9ad7090
|
69631da889e21a5513916b62d72c115064b7660b
| 2020-07-06T10:37:09Z |
python
| 2021-02-02T20:37:06Z |
test/integration/targets/lineinfile/files/teststring_58923.txt
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,470 |
RFE: lineinfile should support a pure (non-regex) line
|
---
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
I want to be able to use a pure (non-regex) line with lineinfile. Without accidentally escaping or not escaping characters.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
lineinfile
##### ADDITIONAL INFORMATION
I have a line in a file which has lots of special characters. I want to use Ansible's lineinfile module.
Ansible's lineinfile module requires a regular expression, which means lots of escaping for me, and a role or playbook that is more difficult to read.
Could the module support a non-regex line too?
Example of problem regex (have fun escaping this one! - real world example):
```yaml
lineinfile:
path: /etc/apt/apt.conf.d/50unattended-upgrades
regexp: '# "${distro_id}:${distro_codename}-updates";'
line: ' "${distro_id}:${distro_codename}-updates";'
```
|
https://github.com/ansible/ansible/issues/70470
|
https://github.com/ansible/ansible/pull/70647
|
9a9272305a7b09f84861c7061f57945ae9ad7090
|
69631da889e21a5513916b62d72c115064b7660b
| 2020-07-06T10:37:09Z |
python
| 2021-02-02T20:37:06Z |
test/integration/targets/lineinfile/tasks/main.yml
|
# test code for the lineinfile module
# (c) 2014, James Cammarata <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
- name: deploy the test file for lineinfile
copy:
src: test.txt
dest: "{{ output_dir }}/test.txt"
register: result
- name: assert that the test file was deployed
assert:
that:
- result is changed
- "result.checksum == '5feac65e442c91f557fc90069ce6efc4d346ab51'"
- "result.state == 'file'"
- name: insert a line at the beginning of the file, and back it up
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: present
line: "New line at the beginning"
insertbefore: "BOF"
backup: yes
register: result1
- name: insert a line at the beginning of the file again
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: present
line: "New line at the beginning"
insertbefore: "BOF"
register: result2
- name: assert that the line was inserted at the head of the file
assert:
that:
- result1 is changed
- result2 is not changed
- result1.msg == 'line added'
- result1.backup != ''
- name: stat the backup file
stat:
path: "{{ result1.backup }}"
register: result
- name: assert the backup file matches the previous hash
assert:
that:
- "result.stat.checksum == '5feac65e442c91f557fc90069ce6efc4d346ab51'"
- name: stat the test after the insert at the head
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test hash is what we expect for the file with the insert at the head
assert:
that:
- "result.stat.checksum == '7eade4042b23b800958fe807b5bfc29f8541ec09'"
- name: insert a line at the end of the file
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: present
line: "New line at the end"
insertafter: "EOF"
register: result
- name: assert that the line was inserted at the end of the file
assert:
that:
- result is changed
- "result.msg == 'line added'"
- name: stat the test after the insert at the end
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test checksum matches after the insert at the end
assert:
that:
- "result.stat.checksum == 'fb57af7dc10a1006061b000f1f04c38e4bef50a9'"
- name: insert a line after the first line
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: present
line: "New line after line 1"
insertafter: "^This is line 1$"
register: result
- name: assert that the line was inserted after the first line
assert:
that:
- result is changed
- "result.msg == 'line added'"
- name: stat the test after insert after the first line
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test checksum matches after the insert after the first line
assert:
that:
- "result.stat.checksum == '5348da605b1bc93dbadf3a16474cdf22ef975bec'"
- name: insert a line before the last line
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: present
line: "New line before line 5"
insertbefore: "^This is line 5$"
register: result
- name: assert that the line was inserted before the last line
assert:
that:
- result is changed
- "result.msg == 'line added'"
- name: stat the test after the insert before the last line
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test checksum matches after the insert before the last line
assert:
that:
- "result.stat.checksum == '2e9e460ff68929e4453eb765761fd99814f6e286'"
- name: Replace a line with backrefs
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: present
line: "This is line 3"
backrefs: yes
regexp: "^(REF) .* \\1$"
register: backrefs_result1
- name: Replace a line with backrefs again
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: present
line: "This is line 3"
backrefs: yes
regexp: "^(REF) .* \\1$"
register: backrefs_result2
- command: cat {{ output_dir }}/test.txt
- name: assert that the line with backrefs was changed
assert:
that:
- backrefs_result1 is changed
- backrefs_result2 is not changed
- "backrefs_result1.msg == 'line replaced'"
- name: stat the test after the backref line was replaced
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test checksum matches after backref line was replaced
assert:
that:
- "result.stat.checksum == '72f60239a735ae06e769d823f5c2b4232c634d9c'"
- name: remove the middle line
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: absent
regexp: "^This is line 3$"
register: result
- name: assert that the line was removed
assert:
that:
- result is changed
- "result.msg == '1 line(s) removed'"
- name: stat the test after the middle line was removed
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test checksum matches after the middle line was removed
assert:
that:
- "result.stat.checksum == 'd4eeb07bdebab2d1cdb3ec4a3635afa2618ad4ea'"
- name: run a validation script that succeeds
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: absent
regexp: "^This is line 5$"
validate: "true %s"
register: result
- name: assert that the file validated after removing a line
assert:
that:
- result is changed
- "result.msg == '1 line(s) removed'"
- name: stat the test after the validation succeeded
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test checksum matches after the validation succeeded
assert:
that:
- "result.stat.checksum == 'ab56c210ea82839a54487464800fed4878cb2608'"
- name: run a validation script that fails
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: absent
regexp: "^This is line 1$"
validate: "/bin/false %s"
register: result
ignore_errors: yes
- name: assert that the validate failed
assert:
that:
- "result.failed == true"
- name: stat the test after the validation failed
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test checksum matches the previous after the validation failed
assert:
that:
- "result.stat.checksum == 'ab56c210ea82839a54487464800fed4878cb2608'"
- name: use create=yes
lineinfile:
dest: "{{ output_dir }}/new_test.txt"
create: yes
insertbefore: BOF
state: present
line: "This is a new file"
register: result
- name: assert that the new file was created
assert:
that:
- result is changed
- "result.msg == 'line added'"
- name: validate that the newly created file exists
stat:
path: "{{ output_dir }}/new_test.txt"
register: result
ignore_errors: yes
- name: assert the newly created test checksum matches
assert:
that:
- "result.stat.checksum == '038f10f9e31202451b093163e81e06fbac0c6f3a'"
- name: Create a file without a path
lineinfile:
dest: file.txt
create: yes
line: Test line
register: create_no_path_test
- name: Stat the file
stat:
path: file.txt
register: create_no_path_file
- name: Ensure file was created
assert:
that:
- create_no_path_test is changed
- create_no_path_file.stat.exists
# Test EOF in cases where file has no newline at EOF
- name: testnoeof deploy the file for lineinfile
copy:
src: testnoeof.txt
dest: "{{ output_dir }}/testnoeof.txt"
register: result
- name: testnoeof insert a line at the end of the file
lineinfile:
dest: "{{ output_dir }}/testnoeof.txt"
state: present
line: "New line at the end"
insertafter: "EOF"
register: result
- name: testnoeof assert that the line was inserted at the end of the file
assert:
that:
- result is changed
- "result.msg == 'line added'"
- name: insert multiple lines at the end of the file
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: present
line: "This is a line\nwith \\n character"
insertafter: "EOF"
register: result
- name: assert that the multiple lines were inserted
assert:
that:
- result is changed
- "result.msg == 'line added'"
- name: testnoeof stat the no newline EOF test after the insert at the end
stat:
path: "{{ output_dir }}/testnoeof.txt"
register: result
- name: testnoeof assert test checksum matches after the insert at the end
assert:
that:
- "result.stat.checksum == 'f9af7008e3cb67575ce653d094c79cabebf6e523'"
# Test EOF with empty file to make sure no unnecessary newline is added
- name: testempty deploy the testempty file for lineinfile
copy:
src: testempty.txt
dest: "{{ output_dir }}/testempty.txt"
register: result
- name: testempty insert a line at the end of the file
lineinfile:
dest: "{{ output_dir }}/testempty.txt"
state: present
line: "New line at the end"
insertafter: "EOF"
register: result
- name: testempty assert that the line was inserted at the end of the file
assert:
that:
- result is changed
- "result.msg == 'line added'"
- name: testempty stat the test after the insert at the end
stat:
path: "{{ output_dir }}/testempty.txt"
register: result
- name: testempty assert test checksum matches after the insert at the end
assert:
that:
- "result.stat.checksum == 'f440dc65ea9cec3fd496c1479ddf937e1b949412'"
- stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test checksum matches after inserting multiple lines
assert:
that:
- "result.stat.checksum == 'fde683229429a4f05d670e6c10afc875e1d5c489'"
- name: replace a line with backrefs included in the line
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: present
line: "New \\1 created with the backref"
backrefs: yes
regexp: "^This is (line 4)$"
register: result
- name: assert that the line with backrefs was changed
assert:
that:
- result is changed
- "result.msg == 'line replaced'"
- name: stat the test after the backref line was replaced
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test checksum matches after backref line was replaced
assert:
that:
- "result.stat.checksum == '981ad35c4b30b03bc3a1beedce0d1e72c491898e'"
###################################################################
# issue 8535
- name: create a new file for testing quoting issues
file:
dest: "{{ output_dir }}/test_quoting.txt"
state: touch
register: result
- name: assert the new file was created
assert:
that:
- result is changed
- name: use with_items to add code-like strings to the quoting txt file
lineinfile:
dest: "{{ output_dir }}/test_quoting.txt"
line: "{{ item }}"
insertbefore: BOF
with_items:
- "'foo'"
- "dotenv.load();"
- "var dotenv = require('dotenv');"
register: result
- name: assert the quote test file was modified correctly
assert:
that:
- result.results|length == 3
- result.results[0] is changed
- result.results[0].item == "'foo'"
- result.results[1] is changed
- result.results[1].item == "dotenv.load();"
- result.results[2] is changed
- result.results[2].item == "var dotenv = require('dotenv');"
- name: stat the quote test file
stat:
path: "{{ output_dir }}/test_quoting.txt"
register: result
- name: assert test checksum matches after the code-like strings were added
assert:
that:
- "result.stat.checksum == '7dc3cb033c3971e73af0eaed6623d4e71e5743f1'"
- name: insert a line into the quoted file with a single quote
lineinfile:
dest: "{{ output_dir }}/test_quoting.txt"
line: "import g'"
register: result
- name: assert that the quoted file was changed
assert:
that:
- result is changed
- name: stat the quote test file
stat:
path: "{{ output_dir }}/test_quoting.txt"
register: result
- name: assert test checksum matches after the single-quoted line was added
assert:
that:
- "result.stat.checksum == '73b271c2cc1cef5663713bc0f00444b4bf9f4543'"
- name: insert a line into the quoted file with many double quotation strings
lineinfile:
dest: "{{ output_dir }}/test_quoting.txt"
line: "\"quote\" and \"unquote\""
register: result
- name: assert that the quoted file was changed
assert:
that:
- result is changed
- name: stat the quote test file
stat:
path: "{{ output_dir }}/test_quoting.txt"
register: result
- name: assert test checksum matches after the double-quoted line was added
assert:
that:
- "result.stat.checksum == 'b10ab2a3c3b6492680c8d0b1d6f35aa6b8f9e731'"
###################################################################
# Issue 28721
- name: Deploy the testmultiple file
copy:
src: testmultiple.txt
dest: "{{ output_dir }}/testmultiple.txt"
register: result
- name: Assert that the testmultiple file was deployed
assert:
that:
- result is changed
- result.checksum == '3e0090a34fb641f3c01e9011546ff586260ea0ea'
- result.state == 'file'
# Test insertafter
- name: Write the same line to a file inserted after different lines
lineinfile:
path: "{{ output_dir }}/testmultiple.txt"
insertafter: "{{ item.regex }}"
line: "{{ item.replace }}"
register: _multitest_1
with_items: "{{ test_regexp }}"
- name: Assert that the line is added once only
assert:
that:
- _multitest_1.results.0 is changed
- _multitest_1.results.1 is not changed
- _multitest_1.results.2 is not changed
- _multitest_1.results.3 is not changed
- name: Do the same thing again to check for changes
lineinfile:
path: "{{ output_dir }}/testmultiple.txt"
insertafter: "{{ item.regex }}"
line: "{{ item.replace }}"
register: _multitest_2
with_items: "{{ test_regexp }}"
- name: Assert that the line is not added anymore
assert:
that:
- _multitest_2.results.0 is not changed
- _multitest_2.results.1 is not changed
- _multitest_2.results.2 is not changed
- _multitest_2.results.3 is not changed
- name: Stat the insertafter file
stat:
path: "{{ output_dir }}/testmultiple.txt"
register: result
- name: Assert that the insertafter file matches expected checksum
assert:
that:
- result.stat.checksum == 'c6733b6c53ddd0e11e6ba39daa556ef8f4840761'
# Test insertbefore
- name: Deploy the testmultiple file
copy:
src: testmultiple.txt
dest: "{{ output_dir }}/testmultiple.txt"
register: result
- name: Assert that the testmultiple file was deployed
assert:
that:
- result is changed
- result.checksum == '3e0090a34fb641f3c01e9011546ff586260ea0ea'
- result.state == 'file'
- name: Write the same line to a file inserted before different lines
lineinfile:
path: "{{ output_dir }}/testmultiple.txt"
insertbefore: "{{ item.regex }}"
line: "{{ item.replace }}"
register: _multitest_3
with_items: "{{ test_regexp }}"
- name: Assert that the line is added once only
assert:
that:
- _multitest_3.results.0 is changed
- _multitest_3.results.1 is not changed
- _multitest_3.results.2 is not changed
- _multitest_3.results.3 is not changed
- name: Do the same thing again to check for changes
lineinfile:
path: "{{ output_dir }}/testmultiple.txt"
insertbefore: "{{ item.regex }}"
line: "{{ item.replace }}"
register: _multitest_4
with_items: "{{ test_regexp }}"
- name: Assert that the line is not added anymore
assert:
that:
- _multitest_4.results.0 is not changed
- _multitest_4.results.1 is not changed
- _multitest_4.results.2 is not changed
- _multitest_4.results.3 is not changed
- name: Stat the insertbefore file
stat:
path: "{{ output_dir }}/testmultiple.txt"
register: result
- name: Assert that the insertbefore file matches expected checksum
assert:
that:
- result.stat.checksum == '5d298651fbc377b45257da10308a9dc2fe1f8be5'
###################################################################
# Issue 36156
# Test insertbefore and insertafter with regexp
- name: Deploy the test.conf file
copy:
src: test.conf
dest: "{{ output_dir }}/test.conf"
register: result
- name: Assert that the test.conf file was deployed
assert:
that:
- result is changed
- result.checksum == '6037f13e419b132eb3fd20a89e60c6c87a6add38'
- result.state == 'file'
# Test insertafter
- name: Insert lines after with regexp
lineinfile:
path: "{{ output_dir }}/test.conf"
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
insertafter: "{{ item.after }}"
with_items: "{{ test_befaf_regexp }}"
register: _multitest_5
- name: Do the same thing again and check for changes
lineinfile:
path: "{{ output_dir }}/test.conf"
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
insertafter: "{{ item.after }}"
with_items: "{{ test_befaf_regexp }}"
register: _multitest_6
- name: Assert that the file was changed the first time but not the second time
assert:
that:
- item.0 is changed
- item.1 is not changed
with_together:
- "{{ _multitest_5.results }}"
- "{{ _multitest_6.results }}"
- name: Stat the file
stat:
path: "{{ output_dir }}/test.conf"
register: result
- name: Assert that the file contents match what is expected
assert:
that:
- result.stat.checksum == '06e2c456e5028dd7bcd0b117b5927a1139458c82'
- name: Do the same thing a third time without regexp and check for changes
lineinfile:
path: "{{ output_dir }}/test.conf"
line: "{{ item.line }}"
insertafter: "{{ item.after }}"
with_items: "{{ test_befaf_regexp }}"
register: _multitest_7
- name: Stat the file
stat:
path: "{{ output_dir }}/test.conf"
register: result
- name: Assert that the file was not changed when no regexp was provided
assert:
that:
- item is not changed
with_items: "{{ _multitest_7.results }}"
- name: Stat the file
stat:
path: "{{ output_dir }}/test.conf"
register: result
- name: Assert that the file contents match what is expected
assert:
that:
- result.stat.checksum == '06e2c456e5028dd7bcd0b117b5927a1139458c82'
# Test insertbefore
- name: Deploy the test.conf file
copy:
src: test.conf
dest: "{{ output_dir }}/test.conf"
register: result
- name: Assert that the test.conf file was deployed
assert:
that:
- result is changed
- result.checksum == '6037f13e419b132eb3fd20a89e60c6c87a6add38'
- result.state == 'file'
- name: Insert lines before with regexp
lineinfile:
path: "{{ output_dir }}/test.conf"
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
insertbefore: "{{ item.before }}"
with_items: "{{ test_befaf_regexp }}"
register: _multitest_8
- name: Do the same thing again and check for changes
lineinfile:
path: "{{ output_dir }}/test.conf"
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
insertbefore: "{{ item.before }}"
with_items: "{{ test_befaf_regexp }}"
register: _multitest_9
- name: Assert that the file was changed the first time but not the second time
assert:
that:
- item.0 is changed
- item.1 is not changed
with_together:
- "{{ _multitest_8.results }}"
- "{{ _multitest_9.results }}"
- name: Stat the file
stat:
path: "{{ output_dir }}/test.conf"
register: result
- name: Assert that the file contents match what is expected
assert:
that:
- result.stat.checksum == 'c3be9438a07c44d4c256cebfcdbca15a15b1db91'
- name: Do the same thing a third time without regexp and check for changes
lineinfile:
path: "{{ output_dir }}/test.conf"
line: "{{ item.line }}"
insertbefore: "{{ item.before }}"
with_items: "{{ test_befaf_regexp }}"
register: _multitest_10
- name: Stat the file
stat:
path: "{{ output_dir }}/test.conf"
register: result
- name: Assert that the file was not changed when no regexp was provided
assert:
that:
- item is not changed
with_items: "{{ _multitest_10.results }}"
- name: Stat the file
stat:
path: "{{ output_dir }}/test.conf"
register: result
- name: Assert that the file contents match what is expected
assert:
that:
- result.stat.checksum == 'c3be9438a07c44d4c256cebfcdbca15a15b1db91'
- name: Copy empty file to test with insertbefore
copy:
src: testempty.txt
dest: "{{ output_dir }}/testempty.txt"
- name: Add a line to empty file with insertbefore
lineinfile:
path: "{{ output_dir }}/testempty.txt"
line: top
insertbefore: '^not in the file$'
register: oneline_insbefore_test1
- name: Add a line to file with only one line using insertbefore
lineinfile:
path: "{{ output_dir }}/testempty.txt"
line: top
insertbefore: '^not in the file$'
register: oneline_insbefore_test2
- name: Stat the file
stat:
path: "{{ output_dir }}/testempty.txt"
register: oneline_insbefore_file
- name: Assert that insertbefore worked properly with a one line file
assert:
that:
- oneline_insbefore_test1 is changed
- oneline_insbefore_test2 is not changed
- oneline_insbefore_file.stat.checksum == '4dca56d05a21f0d018cd311f43e134e4501cf6d9'
###################################################################
# Issue 29443
# When using an empty regexp, replace the last line (since it matches every line)
# but also provide a warning.
- name: Deploy the test file for lineinfile
copy:
src: test.txt
dest: "{{ output_dir }}/test.txt"
register: result
- name: Assert that the test file was deployed
assert:
that:
- result is changed
- result.checksum == '5feac65e442c91f557fc90069ce6efc4d346ab51'
- result.state == 'file'
- name: Insert a line in the file using an empty string as a regular expression
lineinfile:
path: "{{ output_dir }}/test.txt"
regexp: ''
line: This is line 6
register: insert_empty_regexp
- name: Stat the file
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: Assert that the file contents match what is expected and a warning was displayed
assert:
that:
- insert_empty_regexp is changed
- warning_message in insert_empty_regexp.warnings
- result.stat.checksum == '23555a98ceaa88756b4c7c7bba49d9f86eed868f'
vars:
warning_message: >-
The regular expression is an empty string, which will match every line in the file.
This may have unintended consequences, such as replacing the last line in the file rather than appending.
If this is desired, use '^' to match every line in the file and avoid this warning.
###################################################################
## Issue #58923
## Using firstmatch with insertafter and ensure multiple lines are not inserted
- name: Deploy the firstmatch test file
copy:
src: firstmatch.txt
dest: "{{ output_dir }}/firstmatch.txt"
register: result
- name: Assert that the test file was deployed
assert:
that:
- result is changed
- result.checksum == '1d644e5e2e51c67f1bd12d7bbe2686017f39923d'
- result.state == 'file'
- name: Insert a line before an existing line using firstmatch
lineinfile:
path: "{{ output_dir }}/firstmatch.txt"
line: INSERT
insertafter: line1
firstmatch: yes
register: insertafter1
- name: Insert a line before an existing line using firstmatch again
lineinfile:
path: "{{ output_dir }}/firstmatch.txt"
line: INSERT
insertafter: line1
firstmatch: yes
register: insertafter2
- name: Stat the file
stat:
path: "{{ output_dir }}/firstmatch.txt"
register: result
- name: Assert that the file was modified appropriately
assert:
that:
- insertafter1 is changed
- insertafter2 is not changed
- result.stat.checksum == '114aae024073a3ee8ec8db0ada03c5483326dd86'
########################################################################################
# Tests of fixing the same issue as above (#58923) by @Andersson007 <[email protected]>
# and @samdoran <[email protected]>:
# Test insertafter with regexp
- name: Deploy the test file
copy:
src: test_58923.txt
dest: "{{ output_dir }}/test_58923.txt"
register: initial_file
- name: Assert that the test file was deployed
assert:
that:
- initial_file is changed
- initial_file.checksum == 'b6379ba43261c451a62102acb2c7f438a177c66e'
- initial_file.state == 'file'
# Regarding the documentation:
# If regular expressions are passed to both regexp and
# insertafter, insertafter is only honored if no match for regexp is found.
# Therefore,
# when regular expressions are passed to both regexp and insertafter, then:
# 1. regexp was found -> ignore insertafter, replace the matched line
# 2. regexp was not found -> insert the line after the 'insertafter' line
# Regexp is not present in the file, so the line must be inserted after ^#!/bin/sh
- name: Add the line using firstmatch, regexp, and insertafter
lineinfile:
path: "{{ output_dir }}/test_58923.txt"
insertafter: '^#!/bin/sh'
regexp: ^export FISHEYE_OPTS
firstmatch: true
line: export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
register: insertafter_test1
- name: Stat the file
stat:
path: "{{ output_dir }}/test_58923.txt"
register: insertafter_test1_file
- name: Add the line using firstmatch, regexp, and insertafter again
lineinfile:
path: "{{ output_dir }}/test_58923.txt"
insertafter: '^#!/bin/sh'
regexp: ^export FISHEYE_OPTS
firstmatch: true
line: export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
register: insertafter_test2
# Check the previous step:
# we tried to add the same line with the same playbook,
# so nothing should have been added.
- name: Stat the file again
stat:
path: "{{ output_dir }}/test_58923.txt"
register: insertafter_test2_file
- name: Assert insertafter tests gave the expected results
assert:
that:
- insertafter_test1 is changed
- insertafter_test1_file.stat.checksum == '9232aed6fe88714964d9e29d13e42cd782070b08'
- insertafter_test2 is not changed
- insertafter_test2_file.stat.checksum == '9232aed6fe88714964d9e29d13e42cd782070b08'
# Test insertafter without regexp
- name: Deploy the test file
copy:
src: test_58923.txt
dest: "{{ output_dir }}/test_58923.txt"
register: initial_file
- name: Assert that the test file was deployed
assert:
that:
- initial_file is changed
- initial_file.checksum == 'b6379ba43261c451a62102acb2c7f438a177c66e'
- initial_file.state == 'file'
- name: Insert the line using firstmatch and insertafter without regexp
lineinfile:
path: "{{ output_dir }}/test_58923.txt"
insertafter: '^#!/bin/sh'
firstmatch: true
line: export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
register: insertafter_test3
- name: Stat the file
stat:
path: "{{ output_dir }}/test_58923.txt"
register: insertafter_test3_file
- name: Insert the line using firstmatch and insertafter without regexp again
lineinfile:
path: "{{ output_dir }}/test_58923.txt"
insertafter: '^#!/bin/sh'
firstmatch: true
line: export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
register: insertafter_test4
- name: Stat the file again
stat:
path: "{{ output_dir }}/test_58923.txt"
register: insertafter_test4_file
- name: Assert insertafter without regexp tests gave the expected results
assert:
that:
- insertafter_test3 is changed
- insertafter_test3_file.stat.checksum == '9232aed6fe88714964d9e29d13e42cd782070b08'
- insertafter_test4 is not changed
- insertafter_test4_file.stat.checksum == '9232aed6fe88714964d9e29d13e42cd782070b08'
# Test insertbefore with regexp
- name: Deploy the test file
copy:
src: test_58923.txt
dest: "{{ output_dir }}/test_58923.txt"
register: initial_file
- name: Assert that the test file was deployed
assert:
that:
- initial_file is changed
- initial_file.checksum == 'b6379ba43261c451a62102acb2c7f438a177c66e'
- initial_file.state == 'file'
- name: Add the line using regexp, firstmatch, and insertbefore
lineinfile:
path: "{{ output_dir }}/test_58923.txt"
insertbefore: '^#!/bin/sh'
regexp: ^export FISHEYE_OPTS
firstmatch: true
line: export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
register: insertbefore_test1
- name: Stat the file
stat:
path: "{{ output_dir }}/test_58923.txt"
register: insertbefore_test1_file
- name: Add the line using regexp, firstmatch, and insertbefore again
lineinfile:
path: "{{ output_dir }}/test_58923.txt"
insertbefore: '^#!/bin/sh'
regexp: ^export FISHEYE_OPTS
firstmatch: true
line: export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
register: insertbefore_test2
- name: Stat the file again
stat:
path: "{{ output_dir }}/test_58923.txt"
register: insertbefore_test2_file
- name: Assert insertbefore with regexp tests gave the expected results
assert:
that:
- insertbefore_test1 is changed
- insertbefore_test1_file.stat.checksum == '3c6630b9d44f561ea9ad999be56a7504cadc12f7'
- insertbefore_test2 is not changed
- insertbefore_test2_file.stat.checksum == '3c6630b9d44f561ea9ad999be56a7504cadc12f7'
# Test insertbefore without regexp
- name: Deploy the test file
copy:
src: test_58923.txt
dest: "{{ output_dir }}/test_58923.txt"
register: initial_file
- name: Assert that the test file was deployed
assert:
that:
- initial_file is changed
- initial_file.checksum == 'b6379ba43261c451a62102acb2c7f438a177c66e'
- initial_file.state == 'file'
- name: Add the line using insertbefore and firstmatch
lineinfile:
path: "{{ output_dir }}/test_58923.txt"
insertbefore: '^#!/bin/sh'
firstmatch: true
line: export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
register: insertbefore_test3
- name: Stat the file
stat:
path: "{{ output_dir }}/test_58923.txt"
register: insertbefore_test3_file
- name: Add the line using insertbefore and firstmatch again
lineinfile:
path: "{{ output_dir }}/test_58923.txt"
insertbefore: '^#!/bin/sh'
firstmatch: true
line: export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
register: insertbefore_test4
- name: Stat the file again
stat:
path: "{{ output_dir }}/test_58923.txt"
register: insertbefore_test4_file
# Test when the line is present in the file but
# not in the before/after spot, and it does match the regexp:
- name: >
    Add the line using insertbefore and firstmatch when the regexp line
    is present but not close to the insertbefore spot
lineinfile:
path: "{{ output_dir }}/test_58923.txt"
insertbefore: ' Darwin\*\) if \[ -z \"\$JAVA_HOME\" \] ; then'
firstmatch: true
line: export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
register: insertbefore_test5
- name: Stat the file again
stat:
path: "{{ output_dir }}/test_58923.txt"
register: insertbefore_test5_file
- name: Assert insertbefore with regexp tests gave the expected results
assert:
that:
- insertbefore_test3 is changed
- insertbefore_test3_file.stat.checksum == '3c6630b9d44f561ea9ad999be56a7504cadc12f7'
- insertbefore_test4 is not changed
- insertbefore_test4_file.stat.checksum == '3c6630b9d44f561ea9ad999be56a7504cadc12f7'
- insertbefore_test5 is not changed
- insertbefore_test5_file.stat.checksum == '3c6630b9d44f561ea9ad999be56a7504cadc12f7'
# Test inserting a line at the end of the file using regexp with insertafter
# https://github.com/ansible/ansible/issues/63684
- name: Create a file by inserting a line
lineinfile:
path: "{{ output_dir }}/testend.txt"
create: yes
line: testline
register: testend1
- name: Insert a line at the end of the file
lineinfile:
path: "{{ output_dir }}/testend.txt"
insertafter: testline
regexp: line at the end
line: line at the end
register: testend2
- name: Stat the file
stat:
path: "{{ output_dir }}/testend.txt"
register: testend_file
- name: Assert inserting at the end gave the expected results.
assert:
that:
- testend1 is changed
- testend2 is changed
- testend_file.stat.checksum == 'ef36116966836ce04f6b249fd1837706acae4e19'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,470 |
RFE: lineinfile should support a pure (non-regex) line
|
---
##### SUMMARY
I want to be able to use a pure (non-regex) line with lineinfile, without accidentally escaping or failing to escape characters.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
lineinfile
##### ADDITIONAL INFORMATION
I have a line in a file with lots of special characters that I want to manage with Ansible's lineinfile module. The module requires a regular expression, which means lots of escaping for me and a role or playbook that is harder to read. Could the module support a non-regex line too?
Example of a problematic regexp (have fun escaping this one! - a real-world example):
```yaml
lineinfile:
path: /etc/apt/apt.conf.d/50unattended-upgrades
regexp: '# "${distro_id}:${distro_codename}-updates";'
line: ' "${distro_id}:${distro_codename}-updates";'
```
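A workaround for this class of problem is to derive the regexp from the literal text with Ansible's `regex_escape` filter instead of escaping by hand; a minimal sketch (the indentation inside `line` is illustrative):
```yaml
# Sketch: let regex_escape handle the metacharacters in the literal line.
- name: Uncomment the -updates origin without hand-escaping the pattern
  lineinfile:
    path: /etc/apt/apt.conf.d/50unattended-upgrades
    regexp: >-
      {{ '# "${distro_id}:${distro_codename}-updates";' | regex_escape }}
    line: '        "${distro_id}:${distro_codename}-updates";'
```
This keeps the playbook readable, though it still routes a literal match through the regex engine.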
|
https://github.com/ansible/ansible/issues/70470
|
https://github.com/ansible/ansible/pull/70647
|
9a9272305a7b09f84861c7061f57945ae9ad7090
|
69631da889e21a5513916b62d72c115064b7660b
| 2020-07-06T10:37:09Z |
python
| 2021-02-02T20:37:06Z |
test/integration/targets/lineinfile/tasks/test_string01.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 70,470 |
RFE: lineinfile should support a pure (non-regex) line
|
---
##### SUMMARY
I want to be able to use a pure (non-regex) line with lineinfile, without accidentally escaping or failing to escape characters.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
lineinfile
##### ADDITIONAL INFORMATION
I have a line in a file with lots of special characters that I want to manage with Ansible's lineinfile module. The module requires a regular expression, which means lots of escaping for me and a role or playbook that is harder to read. Could the module support a non-regex line too?
Example of a problematic regexp (have fun escaping this one! - a real-world example):
```yaml
lineinfile:
path: /etc/apt/apt.conf.d/50unattended-upgrades
regexp: '# "${distro_id}:${distro_codename}-updates";'
line: ' "${distro_id}:${distro_codename}-updates";'
```
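The pull request linked below resolved this by adding a literal-match option to lineinfile; a hedged sketch of that interface (the parameter name `search_string` reflects the feature as released in ansible-core 2.11 and is an assumption here, not quoted from this issue):
```yaml
# Sketch: literal matching, no regex semantics involved at all.
# search_string is assumed from the released feature, not from this issue text.
- name: Replace the commented origin line by literal match
  lineinfile:
    path: /etc/apt/apt.conf.d/50unattended-upgrades
    search_string: '# "${distro_id}:${distro_codename}-updates";'
    line: '        "${distro_id}:${distro_codename}-updates";'
```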
|
https://github.com/ansible/ansible/issues/70470
|
https://github.com/ansible/ansible/pull/70647
|
9a9272305a7b09f84861c7061f57945ae9ad7090
|
69631da889e21a5513916b62d72c115064b7660b
| 2020-07-06T10:37:09Z |
python
| 2021-02-02T20:37:06Z |
test/integration/targets/lineinfile/tasks/test_string02.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,625 |
Add Filename and Line number to ansible_failed_task
|
##### SUMMARY
I'd like to see variables added to ansible_failed_task that represent the filename and line number of the task that failed. This information is already available inside the JUnit callback module, but exposing it would allow a rescue block to report more useful information.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ansible_failed_task
##### ADDITIONAL INFORMATION
The task name is nice, but in large projects it may not be specific enough to hunt through and find the file/line to troubleshoot. This is even worse if task names aren't unique. Exposing the location would give something like a Slack-notification task in a rescue block the ability to report where the failure came from.
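To illustrate the use case, a minimal block/rescue sketch; `ansible_failed_task` is an existing variable, and the path/line attributes this issue asks for would slot into the debug message:
```yaml
- block:
    - name: Task that may fail
      command: /bin/false
  rescue:
    - name: Report which task failed (today only the name is readily available)
      debug:
        msg: "Failed task: {{ ansible_failed_task.name }}"
```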
|
https://github.com/ansible/ansible/issues/64625
|
https://github.com/ansible/ansible/pull/73260
|
ca448f7c350fae8080dcfec648342d2fc8837da0
|
7d18ea5e93ccccfc415328430898c8d06e325f87
| 2019-11-08T23:26:26Z |
python
| 2021-02-09T17:43:59Z |
changelogs/fragments/64625-show-file-path-on-task-failure-callback-option.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,625 |
Add Filename and Line number to ansible_failed_task
|
##### SUMMARY
I'd like to see variables added to ansible_failed_task that represent the filename and line number of the task that failed. This information is already available inside the JUnit callback module, but exposing it would allow a rescue block to report more useful information.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ansible_failed_task
##### ADDITIONAL INFORMATION
The task name is nice, but in large projects it may not be specific enough to hunt through and find the file/line to troubleshoot. This is even worse if task names aren't unique. Exposing the location would give something like a Slack-notification task in a rescue block the ability to report where the failure came from.
|
https://github.com/ansible/ansible/issues/64625
|
https://github.com/ansible/ansible/pull/73260
|
ca448f7c350fae8080dcfec648342d2fc8837da0
|
7d18ea5e93ccccfc415328430898c8d06e325f87
| 2019-11-08T23:26:26Z |
python
| 2021-02-09T17:43:59Z |
lib/ansible/plugins/callback/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import difflib
import json
import os
import sys
import warnings
from copy import deepcopy
from ansible import constants as C
from ansible.module_utils.common._collections_compat import MutableMapping
from ansible.module_utils.six import PY3
from ansible.module_utils._text import to_text
from ansible.parsing.ajson import AnsibleJSONEncoder
from ansible.plugins import AnsiblePlugin, get_plugin_class
from ansible.utils.color import stringc
from ansible.utils.display import Display
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
if PY3:
# OrderedDict is needed for a backwards compat shim on Python3.x only
# https://github.com/ansible/ansible/pull/49512
from collections import OrderedDict
else:
OrderedDict = None
global_display = Display()
__all__ = ["CallbackBase"]
_DEBUG_ALLOWED_KEYS = frozenset(('msg', 'exception', 'warnings', 'deprecations'))
class CallbackBase(AnsiblePlugin):
'''
This is a base ansible callback class that does nothing. New callbacks should
use this class as a base and override any callback methods they wish to execute
custom actions.
'''
def __init__(self, display=None, options=None):
if display:
self._display = display
else:
self._display = global_display
if self._display.verbosity >= 4:
name = getattr(self, 'CALLBACK_NAME', 'unnamed')
ctype = getattr(self, 'CALLBACK_TYPE', 'old')
version = getattr(self, 'CALLBACK_VERSION', '1.0')
self._display.vvvv('Loading callback plugin %s of type %s, v%s from %s' % (name, ctype, version, sys.modules[self.__module__].__file__))
self.disabled = False
self.wants_implicit_tasks = False
self._plugin_options = {}
if options is not None:
self.set_options(options)
self._hide_in_debug = ('changed', 'failed', 'skipped', 'invocation', 'skip_reason')
''' helper for callbacks, so they don't all have to include deepcopy '''
_copy_result = deepcopy
def set_option(self, k, v):
self._plugin_options[k] = v
def get_option(self, k):
return self._plugin_options[k]
def set_options(self, task_keys=None, var_options=None, direct=None):
''' This is different than the normal plugin method as callbacks get called early and really don't accept keywords.
Also _options was already taken for CLI args and callbacks use _plugin_options instead.
'''
# load from config
self._plugin_options = C.config.get_plugin_options(get_plugin_class(self), self._load_name, keys=task_keys, variables=var_options, direct=direct)
def _run_is_verbose(self, result, verbosity=0):
return ((self._display.verbosity > verbosity or result._result.get('_ansible_verbose_always', False) is True)
and result._result.get('_ansible_verbose_override', False) is False)
def _dump_results(self, result, indent=None, sort_keys=True, keep_invocation=False):
if not indent and (result.get('_ansible_verbose_always') or self._display.verbosity > 2):
indent = 4
# All result keys starting with _ansible_ are internal, so remove them from the result before we output anything.
abridged_result = strip_internal_keys(module_response_deepcopy(result))
# remove invocation unless specifically wanting it
if not keep_invocation and self._display.verbosity < 3 and 'invocation' in result:
del abridged_result['invocation']
# remove diff information from screen output
if self._display.verbosity < 3 and 'diff' in result:
del abridged_result['diff']
# remove exception from screen output
if 'exception' in abridged_result:
del abridged_result['exception']
try:
jsonified_results = json.dumps(abridged_result, cls=AnsibleJSONEncoder, indent=indent, ensure_ascii=False, sort_keys=sort_keys)
except TypeError:
# Python3 bug: throws an exception when keys are non-homogeneous types:
# https://bugs.python.org/issue25457
# sort into an OrderedDict and then json.dumps() that instead
if not OrderedDict:
raise
jsonified_results = json.dumps(OrderedDict(sorted(abridged_result.items(), key=to_text)),
cls=AnsibleJSONEncoder, indent=indent,
ensure_ascii=False, sort_keys=False)
return jsonified_results
def _handle_warnings(self, res):
''' display warnings, if enabled and any exist in the result '''
if C.ACTION_WARNINGS:
if 'warnings' in res and res['warnings']:
for warning in res['warnings']:
self._display.warning(warning)
del res['warnings']
if 'deprecations' in res and res['deprecations']:
for warning in res['deprecations']:
self._display.deprecated(**warning)
del res['deprecations']
def _handle_exception(self, result, use_stderr=False):
if 'exception' in result:
msg = "An exception occurred during task execution. "
if self._display.verbosity < 3:
# extract just the actual error message from the exception text
error = result['exception'].strip().split('\n')[-1]
msg += "To see the full traceback, use -vvv. The error was: %s" % error
else:
msg = "The full traceback is:\n" + result['exception']
del result['exception']
self._display.display(msg, color=C.COLOR_ERROR, stderr=use_stderr)
def _serialize_diff(self, diff):
return json.dumps(diff, sort_keys=True, indent=4, separators=(u',', u': ')) + u'\n'
def _get_diff(self, difflist):
if not isinstance(difflist, list):
difflist = [difflist]
ret = []
for diff in difflist:
if 'dst_binary' in diff:
ret.append(u"diff skipped: destination file appears to be binary\n")
if 'src_binary' in diff:
ret.append(u"diff skipped: source file appears to be binary\n")
if 'dst_larger' in diff:
ret.append(u"diff skipped: destination file size is greater than %d\n" % diff['dst_larger'])
if 'src_larger' in diff:
ret.append(u"diff skipped: source file size is greater than %d\n" % diff['src_larger'])
if 'before' in diff and 'after' in diff:
# format complex structures into 'files'
for x in ['before', 'after']:
if isinstance(diff[x], MutableMapping):
diff[x] = self._serialize_diff(diff[x])
elif diff[x] is None:
diff[x] = ''
if 'before_header' in diff:
before_header = u"before: %s" % diff['before_header']
else:
before_header = u'before'
if 'after_header' in diff:
after_header = u"after: %s" % diff['after_header']
else:
after_header = u'after'
before_lines = diff['before'].splitlines(True)
after_lines = diff['after'].splitlines(True)
if before_lines and not before_lines[-1].endswith(u'\n'):
before_lines[-1] += u'\n\\ No newline at end of file\n'
if after_lines and not after_lines[-1].endswith('\n'):
after_lines[-1] += u'\n\\ No newline at end of file\n'
differ = difflib.unified_diff(before_lines,
after_lines,
fromfile=before_header,
tofile=after_header,
fromfiledate=u'',
tofiledate=u'',
n=C.DIFF_CONTEXT)
difflines = list(differ)
if len(difflines) >= 3 and sys.version_info[:2] == (2, 6):
# difflib in Python 2.6 adds trailing spaces after
# filenames in the -- before/++ after headers.
difflines[0] = difflines[0].replace(u' \n', u'\n')
difflines[1] = difflines[1].replace(u' \n', u'\n')
# it also treats empty files differently
difflines[2] = difflines[2].replace(u'-1,0', u'-0,0').replace(u'+1,0', u'+0,0')
has_diff = False
for line in difflines:
has_diff = True
if line.startswith(u'+'):
line = stringc(line, C.COLOR_DIFF_ADD)
elif line.startswith(u'-'):
line = stringc(line, C.COLOR_DIFF_REMOVE)
elif line.startswith(u'@@'):
line = stringc(line, C.COLOR_DIFF_LINES)
ret.append(line)
if has_diff:
ret.append('\n')
if 'prepared' in diff:
ret.append(diff['prepared'])
return u''.join(ret)
def _get_item_label(self, result):
''' retrieves the value to be displayed as a label for an item entry from a result object'''
if result.get('_ansible_no_log', False):
item = "(censored due to no_log)"
else:
item = result.get('_ansible_item_label', result.get('item'))
return item
def _process_items(self, result):
# just remove them as now they get handled by individual callbacks
del result._result['results']
def _clean_results(self, result, task_name):
''' removes data from results for display '''
# mostly controls that debug only outputs what it was meant to
if task_name in C._ACTION_DEBUG:
if 'msg' in result:
# msg should be alone
for key in list(result.keys()):
if key not in _DEBUG_ALLOWED_KEYS and not key.startswith('_'):
result.pop(key)
else:
# 'var' value as field, so eliminate others and what is left should be varname
for hidme in self._hide_in_debug:
result.pop(hidme, None)
def set_play_context(self, play_context):
pass
def on_any(self, *args, **kwargs):
pass
def runner_on_failed(self, host, res, ignore_errors=False):
pass
def runner_on_ok(self, host, res):
pass
def runner_on_skipped(self, host, item=None):
pass
def runner_on_unreachable(self, host, res):
pass
def runner_on_no_hosts(self):
pass
def runner_on_async_poll(self, host, res, jid, clock):
pass
def runner_on_async_ok(self, host, res, jid):
pass
def runner_on_async_failed(self, host, res, jid):
pass
def playbook_on_start(self):
pass
def playbook_on_notify(self, host, handler):
pass
def playbook_on_no_hosts_matched(self):
pass
def playbook_on_no_hosts_remaining(self):
pass
def playbook_on_task_start(self, name, is_conditional):
pass
def playbook_on_vars_prompt(self, varname, private=True, prompt=None, encrypt=None, confirm=False, salt_size=None, salt=None, default=None, unsafe=None):
pass
def playbook_on_setup(self):
pass
def playbook_on_import_for_host(self, host, imported_file):
pass
def playbook_on_not_import_for_host(self, host, missing_file):
pass
def playbook_on_play_start(self, name):
pass
def playbook_on_stats(self, stats):
pass
def on_file_diff(self, host, diff):
pass
# V2 METHODS, by default they call v1 counterparts if possible
def v2_on_any(self, *args, **kwargs):
self.on_any(args, kwargs)
def v2_runner_on_failed(self, result, ignore_errors=False):
host = result._host.get_name()
self.runner_on_failed(host, result._result, ignore_errors)
def v2_runner_on_ok(self, result):
host = result._host.get_name()
self.runner_on_ok(host, result._result)
def v2_runner_on_skipped(self, result):
if C.DISPLAY_SKIPPED_HOSTS:
host = result._host.get_name()
self.runner_on_skipped(host, self._get_item_label(getattr(result._result, 'results', {})))
def v2_runner_on_unreachable(self, result):
host = result._host.get_name()
self.runner_on_unreachable(host, result._result)
# FIXME: not called
def v2_runner_on_async_poll(self, result):
host = result._host.get_name()
jid = result._result.get('ansible_job_id')
# FIXME, get real clock
clock = 0
self.runner_on_async_poll(host, result._result, jid, clock)
# FIXME: not called
def v2_runner_on_async_ok(self, result):
host = result._host.get_name()
jid = result._result.get('ansible_job_id')
self.runner_on_async_ok(host, result._result, jid)
# FIXME: not called
def v2_runner_on_async_failed(self, result):
host = result._host.get_name()
jid = result._result.get('ansible_job_id')
self.runner_on_async_failed(host, result._result, jid)
def v2_playbook_on_start(self, playbook):
self.playbook_on_start()
def v2_playbook_on_notify(self, handler, host):
self.playbook_on_notify(host, handler)
def v2_playbook_on_no_hosts_matched(self):
self.playbook_on_no_hosts_matched()
def v2_playbook_on_no_hosts_remaining(self):
self.playbook_on_no_hosts_remaining()
def v2_playbook_on_task_start(self, task, is_conditional):
self.playbook_on_task_start(task.name, is_conditional)
# FIXME: not called
def v2_playbook_on_cleanup_task_start(self, task):
pass # no v1 correspondence
def v2_playbook_on_handler_task_start(self, task):
pass # no v1 correspondence
def v2_playbook_on_vars_prompt(self, varname, private=True, prompt=None, encrypt=None, confirm=False, salt_size=None, salt=None, default=None, unsafe=None):
self.playbook_on_vars_prompt(varname, private, prompt, encrypt, confirm, salt_size, salt, default, unsafe)
# FIXME: not called
def v2_playbook_on_import_for_host(self, result, imported_file):
host = result._host.get_name()
self.playbook_on_import_for_host(host, imported_file)
# FIXME: not called
def v2_playbook_on_not_import_for_host(self, result, missing_file):
host = result._host.get_name()
self.playbook_on_not_import_for_host(host, missing_file)
def v2_playbook_on_play_start(self, play):
self.playbook_on_play_start(play.name)
def v2_playbook_on_stats(self, stats):
self.playbook_on_stats(stats)
def v2_on_file_diff(self, result):
if 'diff' in result._result:
host = result._host.get_name()
self.on_file_diff(host, result._result['diff'])
def v2_playbook_on_include(self, included_file):
pass # no v1 correspondence
def v2_runner_item_on_ok(self, result):
pass
def v2_runner_item_on_failed(self, result):
pass
def v2_runner_item_on_skipped(self, result):
pass
def v2_runner_retry(self, result):
pass
def v2_runner_on_start(self, host, task):
"""Event used when host begins execution of a task
.. versionadded:: 2.8
"""
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,625 |
Add Filename and Line number to ansible_failed_task
|
##### SUMMARY
I'd like to see variables added to ansible_failed_task that represent the filename and line number of the task that failed. This information is already available inside the JUnit callback module, but exposing it would allow a rescue block to report more useful information.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ansible_failed_task
##### ADDITIONAL INFORMATION
The task name is nice, but in large projects it may not be specific enough to hunt through and find the file/line to troubleshoot. This is even worse if task names aren't unique. Exposing the location would give something like a Slack-notification task in a rescue block the ability to report where the failure came from.
|
https://github.com/ansible/ansible/issues/64625
|
https://github.com/ansible/ansible/pull/73260
|
ca448f7c350fae8080dcfec648342d2fc8837da0
|
7d18ea5e93ccccfc415328430898c8d06e325f87
| 2019-11-08T23:26:26Z |
python
| 2021-02-09T17:43:59Z |
lib/ansible/plugins/callback/default.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: default
type: stdout
short_description: default Ansible screen output
version_added: historical
description:
- This is the default output callback for ansible-playbook.
extends_documentation_fragment:
- default_callback
requirements:
- set as stdout in configuration
'''
from ansible import constants as C
from ansible import context
from ansible.playbook.task_include import TaskInclude
from ansible.plugins.callback import CallbackBase
from ansible.utils.color import colorize, hostcolor
# These values use ansible.constants for historical reasons, mostly to allow
# unmodified derivative plugins to work. However, newer options added to the
# plugin are not also added to ansible.constants, so authors of derivative
# callback plugins will eventually need to add a reference to the common docs
# fragment for the 'default' callback plugin
# these are used to provide backwards compat with old plugins that subclass from default
# but still don't use the new config system and/or fail to document the options
# TODO: Change the default of check_mode_markers to True in a future release (2.13)
COMPAT_OPTIONS = (('display_skipped_hosts', C.DISPLAY_SKIPPED_HOSTS),
('display_ok_hosts', True),
('show_custom_stats', C.SHOW_CUSTOM_STATS),
('display_failed_stderr', False),
('check_mode_markers', False),
('show_per_host_start', False))
class CallbackModule(CallbackBase):
'''
This is the default callback interface, which simply prints messages
to stdout when new callback events are received.
'''
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'default'
def __init__(self):
self._play = None
self._last_task_banner = None
self._last_task_name = None
self._task_type_cache = {}
super(CallbackModule, self).__init__()
def set_options(self, task_keys=None, var_options=None, direct=None):
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
# for backwards compat with plugins subclassing default, fallback to constants
for option, constant in COMPAT_OPTIONS:
try:
value = self.get_option(option)
except (AttributeError, KeyError):
self._display.deprecated("'%s' is subclassing DefaultCallback without the corresponding doc_fragment." % self._load_name,
version='2.14', collection_name='ansible.builtin')
value = constant
setattr(self, option, value)
def v2_runner_on_failed(self, result, ignore_errors=False):
delegated_vars = result._result.get('_ansible_delegated_vars', None)
self._clean_results(result._result, result._task.action)
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
self._handle_exception(result._result, use_stderr=self.display_failed_stderr)
self._handle_warnings(result._result)
if result._task.loop and 'results' in result._result:
self._process_items(result)
else:
if delegated_vars:
self._display.display("fatal: [%s -> %s]: FAILED! => %s" % (result._host.get_name(), delegated_vars['ansible_host'],
self._dump_results(result._result)),
color=C.COLOR_ERROR, stderr=self.display_failed_stderr)
else:
self._display.display("fatal: [%s]: FAILED! => %s" % (result._host.get_name(), self._dump_results(result._result)),
color=C.COLOR_ERROR, stderr=self.display_failed_stderr)
if ignore_errors:
self._display.display("...ignoring", color=C.COLOR_SKIP)
def v2_runner_on_ok(self, result):
delegated_vars = result._result.get('_ansible_delegated_vars', None)
if isinstance(result._task, TaskInclude):
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
return
elif result._result.get('changed', False):
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
if delegated_vars:
msg = "changed: [%s -> %s]" % (result._host.get_name(), delegated_vars['ansible_host'])
else:
msg = "changed: [%s]" % result._host.get_name()
color = C.COLOR_CHANGED
else:
if not self.display_ok_hosts:
return
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
if delegated_vars:
msg = "ok: [%s -> %s]" % (result._host.get_name(), delegated_vars['ansible_host'])
else:
msg = "ok: [%s]" % result._host.get_name()
color = C.COLOR_OK
self._handle_warnings(result._result)
if result._task.loop and 'results' in result._result:
self._process_items(result)
else:
self._clean_results(result._result, result._task.action)
if self._run_is_verbose(result):
msg += " => %s" % (self._dump_results(result._result),)
self._display.display(msg, color=color)
def v2_runner_on_skipped(self, result):
if self.display_skipped_hosts:
self._clean_results(result._result, result._task.action)
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
if result._task.loop and 'results' in result._result:
self._process_items(result)
else:
msg = "skipping: [%s]" % result._host.get_name()
if self._run_is_verbose(result):
msg += " => %s" % self._dump_results(result._result)
self._display.display(msg, color=C.COLOR_SKIP)
def v2_runner_on_unreachable(self, result):
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
delegated_vars = result._result.get('_ansible_delegated_vars', None)
if delegated_vars:
msg = "fatal: [%s -> %s]: UNREACHABLE! => %s" % (result._host.get_name(), delegated_vars['ansible_host'], self._dump_results(result._result))
else:
msg = "fatal: [%s]: UNREACHABLE! => %s" % (result._host.get_name(), self._dump_results(result._result))
self._display.display(msg, color=C.COLOR_UNREACHABLE, stderr=self.display_failed_stderr)
def v2_playbook_on_no_hosts_matched(self):
self._display.display("skipping: no hosts matched", color=C.COLOR_SKIP)
def v2_playbook_on_no_hosts_remaining(self):
self._display.banner("NO MORE HOSTS LEFT")
def v2_playbook_on_task_start(self, task, is_conditional):
self._task_start(task, prefix='TASK')
def _task_start(self, task, prefix=None):
# Cache output prefix for task if provided
# This is needed to properly display 'RUNNING HANDLER' and similar
# when hiding skipped/ok task results
if prefix is not None:
self._task_type_cache[task._uuid] = prefix
# Preserve task name, as all vars may not be available for templating
# when we need it later
if self._play.strategy in ('free', 'host_pinned'):
# Explicitly set to None for strategy free/host_pinned to account for any cached
# task title from a previous non-free play
self._last_task_name = None
else:
self._last_task_name = task.get_name().strip()
# Display the task banner immediately if we're not doing any filtering based on task result
if self.display_skipped_hosts and self.display_ok_hosts:
self._print_task_banner(task)
def _print_task_banner(self, task):
# args can be specified as no_log in several places: in the task or in
# the argument spec. We can check whether the task is no_log but the
# argument spec can't be because that is only run on the target
# machine and we haven't run it there yet.
#
# So we give people a config option to affect display of the args so
# that they can secure this if they feel that their stdout is insecure
# (shoulder surfing, logging stdout straight to a file, etc).
args = ''
if not task.no_log and C.DISPLAY_ARGS_TO_STDOUT:
args = u', '.join(u'%s=%s' % a for a in task.args.items())
args = u' %s' % args
prefix = self._task_type_cache.get(task._uuid, 'TASK')
# Use cached task name
task_name = self._last_task_name
if task_name is None:
task_name = task.get_name().strip()
if task.check_mode and self.check_mode_markers:
checkmsg = " [CHECK MODE]"
else:
checkmsg = ""
self._display.banner(u"%s [%s%s]%s" % (prefix, task_name, args, checkmsg))
if self._display.verbosity >= 2:
path = task.get_path()
if path:
self._display.display(u"task path: %s" % path, color=C.COLOR_DEBUG)
self._last_task_banner = task._uuid
def v2_playbook_on_cleanup_task_start(self, task):
self._task_start(task, prefix='CLEANUP TASK')
def v2_playbook_on_handler_task_start(self, task):
self._task_start(task, prefix='RUNNING HANDLER')
def v2_runner_on_start(self, host, task):
if self.get_option('show_per_host_start'):
self._display.display(" [started %s on %s]" % (task, host), color=C.COLOR_OK)
def v2_playbook_on_play_start(self, play):
name = play.get_name().strip()
if play.check_mode and self.check_mode_markers:
checkmsg = " [CHECK MODE]"
else:
checkmsg = ""
if not name:
msg = u"PLAY%s" % checkmsg
else:
msg = u"PLAY [%s]%s" % (name, checkmsg)
self._play = play
self._display.banner(msg)
def v2_on_file_diff(self, result):
if result._task.loop and 'results' in result._result:
for res in result._result['results']:
if 'diff' in res and res['diff'] and res.get('changed', False):
diff = self._get_diff(res['diff'])
if diff:
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
self._display.display(diff)
elif 'diff' in result._result and result._result['diff'] and result._result.get('changed', False):
diff = self._get_diff(result._result['diff'])
if diff:
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
self._display.display(diff)
def v2_runner_item_on_ok(self, result):
delegated_vars = result._result.get('_ansible_delegated_vars', None)
if isinstance(result._task, TaskInclude):
return
elif result._result.get('changed', False):
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
msg = 'changed'
color = C.COLOR_CHANGED
else:
if not self.display_ok_hosts:
return
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
msg = 'ok'
color = C.COLOR_OK
if delegated_vars:
msg += ": [%s -> %s]" % (result._host.get_name(), delegated_vars['ansible_host'])
else:
msg += ": [%s]" % result._host.get_name()
msg += " => (item=%s)" % (self._get_item_label(result._result),)
self._clean_results(result._result, result._task.action)
if self._run_is_verbose(result):
msg += " => %s" % self._dump_results(result._result)
self._display.display(msg, color=color)
def v2_runner_item_on_failed(self, result):
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
delegated_vars = result._result.get('_ansible_delegated_vars', None)
self._clean_results(result._result, result._task.action)
self._handle_exception(result._result)
msg = "failed: "
if delegated_vars:
msg += "[%s -> %s]" % (result._host.get_name(), delegated_vars['ansible_host'])
else:
msg += "[%s]" % (result._host.get_name())
self._handle_warnings(result._result)
self._display.display(msg + " (item=%s) => %s" % (self._get_item_label(result._result), self._dump_results(result._result)), color=C.COLOR_ERROR)
def v2_runner_item_on_skipped(self, result):
if self.display_skipped_hosts:
if self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
self._clean_results(result._result, result._task.action)
msg = "skipping: [%s] => (item=%s) " % (result._host.get_name(), self._get_item_label(result._result))
if self._run_is_verbose(result):
msg += " => %s" % self._dump_results(result._result)
self._display.display(msg, color=C.COLOR_SKIP)
def v2_playbook_on_include(self, included_file):
msg = 'included: %s for %s' % (included_file._filename, ", ".join([h.name for h in included_file._hosts]))
label = self._get_item_label(included_file._vars)
if label:
msg += " => (item=%s)" % label
self._display.display(msg, color=C.COLOR_SKIP)
def v2_playbook_on_stats(self, stats):
self._display.banner("PLAY RECAP")
hosts = sorted(stats.processed.keys())
for h in hosts:
t = stats.summarize(h)
self._display.display(
u"%s : %s %s %s %s %s %s %s" % (
hostcolor(h, t),
colorize(u'ok', t['ok'], C.COLOR_OK),
colorize(u'changed', t['changed'], C.COLOR_CHANGED),
colorize(u'unreachable', t['unreachable'], C.COLOR_UNREACHABLE),
colorize(u'failed', t['failures'], C.COLOR_ERROR),
colorize(u'skipped', t['skipped'], C.COLOR_SKIP),
colorize(u'rescued', t['rescued'], C.COLOR_OK),
colorize(u'ignored', t['ignored'], C.COLOR_WARN),
),
screen_only=True
)
self._display.display(
u"%s : %s %s %s %s %s %s %s" % (
hostcolor(h, t, False),
colorize(u'ok', t['ok'], None),
colorize(u'changed', t['changed'], None),
colorize(u'unreachable', t['unreachable'], None),
colorize(u'failed', t['failures'], None),
colorize(u'skipped', t['skipped'], None),
colorize(u'rescued', t['rescued'], None),
colorize(u'ignored', t['ignored'], None),
),
log_only=True
)
self._display.display("", screen_only=True)
# print custom stats if required
if stats.custom and self.show_custom_stats:
self._display.banner("CUSTOM STATS: ")
# per host
# TODO: come up with 'pretty format'
for k in sorted(stats.custom.keys()):
if k == '_run':
continue
self._display.display('\t%s: %s' % (k, self._dump_results(stats.custom[k], indent=1).replace('\n', '')))
# print per run custom stats
if '_run' in stats.custom:
self._display.display("", screen_only=True)
self._display.display('\tRUN: %s' % self._dump_results(stats.custom['_run'], indent=1).replace('\n', ''))
self._display.display("", screen_only=True)
if context.CLIARGS['check'] and self.check_mode_markers:
self._display.banner("DRY RUN")
def v2_playbook_on_start(self, playbook):
if self._display.verbosity > 1:
from os.path import basename
self._display.banner("PLAYBOOK: %s" % basename(playbook._file_name))
# show CLI arguments
if self._display.verbosity > 3:
if context.CLIARGS.get('args'):
self._display.display('Positional arguments: %s' % ' '.join(context.CLIARGS['args']),
color=C.COLOR_VERBOSE, screen_only=True)
for argument in (a for a in context.CLIARGS if a != 'args'):
val = context.CLIARGS[argument]
if val:
self._display.display('%s: %s' % (argument, val), color=C.COLOR_VERBOSE, screen_only=True)
if context.CLIARGS['check'] and self.check_mode_markers:
self._display.banner("DRY RUN")
def v2_runner_retry(self, result):
task_name = result.task_name or result._task
msg = "FAILED - RETRYING: %s (%d retries left)." % (task_name, result._result['retries'] - result._result['attempts'])
if self._run_is_verbose(result, verbosity=2):
msg += "Result was: %s" % self._dump_results(result._result)
self._display.display(msg, color=C.COLOR_DEBUG)
def v2_runner_on_async_poll(self, result):
host = result._host.get_name()
jid = result._result.get('ansible_job_id')
started = result._result.get('started')
finished = result._result.get('finished')
self._display.display(
'ASYNC POLL on %s: jid=%s started=%s finished=%s' % (host, jid, started, finished),
color=C.COLOR_DEBUG
)
def v2_playbook_on_notify(self, handler, host):
if self._display.verbosity > 1:
self._display.display("NOTIFIED HANDLER %s for %s" % (handler.get_name(), host), color=C.COLOR_VERBOSE, screen_only=True)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,625 |
Add Filename and Line number to ansible_failed_task
|
##### SUMMARY
I'd like to see variables added to ansible_failed_task that represent the filename and line number of the task that failed. This information is already available inside the JUnit callback module, but exposing it would allow a rescue block to report more useful information.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ansible_failed_task
##### ADDITIONAL INFORMATION
The task name is nice, but in large projects it may not be specific enough to hunt through and find the file/line to troubleshoot. This is even worse if task names aren't unique. Exposing the location would give something like a Slack-notification task in a rescue block the ability to report where the failure came from.
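As a partial workaround today, the default stdout callback shown earlier in this section already prints the failing task's location when verbosity is raised (see the verbosity >= 2 branch of _print_task_banner); for example:
```sh
# At -vv the default callback adds a "task path: /path/to/file.yml:NN" line
# under each task banner; site.yml is a placeholder playbook name.
ansible-playbook -vv site.yml
```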
|
https://github.com/ansible/ansible/issues/64625
|
https://github.com/ansible/ansible/pull/73260
|
ca448f7c350fae8080dcfec648342d2fc8837da0
|
7d18ea5e93ccccfc415328430898c8d06e325f87
| 2019-11-08T23:26:26Z |
python
| 2021-02-09T17:43:59Z |
lib/ansible/plugins/doc_fragments/default_callback.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
DOCUMENTATION = r'''
options:
display_skipped_hosts:
name: Show skipped hosts
description: "Toggle to control displaying skipped task/host results in a task"
type: bool
default: yes
env:
- name: DISPLAY_SKIPPED_HOSTS
deprecated:
why: environment variables without "ANSIBLE_" prefix are deprecated
version: "2.12"
alternatives: the "ANSIBLE_DISPLAY_SKIPPED_HOSTS" environment variable
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- key: display_skipped_hosts
section: defaults
display_ok_hosts:
name: Show 'ok' hosts
description: "Toggle to control displaying 'ok' task/host results in a task"
type: bool
default: yes
env:
- name: ANSIBLE_DISPLAY_OK_HOSTS
ini:
- key: display_ok_hosts
section: defaults
version_added: '2.7'
display_failed_stderr:
name: Use STDERR for failed and unreachable tasks
description: "Toggle to control whether failed and unreachable tasks are displayed to STDERR (vs. STDOUT)"
type: bool
default: no
env:
- name: ANSIBLE_DISPLAY_FAILED_STDERR
ini:
- key: display_failed_stderr
section: defaults
version_added: '2.7'
show_custom_stats:
name: Show custom stats
description: 'This adds the custom stats set via the set_stats plugin to the play recap'
type: bool
default: no
env:
- name: ANSIBLE_SHOW_CUSTOM_STATS
ini:
- key: show_custom_stats
section: defaults
show_per_host_start:
name: Show per host task start
description: 'This adds output that shows when a task is started to execute for each host'
type: bool
default: no
env:
- name: ANSIBLE_SHOW_PER_HOST_START
ini:
- key: show_per_host_start
section: defaults
version_added: '2.9'
check_mode_markers:
name: Show markers when running in check mode
description:
- Toggle to control displaying markers when running in check mode.
- "The markers are C(DRY RUN) at the beggining and ending of playbook execution (when calling C(ansible-playbook --check))
and C(CHECK MODE) as a suffix at every play and task that is run in check mode."
type: bool
default: no
version_added: '2.9'
env:
- name: ANSIBLE_CHECK_MODE_MARKERS
ini:
- key: check_mode_markers
section: defaults
'''
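Putting the fragment above to use: the markers it documents can be switched on through the listed environment variable; a quick sketch with a placeholder playbook name:
```sh
# Plays and tasks gain a [CHECK MODE] suffix, and the run is wrapped in DRY RUN banners.
export ANSIBLE_CHECK_MODE_MARKERS=1
ansible-playbook --check playbook.yml
```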
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,625 |
Add Filename and Line number to ansible_failed_task
|
##### SUMMARY
I'd like to see variables added to ansible_failed_task that represent the filename and line number of the task that failed. This information is already available inside the JUnit callback module, but exposing it would allow a rescue block to report more useful information.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ansible_failed_task
##### ADDITIONAL INFORMATION
The task name is nice, but in large projects it may not be specific enough to hunt through and find the file/line to troubleshoot. This is even worse if task names aren't unique. Exposing the location would give something like a Slack-notification task in a rescue block the ability to report where the failure came from.
|
https://github.com/ansible/ansible/issues/64625
|
https://github.com/ansible/ansible/pull/73260
|
ca448f7c350fae8080dcfec648342d2fc8837da0
|
7d18ea5e93ccccfc415328430898c8d06e325f87
| 2019-11-08T23:26:26Z |
python
| 2021-02-09T17:43:59Z |
test/integration/targets/callback_default/callback_default.out.display_path_on_failure.stderr
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,625 |
Add Filename and Line number to ansible_failed_task
|
##### SUMMARY
I'd like to see variables added to ansible_failed_task that represent the filename and line number of the task that failed. This information is already available inside the JUnit callback module, but exposing it would allow a rescue block to report more useful information.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ansible_failed_task
##### ADDITIONAL INFORMATION
The task name is nice, but in large projects it may not be specific enough to hunt through and find the file/line to troubleshoot. This is even worse if task names aren't unique. Exposing the location would give something like a Slack-notification task in a rescue block the ability to report where the failure came from.
|
https://github.com/ansible/ansible/issues/64625
|
https://github.com/ansible/ansible/pull/73260
|
ca448f7c350fae8080dcfec648342d2fc8837da0
|
7d18ea5e93ccccfc415328430898c8d06e325f87
| 2019-11-08T23:26:26Z |
python
| 2021-02-09T17:43:59Z |
test/integration/targets/callback_default/callback_default.out.display_path_on_failure.stdout
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,625 |
Add Filename and Line number to ansible_failed_task
|
##### SUMMARY
I'd like to see variables added to ansible_failed_task that represent the filename and line number of the task that failed. This information is already available inside the JUnit callback module, but exposing it would allow a rescue block to report more useful information.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ansible_failed_task
##### ADDITIONAL INFORMATION
The task name is nice, but in large projects it may not be specific enough to hunt through and find the file/line to troubleshoot. This is even worse if task names aren't unique. Exposing the location would give something like a Slack-notification task in a rescue block the ability to report where the failure came from.
|
https://github.com/ansible/ansible/issues/64625
|
https://github.com/ansible/ansible/pull/73260
|
ca448f7c350fae8080dcfec648342d2fc8837da0
|
7d18ea5e93ccccfc415328430898c8d06e325f87
| 2019-11-08T23:26:26Z |
python
| 2021-02-09T17:43:59Z |
test/integration/targets/callback_default/runme.sh
|
#!/usr/bin/env bash
# This test compares "known good" output with various settings against output
# with the current code. It's brittle by nature, but this is probably the
# "best" approach possible.
#
# Notes:
# * options passed to this script (such as -v) are ignored, as they would change
# the output and break the test
# * the number of asterisks after a "banner" differs depending on the number of
# columns on the TTY, so we must adjust the columns for the current session
# for consistency
set -eux
run_test() {
local testname=$1
# output was recorded w/o cowsay, ensure we reproduce the same
export ANSIBLE_NOCOWS=1
# The shenanigans with redirection and 'tee' are to capture STDOUT and
# STDERR separately while still displaying both to the console
{ ansible-playbook -i inventory test.yml \
> >(set +x; tee "${OUTFILE}.${testname}.stdout"); } \
2> >(set +x; tee "${OUTFILE}.${testname}.stderr" >&2)
# Scrub deprecation warning that shows up in Python 2.6 on CentOS 6
sed -i -e '/RandomPool_DeprecationWarning/d' "${OUTFILE}.${testname}.stderr"
sed -i -e 's/included: .*\/test\/integration/included: ...\/test\/integration/g' "${OUTFILE}.${testname}.stdout"
sed -i -e 's/@@ -1,1 +1,1 @@/@@ -1 +1 @@/g' "${OUTFILE}.${testname}.stdout"
sed -i -e 's/: .*\/test_diff\.txt/: ...\/test_diff.txt/g' "${OUTFILE}.${testname}.stdout"
diff -u "${ORIGFILE}.${testname}.stdout" "${OUTFILE}.${testname}.stdout" || diff_failure
diff -u "${ORIGFILE}.${testname}.stderr" "${OUTFILE}.${testname}.stderr" || diff_failure
}
run_test_dryrun() {
local testname=$1
# optional, pass --check to run a dry run
local chk=${2:-}
# output was recorded w/o cowsay, ensure we reproduce the same
export ANSIBLE_NOCOWS=1
# This is needed to satisfy shellcheck, which cannot accept an unquoted variable
cmd="ansible-playbook -i inventory ${chk} test_dryrun.yml"
# The shenanigans with redirection and 'tee' are to capture STDOUT and
# STDERR separately while still displaying both to the console
{ $cmd \
> >(set +x; tee "${OUTFILE}.${testname}.stdout"); } \
2> >(set +x; tee "${OUTFILE}.${testname}.stderr" >&2)
# Scrub deprecation warning that shows up in Python 2.6 on CentOS 6
sed -i -e '/RandomPool_DeprecationWarning/d' "${OUTFILE}.${testname}.stderr"
diff -u "${ORIGFILE}.${testname}.stdout" "${OUTFILE}.${testname}.stdout" || diff_failure
diff -u "${ORIGFILE}.${testname}.stderr" "${OUTFILE}.${testname}.stderr" || diff_failure
}
diff_failure() {
if [[ $INIT = 0 ]]; then
echo "FAILURE...diff mismatch!"
exit 1
fi
}
cleanup() {
if [[ $INIT = 0 ]]; then
rm -rf "${OUTFILE}.*"
fi
if [[ -f "${BASEFILE}.unreachable.stdout" ]]; then
rm -rf "${BASEFILE}.unreachable.stdout"
fi
if [[ -f "${BASEFILE}.unreachable.stderr" ]]; then
rm -rf "${BASEFILE}.unreachable.stderr"
fi
# Restore TTY cols
if [[ -n ${TTY_COLS:-} ]]; then
stty cols "${TTY_COLS}"
fi
}
adjust_tty_cols() {
if [[ -t 1 ]]; then
# Preserve existing TTY cols
TTY_COLS=$( stty -a | grep -Eo '; columns [0-9]+;' | cut -d';' -f2 | cut -d' ' -f3 )
# Override TTY cols to make comparing ansible-playbook output easier
# This value matches the default in the code when there is no TTY
stty cols 79
fi
}
BASEFILE=callback_default.out
ORIGFILE="${BASEFILE}"
OUTFILE="${BASEFILE}.new"
trap 'cleanup' EXIT
# The --init flag will (re)generate the "good" output files used by the tests
INIT=0
if [[ ${1:-} == "--init" ]]; then
shift
OUTFILE=$ORIGFILE
INIT=1
fi
adjust_tty_cols
# Force the 'default' callback plugin, since that's what we're testing
export ANSIBLE_STDOUT_CALLBACK=default
# Disable color in output for consistency
export ANSIBLE_FORCE_COLOR=0
export ANSIBLE_NOCOLOR=1
# Default settings
export ANSIBLE_DISPLAY_SKIPPED_HOSTS=1
export ANSIBLE_DISPLAY_OK_HOSTS=1
export ANSIBLE_DISPLAY_FAILED_STDERR=0
export ANSIBLE_CHECK_MODE_MARKERS=0
run_test default
# Hide skipped
export ANSIBLE_DISPLAY_SKIPPED_HOSTS=0
run_test hide_skipped
# Hide skipped/ok
export ANSIBLE_DISPLAY_SKIPPED_HOSTS=0
export ANSIBLE_DISPLAY_OK_HOSTS=0
run_test hide_skipped_ok
# Hide ok
export ANSIBLE_DISPLAY_SKIPPED_HOSTS=1
export ANSIBLE_DISPLAY_OK_HOSTS=0
run_test hide_ok
# Failed to stderr
export ANSIBLE_DISPLAY_SKIPPED_HOSTS=1
export ANSIBLE_DISPLAY_OK_HOSTS=1
export ANSIBLE_DISPLAY_FAILED_STDERR=1
run_test failed_to_stderr
# Default settings with unreachable tasks
export ANSIBLE_DISPLAY_SKIPPED_HOSTS=1
export ANSIBLE_DISPLAY_OK_HOSTS=1
export ANSIBLE_DISPLAY_FAILED_STDERR=1
export ANSIBLE_TIMEOUT=1
# Check if UNREACHABLE is available in stderr
set +e
ansible-playbook -i inventory test_2.yml > >(set +x; tee "${BASEFILE}.unreachable.stdout";) 2> >(set +x; tee "${BASEFILE}.unreachable.stderr" >&2) || true
set -e
if test "$(grep -c 'UNREACHABLE' "${BASEFILE}.unreachable.stderr")" -ne 1; then
echo "Test failed"
exit 1
fi
## DRY RUN tests
#
# Default settings with dry run tasks
export ANSIBLE_DISPLAY_SKIPPED_HOSTS=1
export ANSIBLE_DISPLAY_OK_HOSTS=1
export ANSIBLE_DISPLAY_FAILED_STDERR=1
# Enable Check mode markers
export ANSIBLE_CHECK_MODE_MARKERS=1
# Test the wet run with check markers
run_test_dryrun check_markers_wet
# Test the dry run with check markers
run_test_dryrun check_markers_dry --check
# Disable Check mode markers
export ANSIBLE_CHECK_MODE_MARKERS=0
# Test the wet run without check markers
run_test_dryrun check_nomarkers_wet
# Test the dry run without check markers
run_test_dryrun check_nomarkers_dry --check
# Make sure implicit meta tasks are not printed
ansible-playbook -i host1,host2 no_implicit_meta_banners.yml > meta_test.out
cat meta_test.out
[ "$(grep -c 'TASK \[meta\]' meta_test.out)" -eq 0 ]
rm -f meta_test.out
# Ensure free/host_pinned non-lockstep strategies display correctly
diff -u callback_default.out.free.stdout <(ANSIBLE_STRATEGY=free ansible-playbook -i inventory test_non_lockstep.yml 2>/dev/null)
diff -u callback_default.out.host_pinned.stdout <(ANSIBLE_STRATEGY=host_pinned ansible-playbook -i inventory test_non_lockstep.yml 2>/dev/null)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,775 |
The user module cannot set password expiration
|
##### SUMMARY
Hi,
I would like to change the 'Password expires:' value (like `chage -M -1 username`) when creating a user. Is it possible to add new parameters, for example `password_expire_max` and `password_expire_min`, where this value could be set?
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
user_module - https://docs.ansible.com/ansible/latest/modules/user_module.html
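
A sketch of what the proposed interface might look like (the parameter names `password_expire_max` and `password_expire_min` are suggestions here, not an existing API):

```yaml
# Hypothetical usage, mirroring `chage -M -1 username` and `chage -m 0 username`
- name: Create a user whose password never expires
  ansible.builtin.user:
    name: jdoe
    password_expire_max: -1
    password_expire_min: 0
```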
|
https://github.com/ansible/ansible/issues/68775
|
https://github.com/ansible/ansible/pull/69531
|
bf10bb370ba1c813f7d8d9eea73c3b50b1c19c19
|
4344607d7d105e264a0edce19f63041158ae9cc7
| 2020-04-08T15:36:22Z |
python
| 2021-02-09T21:41:15Z |
changelogs/fragments/69531_user_password_expire.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,775 |
The user module cannot set password expiration
|
##### SUMMARY
Hi,
I would like to change the 'Password expires:' value (like `chage -M -1 username`) when creating a user. Is it possible to add new parameters, for example `password_expire_max` and `password_expire_min`, where this value could be set?
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
user_module - https://docs.ansible.com/ansible/latest/modules/user_module.html
|
https://github.com/ansible/ansible/issues/68775
|
https://github.com/ansible/ansible/pull/69531
|
bf10bb370ba1c813f7d8d9eea73c3b50b1c19c19
|
4344607d7d105e264a0edce19f63041158ae9cc7
| 2020-04-08T15:36:22Z |
python
| 2021-02-09T21:41:15Z |
lib/ansible/modules/user.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Stephen Fromm <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
module: user
version_added: "0.2"
short_description: Manage user accounts
description:
- Manage user accounts and user attributes.
- For Windows targets, use the M(ansible.windows.win_user) module instead.
options:
name:
description:
- Name of the user to create, remove or modify.
type: str
required: true
aliases: [ user ]
uid:
description:
- Optionally sets the I(UID) of the user.
type: int
comment:
description:
- Optionally sets the description (aka I(GECOS)) of user account.
type: str
hidden:
description:
- macOS only, optionally hide the user from the login window and system preferences.
- The default will be C(yes) if the I(system) option is used.
type: bool
version_added: "2.6"
non_unique:
description:
        - Optionally when used with the -u option, this option allows changing the user ID to a non-unique value.
type: bool
default: no
version_added: "1.1"
seuser:
description:
- Optionally sets the seuser type (user_u) on selinux enabled systems.
type: str
version_added: "2.1"
group:
description:
- Optionally sets the user's primary group (takes a group name).
type: str
groups:
description:
- List of groups user will be added to. When set to an empty string C(''),
the user is removed from all groups except the primary group.
- Before Ansible 2.3, the only input format allowed was a comma separated string.
type: list
elements: str
append:
description:
- If C(yes), add the user to the groups specified in C(groups).
- If C(no), user will only be added to the groups specified in C(groups),
removing them from all other groups.
type: bool
default: no
shell:
description:
- Optionally set the user's shell.
- On macOS, before Ansible 2.5, the default shell for non-system users was C(/usr/bin/false).
Since Ansible 2.5, the default shell for non-system users on macOS is C(/bin/bash).
- See notes for details on how other operating systems determine the default shell by
the underlying tool.
type: str
home:
description:
- Optionally set the user's home directory.
type: path
skeleton:
description:
- Optionally set a home skeleton directory.
- Requires C(create_home) option!
type: str
version_added: "2.0"
password:
description:
- Optionally set the user's password to this crypted value.
- On macOS systems, this value has to be cleartext. Beware of security issues.
- To create a disabled account on Linux systems, set this to C('!') or C('*').
- To create a disabled account on OpenBSD, set this to C('*************').
- See U(https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#how-do-i-generate-encrypted-passwords-for-the-user-module)
for details on various ways to generate these password values.
type: str
state:
description:
- Whether the account should exist or not, taking action if the state is different from what is stated.
type: str
choices: [ absent, present ]
default: present
create_home:
description:
- Unless set to C(no), a home directory will be made for the user
when the account is created or if the home directory does not exist.
- Changed from C(createhome) to C(create_home) in Ansible 2.5.
type: bool
default: yes
aliases: [ createhome ]
move_home:
description:
- "If set to C(yes) when used with C(home: ), attempt to move the user's old home
directory to the specified directory if it isn't there already and the old home exists."
type: bool
default: no
system:
description:
- When creating an account C(state=present), setting this to C(yes) makes the user a system account.
- This setting cannot be changed on existing users.
type: bool
default: no
force:
description:
- This only affects C(state=absent), it forces removal of the user and associated directories on supported platforms.
- The behavior is the same as C(userdel --force), check the man page for C(userdel) on your system for details and support.
- When used with C(generate_ssh_key=yes) this forces an existing key to be overwritten.
type: bool
default: no
remove:
description:
- This only affects C(state=absent), it attempts to remove directories associated with the user.
- The behavior is the same as C(userdel --remove), check the man page for details and support.
type: bool
default: no
login_class:
description:
- Optionally sets the user's login class, a feature of most BSD OSs.
type: str
generate_ssh_key:
description:
- Whether to generate a SSH key for the user in question.
- This will B(not) overwrite an existing SSH key unless used with C(force=yes).
type: bool
default: no
version_added: "0.9"
ssh_key_bits:
description:
- Optionally specify number of bits in SSH key to create.
type: int
default: default set by ssh-keygen
version_added: "0.9"
ssh_key_type:
description:
- Optionally specify the type of SSH key to generate.
- Available SSH key types will depend on implementation
present on target host.
type: str
default: rsa
version_added: "0.9"
ssh_key_file:
description:
- Optionally specify the SSH key filename.
- If this is a relative filename then it will be relative to the user's home directory.
- This parameter defaults to I(.ssh/id_rsa).
type: path
version_added: "0.9"
ssh_key_comment:
description:
- Optionally define the comment for the SSH key.
type: str
default: ansible-generated on $HOSTNAME
version_added: "0.9"
ssh_key_passphrase:
description:
- Set a passphrase for the SSH key.
- If no passphrase is provided, the SSH key will default to having no passphrase.
type: str
version_added: "0.9"
update_password:
description:
- C(always) will update passwords if they differ.
- C(on_create) will only set the password for newly created users.
type: str
choices: [ always, on_create ]
default: always
version_added: "1.3"
expires:
description:
- An expiry time for the user in epoch, it will be ignored on platforms that do not support this.
- Currently supported on GNU/Linux, FreeBSD, and DragonFlyBSD.
- Since Ansible 2.6 you can remove the expiry time by specifying a negative value.
Currently supported on GNU/Linux and FreeBSD.
type: float
version_added: "1.9"
password_lock:
description:
- Lock the password (C(usermod -L), C(usermod -U), C(pw lock)).
- Implementation differs by platform. This option does not always mean the user cannot login using other methods.
- This option does not disable the user, only lock the password.
- This must be set to C(False) in order to unlock a currently locked password. The absence of this parameter will not unlock a password.
- Currently supported on Linux, FreeBSD, DragonFlyBSD, NetBSD, OpenBSD.
type: bool
version_added: "2.6"
local:
description:
- Forces the use of "local" command alternatives on platforms that implement it.
- This is useful in environments that use centralized authentication when you want to manipulate the local users
(in other words, it uses C(luseradd) instead of C(useradd)).
- This will check C(/etc/passwd) for an existing account before invoking commands. If the local account database
exists somewhere other than C(/etc/passwd), this setting will not work properly.
- This requires that the above commands as well as C(/etc/passwd) must exist on the target host, otherwise it will be a fatal error.
type: bool
default: no
version_added: "2.4"
profile:
description:
- Sets the profile of the user.
- Does nothing when used with other platforms.
- Can set multiple profiles using comma separation.
- To delete all the profiles, use C(profile='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
authorization:
description:
- Sets the authorization of the user.
- Does nothing when used with other platforms.
- Can set multiple authorizations using comma separation.
- To delete all authorizations, use C(authorization='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
role:
description:
- Sets the role of the user.
- Does nothing when used with other platforms.
- Can set multiple roles using comma separation.
- To delete all roles, use C(role='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
notes:
- There are specific requirements per platform on user management utilities. However
they generally come pre-installed with the system and Ansible will require they
are present at runtime. If they are not, a descriptive error message will be shown.
- On SunOS platforms, the shadow file is backed up automatically since this module edits it directly.
On other platforms, the shadow file is backed up by the underlying tools used by this module.
- On macOS, this module uses C(dscl) to create, modify, and delete accounts. C(dseditgroup) is used to
modify group membership. Accounts are hidden from the login window by modifying
C(/Library/Preferences/com.apple.loginwindow.plist).
- On FreeBSD, this module uses C(pw useradd) and C(chpass) to create, C(pw usermod) and C(chpass) to modify,
C(pw userdel) remove, C(pw lock) to lock, and C(pw unlock) to unlock accounts.
- On all other platforms, this module uses C(useradd) to create, C(usermod) to modify, and
C(userdel) to remove accounts.
- Supports C(check_mode).
seealso:
- module: ansible.posix.authorized_key
- module: ansible.builtin.group
- module: ansible.windows.win_user
author:
- Stephen Fromm (@sfromm)
'''
EXAMPLES = r'''
- name: Add the user 'johnd' with a specific uid and a primary group of 'admin'
ansible.builtin.user:
name: johnd
comment: John Doe
uid: 1040
group: admin
- name: Add the user 'james' with a bash shell, appending the group 'admins' and 'developers' to the user's groups
ansible.builtin.user:
name: james
shell: /bin/bash
groups: admins,developers
append: yes
- name: Remove the user 'johnd'
ansible.builtin.user:
name: johnd
state: absent
remove: yes
- name: Create a 2048-bit SSH key for user jsmith in ~jsmith/.ssh/id_rsa
ansible.builtin.user:
name: jsmith
generate_ssh_key: yes
ssh_key_bits: 2048
ssh_key_file: .ssh/id_rsa
- name: Added a consultant whose account you want to expire
ansible.builtin.user:
name: james18
shell: /bin/zsh
groups: developers
expires: 1422403387
- name: Starting at Ansible 2.6, modify user, remove expiry time
ansible.builtin.user:
name: james18
expires: -1
'''
RETURN = r'''
append:
description: Whether or not to append the user to groups.
returned: When state is C(present) and the user exists
type: bool
sample: True
comment:
description: Comment section from passwd file, usually the user name.
returned: When user exists
type: str
sample: Agent Smith
create_home:
description: Whether or not to create the home directory.
returned: When user does not exist and not check mode
type: bool
sample: True
force:
description: Whether or not a user account was forcibly deleted.
returned: When I(state) is C(absent) and user exists
type: bool
sample: False
group:
description: Primary user group ID
returned: When user exists
type: int
sample: 1001
groups:
description: List of groups of which the user is a member.
returned: When I(groups) is not empty and I(state) is C(present)
type: str
sample: 'chrony,apache'
home:
description: "Path to user's home directory."
returned: When I(state) is C(present)
type: str
sample: '/home/asmith'
move_home:
description: Whether or not to move an existing home directory.
returned: When I(state) is C(present) and user exists
type: bool
sample: False
name:
description: User account name.
returned: always
type: str
sample: asmith
password:
description: Masked value of the password.
returned: When I(state) is C(present) and I(password) is not empty
type: str
sample: 'NOT_LOGGING_PASSWORD'
remove:
description: Whether or not to remove the user account.
returned: When I(state) is C(absent) and user exists
type: bool
sample: True
shell:
description: User login shell.
returned: When I(state) is C(present)
type: str
sample: '/bin/bash'
ssh_fingerprint:
description: Fingerprint of generated SSH key.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: '2048 SHA256:aYNHYcyVm87Igh0IMEDMbvW0QDlRQfE0aJugp684ko8 ansible-generated on host (RSA)'
ssh_key_file:
description: Path to generated SSH private key file.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: /home/asmith/.ssh/id_rsa
ssh_public_key:
description: Generated SSH public key file.
returned: When I(generate_ssh_key) is C(True)
type: str
sample: >
'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC95opt4SPEC06tOYsJQJIuN23BbLMGmYo8ysVZQc4h2DZE9ugbjWWGS1/pweUGjVstgzMkBEeBCByaEf/RJKNecKRPeGd2Bw9DCj/bn5Z6rGfNENKBmo
618mUJBvdlEgea96QGjOwSB7/gmonduC7gsWDMNcOdSE3wJMTim4lddiBx4RgC9yXsJ6Tkz9BHD73MXPpT5ETnse+A3fw3IGVSjaueVnlUyUmOBf7fzmZbhlFVXf2Zi2rFTXqvbdGHKkzpw1U8eB8xFPP7y
d5u1u0e6Acju/8aZ/l17IDFiLke5IzlqIMRTEbDwLNeO84YQKWTm9fODHzhYe0yvxqLiK07 ansible-generated on host'
stderr:
description: Standard error from running commands.
returned: When stderr is returned by a command that is run
type: str
sample: Group wheels does not exist
stdout:
description: Standard output from running commands.
returned: When standard output is returned by the command that is run
type: str
sample:
system:
description: Whether or not the account is a system account.
returned: When I(system) is passed to the module and the account does not exist
type: bool
sample: True
uid:
description: User ID of the user account.
returned: When I(uid) is passed to the module
type: int
sample: 1044
'''
import errno
import grp
import calendar
import os
import re
import pty
import pwd
import select
import shutil
import socket
import subprocess
import time
import math
from ansible.module_utils import distro
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.sys_info import get_platform_subclass
try:
import spwd
HAVE_SPWD = True
except ImportError:
HAVE_SPWD = False
_HASH_RE = re.compile(r'[^a-zA-Z0-9./=]')
class User(object):
"""
This is a generic User manipulation class that is subclassed
based on platform.
A subclass may wish to override the following action methods:-
- create_user()
- remove_user()
- modify_user()
- ssh_key_gen()
- ssh_key_fingerprint()
- user_exists()
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None
PASSWORDFILE = '/etc/passwd'
SHADOWFILE = '/etc/shadow'
SHADOWFILE_EXPIRE_INDEX = 7
LOGIN_DEFS = '/etc/login.defs'
DATE_FORMAT = '%Y-%m-%d'
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(User)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.name = module.params['name']
self.uid = module.params['uid']
self.hidden = module.params['hidden']
self.non_unique = module.params['non_unique']
self.seuser = module.params['seuser']
self.group = module.params['group']
self.comment = module.params['comment']
self.shell = module.params['shell']
self.password = module.params['password']
self.force = module.params['force']
self.remove = module.params['remove']
self.create_home = module.params['create_home']
self.move_home = module.params['move_home']
self.skeleton = module.params['skeleton']
self.system = module.params['system']
self.login_class = module.params['login_class']
self.append = module.params['append']
self.sshkeygen = module.params['generate_ssh_key']
self.ssh_bits = module.params['ssh_key_bits']
self.ssh_type = module.params['ssh_key_type']
self.ssh_comment = module.params['ssh_key_comment']
self.ssh_passphrase = module.params['ssh_key_passphrase']
self.update_password = module.params['update_password']
self.home = module.params['home']
self.expires = None
self.password_lock = module.params['password_lock']
self.groups = None
self.local = module.params['local']
self.profile = module.params['profile']
self.authorization = module.params['authorization']
self.role = module.params['role']
if module.params['groups'] is not None:
self.groups = ','.join(module.params['groups'])
if module.params['expires'] is not None:
try:
self.expires = time.gmtime(module.params['expires'])
except Exception as e:
module.fail_json(msg="Invalid value for 'expires' %s: %s" % (self.expires, to_native(e)))
if module.params['ssh_key_file'] is not None:
self.ssh_file = module.params['ssh_key_file']
else:
self.ssh_file = os.path.join('.ssh', 'id_%s' % self.ssh_type)
if self.groups is None and self.append:
# Change the argument_spec in 2.14 and remove this warning
# required_by={'append': ['groups']}
module.warn("'append' is set, but no 'groups' are specified. Use 'groups' for appending new groups."
"This will change to an error in Ansible 2.14.")
def check_password_encrypted(self):
# Darwin needs cleartext password, so skip validation
if self.module.params['password'] and self.platform != 'Darwin':
maybe_invalid = False
# Allow setting certain passwords in order to disable the account
if self.module.params['password'] in set(['*', '!', '*************']):
maybe_invalid = False
else:
# : for delimiter, * for disable user, ! for lock user
# these characters are invalid in the password
if any(char in self.module.params['password'] for char in ':*!'):
maybe_invalid = True
if '$' not in self.module.params['password']:
maybe_invalid = True
else:
fields = self.module.params['password'].split("$")
if len(fields) >= 3:
# contains character outside the crypto constraint
if bool(_HASH_RE.search(fields[-1])):
maybe_invalid = True
# md5
if fields[1] == '1' and len(fields[-1]) != 22:
maybe_invalid = True
# sha256
if fields[1] == '5' and len(fields[-1]) != 43:
maybe_invalid = True
# sha512
if fields[1] == '6' and len(fields[-1]) != 86:
maybe_invalid = True
else:
maybe_invalid = True
if maybe_invalid:
self.module.warn("The input password appears not to have been hashed. "
"The 'password' argument must be encrypted for this module to work properly.")
def execute_command(self, cmd, use_unsafe_shell=False, data=None, obey_checkmode=True):
if self.module.check_mode and obey_checkmode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
else:
# cast all args to strings ansible-modules-core/issues/4397
cmd = [str(x) for x in cmd]
return self.module.run_command(cmd, use_unsafe_shell=use_unsafe_shell, data=data)
def backup_shadow(self):
if not self.module.check_mode and self.SHADOWFILE:
return self.module.backup_local(self.SHADOWFILE)
def remove_user_userdel(self):
if self.local:
command_name = 'luserdel'
else:
command_name = 'userdel'
cmd = [self.module.get_bin_path(command_name, True)]
if self.force and not self.local:
cmd.append('-f')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self):
if self.local:
command_name = 'luseradd'
lgroupmod_cmd = self.module.get_bin_path('lgroupmod', True)
lchage_cmd = self.module.get_bin_path('lchage', True)
else:
command_name = 'useradd'
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.seuser is not None:
cmd.append('-Z')
cmd.append(self.seuser)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
elif self.group_exists(self.name):
# use the -N option (no user group) if a group already
# exists with the same name as the user to prevent
# errors from useradd trying to create a group when
# USERGROUPS_ENAB is set in /etc/login.defs.
if os.path.exists('/etc/redhat-release'):
dist = distro.linux_distribution(full_distribution_name=False)
major_release = int(dist[1].split('.')[0])
if major_release <= 5 or self.local:
cmd.append('-n')
else:
cmd.append('-N')
elif os.path.exists('/etc/SuSE-release'):
# -N did not exist in useradd before SLE 11 and did not
# automatically create a group
dist = distro.linux_distribution(full_distribution_name=False)
major_release = int(dist[1].split('.')[0])
if major_release >= 12:
cmd.append('-N')
else:
cmd.append('-N')
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
if not self.local:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
# If the specified path to the user home contains parent directories that
# do not exist and create_home is True first create the parent directory
# since useradd cannot create it.
if self.create_home:
parent = os.path.dirname(self.home)
if not os.path.isdir(parent):
self.create_homedir(self.home)
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None and not self.local:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('')
else:
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
if self.password is not None:
cmd.append('-p')
if self.password_lock:
cmd.append('!%s' % self.password)
else:
cmd.append(self.password)
if self.create_home:
if not self.local:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if not self.local or rc != 0:
return (rc, out, err)
if self.expires is not None:
if self.expires < time.gmtime(0):
lexpires = -1
else:
# Convert seconds since Epoch to days since Epoch
lexpires = int(math.floor(self.module.params['expires'])) // 86400
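                # Worked example: expires=1422403387 (2015-01-28T00:03:07Z)
                # -> 1422403387 // 86400 = 16463, the day count lchage -E expects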
(rc, _out, _err) = self.execute_command([lchage_cmd, '-E', to_native(lexpires), self.name])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
if self.groups is None or len(self.groups) == 0:
return (rc, out, err)
for add_group in groups:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-M', self.name, add_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
return (rc, out, err)
def _check_usermod_append(self):
# check if this version of usermod can append groups
if self.local:
command_name = 'lusermod'
else:
command_name = 'usermod'
usermod_path = self.module.get_bin_path(command_name, True)
# for some reason, usermod --help cannot be used by non root
# on RH/Fedora, due to lack of execute bit for others
if not os.access(usermod_path, os.X_OK):
return False
cmd = [usermod_path, '--help']
(rc, data1, data2) = self.execute_command(cmd, obey_checkmode=False)
helpout = data1 + data2
# check if --append exists
lines = to_native(helpout).split('\n')
for line in lines:
if line.strip().startswith('-a, --append'):
return True
return False
def modify_user_usermod(self):
if self.local:
command_name = 'lusermod'
lgroupmod_cmd = self.module.get_bin_path('lgroupmod', True)
lgroupmod_add = set()
lgroupmod_del = set()
lchage_cmd = self.module.get_bin_path('lchage', True)
lexpires = None
else:
command_name = 'usermod'
cmd = [self.module.get_bin_path(command_name, True)]
info = self.user_info()
has_append = self._check_usermod_append()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
# get a list of all groups for the user, including the primary
current_groups = self.user_group_membership(exclude_primary=False)
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
if has_append:
cmd.append('-a')
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if self.local:
if self.append:
lgroupmod_add = set(groups).difference(current_groups)
lgroupmod_del = set()
else:
lgroupmod_add = set(groups).difference(current_groups)
lgroupmod_del = set(current_groups).difference(groups)
else:
if self.append and not has_append:
cmd.append('-A')
cmd.append(','.join(group_diff))
else:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None:
current_expires = int(self.user_password()[1])
if self.expires < time.gmtime(0):
if current_expires >= 0:
if self.local:
lexpires = -1
else:
cmd.append('-e')
cmd.append('')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires * 86400)
# Current expires is negative or we compare year, month, and day only
if current_expires < 0 or current_expire_date[:3] != self.expires[:3]:
if self.local:
# Convert seconds since Epoch to days since Epoch
lexpires = int(math.floor(self.module.params['expires'])) // 86400
else:
cmd.append('-e')
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
# Lock if no password or unlocked, unlock only if locked
if self.password_lock and not info[1].startswith('!'):
cmd.append('-L')
elif self.password_lock is False and info[1].startswith('!'):
# usermod will refuse to unlock a user with no password, module shows 'changed' regardless
cmd.append('-U')
if self.update_password == 'always' and self.password is not None and info[1].lstrip('!') != self.password.lstrip('!'):
# Remove options that are mutually exclusive with -p
cmd = [c for c in cmd if c not in ['-U', '-L']]
cmd.append('-p')
if self.password_lock:
# Lock the account and set the hash in a single command
cmd.append('!%s' % self.password)
else:
cmd.append(self.password)
(rc, out, err) = (None, '', '')
# skip if no usermod changes to be made
if len(cmd) > 1:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if not self.local or not (rc is None or rc == 0):
return (rc, out, err)
if lexpires is not None:
(rc, _out, _err) = self.execute_command([lchage_cmd, '-E', to_native(lexpires), self.name])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
if len(lgroupmod_add) == 0 and len(lgroupmod_del) == 0:
return (rc, out, err)
for add_group in lgroupmod_add:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-M', self.name, add_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
for del_group in lgroupmod_del:
(rc, _out, _err) = self.execute_command([lgroupmod_cmd, '-m', self.name, del_group])
out += _out
err += _err
if rc != 0:
return (rc, out, err)
return (rc, out, err)
def group_exists(self, group):
try:
# Try group as a gid first
grp.getgrgid(int(group))
return True
except (ValueError, KeyError):
try:
grp.getgrnam(group)
return True
except KeyError:
return False
def group_info(self, group):
if not self.group_exists(group):
return False
try:
# Try group as a gid first
return list(grp.getgrgid(int(group)))
except (ValueError, KeyError):
return list(grp.getgrnam(group))
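
    # Illustrative (hypothetical) behaviour of get_groups_set() below: with
    # groups='wheel,docker' and 'wheel' as the user's primary group,
    # remove_existing=True yields {'docker'}, while remove_existing=False
    # yields {'wheel', 'docker'}.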
def get_groups_set(self, remove_existing=True):
if self.groups is None:
return None
info = self.user_info()
groups = set(x.strip() for x in self.groups.split(',') if x)
for g in groups.copy():
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
if info and remove_existing and self.group_info(g)[2] == info[3]:
groups.remove(g)
return groups
def user_group_membership(self, exclude_primary=True):
''' Return a list of groups the user belongs to '''
groups = []
info = self.get_pwd_info()
for group in grp.getgrall():
if self.name in group.gr_mem:
# Exclude the user's primary group by default
if not exclude_primary:
groups.append(group[0])
else:
if info[3] != group.gr_gid:
groups.append(group[0])
return groups
def user_exists(self):
        # The pwd module does not distinguish between local and directory accounts.
        # Its output cannot be used to determine whether or not an account exists locally.
# It returns True if the account exists locally or in the directory, so instead
# look in the local PASSWORD file for an existing account.
if self.local:
if not os.path.exists(self.PASSWORDFILE):
self.module.fail_json(msg="'local: true' specified but unable to find local account file {0} to parse.".format(self.PASSWORDFILE))
exists = False
name_test = '{0}:'.format(self.name)
with open(self.PASSWORDFILE, 'rb') as f:
reversed_lines = f.readlines()[::-1]
for line in reversed_lines:
if line.startswith(to_bytes(name_test)):
exists = True
break
if not exists:
self.module.warn(
"'local: true' specified and user '{name}' was not found in {file}. "
"The local user account may already exist if the local account database exists "
"somewhere other than {file}.".format(file=self.PASSWORDFILE, name=self.name))
return exists
else:
try:
if pwd.getpwnam(self.name):
return True
except KeyError:
return False
def get_pwd_info(self):
if not self.user_exists():
return False
return list(pwd.getpwnam(self.name))
def user_info(self):
if not self.user_exists():
return False
info = self.get_pwd_info()
if len(info[1]) == 1 or len(info[1]) == 0:
info[1] = self.user_password()[0]
return info
def user_password(self):
passwd = ''
expires = ''
if HAVE_SPWD:
try:
passwd = spwd.getspnam(self.name)[1]
expires = spwd.getspnam(self.name)[7]
return passwd, expires
except KeyError:
return passwd, expires
except OSError as e:
# Python 3.6 raises PermissionError instead of KeyError
# Due to absence of PermissionError in python2.7 need to check
# errno
if e.errno in (errno.EACCES, errno.EPERM, errno.ENOENT):
return passwd, expires
raise
if not self.user_exists():
return passwd, expires
elif self.SHADOWFILE:
passwd, expires = self.parse_shadow_file()
return passwd, expires
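
    # Example (hypothetical) /etc/shadow entry and what parse_shadow_file()
    # below extracts from it:
    #   jdoe:$6$salt$hash:18000:0:99999:7:::
    #   -> passwd = '$6$salt$hash', expires = -1 (field 8 is empty, so -1 is used)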
def parse_shadow_file(self):
passwd = ''
expires = ''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
passwd = line.split(':')[1]
expires = line.split(':')[self.SHADOWFILE_EXPIRE_INDEX] or -1
return passwd, expires
def get_ssh_key_path(self):
info = self.user_info()
if os.path.isabs(self.ssh_file):
ssh_key_file = self.ssh_file
else:
if not os.path.exists(info[5]) and not self.module.check_mode:
raise Exception('User %s home directory does not exist' % self.name)
ssh_key_file = os.path.join(info[5], self.ssh_file)
return ssh_key_file
def ssh_key_gen(self):
info = self.user_info()
overwrite = None
try:
ssh_key_file = self.get_ssh_key_path()
except Exception as e:
return (1, '', to_native(e))
ssh_dir = os.path.dirname(ssh_key_file)
if not os.path.exists(ssh_dir):
if self.module.check_mode:
return (0, '', '')
try:
os.mkdir(ssh_dir, int('0700', 8))
os.chown(ssh_dir, info[2], info[3])
except OSError as e:
return (1, '', 'Failed to create %s: %s' % (ssh_dir, to_native(e)))
if os.path.exists(ssh_key_file):
if self.force:
# ssh-keygen doesn't support overwriting the key interactively, so send 'y' to confirm
overwrite = 'y'
else:
return (None, 'Key already exists, use "force: yes" to overwrite', '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-t')
cmd.append(self.ssh_type)
if self.ssh_bits > 0:
cmd.append('-b')
cmd.append(self.ssh_bits)
cmd.append('-C')
cmd.append(self.ssh_comment)
cmd.append('-f')
cmd.append(ssh_key_file)
if self.ssh_passphrase is not None:
if self.module.check_mode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
master_in_fd, slave_in_fd = pty.openpty()
master_out_fd, slave_out_fd = pty.openpty()
master_err_fd, slave_err_fd = pty.openpty()
env = os.environ.copy()
env['LC_ALL'] = 'C'
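            # ssh-keygen only prompts for a passphrase when attached to a tty,
            # so plain pipes would hang here; the pseudo-terminals opened above
            # let us answer both passphrase prompts programmatically. LC_ALL=C
            # pins the prompt wording so the byte-string matching below stays
            # reliable.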
try:
p = subprocess.Popen([to_bytes(c) for c in cmd],
stdin=slave_in_fd,
stdout=slave_out_fd,
stderr=slave_err_fd,
preexec_fn=os.setsid,
env=env)
out_buffer = b''
err_buffer = b''
while p.poll() is None:
r, w, e = select.select([master_out_fd, master_err_fd], [], [], 1)
first_prompt = b'Enter passphrase (empty for no passphrase):'
second_prompt = b'Enter same passphrase again'
prompt = first_prompt
for fd in r:
if fd == master_out_fd:
chunk = os.read(master_out_fd, 10240)
out_buffer += chunk
if prompt in out_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
else:
chunk = os.read(master_err_fd, 10240)
err_buffer += chunk
if prompt in err_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
if b'Overwrite (y/n)?' in out_buffer or b'Overwrite (y/n)?' in err_buffer:
# The key was created between us checking for existence and now
return (None, 'Key already exists', '')
rc = p.returncode
out = to_native(out_buffer)
err = to_native(err_buffer)
except OSError as e:
return (1, '', to_native(e))
else:
cmd.append('-N')
cmd.append('')
(rc, out, err) = self.execute_command(cmd, data=overwrite)
if rc == 0 and not self.module.check_mode:
# If the keys were successfully created, we should be able
# to tweak ownership.
os.chown(ssh_key_file, info[2], info[3])
os.chown('%s.pub' % ssh_key_file, info[2], info[3])
return (rc, out, err)
def ssh_key_fingerprint(self):
ssh_key_file = self.get_ssh_key_path()
if not os.path.exists(ssh_key_file):
return (1, 'SSH Key file %s does not exist' % ssh_key_file, '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-l')
cmd.append('-f')
cmd.append(ssh_key_file)
return self.execute_command(cmd, obey_checkmode=False)
def get_ssh_public_key(self):
ssh_public_key_file = '%s.pub' % self.get_ssh_key_path()
try:
with open(ssh_public_key_file, 'r') as f:
ssh_public_key = f.read().strip()
except IOError:
return None
return ssh_public_key
def create_user(self):
# by default we use the create_user_useradd method
return self.create_user_useradd()
def remove_user(self):
# by default we use the remove_user_userdel method
return self.remove_user_userdel()
def modify_user(self):
# by default we use the modify_user_usermod method
return self.modify_user_usermod()
def create_homedir(self, path):
if not os.path.exists(path):
if self.skeleton is not None:
skeleton = self.skeleton
else:
skeleton = '/etc/skel'
if os.path.exists(skeleton):
try:
shutil.copytree(skeleton, path, symlinks=True)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
else:
try:
os.makedirs(path)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# get umask from /etc/login.defs and set correct home mode
if os.path.exists(self.LOGIN_DEFS):
with open(self.LOGIN_DEFS, 'r') as f:
for line in f:
m = re.match(r'^UMASK\s+(\d+)$', line)
if m:
umask = int(m.group(1), 8)
mode = 0o777 & ~umask
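                        # Worked example: "UMASK 022" -> umask = 0o022,
                        # mode = 0o777 & ~0o022 = 0o755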
try:
os.chmod(path, mode)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
def chown_homedir(self, uid, gid, path):
try:
os.chown(path, uid, gid)
for root, dirs, files in os.walk(path):
for d in dirs:
os.chown(os.path.join(root, d), uid, gid)
for f in files:
os.chown(os.path.join(root, f), uid, gid)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# ===========================================
class FreeBsdUser(User):
"""
This is a FreeBSD User manipulation class - it uses the pw command
to manipulate the user database, followed by the chpass command
to change the password.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'FreeBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
SHADOWFILE_EXPIRE_INDEX = 6
DATE_FORMAT = '%d-%b-%Y'
def _handle_lock(self):
info = self.user_info()
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'lock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'unlock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
return (None, '', '')
def remove_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'userdel',
'-n',
self.name
]
if self.remove:
cmd.append('-r')
return self.execute_command(cmd)
def create_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'useradd',
'-n',
self.name,
]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.expires is not None:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('0')
else:
cmd.append(str(calendar.timegm(self.expires)))
        # system cannot be handled currently - should we error if it's requested?
# create the user
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.password is not None:
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
_rc, _out, _err = self.execute_command(cmd)
if rc is None:
rc = _rc
out += _out
err += _err
# we have to lock/unlock the password in a distinct command
_rc, _out, _err = self._handle_lock()
if rc is None:
rc = _rc
out += _out
err += _err
return (rc, out, err)
def modify_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'usermod',
'-n',
self.name
]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
if (info[5] != self.home and self.move_home) or (not os.path.exists(self.home) and self.create_home):
cmd.append('-m')
if info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
user_login_class = line.split(':')[4]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.expires is not None:
current_expires = int(self.user_password()[1])
# If expiration is negative or zero and the current expiration is greater than zero, disable expiration.
# In OpenBSD, setting expiration to zero disables expiration. It does not expire the account.
if self.expires <= time.gmtime(0):
if current_expires > 0:
cmd.append('-e')
cmd.append('0')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires)
# Current expires is negative or we compare year, month, and day only
if current_expires <= 0 or current_expire_date[:3] != self.expires[:3]:
cmd.append('-e')
cmd.append(str(calendar.timegm(self.expires)))
(rc, out, err) = (None, '', '')
# modify the user if cmd will do anything
if cmd_len != len(cmd):
(rc, _out, _err) = self.execute_command(cmd)
out += _out
err += _err
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.update_password == 'always' and self.password is not None and info[1].lstrip('*LOCKED*') != self.password.lstrip('*LOCKED*'):
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
_rc, _out, _err = self.execute_command(cmd)
if rc is None:
rc = _rc
out += _out
err += _err
# we have to lock/unlock the password in a distinct command
_rc, _out, _err = self._handle_lock()
if rc is None:
rc = _rc
out += _out
err += _err
return (rc, out, err)
class DragonFlyBsdUser(FreeBsdUser):
"""
This is a DragonFlyBSD User manipulation class - it inherits the
FreeBsdUser class behaviors, such as using the pw command to
manipulate the user database, followed by the chpass command
to change the password.
"""
platform = 'DragonFly'
class OpenBSDUser(User):
"""
This is a OpenBSD User manipulation class.
Main differences are that OpenBSD:-
- has no concept of "system" account.
- has no force delete user
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'OpenBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None and self.password != '*':
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups_option = '-S'
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_option = '-G'
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append(groups_option)
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
userinfo_cmd = [self.module.get_bin_path('userinfo', True), self.name]
(rc, out, err) = self.execute_command(userinfo_cmd, obey_checkmode=False)
for line in out.splitlines():
tokens = line.split()
if tokens[0] == 'class' and len(tokens) == 2:
user_login_class = tokens[1]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.password_lock and not info[1].startswith('*'):
cmd.append('-Z')
elif self.password_lock is False and info[1].startswith('*'):
cmd.append('-U')
if self.update_password == 'always' and self.password is not None \
and self.password != '*' and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class NetBSDUser(User):
"""
This is a NetBSD User manipulation class.
Main differences are that NetBSD:-
- has no concept of "system" account.
- has no force delete user
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'NetBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups = set(current_groups).union(groups)
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd.append('-C yes')
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd.append('-C no')
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class SunOS(User):
"""
This is a SunOS User manipulation class - The main difference between
this class and the generic user class is that Solaris-type distros
don't support the concept of a "system" account and we need to
edit the /etc/shadow file manually to set a password. (Ugh)
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- user_info()
"""
platform = 'SunOS'
distribution = None
SHADOWFILE = '/etc/shadow'
USER_ATTR = '/etc/user_attr'
def get_password_defaults(self):
# Read password aging defaults
try:
minweeks = ''
maxweeks = ''
warnweeks = ''
with open("/etc/default/passwd", 'r') as f:
for line in f:
line = line.strip()
if (line.startswith('#') or line == ''):
continue
m = re.match(r'^([^#]*)#(.*)$', line)
if m: # The line contains a hash / comment
line = m.group(1)
key, value = line.split('=')
if key == "MINWEEKS":
minweeks = value.rstrip('\n')
elif key == "MAXWEEKS":
maxweeks = value.rstrip('\n')
elif key == "WARNWEEKS":
warnweeks = value.rstrip('\n')
except Exception as err:
self.module.fail_json(msg="failed to read /etc/default/passwd: %s" % to_native(err))
return (minweeks, maxweeks, warnweeks)
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.profile is not None:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None:
cmd.append('-R')
cmd.append(self.role)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if not self.module.check_mode:
# we have to set the password by editing the /etc/shadow file
if self.password is not None:
self.backup_shadow()
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
try:
fields[3] = str(int(minweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if maxweeks:
try:
fields[4] = str(int(maxweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if warnweeks:
try:
fields[5] = str(int(warnweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups.update(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.profile is not None and info[7] != self.profile:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None and info[8] != self.authorization:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None and info[9] != self.role:
cmd.append('-R')
cmd.append(self.role)
# modify the user if cmd will do anything
if cmd_len != len(cmd):
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
else:
(rc, out, err) = (None, '', '')
# we have to set the password by editing the /etc/shadow file
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
self.backup_shadow()
(rc, out, err) = (0, '', '')
if not self.module.check_mode:
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
fields[3] = str(int(minweeks) * 7)
if maxweeks:
fields[4] = str(int(maxweeks) * 7)
if warnweeks:
fields[5] = str(int(warnweeks) * 7)
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
rc = 0
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def user_info(self):
info = super(SunOS, self).user_info()
if info:
info += self._user_attr_info()
return info
def _user_attr_info(self):
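        # Parse /etc/user_attr (see user_attr(4)); a matching line looks roughly like
        #   name::::type=normal;profiles=Profile A,Profile B;roles=role1
        # The '::::' split assumes the qualifier/res1/res2 fields are empty.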
info = [''] * 3
with open(self.USER_ATTR, 'r') as file_handler:
for line in file_handler:
lines = line.strip().split('::::')
if lines[0] == self.name:
tmp = dict(x.split('=') for x in lines[1].split(';'))
info[0] = tmp.get('profiles', '')
info[1] = tmp.get('auths', '')
info[2] = tmp.get('roles', '')
return info
class DarwinUser(User):
"""
This is a Darwin macOS User manipulation class.
Main differences are that Darwin:-
- Handles accounts in a database managed by dscl(1)
- Has no useradd/groupadd
- Does not create home directories
- User password must be cleartext
- UID must be given
      - System users must be under 500
This overrides the following methods from the generic class:-
- user_exists()
- create_user()
- remove_user()
- modify_user()
"""
platform = 'Darwin'
distribution = None
SHADOWFILE = None
dscl_directory = '.'
fields = [
('comment', 'RealName'),
('home', 'NFSHomeDirectory'),
('shell', 'UserShell'),
('uid', 'UniqueID'),
('group', 'PrimaryGroupID'),
('hidden', 'IsHidden'),
]
def __init__(self, module):
super(DarwinUser, self).__init__(module)
        # make the user hidden if option is set or defer to system option
if self.hidden is None:
if self.system:
self.hidden = 1
elif self.hidden:
self.hidden = 1
else:
self.hidden = 0
# add hidden to processing if set
if self.hidden is not None:
self.fields.append(('hidden', 'IsHidden'))
def _get_dscl(self):
return [self.module.get_bin_path('dscl', True), self.dscl_directory]
def _list_user_groups(self):
cmd = self._get_dscl()
cmd += ['-search', '/Groups', 'GroupMembership', self.name]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
groups = []
for line in out.splitlines():
if line.startswith(' ') or line.startswith(')'):
continue
groups.append(line.split()[0])
return groups
def _get_user_property(self, property):
        '''Return user PROPERTY as given by dscl(1) read or None if not found.'''
cmd = self._get_dscl()
cmd += ['-read', '/Users/%s' % self.name, property]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
return None
# from dscl(1)
# if property contains embedded spaces, the list will instead be
# displayed one entry per line, starting on the line after the key.
lines = out.splitlines()
# sys.stderr.write('*** |%s| %s -> %s\n' % (property, out, lines))
if len(lines) == 1:
return lines[0].split(': ')[1]
else:
if len(lines) > 2:
return '\n'.join([lines[1].strip()] + lines[2:])
else:
if len(lines) == 2:
return lines[1].strip()
else:
return None
def _get_next_uid(self, system=None):
'''
Return the next available uid. If system=True, then
        uid should be below 500, if possible.
'''
cmd = self._get_dscl()
cmd += ['-list', '/Users', 'UniqueID']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
self.module.fail_json(
msg="Unable to get the next available uid",
rc=rc,
out=out,
err=err
)
max_uid = 0
max_system_uid = 0
for line in out.splitlines():
current_uid = int(line.split(' ')[-1])
if max_uid < current_uid:
max_uid = current_uid
if max_system_uid < current_uid and current_uid < 500:
max_system_uid = current_uid
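        # Darwin convention: uids below 500 are reserved for system accounts,
        # so prefer that range when a system user was requested and space remains.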
if system and (0 < max_system_uid < 499):
return max_system_uid + 1
return max_uid + 1
def _change_user_password(self):
        '''Change password for SELF.NAME to SELF.PASSWORD.
Please note that password must be cleartext.
'''
# some documentation on how is stored passwords on OSX:
# http://blog.lostpassword.com/2012/07/cracking-mac-os-x-lion-accounts-passwords/
# http://null-byte.wonderhowto.com/how-to/hack-mac-os-x-lion-passwords-0130036/
# http://pastebin.com/RYqxi7Ca
# on OSX 10.8+ hash is SALTED-SHA512-PBKDF2
# https://pythonhosted.org/passlib/lib/passlib.hash.pbkdf2_digest.html
# https://gist.github.com/nueh/8252572
cmd = self._get_dscl()
if self.password:
cmd += ['-passwd', '/Users/%s' % self.name, self.password]
else:
cmd += ['-create', '/Users/%s' % self.name, 'Password', '*']
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Error when changing password', err=err, out=out, rc=rc)
return (rc, out, err)
def _make_group_numerical(self):
        '''Convert SELF.GROUP to its numerical value, as a string suitable for dscl.'''
if self.group is None:
self.group = 'nogroup'
try:
self.group = grp.getgrnam(self.group).gr_gid
except KeyError:
self.module.fail_json(msg='Group "%s" not found. Try to create it first using "group" module.' % self.group)
# We need to pass a string to dscl
self.group = str(self.group)
def __modify_group(self, group, action):
'''Add or remove SELF.NAME to or from GROUP depending on ACTION.
ACTION can be 'add' or 'remove' otherwise 'remove' is assumed. '''
if action == 'add':
option = '-a'
else:
option = '-d'
cmd = ['dseditgroup', '-o', 'edit', option, self.name, '-t', 'user', group]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot %s user "%s" to group "%s".'
% (action, self.name, group), err=err, out=out, rc=rc)
return (rc, out, err)
def _modify_group(self):
        '''Synchronize SELF.NAME's group membership with SELF.GROUPS,
        adding missing groups and, unless SELF.APPEND, removing stale ones.'''
rc = 0
out = ''
err = ''
changed = False
current = set(self._list_user_groups())
if self.groups is not None:
target = set(self.groups.split(','))
else:
target = set([])
if self.append is False:
for remove in current - target:
(_rc, _out, _err) = self.__modify_group(remove, 'delete')
                rc += _rc
out += _out
err += _err
changed = True
for add in target - current:
(_rc, _out, _err) = self.__modify_group(add, 'add')
rc += _rc
out += _out
err += _err
changed = True
return (rc, out, err, changed)
def _update_system_user(self):
        '''Hide or show the user on the login window according to SELF.SYSTEM.
Returns 0 if a change has been made, None otherwise.'''
plist_file = '/Library/Preferences/com.apple.loginwindow.plist'
# http://support.apple.com/kb/HT5017?viewlocale=en_US
cmd = ['defaults', 'read', plist_file, 'HiddenUsersList']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
# returned value is
# (
# "_userA",
# "_UserB",
# userc
# )
hidden_users = []
for x in out.splitlines()[1:-1]:
try:
x = x.split('"')[1]
except IndexError:
x = x.strip()
hidden_users.append(x)
if self.system:
if self.name not in hidden_users:
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array-add', self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
                    self.module.fail_json(msg='Cannot add user "%s" to hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
else:
if self.name in hidden_users:
del (hidden_users[hidden_users.index(self.name)])
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array'] + hidden_users
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot remove user "%s" from hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
def user_exists(self):
        '''Check if SELF.NAME is a known user on the system.'''
cmd = self._get_dscl()
cmd += ['-list', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
return rc == 0
def remove_user(self):
'''Delete SELF.NAME. If SELF.FORCE is true, remove its home directory.'''
info = self.user_info()
cmd = self._get_dscl()
cmd += ['-delete', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot delete user "%s".' % self.name, err=err, out=out, rc=rc)
if self.force:
if os.path.exists(info[5]):
shutil.rmtree(info[5])
out += "Removed %s" % info[5]
return (rc, out, err)
def create_user(self, command_name='dscl'):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot create user "%s".' % self.name, err=err, out=out, rc=rc)
self._make_group_numerical()
if self.uid is None:
self.uid = str(self._get_next_uid(self.system))
# Homedir is not created by default
if self.create_home:
if self.home is None:
self.home = '/Users/%s' % self.name
if not self.module.check_mode:
if not os.path.exists(self.home):
os.makedirs(self.home)
self.chown_homedir(int(self.uid), int(self.group), self.home)
# dscl sets shell to /usr/bin/false when UserShell is not specified
# so set the shell to /bin/bash when the user is not a system user
if not self.system and self.shell is None:
self.shell = '/bin/bash'
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot add property "%s" to user "%s".' % (field[0], self.name), err=err, out=out, rc=rc)
out += _out
err += _err
if rc != 0:
return (rc, _out, _err)
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
self._update_system_user()
# here we don't care about change status since it is a creation,
# thus changed is always true.
if self.groups:
(rc, _out, _err, changed) = self._modify_group()
out += _out
err += _err
return (rc, out, err)
def modify_user(self):
changed = None
out = ''
err = ''
if self.group:
self._make_group_numerical()
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
current = self._get_user_property(field[1])
if current is None or current != to_text(self.__dict__[field[0]]):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(
msg='Cannot update property "%s" for user "%s".'
% (field[0], self.name), err=err, out=out, rc=rc)
changed = rc
out += _out
err += _err
if self.update_password == 'always' and self.password is not None:
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
changed = rc
if self.groups:
(rc, _out, _err, _changed) = self._modify_group()
out += _out
err += _err
if _changed is True:
changed = rc
rc = self._update_system_user()
if rc == 0:
changed = rc
return (changed, out, err)
class AIX(User):
"""
    This is an AIX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- parse_shadow_file()
"""
platform = 'AIX'
distribution = None
SHADOWFILE = '/etc/security/passwd'
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self, command_name='useradd'):
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.password is not None:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
# skip if no changes to be made
if len(cmd) == 1:
(rc, out, err) = (None, '', '')
else:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
(rc2, out2, err2) = self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
else:
(rc2, out2, err2) = (None, '', '')
if rc is not None:
return (rc, out + out2, err + err2)
else:
return (rc2, out + out2, err + err2)
def parse_shadow_file(self):
"""Example AIX shadowfile data:
nobody:
password = *
operator1:
password = {ssha512}06$xxxxxxxxxxxx....
lastupdate = 1549558094
test1:
password = *
lastupdate = 1553695126
"""
b_name = to_bytes(self.name)
b_passwd = b''
b_expires = b''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'rb') as bf:
b_lines = bf.readlines()
b_passwd_line = b''
b_expires_line = b''
try:
for index, b_line in enumerate(b_lines):
# Get password and lastupdate lines which come after the username
if b_line.startswith(b'%s:' % b_name):
b_passwd_line = b_lines[index + 1]
b_expires_line = b_lines[index + 2]
break
# Sanity check the lines because sometimes both are not present
if b' = ' in b_passwd_line:
b_passwd = b_passwd_line.split(b' = ', 1)[-1].strip()
if b' = ' in b_expires_line:
b_expires = b_expires_line.split(b' = ', 1)[-1].strip()
except IndexError:
self.module.fail_json(msg='Failed to parse shadow file %s' % self.SHADOWFILE)
passwd = to_native(b_passwd)
expires = to_native(b_expires) or -1
return passwd, expires
class HPUX(User):
"""
    This is an HP-UX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'HP-UX'
distribution = None
SHADOWFILE = '/etc/shadow'
def create_user(self):
cmd = ['/usr/sam/lbin/useradd.sam']
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user(self):
cmd = ['/usr/sam/lbin/userdel.sam']
if self.force:
cmd.append('-F')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = ['/usr/sam/lbin/usermod.sam']
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-F')
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class BusyBox(User):
"""
This is the BusyBox class for use on systems that have adduser, deluser,
and delgroup commands. It overrides the following methods:
- create_user()
- remove_user()
- modify_user()
"""
def create_user(self):
cmd = [self.module.get_bin_path('adduser', True)]
cmd.append('-D')
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg='Group {0} does not exist'.format(self.group))
cmd.append('-G')
cmd.append(self.group)
if self.comment is not None:
cmd.append('-g')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-h')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if not self.create_home:
cmd.append('-H')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.system:
cmd.append('-S')
cmd.append(self.name)
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if self.password is not None:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
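            # chpasswd reads "name:password" pairs on stdin; --encrypted tells it
            # the value is already a crypt(3) hash rather than cleartext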
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Add to additional groups
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
add_cmd_bin = self.module.get_bin_path('adduser', True)
for group in groups:
cmd = [add_cmd_bin, self.name, group]
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
def remove_user(self):
cmd = [
self.module.get_bin_path('deluser', True),
self.name
]
if self.remove:
cmd.append('--remove-home')
return self.execute_command(cmd)
def modify_user(self):
current_groups = self.user_group_membership()
groups = []
rc = None
out = ''
err = ''
info = self.user_info()
add_cmd_bin = self.module.get_bin_path('adduser', True)
remove_cmd_bin = self.module.get_bin_path('delgroup', True)
# Manage group membership
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
for g in groups:
if g in group_diff:
add_cmd = [add_cmd_bin, self.name, g]
rc, out, err = self.execute_command(add_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
for g in group_diff:
if g not in groups and not self.append:
remove_cmd = [remove_cmd_bin, self.name, g]
rc, out, err = self.execute_command(remove_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Manage password
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
class Alpine(BusyBox):
"""
This is the Alpine User manipulation class. It inherits the BusyBox class
behaviors such as using adduser and deluser commands.
"""
platform = 'Linux'
distribution = 'Alpine'
def main():
ssh_defaults = dict(
bits=0,
type='rsa',
passphrase=None,
comment='ansible-generated on %s' % socket.gethostname()
)
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'present']),
name=dict(type='str', required=True, aliases=['user']),
uid=dict(type='int'),
non_unique=dict(type='bool', default=False),
group=dict(type='str'),
groups=dict(type='list', elements='str'),
comment=dict(type='str'),
home=dict(type='path'),
shell=dict(type='str'),
password=dict(type='str', no_log=True),
login_class=dict(type='str'),
# following options are specific to macOS
hidden=dict(type='bool'),
# following options are specific to selinux
seuser=dict(type='str'),
# following options are specific to userdel
force=dict(type='bool', default=False),
remove=dict(type='bool', default=False),
# following options are specific to useradd
create_home=dict(type='bool', default=True, aliases=['createhome']),
skeleton=dict(type='str'),
system=dict(type='bool', default=False),
# following options are specific to usermod
move_home=dict(type='bool', default=False),
append=dict(type='bool', default=False),
# following are specific to ssh key generation
generate_ssh_key=dict(type='bool'),
ssh_key_bits=dict(type='int', default=ssh_defaults['bits']),
ssh_key_type=dict(type='str', default=ssh_defaults['type']),
ssh_key_file=dict(type='path'),
ssh_key_comment=dict(type='str', default=ssh_defaults['comment']),
ssh_key_passphrase=dict(type='str', no_log=True),
update_password=dict(type='str', default='always', choices=['always', 'on_create'], no_log=False),
expires=dict(type='float'),
password_lock=dict(type='bool', no_log=False),
local=dict(type='bool'),
profile=dict(type='str'),
authorization=dict(type='str'),
role=dict(type='str'),
),
supports_check_mode=True,
)
user = User(module)
user.check_password_encrypted()
module.debug('User instantiated - platform %s' % user.platform)
if user.distribution:
module.debug('User instantiated - distribution %s' % user.distribution)
rc = None
out = ''
err = ''
result = {}
result['name'] = user.name
result['state'] = user.state
if user.state == 'absent':
if user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = user.remove_user()
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
result['force'] = user.force
result['remove'] = user.remove
elif user.state == 'present':
if not user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
# Check to see if the provided home path contains parent directories
# that do not exist.
path_needs_parents = False
if user.home and user.create_home:
parent = os.path.dirname(user.home)
if not os.path.isdir(parent):
path_needs_parents = True
(rc, out, err) = user.create_user()
# If the home path had parent directories that needed to be created,
# make sure file permissions are correct in the created home directory.
if path_needs_parents:
info = user.user_info()
if info is not False:
user.chown_homedir(info[2], info[3], user.home)
            result['system'] = user.system
result['create_home'] = user.create_home
else:
# modify user (note: this function is check mode aware)
(rc, out, err) = user.modify_user()
result['append'] = user.append
result['move_home'] = user.move_home
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if user.password is not None:
result['password'] = 'NOT_LOGGING_PASSWORD'
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
if user.user_exists() and user.state == 'present':
info = user.user_info()
if info is False:
result['msg'] = "failed to look up user name: %s" % user.name
result['failed'] = True
result['uid'] = info[2]
result['group'] = info[3]
result['comment'] = info[4]
result['home'] = info[5]
result['shell'] = info[6]
if user.groups is not None:
result['groups'] = user.groups
# handle missing homedirs
info = user.user_info()
if user.home is None:
user.home = info[5]
if not os.path.exists(user.home) and user.create_home:
if not module.check_mode:
user.create_homedir(user.home)
user.chown_homedir(info[2], info[3], user.home)
result['changed'] = True
# deal with ssh key
if user.sshkeygen:
# generate ssh key (note: this function is check mode aware)
(rc, out, err) = user.ssh_key_gen()
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if rc == 0:
result['changed'] = True
(rc, out, err) = user.ssh_key_fingerprint()
if rc == 0:
result['ssh_fingerprint'] = out.strip()
else:
result['ssh_fingerprint'] = err.strip()
result['ssh_key_file'] = user.get_ssh_key_path()
result['ssh_public_key'] = user.get_ssh_public_key()
module.exit_json(**result)
# import module snippets
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,775 |
The user module cannot set password expiration
|
##### SUMMARY
Hi,
I would like to change the 'Password expires:' value (like `chage -M -1 username`) when creating a user. Is it possible to add new parameters, for example `password_expire_max` and `password_expire_min`, where this value could be set?
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
user_module - https://docs.ansible.com/ansible/latest/modules/user_module.html
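A sketch of the requested task syntax (the `password_expire_max` / `password_expire_min` options shown are only the reporter's proposal, not options the module had when this issue was filed):

```yaml
- name: Create a user whose password never expires (mirrors `chage -M -1`)
  user:
    name: exampleuser           # hypothetical account name
    state: present
    password_expire_max: -1     # proposed option; -1 would disable expiry
```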
|
https://github.com/ansible/ansible/issues/68775
|
https://github.com/ansible/ansible/pull/69531
|
bf10bb370ba1c813f7d8d9eea73c3b50b1c19c19
|
4344607d7d105e264a0edce19f63041158ae9cc7
| 2020-04-08T15:36:22Z |
python
| 2021-02-09T21:41:15Z |
test/integration/targets/user/tasks/main.yml
|
# Test code for the user module.
# (c) 2017, James Tanner <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
- name: skip broken distros
meta: end_host
when: ansible_distribution == 'Alpine'
- import_tasks: test_create_user.yml
- import_tasks: test_create_system_user.yml
- import_tasks: test_create_user_uid.yml
- import_tasks: test_create_user_password.yml
- import_tasks: test_create_user_home.yml
- import_tasks: test_remove_user.yml
- import_tasks: test_no_home_fallback.yml
- import_tasks: test_expires.yml
- import_tasks: test_expires_new_account.yml
- import_tasks: test_expires_new_account_epoch_negative.yml
- import_tasks: test_shadow_backup.yml
- import_tasks: test_ssh_key_passphrase.yml
- import_tasks: test_password_lock.yml
- import_tasks: test_password_lock_new_user.yml
- import_tasks: test_local.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 68,775 |
The user module cannot set password expiration
|
##### SUMMARY
Hi,
I would like to change the 'Password expires:' value (like `chage -M -1 username`) when creating a user. Is it possible to add new parameters, for example `password_expire_max` and `password_expire_min`, where this value could be set?
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
user_module - https://docs.ansible.com/ansible/latest/modules/user_module.html
|
https://github.com/ansible/ansible/issues/68775
|
https://github.com/ansible/ansible/pull/69531
|
bf10bb370ba1c813f7d8d9eea73c3b50b1c19c19
|
4344607d7d105e264a0edce19f63041158ae9cc7
| 2020-04-08T15:36:22Z |
python
| 2021-02-09T21:41:15Z |
test/integration/targets/user/tasks/test_expires_min_max.yml
|