Dataset columns:
- status: string (1 class)
- repo_name: string (31 classes)
- repo_url: string (31 classes)
- issue_id: int64 (1 to 104k)
- title: string (length 4 to 369)
- body: string (length 0 to 254k, nullable)
- issue_url: string (length 37 to 56)
- pull_url: string (length 37 to 54)
- before_fix_sha: string (length 40)
- after_fix_sha: string (length 40)
- report_datetime: timestamp[us, tz=UTC]
- language: string (5 classes)
- commit_datetime: timestamp[us, tz=UTC]
- updated_file: string (length 4 to 188)
- file_content: string (length 0 to 5.12M)
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,438 |
hostname module systemd strategy using hostname
|
##### SUMMARY
On Fedora the systemd strategy is used, which is fine, but getting the current hostname uses the `hostname` command which may not always be present.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
hostname
##### ANSIBLE VERSION
```paste below
ansible 2.8.2
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/duck/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/duck/.local/lib/python3.7/site-packages/ansible
executable location = /home/duck/.local/bin/ansible
python version = 3.7.4 (default, Jul 11 2019, 10:43:21) [GCC 8.3.0]
```
##### CONFIGURATION
```paste below
ANSIBLE_FORCE_COLOR(env: ANSIBLE_FORCE_COLOR) = True
DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = …
```
##### OS / ENVIRONMENT
The control node is using Debian unstable. I'm running molecule to run a test in Fedora 29.
##### STEPS TO REPRODUCE
The following is run on two test nodes: one Fedora 29 node, where it fails, and one CentOS node, where it works fine.
```yaml
- name: Prepare hosts
hosts: all
tasks:
- name: set proper hostname
hostname:
name: "{{ ansible_nodename }}.example.com"
```
##### EXPECTED RESULTS
The hostname should be set without error.
##### ACTUAL RESULTS
The systemd strategy uses hostnamectl to get and set the current and permanent hostnames, except in `get_current_hostname()`, where it uses `hostname` instead of `hostnamectl --transient status`. Since this command is not necessarily present in a container, it fails with the following error:
```paste below
The full traceback is:
File "/tmp/ansible_hostname_payload_gb0hppku/ansible_hostname_payload.zip/ansible/module_utils/basic.py", line 1974, in get_bin_path
bin_path = get_bin_path(arg, required, opt_dirs)
File "/tmp/ansible_hostname_payload_gb0hppku/ansible_hostname_payload.zip/ansible/module_utils/common/process.py", line 41, in get_bin_path
raise ValueError('Failed to find required executable %s in paths: %s' % (arg, os.pathsep.join(paths)))
fatal: [ansible-test-builder]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"name": "ansible-test-builder.example.com"
}
},
"msg": "Failed to find required executable hostname in paths: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
}
```
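The fix merged via the linked PR prefers `hostnamectl` (which is always present on systemd-managed systems) over the standalone `hostname` binary. A minimal sketch of that preference, under the assumption that the caller has already resolved the binary paths (the function name and signature here are illustrative, not the actual module code):

```python
def hostname_query_cmd(hostnamectl_path=None, hostname_path=None):
    """Build the command used to read the current (transient) hostname.

    Sketch only: prefer ``hostnamectl`` over the standalone ``hostname``
    binary, since minimal container images may not ship the latter.
    """
    if hostnamectl_path:
        # --transient selects the kernel hostname; `status` is the
        # query verb used in the issue report above
        return [hostnamectl_path, '--transient', 'status']
    if hostname_path:
        # fall back to the classic binary when hostnamectl is absent
        return [hostname_path]
    raise ValueError('neither hostnamectl nor hostname is available')

print(hostname_query_cmd(hostnamectl_path='/usr/bin/hostnamectl'))
```

With this ordering, a Fedora container lacking `/usr/bin/hostname` would still resolve a working query command instead of failing in `get_bin_path`.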
|
https://github.com/ansible/ansible/issues/59438
|
https://github.com/ansible/ansible/pull/59974
|
b0fd1043e186c79d9fcc0f0d592ea3324deef6b6
|
53aa258d78650317aae09328980801f9d338c0b5
| 2019-07-23T13:29:46Z |
python
| 2019-09-11T04:57:17Z |
changelogs/fragments/59438-hostname-use-hostnamectl.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,438 |
hostname module systemd strategy using hostname
|
|
https://github.com/ansible/ansible/issues/59438
|
https://github.com/ansible/ansible/pull/59974
|
b0fd1043e186c79d9fcc0f0d592ea3324deef6b6
|
53aa258d78650317aae09328980801f9d338c0b5
| 2019-07-23T13:29:46Z |
python
| 2019-09-11T04:57:17Z |
lib/ansible/modules/system/hostname.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2013, Hiroaki Nakamura <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: hostname
author:
- Adrian Likins (@alikins)
- Hideki Saito (@saito-hideki)
version_added: "1.4"
short_description: Manage hostname
requirements: [ hostname ]
description:
- Set system's hostname, supports most OSs/Distributions, including those using systemd.
- Note, this module does *NOT* modify C(/etc/hosts). You need to modify it yourself using other modules like template or replace.
- Windows, HP-UX and AIX are not currently supported.
options:
name:
description:
- Name of the host
required: true
use:
description:
- Which strategy to use to update the hostname.
- If not set we try to autodetect, but this can be problematic, specially with containers as they can present misleading information.
choices: ['generic', 'debian','sles', 'redhat', 'alpine', 'systemd', 'openrc', 'openbsd', 'solaris', 'freebsd']
version_added: '2.9'
'''
EXAMPLES = '''
- hostname:
name: web01
'''
import os
import socket
import traceback
from ansible.module_utils.basic import (
AnsibleModule,
get_distribution,
get_distribution_version,
get_platform,
load_platform_subclass,
)
from ansible.module_utils.facts.system.service_mgr import ServiceMgrFactCollector
from ansible.module_utils._text import to_native
STRATS = {'generic': 'Generic', 'debian': 'Debian', 'sles': 'SLES', 'redhat': 'RedHat', 'alpine': 'Alpine',
'systemd': 'Systemd', 'openrc': 'OpenRC', 'openbsd': 'OpenBSD', 'solaris': 'Solaris', 'freebsd': 'FreeBSD'}
class UnimplementedStrategy(object):
def __init__(self, module):
self.module = module
def update_current_and_permanent_hostname(self):
self.unimplemented_error()
def update_current_hostname(self):
self.unimplemented_error()
def update_permanent_hostname(self):
self.unimplemented_error()
def get_current_hostname(self):
self.unimplemented_error()
def set_current_hostname(self, name):
self.unimplemented_error()
def get_permanent_hostname(self):
self.unimplemented_error()
def set_permanent_hostname(self, name):
self.unimplemented_error()
def unimplemented_error(self):
platform = get_platform()
distribution = get_distribution()
if distribution is not None:
msg_platform = '%s (%s)' % (platform, distribution)
else:
msg_platform = platform
self.module.fail_json(
msg='hostname module cannot be used on platform %s' % msg_platform)
class Hostname(object):
"""
This is a generic Hostname manipulation class that is subclassed
based on platform.
A subclass may wish to set different strategy instance to self.strategy.
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None
strategy_class = UnimplementedStrategy
def __new__(cls, *args, **kwargs):
return load_platform_subclass(Hostname, args, kwargs)
def __init__(self, module):
self.module = module
self.name = module.params['name']
self.use = module.params['use']
if self.use is not None:
strat = globals()['%sStrategy' % STRATS[self.use]]
self.strategy = strat(module)
elif self.platform == 'Linux' and ServiceMgrFactCollector.is_systemd_managed(module):
self.strategy = SystemdStrategy(module)
else:
self.strategy = self.strategy_class(module)
def update_current_and_permanent_hostname(self):
return self.strategy.update_current_and_permanent_hostname()
def get_current_hostname(self):
return self.strategy.get_current_hostname()
def set_current_hostname(self, name):
self.strategy.set_current_hostname(name)
def get_permanent_hostname(self):
return self.strategy.get_permanent_hostname()
def set_permanent_hostname(self, name):
self.strategy.set_permanent_hostname(name)
class GenericStrategy(object):
"""
This is a generic Hostname manipulation strategy class.
A subclass may wish to override some or all of these methods.
- get_current_hostname()
- get_permanent_hostname()
- set_current_hostname(name)
- set_permanent_hostname(name)
"""
def __init__(self, module):
self.module = module
self.hostname_cmd = self.module.get_bin_path('hostname', True)
self.changed = False
def update_current_and_permanent_hostname(self):
self.update_current_hostname()
self.update_permanent_hostname()
return self.changed
def update_current_hostname(self):
name = self.module.params['name']
current_name = self.get_current_hostname()
if current_name != name:
if not self.module.check_mode:
self.set_current_hostname(name)
self.changed = True
def update_permanent_hostname(self):
name = self.module.params['name']
permanent_name = self.get_permanent_hostname()
if permanent_name != name:
if not self.module.check_mode:
self.set_permanent_hostname(name)
self.changed = True
def get_current_hostname(self):
cmd = [self.hostname_cmd]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
return 'UNKNOWN'
def set_permanent_hostname(self, name):
pass
class DebianStrategy(GenericStrategy):
"""
This is a Debian family Hostname manipulation strategy class - it edits
the /etc/hostname file.
"""
HOSTNAME_FILE = '/etc/hostname'
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class SLESStrategy(GenericStrategy):
"""
This is a SLES Hostname strategy class - it edits the
/etc/HOSTNAME file.
"""
HOSTNAME_FILE = '/etc/HOSTNAME'
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class RedHatStrategy(GenericStrategy):
"""
This is a Redhat Hostname strategy class - it edits the
/etc/sysconfig/network file.
"""
NETWORK_FILE = '/etc/sysconfig/network'
def get_permanent_hostname(self):
try:
f = open(self.NETWORK_FILE, 'rb')
try:
for line in f.readlines():
if line.startswith('HOSTNAME'):
k, v = line.split('=')
return v.strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
lines = []
found = False
f = open(self.NETWORK_FILE, 'rb')
try:
for line in f.readlines():
if line.startswith('HOSTNAME'):
lines.append("HOSTNAME=%s\n" % name)
found = True
else:
lines.append(line)
finally:
f.close()
if not found:
lines.append("HOSTNAME=%s\n" % name)
f = open(self.NETWORK_FILE, 'w+')
try:
f.writelines(lines)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class AlpineStrategy(GenericStrategy):
"""
This is a Alpine Linux Hostname manipulation strategy class - it edits
the /etc/hostname file then run hostname -F /etc/hostname.
"""
HOSTNAME_FILE = '/etc/hostname'
def update_current_and_permanent_hostname(self):
self.update_permanent_hostname()
self.update_current_hostname()
return self.changed
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, '-F', self.HOSTNAME_FILE]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class SystemdStrategy(GenericStrategy):
"""
This is a Systemd hostname manipulation strategy class - it uses
the hostnamectl command.
"""
def get_current_hostname(self):
cmd = ['hostname']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = ['hostnamectl', '--transient', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
cmd = ['hostnamectl', '--static', 'status']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = ['hostnamectl', '--pretty', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
cmd = ['hostnamectl', '--static', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class OpenRCStrategy(GenericStrategy):
"""
This is a Gentoo (OpenRC) Hostname manipulation strategy class - it edits
the /etc/conf.d/hostname file.
"""
HOSTNAME_FILE = '/etc/conf.d/hostname'
def get_permanent_hostname(self):
name = 'UNKNOWN'
try:
try:
f = open(self.HOSTNAME_FILE, 'r')
for line in f:
line = line.strip()
if line.startswith('hostname='):
name = line[10:].strip('"')
break
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
finally:
f.close()
return name
def set_permanent_hostname(self, name):
try:
try:
f = open(self.HOSTNAME_FILE, 'r')
lines = [x.strip() for x in f]
for i, line in enumerate(lines):
if line.startswith('hostname='):
lines[i] = 'hostname="%s"' % name
break
f.close()
f = open(self.HOSTNAME_FILE, 'w')
f.write('\n'.join(lines) + '\n')
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
finally:
f.close()
class OpenBSDStrategy(GenericStrategy):
"""
This is a OpenBSD family Hostname manipulation strategy class - it edits
the /etc/myname file.
"""
HOSTNAME_FILE = '/etc/myname'
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class SolarisStrategy(GenericStrategy):
"""
This is a Solaris11 or later Hostname manipulation strategy class - it
execute hostname command.
"""
def set_current_hostname(self, name):
cmd_option = '-t'
cmd = [self.hostname_cmd, cmd_option, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
fmri = 'svc:/system/identity:node'
pattern = 'config/nodename'
cmd = '/usr/sbin/svccfg -s %s listprop -o value %s' % (fmri, pattern)
rc, out, err = self.module.run_command(cmd, use_unsafe_shell=True)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class FreeBSDStrategy(GenericStrategy):
"""
This is a FreeBSD hostname manipulation strategy class - it edits
the /etc/rc.conf.d/hostname file.
"""
HOSTNAME_FILE = '/etc/rc.conf.d/hostname'
def get_permanent_hostname(self):
name = 'UNKNOWN'
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("hostname=temporarystub\n")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
try:
f = open(self.HOSTNAME_FILE, 'r')
for line in f:
line = line.strip()
if line.startswith('hostname='):
name = line[10:].strip('"')
break
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
finally:
f.close()
return name
def set_permanent_hostname(self, name):
try:
try:
f = open(self.HOSTNAME_FILE, 'r')
lines = [x.strip() for x in f]
for i, line in enumerate(lines):
if line.startswith('hostname='):
lines[i] = 'hostname="%s"' % name
break
f.close()
f = open(self.HOSTNAME_FILE, 'w')
f.write('\n'.join(lines) + '\n')
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
finally:
f.close()
class FedoraHostname(Hostname):
platform = 'Linux'
distribution = 'Fedora'
strategy_class = SystemdStrategy
class SLESHostname(Hostname):
platform = 'Linux'
distribution = 'Sles'
try:
distribution_version = get_distribution_version()
# cast to float may raise ValueError on non SLES, we use float for a little more safety over int
if distribution_version and 10 <= float(distribution_version) <= 12:
strategy_class = SLESStrategy
else:
raise ValueError()
except ValueError:
strategy_class = UnimplementedStrategy
class OpenSUSEHostname(Hostname):
platform = 'Linux'
distribution = 'Opensuse'
strategy_class = SystemdStrategy
class OpenSUSELeapHostname(Hostname):
platform = 'Linux'
distribution = 'Opensuse-leap'
strategy_class = SystemdStrategy
class AsteraHostname(Hostname):
platform = 'Linux'
distribution = '"astralinuxce"'
strategy_class = SystemdStrategy
class ArchHostname(Hostname):
platform = 'Linux'
distribution = 'Arch'
strategy_class = SystemdStrategy
class ArchARMHostname(Hostname):
platform = 'Linux'
distribution = 'Archarm'
strategy_class = SystemdStrategy
class RHELHostname(Hostname):
platform = 'Linux'
distribution = 'Redhat'
strategy_class = RedHatStrategy
class CentOSHostname(Hostname):
platform = 'Linux'
distribution = 'Centos'
strategy_class = RedHatStrategy
class ClearLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Clear-linux-os'
strategy_class = SystemdStrategy
class CloudlinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Cloudlinux'
strategy_class = RedHatStrategy
class CoreosHostname(Hostname):
platform = 'Linux'
distribution = 'Coreos'
strategy_class = SystemdStrategy
class ScientificHostname(Hostname):
platform = 'Linux'
distribution = 'Scientific'
strategy_class = RedHatStrategy
class OracleLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Oracle'
strategy_class = RedHatStrategy
class VirtuozzoLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Virtuozzo'
strategy_class = RedHatStrategy
class AmazonLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Amazon'
strategy_class = RedHatStrategy
class DebianHostname(Hostname):
platform = 'Linux'
distribution = 'Debian'
strategy_class = DebianStrategy
class KylinHostname(Hostname):
platform = 'Linux'
distribution = 'Kylin'
strategy_class = DebianStrategy
class CumulusHostname(Hostname):
platform = 'Linux'
distribution = 'Cumulus-linux'
strategy_class = DebianStrategy
class KaliHostname(Hostname):
platform = 'Linux'
distribution = 'Kali'
strategy_class = DebianStrategy
class UbuntuHostname(Hostname):
platform = 'Linux'
distribution = 'Ubuntu'
strategy_class = DebianStrategy
class LinuxmintHostname(Hostname):
platform = 'Linux'
distribution = 'Linuxmint'
strategy_class = DebianStrategy
class LinaroHostname(Hostname):
platform = 'Linux'
distribution = 'Linaro'
strategy_class = DebianStrategy
class DevuanHostname(Hostname):
platform = 'Linux'
distribution = 'Devuan'
strategy_class = DebianStrategy
class RaspbianHostname(Hostname):
platform = 'Linux'
distribution = 'Raspbian'
strategy_class = DebianStrategy
class GentooHostname(Hostname):
platform = 'Linux'
distribution = 'Gentoo'
strategy_class = OpenRCStrategy
class ALTLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Altlinux'
strategy_class = RedHatStrategy
class AlpineLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Alpine'
strategy_class = AlpineStrategy
class OpenBSDHostname(Hostname):
platform = 'OpenBSD'
distribution = None
strategy_class = OpenBSDStrategy
class SolarisHostname(Hostname):
platform = 'SunOS'
distribution = None
strategy_class = SolarisStrategy
class FreeBSDHostname(Hostname):
platform = 'FreeBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NetBSDHostname(Hostname):
platform = 'NetBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NeonHostname(Hostname):
platform = 'Linux'
distribution = 'Neon'
strategy_class = DebianStrategy
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str', required=True),
use=dict(type='str', choices=STRATS.keys())
),
supports_check_mode=True,
)
hostname = Hostname(module)
name = module.params['name']
current_hostname = hostname.get_current_hostname()
permanent_hostname = hostname.get_permanent_hostname()
changed = hostname.update_current_and_permanent_hostname()
if name != current_hostname:
name_before = current_hostname
elif name != permanent_hostname:
name_before = permanent_hostname
kw = dict(changed=changed, name=name,
ansible_facts=dict(ansible_hostname=name.split('.')[0],
ansible_nodename=name,
ansible_fqdn=socket.getfqdn(),
ansible_domain='.'.join(socket.getfqdn().split('.')[1:])))
if changed:
kw['diff'] = {'after': 'hostname = ' + name + '\n',
'before': 'hostname = ' + name_before + '\n'}
module.exit_json(**kw)
if __name__ == '__main__':
main()
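The module above dispatches to a platform-specific `Hostname` subclass through `load_platform_subclass`, matching on the `platform` and `distribution` class attributes. A simplified, self-contained sketch of that dispatch pattern (this is not the real `module_utils` helper; class and method names here are illustrative):

```python
class Hostname:
    platform = 'Generic'
    distribution = None

    @classmethod
    def select(cls, system, distribution=None):
        # Walk direct subclasses, preferring an exact
        # (platform, distribution) match over a platform-only match,
        # and falling back to the generic base class otherwise.
        fallback = cls
        for sub in cls.__subclasses__():
            if sub.platform != system:
                continue
            if sub.distribution == distribution:
                return sub
            if sub.distribution is None:
                fallback = sub
        return fallback

class LinuxHostname(Hostname):
    platform = 'Linux'

class DebianHostname(Hostname):
    platform = 'Linux'
    distribution = 'Debian'

print(Hostname.select('Linux', 'Debian').__name__)
```

This is why the file defines one thin subclass per distribution: the subclass carries only the match keys and a `strategy_class`, while all behavior lives in the strategy objects.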
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,096 |
--all flag with ansible-test coverage throws a traceback
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
`ansible-test coverage` raises a traceback when the `--all` option is used.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
/bin/ansible-test
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
stable 2.9
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run coverage with the command below.
<!--- Paste example playbooks or commands between quotes below -->
```shell
ansible-test coverage html --all
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No traceback should occur.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
Traceback (most recent call last):
File "/home/abehl/work/src/anshul_ansible/ansible/bin/ansible-test", line 28, in <module>
main()
File "/home/abehl/work/src/anshul_ansible/ansible/bin/ansible-test", line 24, in main
cli_main()
File "/home/abehl/work/src/anshul_ansible/ansible/test/lib/ansible_test/_internal/cli.py", line 128, in main
args.func(config)
File "/home/abehl/work/src/anshul_ansible/ansible/test/lib/ansible_test/_internal/cover.py", line 343, in command_coverage_html
output_files = command_coverage_combine(args)
File "/home/abehl/work/src/anshul_ansible/ansible/test/lib/ansible_test/_internal/cover.py", line 73, in command_coverage_combine
paths = _command_coverage_combine_powershell(args) + _command_coverage_combine_python(args)
File "/home/abehl/work/src/anshul_ansible/ansible/test/lib/ansible_test/_internal/cover.py", line 182, in _command_coverage_combine_python
updated.write_file(output_file)
File "/home/abehl/work/src/anshul_ansible/ansible/venv3/lib/python3.7/site-packages/coverage/data.py", line 468, in write_file
self.write_fileobj(fdata)
File "/home/abehl/work/src/anshul_ansible/ansible/venv3/lib/python3.7/site-packages/coverage/data.py", line 461, in write_fileobj
json.dump(file_data, file_obj, separators=(',', ':'))
File "/usr/lib64/python3.7/json/__init__.py", line 179, in dump
for chunk in iterable:
File "/usr/lib64/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib64/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib64/python3.7/json/encoder.py", line 376, in _iterencode_dict
raise TypeError(f'keys must be str, int, float, bool or None, '
TypeError: keys must be str, int, float, bool or None, not tuple
```
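The final `TypeError` in the traceback comes from `json.dump`, which only accepts str, int, float, bool, or None as dictionary keys, while coverage arc data is naturally keyed by `(start_line, end_line)` tuples. A plausible reproduction of the failure mode and one way to make such a structure serializable (this mirrors the error, not the exact data layout inside ansible-test):

```python
import json

# Arc coverage is naturally keyed by (start, end) line-number tuples;
# json.dump rejects tuple keys, which is what the traceback above shows.
arc_data = {'module.py': {(1, 2): None, (2, -1): None}}

try:
    json.dumps(arc_data)
except TypeError as exc:
    print('json refuses tuple keys:', exc)

# One way to make the structure serializable: flatten each arc tuple
# into a two-element list and drop the dummy dict values.
serializable = {
    path: sorted(list(arc) for arc in arcs)
    for path, arcs in arc_data.items()
}
print(json.dumps(serializable))
```

The actual fix landed in the linked PR; the sketch only illustrates why combining coverage files with `--all` can surface this serialization constraint.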
|
https://github.com/ansible/ansible/issues/62096
|
https://github.com/ansible/ansible/pull/62115
|
53aa258d78650317aae09328980801f9d338c0b5
|
6fb1d56fdc022cb6001539ea4bbc87d759093987
| 2019-09-10T19:41:03Z |
python
| 2019-09-11T05:12:38Z |
changelogs/fragments/62096-test-coverage-all.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,096 |
--all flag with ansible-test coverage throws a traceback
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
traceback with --all option with ansible-test coverage
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
/bin/ansible-test
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
stable 2.9
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run coverage with below comand
<!--- Paste example playbooks or commands between quotes below -->
```yaml
ansible-test coverage html --all
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No traceback should occure
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
Traceback (most recent call last):
File "/home/abehl/work/src/anshul_ansible/ansible/bin/ansible-test", line 28, in <module>
main()
File "/home/abehl/work/src/anshul_ansible/ansible/bin/ansible-test", line 24, in main
cli_main()
File "/home/abehl/work/src/anshul_ansible/ansible/test/lib/ansible_test/_internal/cli.py", line 128, in main
args.func(config)
File "/home/abehl/work/src/anshul_ansible/ansible/test/lib/ansible_test/_internal/cover.py", line 343, in command_coverage_html
output_files = command_coverage_combine(args)
File "/home/abehl/work/src/anshul_ansible/ansible/test/lib/ansible_test/_internal/cover.py", line 73, in command_coverage_combine
paths = _command_coverage_combine_powershell(args) + _command_coverage_combine_python(args)
File "/home/abehl/work/src/anshul_ansible/ansible/test/lib/ansible_test/_internal/cover.py", line 182, in _command_coverage_combine_python
updated.write_file(output_file)
File "/home/abehl/work/src/anshul_ansible/ansible/venv3/lib/python3.7/site-packages/coverage/data.py", line 468, in write_file
self.write_fileobj(fdata)
File "/home/abehl/work/src/anshul_ansible/ansible/venv3/lib/python3.7/site-packages/coverage/data.py", line 461, in write_fileobj
json.dump(file_data, file_obj, separators=(',', ':'))
File "/usr/lib64/python3.7/json/__init__.py", line 179, in dump
for chunk in iterable:
File "/usr/lib64/python3.7/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/usr/lib64/python3.7/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/lib64/python3.7/json/encoder.py", line 376, in _iterencode_dict
raise TypeError(f'keys must be str, int, float, bool or None, '
TypeError: keys must be str, int, float, bool or None, not tuple
```
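The traceback comes from coverage's JSON-based writer rejecting non-string dict keys. In the `--all` branch of `_command_coverage_combine_python` in the file below, `sources` holds `(path, line_count)` tuples (built by `_get_coverage_targets`), and those whole tuples end up as filename keys passed to `add_arcs`. A minimal standalone reproduction of that failure mode (the path and line count here are made up):

```python
import json

# `sources` mirrors what _get_coverage_targets() returns: a list of
# (path, line_count) tuples. Using the whole tuple as a dict key
# reproduces the TypeError from the traceback above.
sources = [("/repo/lib/ansible/module_utils/basic.py", 120)]
file_data = dict((source, []) for source in sources)

# coverage's JSON writer ultimately calls json.dump() on this structure:
try:
    json.dumps(file_data)
except TypeError as exc:
    print("TypeError:", exc)

# Keying by source[0] (the path alone) serializes without error:
json.dumps(dict((source[0], []) for source in sources))
```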
|
https://github.com/ansible/ansible/issues/62096
|
https://github.com/ansible/ansible/pull/62115
|
53aa258d78650317aae09328980801f9d338c0b5
|
6fb1d56fdc022cb6001539ea4bbc87d759093987
| 2019-09-10T19:41:03Z |
python
| 2019-09-11T05:12:38Z |
test/lib/ansible_test/_internal/cover.py
|
"""Code coverage utilities."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import re
import time
from xml.etree.ElementTree import (
Comment,
Element,
SubElement,
tostring,
)
from xml.dom import (
minidom,
)
from . import types as t
from .target import (
walk_module_targets,
walk_compile_targets,
walk_powershell_targets,
)
from .util import (
display,
ApplicationError,
common_environment,
ANSIBLE_TEST_DATA_ROOT,
to_text,
make_dirs,
)
from .util_common import (
intercept_command,
ResultType,
write_text_test_results,
write_json_test_results,
)
from .config import (
CoverageConfig,
CoverageReportConfig,
)
from .env import (
get_ansible_version,
)
from .executor import (
Delegate,
install_command_requirements,
)
from .data import (
data_context,
)
COVERAGE_GROUPS = ('command', 'target', 'environment', 'version')
COVERAGE_CONFIG_PATH = os.path.join(ANSIBLE_TEST_DATA_ROOT, 'coveragerc')
COVERAGE_OUTPUT_FILE_NAME = 'coverage'
def command_coverage_combine(args):
"""Patch paths in coverage files and merge into a single file.
:type args: CoverageConfig
:rtype: list[str]
"""
paths = _command_coverage_combine_powershell(args) + _command_coverage_combine_python(args)
for path in paths:
display.info('Generated combined output: %s' % path, verbosity=1)
return paths
def _command_coverage_combine_python(args):
"""
:type args: CoverageConfig
:rtype: list[str]
"""
coverage = initialize_coverage(args)
modules = dict((target.module, target.path) for target in list(walk_module_targets()) if target.path.endswith('.py'))
coverage_dir = ResultType.COVERAGE.path
coverage_files = [os.path.join(coverage_dir, f) for f in os.listdir(coverage_dir)
if '=coverage.' in f and '=python' in f]
counter = 0
sources = _get_coverage_targets(args, walk_compile_targets)
groups = _build_stub_groups(args, sources, lambda line_count: set())
if data_context().content.collection:
collection_search_re = re.compile(r'/%s/' % data_context().content.collection.directory)
collection_sub_re = re.compile(r'^.*?/%s/' % data_context().content.collection.directory)
else:
collection_search_re = None
collection_sub_re = None
for coverage_file in coverage_files:
counter += 1
display.info('[%4d/%4d] %s' % (counter, len(coverage_files), coverage_file), verbosity=2)
original = coverage.CoverageData()
group = get_coverage_group(args, coverage_file)
if group is None:
display.warning('Unexpected name for coverage file: %s' % coverage_file)
continue
if os.path.getsize(coverage_file) == 0:
display.warning('Empty coverage file: %s' % coverage_file)
continue
try:
original.read_file(coverage_file)
except Exception as ex: # pylint: disable=locally-disabled, broad-except
display.error(u'%s' % ex)
continue
for filename in original.measured_files():
arcs = set(original.arcs(filename) or [])
if not arcs:
# This is most likely due to using an unsupported version of coverage.
display.warning('No arcs found for "%s" in coverage file: %s' % (filename, coverage_file))
continue
filename = _sanitise_filename(filename, modules=modules, collection_search_re=collection_search_re,
collection_sub_re=collection_sub_re)
if not filename:
continue
if group not in groups:
groups[group] = {}
arc_data = groups[group]
if filename not in arc_data:
arc_data[filename] = set()
arc_data[filename].update(arcs)
output_files = []
invalid_path_count = 0
invalid_path_chars = 0
coverage_file = os.path.join(ResultType.COVERAGE.path, COVERAGE_OUTPUT_FILE_NAME)
for group in sorted(groups):
arc_data = groups[group]
updated = coverage.CoverageData()
for filename in arc_data:
if not os.path.isfile(filename):
if collection_search_re and collection_search_re.search(filename) and os.path.basename(filename) == '__init__.py':
# the collection loader uses implicit namespace packages, so __init__.py does not need to exist on disk
continue
invalid_path_count += 1
invalid_path_chars += len(filename)
if args.verbosity > 1:
display.warning('Invalid coverage path: %s' % filename)
continue
updated.add_arcs({filename: list(arc_data[filename])})
if args.all:
updated.add_arcs(dict((source, []) for source in sources))
if not args.explain:
output_file = coverage_file + group
updated.write_file(output_file)
output_files.append(output_file)
if invalid_path_count > 0:
display.warning('Ignored %d characters from %d invalid coverage path(s).' % (invalid_path_chars, invalid_path_count))
return sorted(output_files)
def _get_coverage_targets(args, walk_func):
"""
:type args: CoverageConfig
:type walk_func: Func
:rtype: list[tuple[str, int]]
"""
sources = []
if args.all or args.stub:
# excludes symlinks of regular files to avoid reporting on the same file multiple times
# in the future it would be nice to merge any coverage for symlinks into the real files
for target in walk_func(include_symlinks=False):
target_path = os.path.abspath(target.path)
with open(target_path, 'r') as target_fd:
target_lines = len(target_fd.read().splitlines())
sources.append((target_path, target_lines))
sources.sort()
return sources
def _build_stub_groups(args, sources, default_stub_value):
"""
:type args: CoverageConfig
:type sources: List[tuple[str, int]]
:type default_stub_value: Func[int]
:rtype: dict
"""
groups = {}
if args.stub:
stub_group = []
stub_groups = [stub_group]
stub_line_limit = 500000
stub_line_count = 0
for source, source_line_count in sources:
stub_group.append((source, source_line_count))
stub_line_count += source_line_count
if stub_line_count > stub_line_limit:
stub_line_count = 0
stub_group = []
stub_groups.append(stub_group)
for stub_index, stub_group in enumerate(stub_groups):
if not stub_group:
continue
groups['=stub-%02d' % (stub_index + 1)] = dict((source, default_stub_value(line_count))
for source, line_count in stub_group)
return groups
def _sanitise_filename(filename, modules=None, collection_search_re=None, collection_sub_re=None):
"""
:type filename: str
:type modules: dict | None
:type collection_search_re: Pattern | None
:type collection_sub_re: Pattern | None
:rtype: str | None
"""
ansible_path = os.path.abspath('lib/ansible/') + '/'
root_path = data_context().content.root + '/'
if modules is None:
modules = {}
if '/ansible_modlib.zip/ansible/' in filename:
# Rewrite the module_utils path from the remote host to match the controller. Ansible 2.6 and earlier.
new_name = re.sub('^.*/ansible_modlib.zip/ansible/', ansible_path, filename)
display.info('%s -> %s' % (filename, new_name), verbosity=3)
filename = new_name
elif collection_search_re and collection_search_re.search(filename):
new_name = os.path.abspath(collection_sub_re.sub('', filename))
display.info('%s -> %s' % (filename, new_name), verbosity=3)
filename = new_name
elif re.search(r'/ansible_[^/]+_payload\.zip/ansible/', filename):
# Rewrite the module_utils path from the remote host to match the controller. Ansible 2.7 and later.
new_name = re.sub(r'^.*/ansible_[^/]+_payload\.zip/ansible/', ansible_path, filename)
display.info('%s -> %s' % (filename, new_name), verbosity=3)
filename = new_name
elif '/ansible_module_' in filename:
# Rewrite the module path from the remote host to match the controller. Ansible 2.6 and earlier.
module_name = re.sub('^.*/ansible_module_(?P<module>.*).py$', '\\g<module>', filename)
if module_name not in modules:
display.warning('Skipping coverage of unknown module: %s' % module_name)
return None
new_name = os.path.abspath(modules[module_name])
display.info('%s -> %s' % (filename, new_name), verbosity=3)
filename = new_name
elif re.search(r'/ansible_[^/]+_payload(_[^/]+|\.zip)/__main__\.py$', filename):
# Rewrite the module path from the remote host to match the controller. Ansible 2.7 and later.
# AnsiballZ versions using zipimporter will match the `.zip` portion of the regex.
# AnsiballZ versions not using zipimporter will match the `_[^/]+` portion of the regex.
module_name = re.sub(r'^.*/ansible_(?P<module>[^/]+)_payload(_[^/]+|\.zip)/__main__\.py$',
'\\g<module>', filename).rstrip('_')
if module_name not in modules:
display.warning('Skipping coverage of unknown module: %s' % module_name)
return None
new_name = os.path.abspath(modules[module_name])
display.info('%s -> %s' % (filename, new_name), verbosity=3)
filename = new_name
elif re.search('^(/.*?)?/root/ansible/', filename):
# Rewrite the path of code running on a remote host or in a docker container as root.
new_name = re.sub('^(/.*?)?/root/ansible/', root_path, filename)
display.info('%s -> %s' % (filename, new_name), verbosity=3)
filename = new_name
elif '/.ansible/test/tmp/' in filename:
# Rewrite the path of code running from an integration test temporary directory.
new_name = re.sub(r'^.*/\.ansible/test/tmp/[^/]+/', root_path, filename)
display.info('%s -> %s' % (filename, new_name), verbosity=3)
filename = new_name
return filename
def command_coverage_report(args):
"""
:type args: CoverageReportConfig
"""
output_files = command_coverage_combine(args)
for output_file in output_files:
if args.group_by or args.stub:
display.info('>>> Coverage Group: %s' % ' '.join(os.path.basename(output_file).split('=')[1:]))
if output_file.endswith('-powershell'):
display.info(_generate_powershell_output_report(args, output_file))
else:
options = []
if args.show_missing:
options.append('--show-missing')
if args.include:
options.extend(['--include', args.include])
if args.omit:
options.extend(['--omit', args.omit])
run_coverage(args, output_file, 'report', options)
def command_coverage_html(args):
"""
:type args: CoverageConfig
"""
output_files = command_coverage_combine(args)
for output_file in output_files:
if output_file.endswith('-powershell'):
# coverage.py does not support non-Python files so we just skip the local html report.
display.info("Skipping output file %s in html generation" % output_file, verbosity=3)
continue
dir_name = os.path.join(ResultType.REPORTS.path, os.path.basename(output_file))
make_dirs(dir_name)
run_coverage(args, output_file, 'html', ['-i', '-d', dir_name])
display.info('HTML report generated: file:///%s' % os.path.join(dir_name, 'index.html'))
def command_coverage_xml(args):
"""
:type args: CoverageConfig
"""
output_files = command_coverage_combine(args)
for output_file in output_files:
xml_name = '%s.xml' % os.path.basename(output_file)
if output_file.endswith('-powershell'):
report = _generage_powershell_xml(output_file)
rough_string = tostring(report, 'utf-8')
reparsed = minidom.parseString(rough_string)
pretty = reparsed.toprettyxml(indent=' ')
write_text_test_results(ResultType.REPORTS, xml_name, pretty)
else:
xml_path = os.path.join(ResultType.REPORTS.path, xml_name)
make_dirs(ResultType.REPORTS.path)
run_coverage(args, output_file, 'xml', ['-i', '-o', xml_path])
def command_coverage_erase(args):
"""
:type args: CoverageConfig
"""
initialize_coverage(args)
coverage_dir = ResultType.COVERAGE.path
for name in os.listdir(coverage_dir):
if not name.startswith('coverage') and '=coverage.' not in name:
continue
path = os.path.join(coverage_dir, name)
if not args.explain:
os.remove(path)
def initialize_coverage(args):
"""
:type args: CoverageConfig
:rtype: coverage
"""
if args.delegate:
raise Delegate()
if args.requirements:
install_command_requirements(args)
try:
import coverage
except ImportError:
coverage = None
if not coverage:
raise ApplicationError('You must install the "coverage" python module to use this command.')
return coverage
def get_coverage_group(args, coverage_file):
"""
:type args: CoverageConfig
:type coverage_file: str
:rtype: str
"""
parts = os.path.basename(coverage_file).split('=', 4)
if len(parts) != 5 or not parts[4].startswith('coverage.'):
return None
names = dict(
command=parts[0],
target=parts[1],
environment=parts[2],
version=parts[3],
)
group = ''
for part in COVERAGE_GROUPS:
if part in args.group_by:
group += '=%s' % names[part]
return group
def _command_coverage_combine_powershell(args):
"""
:type args: CoverageConfig
:rtype: list[str]
"""
coverage_dir = ResultType.COVERAGE.path
coverage_files = [os.path.join(coverage_dir, f) for f in os.listdir(coverage_dir)
if '=coverage.' in f and '=powershell' in f]
def _default_stub_value(lines):
val = {}
for line in range(lines):
val[line] = 0
return val
counter = 0
sources = _get_coverage_targets(args, walk_powershell_targets)
groups = _build_stub_groups(args, sources, _default_stub_value)
for coverage_file in coverage_files:
counter += 1
display.info('[%4d/%4d] %s' % (counter, len(coverage_files), coverage_file), verbosity=2)
group = get_coverage_group(args, coverage_file)
if group is None:
display.warning('Unexpected name for coverage file: %s' % coverage_file)
continue
if os.path.getsize(coverage_file) == 0:
display.warning('Empty coverage file: %s' % coverage_file)
continue
try:
with open(coverage_file, 'rb') as original_fd:
coverage_run = json.loads(to_text(original_fd.read(), errors='replace'))
except Exception as ex: # pylint: disable=locally-disabled, broad-except
display.error(u'%s' % ex)
continue
for filename, hit_info in coverage_run.items():
if group not in groups:
groups[group] = {}
coverage_data = groups[group]
filename = _sanitise_filename(filename)
if not filename:
continue
if filename not in coverage_data:
coverage_data[filename] = {}
file_coverage = coverage_data[filename]
if not isinstance(hit_info, list):
hit_info = [hit_info]
for hit_entry in hit_info:
if not hit_entry:
continue
line_count = file_coverage.get(hit_entry['Line'], 0) + hit_entry['HitCount']
file_coverage[hit_entry['Line']] = line_count
output_files = []
invalid_path_count = 0
invalid_path_chars = 0
for group in sorted(groups):
coverage_data = groups[group]
for filename in coverage_data:
if not os.path.isfile(filename):
invalid_path_count += 1
invalid_path_chars += len(filename)
if args.verbosity > 1:
display.warning('Invalid coverage path: %s' % filename)
continue
if args.all:
# Add 0 line entries for files not in coverage_data
for source, source_line_count in sources:
if source in coverage_data:
continue
coverage_data[source] = _default_stub_value(source_line_count)
if not args.explain:
output_file = COVERAGE_OUTPUT_FILE_NAME + group + '-powershell'
write_json_test_results(ResultType.COVERAGE, output_file, coverage_data)
output_files.append(os.path.join(ResultType.COVERAGE.path, output_file))
if invalid_path_count > 0:
display.warning(
'Ignored %d characters from %d invalid coverage path(s).' % (invalid_path_chars, invalid_path_count))
return sorted(output_files)
def _generage_powershell_xml(coverage_file):
"""
:type coverage_file: str
:rtype: Element
"""
with open(coverage_file, 'rb') as coverage_fd:
coverage_info = json.loads(to_text(coverage_fd.read()))
content_root = data_context().content.root
is_ansible = data_context().content.is_ansible
packages = {}
for path, results in coverage_info.items():
filename = os.path.splitext(os.path.basename(path))[0]
if filename.startswith('Ansible.ModuleUtils'):
package = 'ansible.module_utils'
elif is_ansible:
package = 'ansible.modules'
else:
rel_path = path[len(content_root) + 1:]
plugin_type = "modules" if rel_path.startswith("plugins/modules") else "module_utils"
package = 'ansible_collections.%splugins.%s' % (data_context().content.collection.prefix, plugin_type)
if package not in packages:
packages[package] = {}
packages[package][path] = results
elem_coverage = Element('coverage')
elem_coverage.append(
Comment(' Generated by ansible-test from the Ansible project: https://www.ansible.com/ '))
elem_coverage.append(
Comment(' Based on https://raw.githubusercontent.com/cobertura/web/master/htdocs/xml/coverage-04.dtd '))
elem_sources = SubElement(elem_coverage, 'sources')
elem_source = SubElement(elem_sources, 'source')
elem_source.text = data_context().content.root
elem_packages = SubElement(elem_coverage, 'packages')
total_lines_hit = 0
total_line_count = 0
for package_name, package_data in packages.items():
lines_hit, line_count = _add_cobertura_package(elem_packages, package_name, package_data)
total_lines_hit += lines_hit
total_line_count += line_count
elem_coverage.attrib.update({
'branch-rate': '0',
'branches-covered': '0',
'branches-valid': '0',
'complexity': '0',
'line-rate': str(round(total_lines_hit / total_line_count, 4)) if total_line_count else "0",
'lines-covered': str(total_line_count),
'lines-valid': str(total_lines_hit),
'timestamp': str(int(time.time())),
'version': get_ansible_version(),
})
return elem_coverage
def _add_cobertura_package(packages, package_name, package_data):
"""
:type packages: SubElement
:type package_name: str
:type package_data: Dict[str, Dict[str, int]]
:rtype: Tuple[int, int]
"""
elem_package = SubElement(packages, 'package')
elem_classes = SubElement(elem_package, 'classes')
total_lines_hit = 0
total_line_count = 0
for path, results in package_data.items():
lines_hit = len([True for hits in results.values() if hits])
line_count = len(results)
total_lines_hit += lines_hit
total_line_count += line_count
elem_class = SubElement(elem_classes, 'class')
class_name = os.path.splitext(os.path.basename(path))[0]
if class_name.startswith("Ansible.ModuleUtils"):
class_name = class_name[20:]
content_root = data_context().content.root
filename = path
if filename.startswith(content_root):
filename = filename[len(content_root) + 1:]
elem_class.attrib.update({
'branch-rate': '0',
'complexity': '0',
'filename': filename,
'line-rate': str(round(lines_hit / line_count, 4)) if line_count else "0",
'name': class_name,
})
SubElement(elem_class, 'methods')
elem_lines = SubElement(elem_class, 'lines')
for number, hits in results.items():
elem_line = SubElement(elem_lines, 'line')
elem_line.attrib.update(
hits=str(hits),
number=str(number),
)
elem_package.attrib.update({
'branch-rate': '0',
'complexity': '0',
'line-rate': str(round(total_lines_hit / total_line_count, 4)) if total_line_count else "0",
'name': package_name,
})
return total_lines_hit, total_line_count
def _generate_powershell_output_report(args, coverage_file):
"""
:type args: CoverageReportConfig
:type coverage_file: str
:rtype: str
"""
with open(coverage_file, 'rb') as coverage_fd:
coverage_info = json.loads(to_text(coverage_fd.read()))
root_path = data_context().content.root + '/'
name_padding = 7
cover_padding = 8
file_report = []
total_stmts = 0
total_miss = 0
for filename in sorted(coverage_info.keys()):
hit_info = coverage_info[filename]
if filename.startswith(root_path):
filename = filename[len(root_path):]
if args.omit and filename in args.omit:
continue
if args.include and filename not in args.include:
continue
stmts = len(hit_info)
miss = len([c for c in hit_info.values() if c == 0])
name_padding = max(name_padding, len(filename) + 3)
total_stmts += stmts
total_miss += miss
cover = "{0}%".format(int((stmts - miss) / stmts * 100))
missing = []
current_missing = None
sorted_lines = sorted([int(x) for x in hit_info.keys()])
for idx, line in enumerate(sorted_lines):
hit = hit_info[str(line)]
if hit == 0 and current_missing is None:
current_missing = line
elif hit != 0 and current_missing is not None:
end_line = sorted_lines[idx - 1]
if current_missing == end_line:
missing.append(str(current_missing))
else:
missing.append('%s-%s' % (current_missing, end_line))
current_missing = None
if current_missing is not None:
end_line = sorted_lines[-1]
if current_missing == end_line:
missing.append(str(current_missing))
else:
missing.append('%s-%s' % (current_missing, end_line))
file_report.append({'name': filename, 'stmts': stmts, 'miss': miss, 'cover': cover, 'missing': missing})
if total_stmts == 0:
return ''
total_percent = '{0}%'.format(int((total_stmts - total_miss) / total_stmts * 100))
stmts_padding = max(8, len(str(total_stmts)))
miss_padding = max(7, len(str(total_miss)))
line_length = name_padding + stmts_padding + miss_padding + cover_padding
header = 'Name'.ljust(name_padding) + 'Stmts'.rjust(stmts_padding) + 'Miss'.rjust(miss_padding) + \
'Cover'.rjust(cover_padding)
if args.show_missing:
header += 'Lines Missing'.rjust(16)
line_length += 16
line_break = '-' * line_length
lines = ['%s%s%s%s%s' % (f['name'].ljust(name_padding), str(f['stmts']).rjust(stmts_padding),
str(f['miss']).rjust(miss_padding), f['cover'].rjust(cover_padding),
' ' + ', '.join(f['missing']) if args.show_missing else '')
for f in file_report]
totals = 'TOTAL'.ljust(name_padding) + str(total_stmts).rjust(stmts_padding) + \
str(total_miss).rjust(miss_padding) + total_percent.rjust(cover_padding)
report = '{0}\n{1}\n{2}\n{1}\n{3}'.format(header, line_break, "\n".join(lines), totals)
return report
def run_coverage(args, output_file, command, cmd): # type: (CoverageConfig, str, str, t.List[str]) -> None
"""Run the coverage cli tool with the specified options."""
env = common_environment()
env.update(dict(COVERAGE_FILE=output_file))
cmd = ['python', '-m', 'coverage', command, '--rcfile', COVERAGE_CONFIG_PATH] + cmd
intercept_command(args, target_name='coverage', env=env, cmd=cmd, disable_coverage=True)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,773 |
junos resource modules are documented with parameter `commands` but use the return value `xml`
|
##### SUMMARY
In the module documentation the return values are set to `before`, `after`, and `commands` for all resource modules. At some point this changed for the junos resource modules only, and only for the `commands` key: it was renamed to `xml`. I think this *should* be `commands`, returning the XML payload under a variable named `commands`.
this file can be found here: https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/network/junos/config/l2_interfaces/l2_interfaces.py#L80
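A minimal sketch of the change being suggested (the helper name and arguments here are illustrative, not the actual patch): in the result dict that `execute_module()` builds in the junos config module_utils, the XML payload would simply be stored under the documented key.

```python
def build_result(config_xmls, before, after, changed):
    # Illustrative sketch only: store the XML payload under the documented
    # 'commands' key instead of the undocumented 'xml' key.
    result = {'changed': changed, 'before': before}
    result['commands'] = config_xmls  # was: result['xml'] = config_xmls
    if changed:
        result['after'] = after
    return result

res = build_result(['<nc:interfaces>...</nc:interfaces>'], [], [{'name': 'lo0'}], True)
print('commands' in res, 'xml' in res)  # → True False
```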
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
junos_l3_interfaces
junos_l2_interfaces
probably all junos resource modules
##### ANSIBLE VERSION
```paste below
latest dev
```
##### CONFIGURATION
```paste below
DEFAULT_HOST_LIST(/home/student1/.ansible.cfg) = [u'/home/student1/networking-workshop/lab_inventory/hosts']
DEFAULT_STDOUT_CALLBACK(/home/student1/.ansible.cfg) = yaml
DEFAULT_TIMEOUT(/home/student1/.ansible.cfg) = 60
DEPRECATION_WARNINGS(/home/student1/.ansible.cfg) = False
HOST_KEY_CHECKING(/home/student1/.ansible.cfg) = False
PERSISTENT_COMMAND_TIMEOUT(/home/student1/.ansible.cfg) = 60
PERSISTENT_CONNECT_TIMEOUT(/home/student1/.ansible.cfg) = 60
RETRY_FILES_ENABLED(/home/student1/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
RHEL 7.6
##### STEPS TO REPRODUCE
```yaml
[student1@ansible ~]$ cat junos.yml
---
- hosts: rtr3
gather_facts: false
tasks:
- name: grab info
junos_facts:
gather_subset: min
gather_network_resources: l3_interfaces
- debug:
var: ansible_network_resources
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
the XML payload would be returned under the documented `commands` key rather than the undocumented `xml` key, like this
```
TASK [ensure that the IP address information is accurate] ***********************************************************************
[WARNING]: Platform linux on host rtr3 is using the discovered Python interpreter at /usr/bin/python, but future installation
of another Python interpreter could change this. See
https://docs.ansible.com/ansible/devel/reference_appendices/interpreter_discovery.html for more information.
changed: [rtr3] => changed=true
after:
- name: lo0
unit: '0'
ansible_facts:
discovered_interpreter_python: /usr/bin/python
before:
- ipv4:
- address: 10.10.10.1/24
ipv6:
- address: fc00::100/64
- address: fc00::101/64
name: lo0
unit: '0'
commands:
- <nc:interfaces xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"><nc:interface><nc:name>lo0</nc:name><nc:unit><nc:name>0</nc:name><nc:family><nc:inet><nc:address delete="delete"/></nc:inet><nc:inet6><nc:address delete="delete"/></nc:inet6></nc:family></nc:unit></nc:interface></nc:interfaces>
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [ensure that the IP address information is accurate] ***********************************************************************
[WARNING]: Platform linux on host rtr3 is using the discovered Python interpreter at /usr/bin/python, but future installation
of another Python interpreter could change this. See
https://docs.ansible.com/ansible/devel/reference_appendices/interpreter_discovery.html for more information.
changed: [rtr3] => changed=true
after:
- name: lo0
unit: '0'
ansible_facts:
discovered_interpreter_python: /usr/bin/python
before:
- ipv4:
- address: 10.10.10.1/24
ipv6:
- address: fc00::100/64
- address: fc00::101/64
name: lo0
unit: '0'
xml:
- <nc:interfaces xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"><nc:interface><nc:name>lo0</nc:name><nc:unit><nc:name>0</nc:name><nc:family><nc:inet><nc:address delete="delete"/></nc:inet><nc:inet6><nc:address delete="delete"/></nc:inet6></nc:family></nc:unit></nc:interface></nc:interfaces>
```
|
https://github.com/ansible/ansible/issues/61773
|
https://github.com/ansible/ansible/pull/62041
|
6fb1d56fdc022cb6001539ea4bbc87d759093987
|
ff53ca76b83d151fb05ba0c69def6089dc893135
| 2019-09-04T14:10:24Z |
python
| 2019-09-11T05:36:08Z |
lib/ansible/module_utils/network/junos/config/interfaces/interfaces.py
|
#
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""
The junos_interfaces class
It is in this file where the current configuration (as dict)
is compared to the provided configuration (as dict) and the command set
necessary to bring the current configuration to it's desired end-state is
created
"""
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.network.common.utils import to_list
from ansible.module_utils.network.common.cfg.base import ConfigBase
from ansible.module_utils.network.junos.junos import locked_config, load_config, commit_configuration, discard_changes, tostring
from ansible.module_utils.network.junos.facts.facts import Facts
from ansible.module_utils.network.common.netconf import build_root_xml_node, build_child_xml_node
class Interfaces(ConfigBase):
"""
The junos_interfaces class
"""
gather_subset = [
'!all',
'!min',
]
gather_network_resources = [
'interfaces',
]
def __init__(self, module):
super(Interfaces, self).__init__(module)
def get_interfaces_facts(self):
""" Get the 'facts' (the current configuration)
:rtype: A dictionary
:returns: The current configuration as a dictionary
"""
facts, _warnings = Facts(self._module).get_facts(self.gather_subset, self.gather_network_resources)
interfaces_facts = facts['ansible_network_resources'].get('interfaces')
if not interfaces_facts:
return []
return interfaces_facts
def execute_module(self):
""" Execute the module
:rtype: A dictionary
:returns: The result from module execution
"""
result = {'changed': False}
existing_interfaces_facts = self.get_interfaces_facts()
config_xmls = self.set_config(existing_interfaces_facts)
with locked_config(self._module):
for config_xml in to_list(config_xmls):
diff = load_config(self._module, config_xml, [])
commit = not self._module.check_mode
if diff:
if commit:
commit_configuration(self._module)
else:
discard_changes(self._module)
result['changed'] = True
if self._module._diff:
result['diff'] = {'prepared': diff}
result['xml'] = config_xmls
changed_interfaces_facts = self.get_interfaces_facts()
result['before'] = existing_interfaces_facts
if result['changed']:
result['after'] = changed_interfaces_facts
return result
def set_config(self, existing_interfaces_facts):
""" Collect the configuration from the args passed to the module,
collect the current configuration (as a dict from facts)
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
want = self._module.params['config']
have = existing_interfaces_facts
resp = self.set_state(want, have)
return to_list(resp)
def set_state(self, want, have):
""" Select the appropriate function based on the state provided
:param want: the desired configuration as a dictionary
:param have: the current configuration as a dictionary
:rtype: A list
:returns: the list xml configuration necessary to migrate the current configuration
to the desired configuration
"""
root = build_root_xml_node('interfaces')
state = self._module.params['state']
if state == 'overridden':
config_xmls = self._state_overridden(want, have)
elif state == 'deleted':
config_xmls = self._state_deleted(want, have)
elif state == 'merged':
config_xmls = self._state_merged(want, have)
elif state == 'replaced':
config_xmls = self._state_replaced(want, have)
for xml in config_xmls:
root.append(xml)
return tostring(root)
def _state_replaced(self, want, have):
""" The xml configuration generator when state is replaced
:rtype: A list
:returns: the xml configuration necessary to migrate the current configuration
to the desired configuration
"""
intf_xml = []
intf_xml.extend(self._state_deleted(want, have))
intf_xml.extend(self._state_merged(want, have))
return intf_xml
def _state_overridden(self, want, have):
""" The xml configuration generator when state is overridden
:rtype: A list
:returns: the xml configuration necessary to migrate the current configuration
to the desired configuration
"""
interface_xmls_obj = []
# replace interface config with data in want
interface_xmls_obj.extend(self._state_replaced(want, have))
# delete interface config if interface in have not present in want
delete_obj = []
for have_obj in have:
for want_obj in want:
if have_obj['name'] == want_obj['name']:
break
else:
delete_obj.append(have_obj)
if delete_obj:
interface_xmls_obj.extend(self._state_deleted(delete_obj, have))
return interface_xmls_obj
def _state_merged(self, want, have):
""" The xml configuration generator when state is merged
:rtype: A list
:returns: the xml configuration necessary to merge the provided into
the current configuration
"""
intf_xml = []
for config in want:
intf = build_root_xml_node('interface')
build_child_xml_node(intf, 'name', config['name'])
intf_fields = ['description', 'speed']
if not config['name'].startswith('fxp'):
intf_fields.append('mtu')
for field in intf_fields:
if config.get(field):
build_child_xml_node(intf, field, config[field])
if config.get('duplex'):
build_child_xml_node(intf, 'link-mode', config['duplex'])
if config.get('enable') is False:
build_child_xml_node(intf, 'disable')
holdtime = config.get('hold_time')
if holdtime:
holdtime_ele = build_child_xml_node(intf, 'hold-time')
for holdtime_field in ['up', 'down']:
build_child_xml_node(holdtime_ele, holdtime_field, holdtime.get(holdtime_field, ''))
intf_xml.append(intf)
return intf_xml
def _state_deleted(self, want, have):
""" The xml configuration generator when state is deleted
:rtype: A list
:returns: the xml configuration necessary to remove the current configuration
of the provided objects
"""
intf_xml = []
intf_obj = want
if not intf_obj:
# delete base interfaces attribute from all the existing interface
intf_obj = have
for config in intf_obj:
intf = build_root_xml_node('interface')
build_child_xml_node(intf, 'name', config['name'])
intf_fields = ['description']
if not config['name'].startswith('lo'):
intf_fields.append('speed')
if not any([config['name'].startswith('fxp'), config['name'].startswith('lo')]):
intf_fields.append('mtu')
for field in intf_fields:
build_child_xml_node(intf, field, None, {'delete': 'delete'})
if not config['name'].startswith('lo'):
build_child_xml_node(intf, 'link-mode', None, {'delete': 'delete'})
build_child_xml_node(intf, 'disable', None, {'delete': 'delete'})
holdtime_ele = build_child_xml_node(intf, 'hold-time')
for holdtime_field in ['up', 'down']:
build_child_xml_node(holdtime_ele, holdtime_field, None, {'delete': 'delete'})
intf_xml.append(intf)
return intf_xml
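As an illustration, the `_state_merged` path above can be sketched standalone with `xml.etree.ElementTree`; the `build_root_xml_node`/`build_child_xml_node` helpers are assumed to be thin wrappers like the ones below:

```python
import xml.etree.ElementTree as ET

def build_root_xml_node(tag):
    # assumed stand-in for the ansible netconf helper
    return ET.Element(tag)

def build_child_xml_node(parent, tag, text=None, attrib=None):
    # assumed stand-in: child node with optional text and attributes
    node = ET.SubElement(parent, tag, attrib or {})
    if text is not None:
        node.text = str(text)
    return node

def state_merged(want):
    # One <interface> node per desired config, mirroring _state_merged above.
    intf_xml = []
    for config in want:
        intf = build_root_xml_node('interface')
        build_child_xml_node(intf, 'name', config['name'])
        for field in ('description', 'speed', 'mtu'):
            if config.get(field):
                build_child_xml_node(intf, field, config[field])
        if config.get('enable') is False:
            build_child_xml_node(intf, 'disable')
        intf_xml.append(intf)
    return intf_xml

nodes = state_merged([{'name': 'ge-0/0/1', 'description': 'uplink', 'mtu': 9000}])
print(ET.tostring(nodes[0]).decode())
# → <interface><name>ge-0/0/1</name><description>uplink</description><mtu>9000</mtu></interface>
```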
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,773 |
junos resource modules are documented with parameter `commands` but use the return value `xml`
|
##### SUMMARY
In the module documentation the return values are set to `before`, `after`, and `commands` for all resource modules. At some point this changed for the junos resource modules only, and only for the `commands` value: it was renamed to `xml`. I think this *should* be *commands*, returning the XML payload under the variable named *commands*.
This file can be found here: https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/network/junos/config/l2_interfaces/l2_interfaces.py#L80
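A minimal sketch of the requested change (hypothetical helper, not the real module code; the actual handling lives in `execute_module`): store the generated payload under the documented `commands` key instead of `xml`:

```python
def finalize_result(config_xmls, changed):
    # Hypothetical mirror of execute_module's result handling: the generated
    # XML payload goes under the documented 'commands' key, not 'xml'.
    result = {'changed': changed}
    result['commands'] = config_xmls  # previously: result['xml'] = config_xmls
    return result

print(finalize_result(['<nc:interfaces/>'], True))
```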
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
junos_l3_interfaces
junos_l2_interfaces
probably all junos resource modules
##### ANSIBLE VERSION
```paste below
latest dev
```
##### CONFIGURATION
```paste below
DEFAULT_HOST_LIST(/home/student1/.ansible.cfg) = [u'/home/student1/networking-workshop/lab_inventory/hosts']
DEFAULT_STDOUT_CALLBACK(/home/student1/.ansible.cfg) = yaml
DEFAULT_TIMEOUT(/home/student1/.ansible.cfg) = 60
DEPRECATION_WARNINGS(/home/student1/.ansible.cfg) = False
HOST_KEY_CHECKING(/home/student1/.ansible.cfg) = False
PERSISTENT_COMMAND_TIMEOUT(/home/student1/.ansible.cfg) = 60
PERSISTENT_CONNECT_TIMEOUT(/home/student1/.ansible.cfg) = 60
RETRY_FILES_ENABLED(/home/student1/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
RHEL 7.6
##### STEPS TO REPRODUCE
```yaml
[student1@ansible ~]$ cat junos.yml
---
- hosts: rtr3
gather_facts: false
tasks:
- name: grab info
junos_facts:
gather_subset: min
gather_network_resources: l3_interfaces
- debug:
var: ansible_network_resources
```
##### EXPECTED RESULTS
The payload would be returned under the documented variable `commands` rather than the undocumented `xml`, like this:
```
TASK [ensure that the IP address information is accurate] ***********************************************************************
[WARNING]: Platform linux on host rtr3 is using the discovered Python interpreter at /usr/bin/python, but future installation
of another Python interpreter could change this. See
https://docs.ansible.com/ansible/devel/reference_appendices/interpreter_discovery.html for more information.
changed: [rtr3] => changed=true
after:
- name: lo0
unit: '0'
ansible_facts:
discovered_interpreter_python: /usr/bin/python
before:
- ipv4:
- address: 10.10.10.1/24
ipv6:
- address: fc00::100/64
- address: fc00::101/64
name: lo0
unit: '0'
commands:
- <nc:interfaces xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"><nc:interface><nc:name>lo0</nc:name><nc:unit><nc:name>0</nc:name><nc:family><nc:inet><nc:address delete="delete"/></nc:inet><nc:inet6><nc:address delete="delete"/></nc:inet6></nc:family></nc:unit></nc:interface></nc:interfaces>
```
##### ACTUAL RESULTS
```paste below
TASK [ensure that the IP address information is accurate] ***********************************************************************
[WARNING]: Platform linux on host rtr3 is using the discovered Python interpreter at /usr/bin/python, but future installation
of another Python interpreter could change this. See
https://docs.ansible.com/ansible/devel/reference_appendices/interpreter_discovery.html for more information.
changed: [rtr3] => changed=true
after:
- name: lo0
unit: '0'
ansible_facts:
discovered_interpreter_python: /usr/bin/python
before:
- ipv4:
- address: 10.10.10.1/24
ipv6:
- address: fc00::100/64
- address: fc00::101/64
name: lo0
unit: '0'
xml:
- <nc:interfaces xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"><nc:interface><nc:name>lo0</nc:name><nc:unit><nc:name>0</nc:name><nc:family><nc:inet><nc:address delete="delete"/></nc:inet><nc:inet6><nc:address delete="delete"/></nc:inet6></nc:family></nc:unit></nc:interface></nc:interfaces>
```
|
https://github.com/ansible/ansible/issues/61773
|
https://github.com/ansible/ansible/pull/62041
|
6fb1d56fdc022cb6001539ea4bbc87d759093987
|
ff53ca76b83d151fb05ba0c69def6089dc893135
| 2019-09-04T14:10:24Z |
python
| 2019-09-11T05:36:08Z |
lib/ansible/module_utils/network/junos/config/l2_interfaces/l2_interfaces.py
|
#
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""
The junos_l2_interfaces class
It is in this file where the current configuration (as dict)
is compared to the provided configuration (as dict) and the command set
necessary to bring the current configuration to its desired end-state is
created
"""
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.network.common.utils import to_list
from ansible.module_utils.network.common.cfg.base import ConfigBase
from ansible.module_utils.network.junos.junos import locked_config, load_config, commit_configuration, discard_changes, tostring
from ansible.module_utils.network.junos.facts.facts import Facts
from ansible.module_utils.network.junos.utils.utils import get_resource_config
from ansible.module_utils.network.common.netconf import build_root_xml_node, build_child_xml_node, build_subtree
class L2_interfaces(ConfigBase):
"""
The junos_l2_interfaces class
"""
gather_subset = [
'!all',
'!min',
]
gather_network_resources = [
'l2_interfaces',
]
def __init__(self, module):
super(L2_interfaces, self).__init__(module)
def get_l2_interfaces_facts(self):
""" Get the 'facts' (the current configuration)
:rtype: A dictionary
:returns: The current configuration as a dictionary
"""
facts, _warnings = Facts(self._module).get_facts(self.gather_subset, self.gather_network_resources)
l2_interfaces_facts = facts['ansible_network_resources'].get('l2_interfaces')
if not l2_interfaces_facts:
return []
return l2_interfaces_facts
def execute_module(self):
""" Execute the module
:rtype: A dictionary
:returns: The result from module execution
"""
result = {'changed': False}
existing_l2_interfaces_facts = self.get_l2_interfaces_facts()
config_xmls = self.set_config(existing_l2_interfaces_facts)
with locked_config(self._module):
for config_xml in to_list(config_xmls):
diff = load_config(self._module, config_xml, [])
commit = not self._module.check_mode
if diff:
if commit:
commit_configuration(self._module)
else:
discard_changes(self._module)
result['changed'] = True
if self._module._diff:
result['diff'] = {'prepared': diff}
result['xml'] = config_xmls
changed_l2_interfaces_facts = self.get_l2_interfaces_facts()
result['before'] = existing_l2_interfaces_facts
if result['changed']:
result['after'] = changed_l2_interfaces_facts
return result
def set_config(self, existing_l2_interfaces_facts):
""" Collect the configuration from the args passed to the module,
collect the current configuration (as a dict from facts)
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
want = self._module.params['config']
have = existing_l2_interfaces_facts
resp = self.set_state(want, have)
return to_list(resp)
def set_state(self, want, have):
""" Select the appropriate function based on the state provided
:param want: the desired configuration as a dictionary
:param have: the current configuration as a dictionary
:rtype: A list
:returns: the list of xml configurations necessary to migrate the current configuration
to the desired configuration
"""
root = build_root_xml_node('interfaces')
state = self._module.params['state']
if state == 'overridden':
config_xmls = self._state_overridden(want, have)
elif state == 'deleted':
config_xmls = self._state_deleted(want, have)
elif state == 'merged':
config_xmls = self._state_merged(want, have)
elif state == 'replaced':
config_xmls = self._state_replaced(want, have)
for xml in config_xmls:
root.append(xml)
return tostring(root)
def _state_replaced(self, want, have):
""" The xml configuration generator when state is replaced
:rtype: A list
:returns: the xml configuration necessary to migrate the current configuration
to the desired configuration
"""
l2_intf_xml = []
l2_intf_xml.extend(self._state_deleted(want, have))
l2_intf_xml.extend(self._state_merged(want, have))
return l2_intf_xml
def _state_overridden(self, want, have):
""" The xml configuration generator when state is overridden
:rtype: A list
:returns: the xml configuration necessary to migrate the current configuration
to the desired configuration
"""
l2_interface_xmls_obj = []
# replace interface config with data in want
l2_interface_xmls_obj.extend(self._state_replaced(want, have))
# delete interface config if interface in have not present in want
delete_obj = []
for have_obj in have:
for want_obj in want:
if have_obj['name'] == want_obj['name']:
break
else:
delete_obj.append(have_obj)
if delete_obj:
l2_interface_xmls_obj.extend(self._state_deleted(delete_obj, have))
return l2_interface_xmls_obj
def _state_merged(self, want, have):
""" The xml configuration generator when state is merged
:rtype: A list
:returns: the xml configuration necessary to merge the provided into
the current configuration
"""
intf_xml = []
for config in want:
enhanced_layer = True
if config.get('enhanced_layer') is False:
enhanced_layer = False
mode = 'interface-mode' if enhanced_layer else 'port-mode'
intf = build_root_xml_node('interface')
build_child_xml_node(intf, 'name', config['name'])
unit_node = build_child_xml_node(intf, 'unit')
unit = config['unit'] if config['unit'] else '0'
build_child_xml_node(unit_node, 'name', unit)
eth_node = build_subtree(unit_node, 'family/ethernet-switching')
if config.get('access'):
vlan = config['access'].get('vlan')
if vlan:
build_child_xml_node(eth_node, mode, 'access')
vlan_node = build_child_xml_node(eth_node, 'vlan')
build_child_xml_node(vlan_node, 'members', vlan)
intf_xml.append(intf)
elif config.get('trunk'):
allowed_vlans = config['trunk'].get('allowed_vlans')
native_vlan = config['trunk'].get('native_vlan')
if allowed_vlans:
build_child_xml_node(eth_node, mode, 'trunk')
vlan_node = build_child_xml_node(eth_node, 'vlan')
for vlan in allowed_vlans:
build_child_xml_node(vlan_node, 'members', vlan)
if native_vlan:
build_child_xml_node(intf, 'native-vlan-id', native_vlan)
if allowed_vlans or native_vlan:
intf_xml.append(intf)
return intf_xml
def _state_deleted(self, want, have):
""" The xml configuration generator when state is deleted
:rtype: A list
:returns: the xml configuration necessary to remove the current configuration
of the provided objects
"""
l2_intf_xml = []
l2_intf_obj = want
config_filter = """
<configuration>
<interfaces/>
</configuration>
"""
data = get_resource_config(self._connection, config_filter=config_filter)
if not l2_intf_obj:
# delete l2 interfaces attribute from all the existing interface having l2 config
l2_intf_obj = have
for config in l2_intf_obj:
name = config['name']
enhanced_layer = True
l2_mode = data.xpath("configuration/interfaces/interface[name='%s']/unit/family/ethernet-switching/interface-mode" % name)
if not len(l2_mode):
l2_mode = data.xpath("configuration/interfaces/interface[name='%s']/unit/family/ethernet-switching/port-mode" % name)
enhanced_layer = False
if len(l2_mode):
mode = 'interface-mode' if enhanced_layer else 'port-mode'
intf = build_root_xml_node('interface')
build_child_xml_node(intf, 'name', name)
unit_node = build_child_xml_node(intf, 'unit')
unit = config['unit'] if config['unit'] else '0'
build_child_xml_node(unit_node, 'name', unit)
eth_node = build_subtree(unit_node, 'family/ethernet-switching')
build_child_xml_node(eth_node, mode, None, {'delete': 'delete'})
build_child_xml_node(eth_node, 'vlan', None, {'delete': 'delete'})
build_child_xml_node(intf, 'native-vlan-id', None, {'delete': 'delete'})
l2_intf_xml.append(intf)
return l2_intf_xml
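For illustration, the access-VLAN branch of `_state_merged` above produces a subtree like this sketch (using `xml.etree.ElementTree` in place of the netconf helpers, whose exact behavior is assumed):

```python
import xml.etree.ElementTree as ET

def child(parent, tag, text=None, attrib=None):
    # assumed stand-in for build_child_xml_node
    node = ET.SubElement(parent, tag, attrib or {})
    if text is not None:
        node.text = str(text)
    return node

intf = ET.Element('interface')
child(intf, 'name', 'ge-0/0/3')
unit_node = child(intf, 'unit')
child(unit_node, 'name', '0')
eth_node = child(child(unit_node, 'family'), 'ethernet-switching')
child(eth_node, 'interface-mode', 'access')  # 'port-mode' when enhanced_layer is False
vlan_node = child(eth_node, 'vlan')
child(vlan_node, 'members', 'vlan100')
print(ET.tostring(intf).decode())
```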
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,773 |
junos resource modules are documented with parameter `commands` but use the return value `xml`
|
##### SUMMARY
In the module documentation the return values are set to `before`, `after`, and `commands` for all resource modules. At some point this changed for the junos resource modules only, and only for the `commands` value: it was renamed to `xml`. I think this *should* be *commands*, returning the XML payload under the variable named *commands*.
This file can be found here: https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/network/junos/config/l2_interfaces/l2_interfaces.py#L80
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
junos_l3_interfaces
junos_l2_interfaces
probably all junos resource modules
##### ANSIBLE VERSION
```paste below
latest dev
```
##### CONFIGURATION
```paste below
DEFAULT_HOST_LIST(/home/student1/.ansible.cfg) = [u'/home/student1/networking-workshop/lab_inventory/hosts']
DEFAULT_STDOUT_CALLBACK(/home/student1/.ansible.cfg) = yaml
DEFAULT_TIMEOUT(/home/student1/.ansible.cfg) = 60
DEPRECATION_WARNINGS(/home/student1/.ansible.cfg) = False
HOST_KEY_CHECKING(/home/student1/.ansible.cfg) = False
PERSISTENT_COMMAND_TIMEOUT(/home/student1/.ansible.cfg) = 60
PERSISTENT_CONNECT_TIMEOUT(/home/student1/.ansible.cfg) = 60
RETRY_FILES_ENABLED(/home/student1/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
RHEL 7.6
##### STEPS TO REPRODUCE
```yaml
[student1@ansible ~]$ cat junos.yml
---
- hosts: rtr3
gather_facts: false
tasks:
- name: grab info
junos_facts:
gather_subset: min
gather_network_resources: l3_interfaces
- debug:
var: ansible_network_resources
```
##### EXPECTED RESULTS
The payload would be returned under the documented variable `commands` rather than the undocumented `xml`, like this:
```
TASK [ensure that the IP address information is accurate] ***********************************************************************
[WARNING]: Platform linux on host rtr3 is using the discovered Python interpreter at /usr/bin/python, but future installation
of another Python interpreter could change this. See
https://docs.ansible.com/ansible/devel/reference_appendices/interpreter_discovery.html for more information.
changed: [rtr3] => changed=true
after:
- name: lo0
unit: '0'
ansible_facts:
discovered_interpreter_python: /usr/bin/python
before:
- ipv4:
- address: 10.10.10.1/24
ipv6:
- address: fc00::100/64
- address: fc00::101/64
name: lo0
unit: '0'
commands:
- <nc:interfaces xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"><nc:interface><nc:name>lo0</nc:name><nc:unit><nc:name>0</nc:name><nc:family><nc:inet><nc:address delete="delete"/></nc:inet><nc:inet6><nc:address delete="delete"/></nc:inet6></nc:family></nc:unit></nc:interface></nc:interfaces>
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [ensure that the IP address information is accurate] ***********************************************************************
[WARNING]: Platform linux on host rtr3 is using the discovered Python interpreter at /usr/bin/python, but future installation
of another Python interpreter could change this. See
https://docs.ansible.com/ansible/devel/reference_appendices/interpreter_discovery.html for more information.
changed: [rtr3] => changed=true
after:
- name: lo0
unit: '0'
ansible_facts:
discovered_interpreter_python: /usr/bin/python
before:
- ipv4:
- address: 10.10.10.1/24
ipv6:
- address: fc00::100/64
- address: fc00::101/64
name: lo0
unit: '0'
xml:
- <nc:interfaces xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"><nc:interface><nc:name>lo0</nc:name><nc:unit><nc:name>0</nc:name><nc:family><nc:inet><nc:address delete="delete"/></nc:inet><nc:inet6><nc:address delete="delete"/></nc:inet6></nc:family></nc:unit></nc:interface></nc:interfaces>
```
|
https://github.com/ansible/ansible/issues/61773
|
https://github.com/ansible/ansible/pull/62041
|
6fb1d56fdc022cb6001539ea4bbc87d759093987
|
ff53ca76b83d151fb05ba0c69def6089dc893135
| 2019-09-04T14:10:24Z |
python
| 2019-09-11T05:36:08Z |
lib/ansible/module_utils/network/junos/config/l3_interfaces/l3_interfaces.py
|
#
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""
The junos_l3_interfaces class
It is in this file where the current configuration (as dict)
is compared to the provided configuration (as dict) and the command set
necessary to bring the current configuration to its desired end-state is
created
"""
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.network.common.cfg.base import ConfigBase
from ansible.module_utils.network.common.utils import to_list
from ansible.module_utils.network.junos.facts.facts import Facts
from ansible.module_utils.network.junos.junos import (
locked_config, load_config, commit_configuration, discard_changes,
tostring)
from ansible.module_utils.network.common.netconf import (build_root_xml_node,
build_child_xml_node)
class L3_interfaces(ConfigBase):
"""
The junos_l3_interfaces class
"""
gather_subset = [
'!all',
'!min',
]
gather_network_resources = [
'l3_interfaces',
]
def __init__(self, module):
super(L3_interfaces, self).__init__(module)
def get_l3_interfaces_facts(self):
""" Get the 'facts' (the current configuration)
:rtype: A dictionary
:returns: The current configuration as a dictionary
"""
facts, _warnings = Facts(self._module).get_facts(
self.gather_subset, self.gather_network_resources)
l3_interfaces_facts = facts['ansible_network_resources'].get(
'l3_interfaces')
if not l3_interfaces_facts:
return []
return l3_interfaces_facts
def execute_module(self):
""" Execute the module
:rtype: A dictionary
:returns: The result from module execution
"""
result = {'changed': False}
warnings = list()
existing_interfaces_facts = self.get_l3_interfaces_facts()
config_xmls = self.set_config(existing_interfaces_facts)
with locked_config(self._module):
for config_xml in to_list(config_xmls):
diff = load_config(self._module, config_xml, warnings)
commit = not self._module.check_mode
if diff:
if commit:
commit_configuration(self._module)
else:
discard_changes(self._module)
result['changed'] = True
if self._module._diff:
result['diff'] = {'prepared': diff}
result['xml'] = config_xmls
changed_interfaces_facts = self.get_l3_interfaces_facts()
result['before'] = existing_interfaces_facts
if result['changed']:
result['after'] = changed_interfaces_facts
result['warnings'] = warnings
return result
def set_config(self, existing_l3_interfaces_facts):
""" Collect the configuration from the args passed to the module,
collect the current configuration (as a dict from facts)
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
want = self._module.params['config']
have = existing_l3_interfaces_facts
resp = self.set_state(want, have)
return to_list(resp)
def set_state(self, want, have):
""" Select the appropriate function based on the state provided
:param want: the desired configuration as a dictionary
:param have: the current configuration as a dictionary
:rtype: A list
:returns: the list of xml configurations necessary to migrate the current
configuration to the desired configuration
"""
root = build_root_xml_node('interfaces')
state = self._module.params['state']
if state == 'overridden':
config_xmls = self._state_overridden(want, have)
elif state == 'deleted':
config_xmls = self._state_deleted(want, have)
elif state == 'merged':
config_xmls = self._state_merged(want, have)
elif state == 'replaced':
config_xmls = self._state_replaced(want, have)
for xml in config_xmls:
root.append(xml)
return tostring(root)
def _get_common_xml_node(self, name):
root_node = build_root_xml_node('interface')
build_child_xml_node(root_node, 'name', name)
intf_unit_node = build_child_xml_node(root_node, 'unit')
return root_node, intf_unit_node
def _state_replaced(self, want, have):
""" The xml generator when state is replaced
:rtype: A list
:returns: the xml necessary to migrate the current configuration
to the desired configuration
"""
intf_xml = []
intf_xml.extend(self._state_deleted(want, have))
intf_xml.extend(self._state_merged(want, have))
return intf_xml
def _state_overridden(self, want, have):
""" The xml generator when state is overridden
:rtype: A list
:returns: the xml necessary to migrate the current configuration
to the desired configuration
"""
intf_xml = []
intf_xml.extend(self._state_deleted(have, have))
intf_xml.extend(self._state_merged(want, have))
return intf_xml
def _state_merged(self, want, have):
""" The xml generator when state is merged
:rtype: A list
:returns: the xml necessary to merge the provided into
the current configuration
"""
intf_xml = []
for config in want:
root_node, unit_node = self._get_common_xml_node(config['name'])
build_child_xml_node(unit_node, 'name',
str(config['unit']))
if config.get('ipv4'):
self.build_ipaddr_et(config, unit_node)
if config.get('ipv6'):
self.build_ipaddr_et(config, unit_node, protocol='ipv6')
intf_xml.append(root_node)
return intf_xml
def build_ipaddr_et(self, config, unit_node, protocol='ipv4',
delete=False):
family = build_child_xml_node(unit_node, 'family')
inet = 'inet'
if protocol == 'ipv6':
inet = 'inet6'
ip_protocol = build_child_xml_node(family, inet)
for ip_addr in config[protocol]:
if ip_addr['address'] == 'dhcp' and protocol == 'ipv4':
build_child_xml_node(ip_protocol, 'dhcp')
else:
ip_addresses = build_child_xml_node(
ip_protocol, 'address')
build_child_xml_node(
ip_addresses, 'name', ip_addr['address'])
def _state_deleted(self, want, have):
""" The xml configuration generator when state is deleted
:rtype: A list
:returns: the xml configuration necessary to remove the current
configuration of the provided objects
"""
intf_xml = []
existing_l3_intfs = [l3_intf['name'] for l3_intf in have]
if not want:
want = have
for config in want:
if config['name'] not in existing_l3_intfs:
continue
else:
root_node, unit_node = self._get_common_xml_node(
config['name'])
build_child_xml_node(unit_node, 'name',
str(config['unit']))
family = build_child_xml_node(unit_node, 'family')
ipv4 = build_child_xml_node(family, 'inet')
intf = next(
(intf for intf in have if intf['name'] == config['name']),
None)
if intf and 'ipv4' in intf:
if 'dhcp' in [x['address'] for x in intf.get('ipv4') or []]:
build_child_xml_node(ipv4, 'dhcp', None, {'delete': 'delete'})
else:
build_child_xml_node(
ipv4, 'address', None, {'delete': 'delete'})
ipv6 = build_child_xml_node(family, 'inet6')
build_child_xml_node(ipv6, 'address', None, {'delete': 'delete'})
intf_xml.append(root_node)
return intf_xml
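The `{'delete': 'delete'}` attributes used throughout `_state_deleted` render as Junos delete markers; a standalone sketch with `xml.etree.ElementTree`, assumed to match the netconf helper's serialization:

```python
import xml.etree.ElementTree as ET

# An empty node carrying delete="delete" tells Junos to remove that leaf.
family = ET.Element('family')
ipv4 = ET.SubElement(family, 'inet')
ET.SubElement(ipv4, 'address', {'delete': 'delete'})
print(ET.tostring(family).decode())
# → <family><inet><address delete="delete" /></inet></family>
```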
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,773 |
junos resource modules are documented with parameter `commands` but use the return value `xml`
|
##### SUMMARY
In the module documentation the return values are set to `before`, `after`, and `commands` for all resource modules. At some point this changed for the junos resource modules only, and only for the `commands` value: it was renamed to `xml`. I think this *should* be *commands*, returning the XML payload under the variable named *commands*.
This file can be found here: https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/network/junos/config/l2_interfaces/l2_interfaces.py#L80
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
junos_l3_interfaces
junos_l2_interfaces
probably all junos resource modules
##### ANSIBLE VERSION
```paste below
latest dev
```
##### CONFIGURATION
```paste below
DEFAULT_HOST_LIST(/home/student1/.ansible.cfg) = [u'/home/student1/networking-workshop/lab_inventory/hosts']
DEFAULT_STDOUT_CALLBACK(/home/student1/.ansible.cfg) = yaml
DEFAULT_TIMEOUT(/home/student1/.ansible.cfg) = 60
DEPRECATION_WARNINGS(/home/student1/.ansible.cfg) = False
HOST_KEY_CHECKING(/home/student1/.ansible.cfg) = False
PERSISTENT_COMMAND_TIMEOUT(/home/student1/.ansible.cfg) = 60
PERSISTENT_CONNECT_TIMEOUT(/home/student1/.ansible.cfg) = 60
RETRY_FILES_ENABLED(/home/student1/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
RHEL 7.6
##### STEPS TO REPRODUCE
```yaml
[student1@ansible ~]$ cat junos.yml
---
- hosts: rtr3
gather_facts: false
tasks:
- name: grab info
junos_facts:
gather_subset: min
gather_network_resources: l3_interfaces
- debug:
var: ansible_network_resources
```
##### EXPECTED RESULTS
The payload would be returned under the documented variable `commands` rather than the undocumented `xml`, like this:
```
TASK [ensure that the IP address information is accurate] ***********************************************************************
[WARNING]: Platform linux on host rtr3 is using the discovered Python interpreter at /usr/bin/python, but future installation
of another Python interpreter could change this. See
https://docs.ansible.com/ansible/devel/reference_appendices/interpreter_discovery.html for more information.
changed: [rtr3] => changed=true
after:
- name: lo0
unit: '0'
ansible_facts:
discovered_interpreter_python: /usr/bin/python
before:
- ipv4:
- address: 10.10.10.1/24
ipv6:
- address: fc00::100/64
- address: fc00::101/64
name: lo0
unit: '0'
commands:
- <nc:interfaces xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"><nc:interface><nc:name>lo0</nc:name><nc:unit><nc:name>0</nc:name><nc:family><nc:inet><nc:address delete="delete"/></nc:inet><nc:inet6><nc:address delete="delete"/></nc:inet6></nc:family></nc:unit></nc:interface></nc:interfaces>
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [ensure that the IP address information is accurate] ***********************************************************************
[WARNING]: Platform linux on host rtr3 is using the discovered Python interpreter at /usr/bin/python, but future installation
of another Python interpreter could change this. See
https://docs.ansible.com/ansible/devel/reference_appendices/interpreter_discovery.html for more information.
changed: [rtr3] => changed=true
after:
- name: lo0
unit: '0'
ansible_facts:
discovered_interpreter_python: /usr/bin/python
before:
- ipv4:
- address: 10.10.10.1/24
ipv6:
- address: fc00::100/64
- address: fc00::101/64
name: lo0
unit: '0'
xml:
- <nc:interfaces xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"><nc:interface><nc:name>lo0</nc:name><nc:unit><nc:name>0</nc:name><nc:family><nc:inet><nc:address delete="delete"/></nc:inet><nc:inet6><nc:address delete="delete"/></nc:inet6></nc:family></nc:unit></nc:interface></nc:interfaces>
```
|
https://github.com/ansible/ansible/issues/61773
|
https://github.com/ansible/ansible/pull/62041
|
6fb1d56fdc022cb6001539ea4bbc87d759093987
|
ff53ca76b83d151fb05ba0c69def6089dc893135
| 2019-09-04T14:10:24Z |
python
| 2019-09-11T05:36:08Z |
lib/ansible/module_utils/network/junos/config/lacp/lacp.py
|
#
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""
The junos_lacp class
It is in this file where the current configuration (as dict)
is compared to the provided configuration (as dict) and the command set
necessary to bring the current configuration to its desired end-state is
created
"""
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.network.common.cfg.base import ConfigBase
from ansible.module_utils.network.common.netconf import build_root_xml_node, build_child_xml_node, build_subtree
from ansible.module_utils.network.common.utils import to_list
from ansible.module_utils.network.junos.facts.facts import Facts
from ansible.module_utils.network.junos.junos import locked_config, load_config, commit_configuration, discard_changes, tostring
class Lacp(ConfigBase):
"""
The junos_lacp class
"""
gather_subset = [
'!all',
'!min',
]
gather_network_resources = [
'lacp',
]
def __init__(self, module):
super(Lacp, self).__init__(module)
def get_lacp_facts(self):
""" Get the 'facts' (the current configuration)
:rtype: A dictionary
:returns: The current configuration as a dictionary
"""
facts, _warnings = Facts(self._module).get_facts(self.gather_subset, self.gather_network_resources)
lacp_facts = facts['ansible_network_resources'].get('lacp')
if not lacp_facts:
return {}
return lacp_facts
def execute_module(self):
""" Execute the module
:rtype: A dictionary
:returns: The result from module execution
"""
result = {'changed': False}
existing_lacp_facts = self.get_lacp_facts()
config_xmls = self.set_config(existing_lacp_facts)
with locked_config(self._module):
for config_xml in to_list(config_xmls):
diff = load_config(self._module, config_xml, [])
commit = not self._module.check_mode
if diff:
if commit:
commit_configuration(self._module)
else:
discard_changes(self._module)
result['changed'] = True
if self._module._diff:
result['diff'] = {'prepared': diff}
result['xml'] = config_xmls
changed_lacp_facts = self.get_lacp_facts()
result['before'] = existing_lacp_facts
if result['changed']:
result['after'] = changed_lacp_facts
return result
def set_config(self, existing_lacp_facts):
""" Collect the configuration from the args passed to the module,
collect the current configuration (as a dict from facts)
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
want = self._module.params['config']
have = existing_lacp_facts
resp = self.set_state(want, have)
return to_list(resp)
def set_state(self, want, have):
""" Select the appropriate function based on the state provided
:param want: the desired configuration as a dictionary
:param have: the current configuration as a dictionary
:rtype: A list
:returns: the list of xml configurations necessary to migrate the current configuration
to the desired configuration
"""
root = build_root_xml_node('chassis')
ethernet_ele = build_subtree(root, 'aggregated-devices/ethernet')
state = self._module.params['state']
if state == 'overridden':
config_xmls = self._state_overridden(want, have)
elif state == 'deleted':
config_xmls = self._state_deleted(want, have)
elif state == 'merged':
config_xmls = self._state_merged(want, have)
elif state == 'replaced':
config_xmls = self._state_replaced(want, have)
for xml in config_xmls:
ethernet_ele.append(xml)
return tostring(root)
def _state_replaced(self, want, have):
""" The xml configuration generator when state is merged
:rtype: A list
:returns: the xml configuration necessary to merge the provided into
the current configuration
"""
lacp_xml = []
lacp_xml.extend(self._state_deleted(want, have))
lacp_xml.extend(self._state_merged(want, have))
return lacp_xml
def _state_overridden(self, want, have):
""" The command generator when state is overridden
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
lacp_xml = []
lacp_xml.extend(self._state_deleted(want, have))
lacp_xml.extend(self._state_merged(want, have))
return lacp_xml
def _state_merged(self, want, have):
""" Select the appropriate function based on the state provided
:param want: the desired configuration as a dictionary
:param have: the current configuration as a dictionary
:rtype: A list
:returns: the list xml configuration necessary to migrate the current configuration
to the desired configuration
"""
lacp_xml = []
lacp_root = build_root_xml_node('lacp')
build_child_xml_node(lacp_root, 'system-priority', want.get('system_priority'))
if want.get('link_protection') == 'non-revertive':
build_subtree(lacp_root, 'link-protection/non-revertive')
elif want.get('link_protection') == 'revertive':
link_root = build_child_xml_node(lacp_root, 'link-protection')
build_child_xml_node(link_root, 'non-revertive', None, {'delete': 'delete'})
lacp_xml.append(lacp_root)
return lacp_xml
def _state_deleted(self, want, have):
""" The command generator when state is deleted
:rtype: A list
:returns: the commands necessary to remove the current configuration
of the provided objects
"""
lacp_xml = []
lacp_root = build_root_xml_node('lacp')
build_child_xml_node(lacp_root, 'system-priority', None, {'delete': 'delete'})
element = build_child_xml_node(lacp_root, 'link-protection', None, {'delete': 'delete'})
build_child_xml_node(element, 'non-revertive', None, {'delete': 'delete'})
lacp_xml.append(lacp_root)
return lacp_xml
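The delete-state helpers above build a NETCONF payload by attaching `delete="delete"` attributes to the nodes being removed. A minimal sketch of that payload shape, using hypothetical stand-ins for `build_root_xml_node`/`build_child_xml_node` written against the standard library (the real helpers live in `ansible.module_utils.network.common.netconf`):

```python
import xml.etree.ElementTree as ET

# Simplified stand-ins for the netconf helper functions used above, so the
# generated payload can be inspected outside of Ansible.
def build_root_xml_node(tag):
    return ET.Element(tag)

def build_child_xml_node(parent, tag, text=None, attrib=None):
    node = ET.SubElement(parent, tag, attrib or {})
    if text is not None:
        node.text = str(text)
    return node

# Mirror of _state_deleted above: mark system-priority and link-protection
# (and its non-revertive child) for deletion.
lacp_root = build_root_xml_node('lacp')
build_child_xml_node(lacp_root, 'system-priority', None, {'delete': 'delete'})
element = build_child_xml_node(lacp_root, 'link-protection', None, {'delete': 'delete'})
build_child_xml_node(element, 'non-revertive', None, {'delete': 'delete'})

payload = ET.tostring(lacp_root, encoding='unicode')
print(payload)
```

When wrapped in the `<chassis>/<aggregated-devices>/<ethernet>` subtree by `set_state`, this is the fragment the device receives on a `deleted` run.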
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,773 |
junos resource modules are documented with parameter `commands` but use the return value `xml`
|
##### SUMMARY
in the module documentation we have the return value set to `before` and `after` and `commands` for all resource modules. At some point this changed for only junos resource modules, and only for the parameter `commands`. It has been changed to `xml`. I think this *should* be *commands* and just return the xml payload under the variable named *commands*
this file can be found here: https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/network/junos/config/l2_interfaces/l2_interfaces.py#L80
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
junos_l3_interfaces
junos_l2_interfaces
probably all junos resource modules
##### ANSIBLE VERSION
```paste below
latest dev
```
##### CONFIGURATION
```paste below
DEFAULT_HOST_LIST(/home/student1/.ansible.cfg) = [u'/home/student1/networking-workshop/lab_inventory/hosts']
DEFAULT_STDOUT_CALLBACK(/home/student1/.ansible.cfg) = yaml
DEFAULT_TIMEOUT(/home/student1/.ansible.cfg) = 60
DEPRECATION_WARNINGS(/home/student1/.ansible.cfg) = False
HOST_KEY_CHECKING(/home/student1/.ansible.cfg) = False
PERSISTENT_COMMAND_TIMEOUT(/home/student1/.ansible.cfg) = 60
PERSISTENT_CONNECT_TIMEOUT(/home/student1/.ansible.cfg) = 60
RETRY_FILES_ENABLED(/home/student1/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
RHEL 7.6
##### STEPS TO REPRODUCE
```yaml
[student1@ansible ~]$ cat junos.yml
---
- hosts: rtr3
gather_facts: false
tasks:
- name: grab info
junos_facts:
gather_subset: min
gather_network_resources: l3_interfaces
- debug:
var: ansible_network_resources
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
the variable `commands` would return the payload versus the variable named `xml` that is not documented
like this
```
TASK [ensure that the IP address information is accurate] ***********************************************************************
[WARNING]: Platform linux on host rtr3 is using the discovered Python interpreter at /usr/bin/python, but future installation
of another Python interpreter could change this. See
https://docs.ansible.com/ansible/devel/reference_appendices/interpreter_discovery.html for more information.
changed: [rtr3] => changed=true
after:
- name: lo0
unit: '0'
ansible_facts:
discovered_interpreter_python: /usr/bin/python
before:
- ipv4:
- address: 10.10.10.1/24
ipv6:
- address: fc00::100/64
- address: fc00::101/64
name: lo0
unit: '0'
commands:
- <nc:interfaces xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"><nc:interface><nc:name>lo0</nc:name><nc:unit><nc:name>0</nc:name><nc:family><nc:inet><nc:address delete="delete"/></nc:inet><nc:inet6><nc:address delete="delete"/></nc:inet6></nc:family></nc:unit></nc:interface></nc:interfaces>
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [ensure that the IP address information is accurate] ***********************************************************************
[WARNING]: Platform linux on host rtr3 is using the discovered Python interpreter at /usr/bin/python, but future installation
of another Python interpreter could change this. See
https://docs.ansible.com/ansible/devel/reference_appendices/interpreter_discovery.html for more information.
changed: [rtr3] => changed=true
after:
- name: lo0
unit: '0'
ansible_facts:
discovered_interpreter_python: /usr/bin/python
before:
- ipv4:
- address: 10.10.10.1/24
ipv6:
- address: fc00::100/64
- address: fc00::101/64
name: lo0
unit: '0'
xml:
- <nc:interfaces xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"><nc:interface><nc:name>lo0</nc:name><nc:unit><nc:name>0</nc:name><nc:family><nc:inet><nc:address delete="delete"/></nc:inet><nc:inet6><nc:address delete="delete"/></nc:inet6></nc:family></nc:unit></nc:interface></nc:interfaces>
```
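The change the report asks for amounts to renaming one result key in each module's `execute_module`. A hedged sketch of that shape (the fix that actually landed in PR 62041 may differ in detail, e.g. it may keep `xml` for backwards compatibility; `build_result` is a hypothetical helper for illustration):

```python
# Sketch of the requested rename: expose the payload under the documented
# 'commands' key instead of the undocumented 'xml' key.
def build_result(changed, config_xmls, before, after=None):
    result = {'changed': changed}
    result['commands'] = config_xmls   # was: result['xml'] = config_xmls
    result['before'] = before
    if changed:
        result['after'] = after
    return result

result = build_result(
    True,
    ['<nc:interfaces xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0">...</nc:interfaces>'],
    before=[{'name': 'lo0', 'unit': '0'}],
    after=[{'name': 'lo0', 'unit': '0'}],
)
```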
|
https://github.com/ansible/ansible/issues/61773
|
https://github.com/ansible/ansible/pull/62041
|
6fb1d56fdc022cb6001539ea4bbc87d759093987
|
ff53ca76b83d151fb05ba0c69def6089dc893135
| 2019-09-04T14:10:24Z |
python
| 2019-09-11T05:36:08Z |
lib/ansible/module_utils/network/junos/config/lacp_interfaces/lacp_interfaces.py
|
#
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""
The junos_lacp_interfaces class
It is in this file where the current configuration (as dict)
is compared to the provided configuration (as dict) and the command set
necessary to bring the current configuration to its desired end-state is
created
"""
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.network.common.cfg.base import ConfigBase
from ansible.module_utils.network.common.utils import to_list
from ansible.module_utils.network.junos.facts.facts import Facts
from ansible.module_utils.network.junos.junos import locked_config, load_config, commit_configuration, discard_changes, tostring
from ansible.module_utils.network.common.netconf import build_root_xml_node, build_child_xml_node, build_subtree
class Lacp_interfaces(ConfigBase):
"""
The junos_lacp_interfaces class
"""
gather_subset = [
'!all',
'!min',
]
gather_network_resources = [
'lacp_interfaces',
]
def __init__(self, module):
super(Lacp_interfaces, self).__init__(module)
def get_lacp_interfaces_facts(self):
""" Get the 'facts' (the current configuration)
:rtype: A dictionary
:returns: The current configuration as a dictionary
"""
facts, _warnings = Facts(self._module).get_facts(self.gather_subset, self.gather_network_resources)
lacp_interfaces_facts = facts['ansible_network_resources'].get('lacp_interfaces')
if not lacp_interfaces_facts:
return []
return lacp_interfaces_facts
def execute_module(self):
""" Execute the module
:rtype: A dictionary
:returns: The result from module execution
"""
result = {'changed': False}
existing_lacp_interfaces_facts = self.get_lacp_interfaces_facts()
config_xmls = self.set_config(existing_lacp_interfaces_facts)
with locked_config(self._module):
for config_xml in to_list(config_xmls):
diff = load_config(self._module, config_xml, [])
commit = not self._module.check_mode
if diff:
if commit:
commit_configuration(self._module)
else:
discard_changes(self._module)
result['changed'] = True
if self._module._diff:
result['diff'] = {'prepared': diff}
result['xml'] = config_xmls
changed_lacp_interfaces_facts = self.get_lacp_interfaces_facts()
result['before'] = existing_lacp_interfaces_facts
if result['changed']:
result['after'] = changed_lacp_interfaces_facts
return result
def set_config(self, existing_lacp_interfaces_facts):
""" Collect the configuration from the args passed to the module,
collect the current configuration (as a dict from facts)
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
want = self._module.params['config']
have = existing_lacp_interfaces_facts
resp = self.set_state(want, have)
return to_list(resp)
def set_state(self, want, have):
""" Select the appropriate function based on the state provided
:param want: the desired configuration as a dictionary
:param have: the current configuration as a dictionary
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
root = build_root_xml_node('interfaces')
state = self._module.params['state']
if state == 'overridden':
config_xmls = self._state_overridden(want, have)
elif state == 'deleted':
config_xmls = self._state_deleted(want, have)
elif state == 'merged':
config_xmls = self._state_merged(want, have)
elif state == 'replaced':
config_xmls = self._state_replaced(want, have)
for xml in config_xmls:
root.append(xml)
return tostring(root)
def _state_replaced(self, want, have):
""" The xml configuration generator when state is replaced
:rtype: A list
:returns: the xml configuration necessary to migrate the current configuration
to the desired configuration
"""
intf_xml = []
intf_xml.extend(self._state_deleted(want, have))
intf_xml.extend(self._state_merged(want, have))
return intf_xml
def _state_overridden(self, want, have):
""" The xml configuration generator when state is overridden
:rtype: A list
:returns: the xml configuration necessary to migrate the current configuration
to the desired configuration
"""
interface_xmls_obj = []
# replace interface config with data in want
interface_xmls_obj.extend(self._state_replaced(want, have))
# delete interface config if interface in have not present in want
delete_obj = []
for have_obj in have:
for want_obj in want:
if have_obj['name'] == want_obj['name']:
break
else:
delete_obj.append(have_obj)
if delete_obj:
interface_xmls_obj.extend(self._state_deleted(delete_obj, have))
return interface_xmls_obj
def _state_merged(self, want, have):
""" The xml configuration generator when state is merged
:rtype: A list
:returns: the xml configuration necessary to merge the provided into
the current configuration
"""
intf_xml = []
for config in want:
lacp_intf_name = config['name']
lacp_intf_root = build_root_xml_node('interface')
build_child_xml_node(lacp_intf_root, 'name', lacp_intf_name)
if lacp_intf_name.startswith('ae'):
element = build_subtree(lacp_intf_root, 'aggregated-ether-options/lacp')
if config['period']:
build_child_xml_node(element, 'periodic', config['period'])
if config['sync_reset']:
build_child_xml_node(element, 'sync-reset', config['sync_reset'])
system = config['system']
if system:
mac = system.get('mac')
if mac:
if mac.get('address'):
build_child_xml_node(element, 'system-id', mac['address'])
if system.get('priority'):
build_child_xml_node(element, 'system-priority', system['priority'])
intf_xml.append(lacp_intf_root)
elif config['port_priority'] or config['force_up'] is not None:
element = build_subtree(lacp_intf_root, 'ether-options/ieee-802.3ad/lacp')
build_child_xml_node(element, 'port-priority', config['port_priority'])
if config['force_up'] is False:
build_child_xml_node(element, 'force-up', None, {'delete': 'delete'})
else:
build_child_xml_node(element, 'force-up')
intf_xml.append(lacp_intf_root)
return intf_xml
def _state_deleted(self, want, have):
""" The xml configuration generator when state is deleted
:rtype: A list
:returns: the xml configuration necessary to remove the current configuration
of the provided objects
"""
intf_xml = []
intf_obj = want
if not intf_obj:
# delete lacp interface attributes for all interfaces
intf_obj = have
for config in intf_obj:
lacp_intf_name = config['name']
lacp_intf_root = build_root_xml_node('interface')
build_child_xml_node(lacp_intf_root, 'name', lacp_intf_name)
if lacp_intf_name.startswith('ae'):
element = build_subtree(lacp_intf_root, 'aggregated-ether-options/lacp')
build_child_xml_node(element, 'periodic', None, {'delete': 'delete'})
build_child_xml_node(element, 'sync-reset', None, {'delete': 'delete'})
build_child_xml_node(element, 'system-id', None, {'delete': 'delete'})
build_child_xml_node(element, 'system-priority', None, {'delete': 'delete'})
else:
element = build_subtree(lacp_intf_root, 'ether-options/ieee-802.3ad/lacp')
build_child_xml_node(element, 'port-priority', None, {'delete': 'delete'})
build_child_xml_node(element, 'force-up', None, {'delete': 'delete'})
intf_xml.append(lacp_intf_root)
return intf_xml
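`_state_overridden` above relies on Python's for/else idiom to find interfaces that exist in `have` but are absent from `want`: the `else` clause runs only when the inner loop finishes without hitting `break`. A minimal standalone sketch of that pruning step:

```python
# Mirror of the pruning loop in _state_overridden: any interface present in
# `have` but not named in `want` is queued for deletion.
def interfaces_to_delete(want, have):
    delete_obj = []
    for have_obj in have:
        for want_obj in want:
            if have_obj['name'] == want_obj['name']:
                break
        else:  # no break occurred: interface is not mentioned in want
            delete_obj.append(have_obj)
    return delete_obj

have = [{'name': 'ae0'}, {'name': 'ae1'}, {'name': 'ge-0/0/1'}]
want = [{'name': 'ae0'}]
print(interfaces_to_delete(want, have))  # [{'name': 'ae1'}, {'name': 'ge-0/0/1'}]
```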
|
lib/ansible/module_utils/network/junos/config/lag_interfaces/lag_interfaces.py
|
#
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""
The junos_lag_interfaces class
It is in this file where the current configuration (as dict)
is compared to the provided configuration (as dict) and the command set
necessary to bring the current configuration to its desired end-state is
created
"""
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.network.common.cfg.base import ConfigBase
from ansible.module_utils.network.common.utils import to_list
from ansible.module_utils.network.junos.facts.facts import Facts
from ansible.module_utils.network.junos.junos import locked_config, load_config, commit_configuration, discard_changes, tostring
from ansible.module_utils.network.junos.utils.utils import get_resource_config
from ansible.module_utils.network.common.netconf import build_root_xml_node, build_child_xml_node, build_subtree
class Lag_interfaces(ConfigBase):
"""
The junos_lag_interfaces class
"""
gather_subset = [
'!all',
'!min',
]
gather_network_resources = [
'lag_interfaces',
]
def __init__(self, module):
super(Lag_interfaces, self).__init__(module)
def get_lag_interfaces_facts(self):
""" Get the 'facts' (the current configuration)
:rtype: A dictionary
:returns: The current configuration as a dictionary
"""
facts, _warnings = Facts(self._module).get_facts(self.gather_subset, self.gather_network_resources)
lag_interfaces_facts = facts['ansible_network_resources'].get('lag_interfaces')
if not lag_interfaces_facts:
return []
return lag_interfaces_facts
def execute_module(self):
""" Execute the module
:rtype: A dictionary
:returns: The result from module execution
"""
result = {'changed': False}
warnings = list()
existing_lag_interfaces_facts = self.get_lag_interfaces_facts()
config_xmls = self.set_config(existing_lag_interfaces_facts)
with locked_config(self._module):
for config_xml in to_list(config_xmls):
diff = load_config(self._module, config_xml, warnings)
commit = not self._module.check_mode
if diff:
if commit:
commit_configuration(self._module)
else:
discard_changes(self._module)
result['changed'] = True
if self._module._diff:
result['diff'] = {'prepared': diff}
result['xml'] = config_xmls
changed_lag_interfaces_facts = self.get_lag_interfaces_facts()
result['before'] = existing_lag_interfaces_facts
if result['changed']:
result['after'] = changed_lag_interfaces_facts
return result
def set_config(self, existing_lag_interfaces_facts):
""" Collect the configuration from the args passed to the module,
collect the current configuration (as a dict from facts)
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
want = self._module.params['config']
have = existing_lag_interfaces_facts
resp = self.set_state(want, have)
return to_list(resp)
def set_state(self, want, have):
""" Select the appropriate function based on the state provided
:param want: the desired configuration as a dictionary
:param have: the current configuration as a dictionary
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
root = build_root_xml_node('interfaces')
state = self._module.params['state']
if state == 'overridden':
config_xmls = self._state_overridden(want, have)
elif state == 'deleted':
config_xmls = self._state_deleted(want, have)
elif state == 'merged':
config_xmls = self._state_merged(want, have)
elif state == 'replaced':
config_xmls = self._state_replaced(want, have)
for xml in config_xmls:
root.append(xml)
return tostring(root)
def _state_replaced(self, want, have):
""" The xml configuration generator when state is replaced
:rtype: A list
:returns: the xml configuration necessary to migrate the current configuration
to the desired configuration
"""
intf_xml = []
intf_xml.extend(self._state_deleted(want, have))
intf_xml.extend(self._state_merged(want, have))
return intf_xml
def _state_overridden(self, want, have):
""" The xml configuration generator when state is overridden
:rtype: A list
:returns: the xml configuration necessary to migrate the current configuration
to the desired configuration
"""
interface_xmls_obj = []
# replace interface config with data in want
interface_xmls_obj.extend(self._state_replaced(want, have))
# delete interface config if interface in have not present in want
delete_obj = []
for have_obj in have:
for want_obj in want:
if have_obj['name'] == want_obj['name']:
break
else:
delete_obj.append(have_obj)
if delete_obj:
interface_xmls_obj.extend(self._state_deleted(delete_obj, have))
return interface_xmls_obj
def _state_merged(self, want, have):
""" The xml configuration generator when state is merged
:rtype: A list
:returns: the xml configuration necessary to merge the provided into
the current configuration
"""
intf_xml = []
config_filter = """
<configuration>
<interfaces/>
</configuration>
"""
data = get_resource_config(self._connection, config_filter=config_filter)
for config in want:
lag_name = config['name']
# fail the module if the lag interface is not already configured.
if not data.xpath("configuration/interfaces/interface[name='%s']" % lag_name):
self._module.fail_json(msg="lag interface %s not configured, configure interface"
" %s before assigning members to lag" % (lag_name, lag_name))
lag_intf_root = build_root_xml_node('interface')
build_child_xml_node(lag_intf_root, 'name', lag_name)
ether_options_node = build_subtree(lag_intf_root, 'aggregated-ether-options')
if config['mode']:
lacp_node = build_child_xml_node(ether_options_node, 'lacp')
build_child_xml_node(lacp_node, config['mode'])
link_protection = config['link_protection']
if link_protection:
build_child_xml_node(ether_options_node, 'link-protection')
elif link_protection is False:
build_child_xml_node(ether_options_node, 'link-protection', None, {'delete': 'delete'})
intf_xml.append(lag_intf_root)
members = config['members']
for member in members:
lag_member_intf_root = build_root_xml_node('interface')
build_child_xml_node(lag_member_intf_root, 'name', member['member'])
lag_node = build_subtree(lag_member_intf_root, 'ether-options/ieee-802.3ad')
build_child_xml_node(lag_node, 'bundle', config['name'])
link_type = member.get('link_type')
if link_type == "primary":
build_child_xml_node(lag_node, 'primary')
elif link_type == "backup":
build_child_xml_node(lag_node, 'backup')
intf_xml.append(lag_member_intf_root)
return intf_xml
def _state_deleted(self, want, have):
""" The xml configuration generator when state is deleted
:rtype: A list
:returns: the xml configuration necessary to remove the current configuration
of the provided objects
"""
intf_xml = []
intf_obj = want
if not intf_obj:
# delete lag interface attributes for all interfaces
intf_obj = have
for config in intf_obj:
lag_name = config['name']
lag_intf_root = build_root_xml_node('interface')
build_child_xml_node(lag_intf_root, 'name', lag_name)
lag_node = build_subtree(lag_intf_root, 'aggregated-ether-options')
build_child_xml_node(lag_node, 'link-protection', None, {'delete': 'delete'})
lacp_node = build_child_xml_node(lag_node, 'lacp')
build_child_xml_node(lacp_node, 'active', None, {'delete': 'delete'})
build_child_xml_node(lacp_node, 'passive', None, {'delete': 'delete'})
intf_xml.append(lag_intf_root)
# delete lag configuration from member interfaces
for interface_obj in have:
if lag_name == interface_obj['name']:
for member in interface_obj.get('members', []):
lag_member_intf_root = build_root_xml_node('interface')
build_child_xml_node(lag_member_intf_root, 'name', member['member'])
lag_node = build_subtree(lag_member_intf_root, 'ether-options/ieee-802.3ad')
build_child_xml_node(lag_node, 'bundle', None, {'delete': 'delete'})
build_child_xml_node(lag_node, 'primary', None, {'delete': 'delete'})
build_child_xml_node(lag_node, 'backup', None, {'delete': 'delete'})
intf_xml.append(lag_member_intf_root)
return intf_xml
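Before assigning members, `_state_merged` above guards against referencing a bundle that does not exist by running an XPath query over the fetched `<interfaces>` configuration. A sketch of that check using the standard library's `xml.etree.ElementTree` (the real module fetches the running config over NETCONF via `get_resource_config`; here the device response is a hard-coded string for illustration):

```python
import xml.etree.ElementTree as ET

# Faked running configuration; the real module retrieves this from the device.
running = ET.fromstring("""
<configuration>
  <interfaces>
    <interface><name>ae0</name></interface>
    <interface><name>ge-0/0/1</name></interface>
  </interfaces>
</configuration>
""")

def lag_configured(data, lag_name):
    # ElementTree supports the [child='text'] predicate used by the module's
    # xpath call; an empty result means the bundle is not configured.
    return bool(data.findall("interfaces/interface[name='%s']" % lag_name))

print(lag_configured(running, 'ae0'))  # True
print(lag_configured(running, 'ae1'))  # False
```

When the check fails, the module calls `fail_json` rather than pushing member configuration that Junos would reject.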
|
lib/ansible/module_utils/network/junos/config/lldp_global/lldp_global.py
|
#
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""
The junos_lldp class
It is in this file where the current configuration (as dict)
is compared to the provided configuration (as dict) and the command set
necessary to bring the current configuration to its desired end-state is
created
"""
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.network.common.cfg.base import ConfigBase
from ansible.module_utils.network.junos.junos import locked_config, load_config, commit_configuration, discard_changes, tostring
from ansible.module_utils.network.common.utils import to_list
from ansible.module_utils.network.junos.facts.facts import Facts
from ansible.module_utils.network.common.netconf import build_root_xml_node, build_child_xml_node
class Lldp_global(ConfigBase):
"""
The junos_lldp class
"""
gather_subset = [
'!all',
'!min',
]
gather_network_resources = [
'lldp_global',
]
def __init__(self, module):
super(Lldp_global, self).__init__(module)
def get_lldp_global_facts(self):
""" Get the 'facts' (the current configuration)
:rtype: A dictionary
:returns: The current configuration as a dictionary
"""
facts, _warnings = Facts(self._module).get_facts(self.gather_subset, self.gather_network_resources)
lldp_facts = facts['ansible_network_resources'].get('lldp_global')
if not lldp_facts:
return {}
return lldp_facts
def execute_module(self):
""" Execute the module
:rtype: A dictionary
:returns: The result from module execution
"""
result = {'changed': False}
existing_lldp_global_facts = self.get_lldp_global_facts()
config_xmls = self.set_config(existing_lldp_global_facts)
with locked_config(self._module):
for config_xml in to_list(config_xmls):
diff = load_config(self._module, config_xml, [])
commit = not self._module.check_mode
if diff:
if commit:
commit_configuration(self._module)
else:
discard_changes(self._module)
result['changed'] = True
if self._module._diff:
result['diff'] = {'prepared': diff}
result['xml'] = config_xmls
changed_lldp_global_facts = self.get_lldp_global_facts()
result['before'] = existing_lldp_global_facts
if result['changed']:
result['after'] = changed_lldp_global_facts
return result
def set_config(self, existing_lldp_global_facts):
""" Collect the configuration from the args passed to the module,
collect the current configuration (as a dict from facts)
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
want = self._module.params['config']
have = existing_lldp_global_facts
resp = self.set_state(want, have)
return to_list(resp)
def set_state(self, want, have):
""" Select the appropriate function based on the state provided
:param want: the desired configuration as a dictionary
:param have: the current configuration as a dictionary
:rtype: A list
:returns: the list of xml configurations necessary to migrate the current configuration
to the desired configuration
"""
root = build_root_xml_node('protocols')
state = self._module.params['state']
if state == 'deleted':
config_xmls = self._state_deleted(want, have)
elif state == 'merged':
config_xmls = self._state_merged(want, have)
elif state == 'replaced':
config_xmls = self._state_replaced(want, have)
for xml in config_xmls:
root.append(xml)
return tostring(root)
def _state_replaced(self, want, have):
""" The xml configuration generator when state is merged
:rtype: A list
:returns: the xml configuration necessary to merge the provided into
the current configuration
"""
lldp_xml = []
lldp_xml.extend(self._state_deleted(want, have))
lldp_xml.extend(self._state_merged(want, have))
return lldp_xml
def _state_merged(self, want, have):
""" Select the appropriate function based on the state provided
:param want: the desired configuration as a dictionary
:param have: the current configuration as a dictionary
:rtype: A list
:returns: the list xml configuration necessary to migrate the current configuration
to the desired configuration
"""
lldp_xml = []
lldp_root = build_root_xml_node('lldp')
if want.get('address'):
build_child_xml_node(lldp_root, 'management-address', want['address'])
if want.get('interval'):
build_child_xml_node(lldp_root, 'advertisement-interval', want['interval'])
if want.get('transmit_delay'):
build_child_xml_node(lldp_root, 'transmit-delay', want['transmit_delay'])
if want.get('hold_multiplier'):
build_child_xml_node(lldp_root, 'hold-multiplier', want['hold_multiplier'])
enable = want.get('enable')
if enable is not None:
if enable is False:
build_child_xml_node(lldp_root, 'disable')
else:
build_child_xml_node(lldp_root, 'disable', None, {'delete': 'delete'})
else:
build_child_xml_node(lldp_root, 'disable', None, {'delete': 'delete'})
lldp_xml.append(lldp_root)
return lldp_xml
def _state_deleted(self, want, have):
""" The command generator when state is deleted
:rtype: A list
:returns: the commands necessary to remove the current configuration
of the provided objects
"""
lldp_xml = []
lldp_root = build_root_xml_node('lldp')
build_child_xml_node(lldp_root, 'management-address', None, {'delete': 'delete'})
build_child_xml_node(lldp_root, 'advertisement-interval', None, {'delete': 'delete'})
build_child_xml_node(lldp_root, 'transmit-delay', None, {'delete': 'delete'})
build_child_xml_node(lldp_root, 'hold-multiplier', None, {'delete': 'delete'})
build_child_xml_node(lldp_root, 'disable', None, {'delete': 'delete'})
lldp_xml.append(lldp_root)
return lldp_xml
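The `_state_deleted` generator above marks every LLDP attribute node with a `delete="delete"` attribute. A minimal stdlib sketch (using hypothetical stand-ins for the `build_root_xml_node`/`build_child_xml_node` netconf helpers) shows the shape of the resulting payload:

```python
import xml.etree.ElementTree as ET

# Hypothetical stand-ins for the netconf helpers used by the module above.
def build_root_xml_node(tag):
    return ET.Element(tag)

def build_child_xml_node(parent, tag, text=None, attrib=None):
    node = ET.SubElement(parent, tag, attrib or {})
    if text is not None:
        node.text = str(text)
    return node

# Mirror the deleted-state branch: mark each lldp attribute for removal.
lldp = build_root_xml_node('lldp')
for tag in ('management-address', 'advertisement-interval',
            'transmit-delay', 'hold-multiplier', 'disable'):
    build_child_xml_node(lldp, tag, None, {'delete': 'delete'})

payload = ET.tostring(lldp).decode()
print(payload)
```

Loading this payload against a candidate configuration removes all five attributes in one commit, which is why `_state_replaced` can simply chain `_state_deleted` and `_state_merged`.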
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,773 |
junos resource modules are documented with parameter `commands` but use the return value `xml`
|
##### SUMMARY
In the module documentation the return values are set to `before`, `after`, and `commands` for all resource modules. At some point this changed for junos resource modules only, and only for the `commands` key, which became `xml`. I think this *should* be *commands*: just return the xml payload under the variable named *commands*.
this file can be found here: https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/network/junos/config/l2_interfaces/l2_interfaces.py#L80
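The fix the issue asks for amounts to storing the generated payload under the documented result key. A hedged sketch (the function name and shape here are illustrative, not the module's actual code):

```python
# Illustrative sketch of the proposed rename: the XML payload goes under
# the documented 'commands' key instead of the undocumented 'xml' key.
def build_result(config_xmls, changed):
    result = {'changed': changed}
    # before: result['xml'] = config_xmls       (undocumented)
    result['commands'] = config_xmls            # documented return value
    return result

result = build_result(['<nc:interfaces/>'], True)
print(result)
```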
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
junos_l3_interfaces
junos_l2_interfaces
probably all junos resource modules
##### ANSIBLE VERSION
```paste below
latest dev
```
##### CONFIGURATION
```paste below
DEFAULT_HOST_LIST(/home/student1/.ansible.cfg) = [u'/home/student1/networking-workshop/lab_inventory/hosts']
DEFAULT_STDOUT_CALLBACK(/home/student1/.ansible.cfg) = yaml
DEFAULT_TIMEOUT(/home/student1/.ansible.cfg) = 60
DEPRECATION_WARNINGS(/home/student1/.ansible.cfg) = False
HOST_KEY_CHECKING(/home/student1/.ansible.cfg) = False
PERSISTENT_COMMAND_TIMEOUT(/home/student1/.ansible.cfg) = 60
PERSISTENT_CONNECT_TIMEOUT(/home/student1/.ansible.cfg) = 60
RETRY_FILES_ENABLED(/home/student1/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
RHEL 7.6
##### STEPS TO REPRODUCE
```yaml
[student1@ansible ~]$ cat junos.yml
---
- hosts: rtr3
gather_facts: false
tasks:
- name: grab info
junos_facts:
gather_subset: min
gather_network_resources: l3_interfaces
- debug:
var: ansible_network_resources
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
the payload would be returned under the documented variable `commands` rather than the undocumented `xml`, like this:
```
TASK [ensure that the IP address information is accurate] ***********************************************************************
[WARNING]: Platform linux on host rtr3 is using the discovered Python interpreter at /usr/bin/python, but future installation
of another Python interpreter could change this. See
https://docs.ansible.com/ansible/devel/reference_appendices/interpreter_discovery.html for more information.
changed: [rtr3] => changed=true
after:
- name: lo0
unit: '0'
ansible_facts:
discovered_interpreter_python: /usr/bin/python
before:
- ipv4:
- address: 10.10.10.1/24
ipv6:
- address: fc00::100/64
- address: fc00::101/64
name: lo0
unit: '0'
commands:
- <nc:interfaces xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"><nc:interface><nc:name>lo0</nc:name><nc:unit><nc:name>0</nc:name><nc:family><nc:inet><nc:address delete="delete"/></nc:inet><nc:inet6><nc:address delete="delete"/></nc:inet6></nc:family></nc:unit></nc:interface></nc:interfaces>
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [ensure that the IP address information is accurate] ***********************************************************************
[WARNING]: Platform linux on host rtr3 is using the discovered Python interpreter at /usr/bin/python, but future installation
of another Python interpreter could change this. See
https://docs.ansible.com/ansible/devel/reference_appendices/interpreter_discovery.html for more information.
changed: [rtr3] => changed=true
after:
- name: lo0
unit: '0'
ansible_facts:
discovered_interpreter_python: /usr/bin/python
before:
- ipv4:
- address: 10.10.10.1/24
ipv6:
- address: fc00::100/64
- address: fc00::101/64
name: lo0
unit: '0'
xml:
- <nc:interfaces xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"><nc:interface><nc:name>lo0</nc:name><nc:unit><nc:name>0</nc:name><nc:family><nc:inet><nc:address delete="delete"/></nc:inet><nc:inet6><nc:address delete="delete"/></nc:inet6></nc:family></nc:unit></nc:interface></nc:interfaces>
```
|
https://github.com/ansible/ansible/issues/61773
|
https://github.com/ansible/ansible/pull/62041
|
6fb1d56fdc022cb6001539ea4bbc87d759093987
|
ff53ca76b83d151fb05ba0c69def6089dc893135
| 2019-09-04T14:10:24Z |
python
| 2019-09-11T05:36:08Z |
lib/ansible/module_utils/network/junos/config/lldp_interfaces/lldp_interfaces.py
|
#
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""
The junos_lldp_interfaces class
It is in this file where the current configuration (as dict)
is compared to the provided configuration (as dict) and the command set
necessary to bring the current configuration to its desired end-state is
created
"""
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.network.common.cfg.base import ConfigBase
from ansible.module_utils.network.common.utils import to_list
from ansible.module_utils.network.junos.facts.facts import Facts
from ansible.module_utils.network.junos.junos import locked_config, load_config, commit_configuration, discard_changes, tostring
from ansible.module_utils.network.common.netconf import build_root_xml_node, build_child_xml_node, build_subtree
class Lldp_interfaces(ConfigBase):
"""
The junos_lldp_interfaces class
"""
gather_subset = [
'!all',
'!min',
]
gather_network_resources = [
'lldp_interfaces',
]
def __init__(self, module):
super(Lldp_interfaces, self).__init__(module)
def get_lldp_interfaces_facts(self):
""" Get the 'facts' (the current configuration)
:rtype: A dictionary
:returns: The current configuration as a dictionary
"""
facts, _warnings = Facts(self._module).get_facts(self.gather_subset, self.gather_network_resources)
lldp_interfaces_facts = facts['ansible_network_resources'].get('lldp_interfaces')
if not lldp_interfaces_facts:
return []
return lldp_interfaces_facts
def execute_module(self):
""" Execute the module
:rtype: A dictionary
:returns: The result from module execution
"""
result = {'changed': False}
warnings = list()
existing_lldp_interfaces_facts = self.get_lldp_interfaces_facts()
config_xmls = self.set_config(existing_lldp_interfaces_facts)
with locked_config(self._module):
for config_xml in to_list(config_xmls):
diff = load_config(self._module, config_xml, warnings)
commit = not self._module.check_mode
if diff:
if commit:
commit_configuration(self._module)
else:
discard_changes(self._module)
result['changed'] = True
if self._module._diff:
result['diff'] = {'prepared': diff}
result['xml'] = config_xmls
changed_lldp_interfaces_facts = self.get_lldp_interfaces_facts()
result['before'] = existing_lldp_interfaces_facts
if result['changed']:
result['after'] = changed_lldp_interfaces_facts
return result
def set_config(self, existing_lldp_interfaces_facts):
""" Collect the configuration from the args passed to the module,
collect the current configuration (as a dict from facts)
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
want = self._module.params['config']
have = existing_lldp_interfaces_facts
resp = self.set_state(want, have)
return to_list(resp)
def set_state(self, want, have):
""" Select the appropriate function based on the state provided
:param want: the desired configuration as a dictionary
:param have: the current configuration as a dictionary
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
root = build_root_xml_node('protocols')
lldp_intf_ele = build_subtree(root, 'lldp')
state = self._module.params['state']
if state == 'overridden':
config_xmls = self._state_overridden(want, have)
elif state == 'deleted':
config_xmls = self._state_deleted(want, have)
elif state == 'merged':
config_xmls = self._state_merged(want, have)
elif state == 'replaced':
config_xmls = self._state_replaced(want, have)
for xml in config_xmls:
lldp_intf_ele.append(xml)
return tostring(root)
def _state_replaced(self, want, have):
""" The xml configuration generator when state is replaced
:rtype: A list
:returns: the xml configuration necessary to migrate the current configuration
to the desired configuration
"""
lldp_intf_xml = []
lldp_intf_xml.extend(self._state_deleted(want, have))
lldp_intf_xml.extend(self._state_merged(want, have))
return lldp_intf_xml
def _state_overridden(self, want, have):
""" The xml configuration generator when state is overridden
:rtype: A list
:returns: the xml configuration necessary to migrate the current configuration
to the desired configuration
"""
lldp_intf_xmls_obj = []
# replace interface config with data in want
lldp_intf_xmls_obj.extend(self._state_replaced(want, have))
# delete interface config if interface in have not present in want
delete_obj = []
for have_obj in have:
for want_obj in want:
if have_obj['name'] == want_obj['name']:
break
else:
delete_obj.append(have_obj)
if len(delete_obj):
lldp_intf_xmls_obj.extend(self._state_deleted(delete_obj, have))
return lldp_intf_xmls_obj
def _state_merged(self, want, have):
""" The xml configuration generator when state is merged
:rtype: A list
:returns: the xml configuration necessary to merge the provided into
the current configuration
"""
lldp_intf_xml = []
for config in want:
lldp_intf_root = build_root_xml_node('interface')
if config.get('name'):
build_child_xml_node(lldp_intf_root, 'name', config['name'])
if config.get('enable') is not None:
if config['enable'] is False:
build_child_xml_node(lldp_intf_root, 'disable')
else:
build_child_xml_node(lldp_intf_root, 'disable', None, {'delete': 'delete'})
else:
build_child_xml_node(lldp_intf_root, 'disable', None, {'delete': 'delete'})
lldp_intf_xml.append(lldp_intf_root)
return lldp_intf_xml
def _state_deleted(self, want, have):
""" The xml configuration generator when state is deleted
:rtype: A list
:returns: the xml configuration necessary to remove the current configuration
of the provided objects
"""
lldp_intf_xml = []
intf_obj = want
if not intf_obj:
# delete lldp interfaces attribute from all the existing interface
intf_obj = have
for config in intf_obj:
lldp_intf_root = build_root_xml_node('interface')
lldp_intf_root.attrib.update({'delete': 'delete'})
build_child_xml_node(lldp_intf_root, 'name', config['name'])
lldp_intf_xml.append(lldp_intf_root)
return lldp_intf_xml
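The `_state_overridden` loop above uses a `for`/`else` to collect interfaces present in `have` but absent from `want`, then deletes them. The same pruning can be sketched more directly with a set difference (standalone sketch, not the module's code):

```python
# Sketch of the overridden-state pruning: keep only 'have' entries whose
# name does not appear in 'want'; those are the ones to delete.
def interfaces_to_delete(want, have):
    wanted_names = {w['name'] for w in want}
    return [h for h in have if h['name'] not in wanted_names]

have = [{'name': 'ge-0/0/0'}, {'name': 'ge-0/0/1'}]
want = [{'name': 'ge-0/0/0'}]
stale = interfaces_to_delete(want, have)
print(stale)
```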
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,773 |
junos resource modules are documented with parameter `commands` but use the return value `xml`
|
##### SUMMARY
In the module documentation the return values are set to `before`, `after`, and `commands` for all resource modules. At some point this changed for junos resource modules only, and only for the `commands` key, which became `xml`. I think this *should* be *commands*: just return the xml payload under the variable named *commands*.
this file can be found here: https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/network/junos/config/l2_interfaces/l2_interfaces.py#L80
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
junos_l3_interfaces
junos_l2_interfaces
probably all junos resource modules
##### ANSIBLE VERSION
```paste below
latest dev
```
##### CONFIGURATION
```paste below
DEFAULT_HOST_LIST(/home/student1/.ansible.cfg) = [u'/home/student1/networking-workshop/lab_inventory/hosts']
DEFAULT_STDOUT_CALLBACK(/home/student1/.ansible.cfg) = yaml
DEFAULT_TIMEOUT(/home/student1/.ansible.cfg) = 60
DEPRECATION_WARNINGS(/home/student1/.ansible.cfg) = False
HOST_KEY_CHECKING(/home/student1/.ansible.cfg) = False
PERSISTENT_COMMAND_TIMEOUT(/home/student1/.ansible.cfg) = 60
PERSISTENT_CONNECT_TIMEOUT(/home/student1/.ansible.cfg) = 60
RETRY_FILES_ENABLED(/home/student1/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
RHEL 7.6
##### STEPS TO REPRODUCE
```yaml
[student1@ansible ~]$ cat junos.yml
---
- hosts: rtr3
gather_facts: false
tasks:
- name: grab info
junos_facts:
gather_subset: min
gather_network_resources: l3_interfaces
- debug:
var: ansible_network_resources
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
the payload would be returned under the documented variable `commands` rather than the undocumented `xml`, like this:
```
TASK [ensure that the IP address information is accurate] ***********************************************************************
[WARNING]: Platform linux on host rtr3 is using the discovered Python interpreter at /usr/bin/python, but future installation
of another Python interpreter could change this. See
https://docs.ansible.com/ansible/devel/reference_appendices/interpreter_discovery.html for more information.
changed: [rtr3] => changed=true
after:
- name: lo0
unit: '0'
ansible_facts:
discovered_interpreter_python: /usr/bin/python
before:
- ipv4:
- address: 10.10.10.1/24
ipv6:
- address: fc00::100/64
- address: fc00::101/64
name: lo0
unit: '0'
commands:
- <nc:interfaces xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"><nc:interface><nc:name>lo0</nc:name><nc:unit><nc:name>0</nc:name><nc:family><nc:inet><nc:address delete="delete"/></nc:inet><nc:inet6><nc:address delete="delete"/></nc:inet6></nc:family></nc:unit></nc:interface></nc:interfaces>
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [ensure that the IP address information is accurate] ***********************************************************************
[WARNING]: Platform linux on host rtr3 is using the discovered Python interpreter at /usr/bin/python, but future installation
of another Python interpreter could change this. See
https://docs.ansible.com/ansible/devel/reference_appendices/interpreter_discovery.html for more information.
changed: [rtr3] => changed=true
after:
- name: lo0
unit: '0'
ansible_facts:
discovered_interpreter_python: /usr/bin/python
before:
- ipv4:
- address: 10.10.10.1/24
ipv6:
- address: fc00::100/64
- address: fc00::101/64
name: lo0
unit: '0'
xml:
- <nc:interfaces xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"><nc:interface><nc:name>lo0</nc:name><nc:unit><nc:name>0</nc:name><nc:family><nc:inet><nc:address delete="delete"/></nc:inet><nc:inet6><nc:address delete="delete"/></nc:inet6></nc:family></nc:unit></nc:interface></nc:interfaces>
```
|
https://github.com/ansible/ansible/issues/61773
|
https://github.com/ansible/ansible/pull/62041
|
6fb1d56fdc022cb6001539ea4bbc87d759093987
|
ff53ca76b83d151fb05ba0c69def6089dc893135
| 2019-09-04T14:10:24Z |
python
| 2019-09-11T05:36:08Z |
lib/ansible/module_utils/network/junos/config/vlans/vlans.py
|
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
The junos_vlans class
It is in this file where the current configuration (as dict)
is compared to the provided configuration (as dict) and the command set
necessary to bring the current configuration to its desired end-state is
created
"""
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.network.common.cfg.base import ConfigBase
from ansible.module_utils.network.common.utils import to_list
from ansible.module_utils.network.junos.facts.facts import Facts
from ansible.module_utils.network.junos.junos import (locked_config,
load_config,
commit_configuration,
discard_changes,
tostring)
from ansible.module_utils.network.common.netconf import (build_root_xml_node,
build_child_xml_node)
class Vlans(ConfigBase):
"""
The junos_vlans class
"""
gather_subset = [
'!all',
'!min',
]
gather_network_resources = [
'vlans',
]
def __init__(self, module):
super(Vlans, self).__init__(module)
def get_vlans_facts(self):
""" Get the 'facts' (the current configuration)
:rtype: A dictionary
:returns: The current configuration as a dictionary
"""
facts, _warnings = Facts(self._module).get_facts(
self.gather_subset, self.gather_network_resources)
vlans_facts = facts['ansible_network_resources'].get('vlans')
if not vlans_facts:
return []
return vlans_facts
def execute_module(self):
""" Execute the module
:rtype: A dictionary
:returns: The result from module execution
"""
result = {'changed': False}
warnings = list()
existing_vlans_facts = self.get_vlans_facts()
config_xmls = self.set_config(existing_vlans_facts)
with locked_config(self._module):
for config_xml in to_list(config_xmls):
diff = load_config(self._module, config_xml, warnings)
commit = not self._module.check_mode
if diff:
if commit:
commit_configuration(self._module)
else:
discard_changes(self._module)
result['changed'] = True
if self._module._diff:
result['diff'] = {'prepared': diff}
result['xml'] = config_xmls
changed_vlans_facts = self.get_vlans_facts()
result['before'] = existing_vlans_facts
if result['changed']:
result['after'] = changed_vlans_facts
result['warnings'] = warnings
return result
def set_config(self, existing_vlans_facts):
""" Collect the configuration from the args passed to the module,
collect the current configuration (as a dict from facts)
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
want = self._module.params['config']
have = existing_vlans_facts
resp = self.set_state(want, have)
return to_list(resp)
def set_state(self, want, have):
""" Select the appropriate function based on the state provided
:param want: the desired configuration as a dictionary
:param have: the current configuration as a dictionary
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
root = build_root_xml_node('vlans')
state = self._module.params['state']
if state == 'overridden':
config_xmls = self._state_overridden(want, have)
elif state == 'deleted':
config_xmls = self._state_deleted(want, have)
elif state == 'merged':
config_xmls = self._state_merged(want, have)
elif state == 'replaced':
config_xmls = self._state_replaced(want, have)
for xml in config_xmls:
root.append(xml)
return tostring(root)
def _state_replaced(self, want, have):
""" The command generator when state is replaced
:rtype: A list
:returns: the xml necessary to migrate the current configuration
to the desired configuration
"""
intf_xml = []
intf_xml.extend(self._state_deleted(want, have))
intf_xml.extend(self._state_merged(want, have))
return intf_xml
def _state_overridden(self, want, have):
""" The command generator when state is overridden
:rtype: A list
:returns: the xml necessary to migrate the current configuration
to the desired configuration
"""
intf_xml = []
intf_xml.extend(self._state_deleted(have, have))
intf_xml.extend(self._state_merged(want, have))
return intf_xml
def _state_merged(self, want, have):
""" The command generator when state is merged
:rtype: A list
:returns: the xml necessary to merge the provided into
the current configuration
"""
intf_xml = []
for config in want:
vlan_name = str(config['name'])
vlan_id = str(config['vlan_id'])
vlan_description = config.get('description')
vlan_root = build_root_xml_node('vlan')
build_child_xml_node(vlan_root, 'name', vlan_name)
build_child_xml_node(vlan_root, 'vlan-id', vlan_id)
if vlan_description:
build_child_xml_node(vlan_root, 'description',
vlan_description)
intf_xml.append(vlan_root)
return intf_xml
def _state_deleted(self, want, have):
""" The command generator when state is deleted
:rtype: A list
:returns: the xml necessary to remove the current configuration
of the provided objects
"""
intf_xml = []
if not want:
want = have
for config in want:
vlan_name = config['name']
vlan_root = build_root_xml_node('vlan')
vlan_root.attrib.update({'delete': 'delete'})
build_child_xml_node(vlan_root, 'name', vlan_name)
intf_xml.append(vlan_root)
return intf_xml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,773 |
junos resource modules are documented with parameter `commands` but use the return value `xml`
|
##### SUMMARY
In the module documentation the return values are set to `before`, `after`, and `commands` for all resource modules. At some point this changed for junos resource modules only, and only for the `commands` key, which became `xml`. I think this *should* be *commands*: just return the xml payload under the variable named *commands*.
this file can be found here: https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/network/junos/config/l2_interfaces/l2_interfaces.py#L80
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
junos_l3_interfaces
junos_l2_interfaces
probably all junos resource modules
##### ANSIBLE VERSION
```paste below
latest dev
```
##### CONFIGURATION
```paste below
DEFAULT_HOST_LIST(/home/student1/.ansible.cfg) = [u'/home/student1/networking-workshop/lab_inventory/hosts']
DEFAULT_STDOUT_CALLBACK(/home/student1/.ansible.cfg) = yaml
DEFAULT_TIMEOUT(/home/student1/.ansible.cfg) = 60
DEPRECATION_WARNINGS(/home/student1/.ansible.cfg) = False
HOST_KEY_CHECKING(/home/student1/.ansible.cfg) = False
PERSISTENT_COMMAND_TIMEOUT(/home/student1/.ansible.cfg) = 60
PERSISTENT_CONNECT_TIMEOUT(/home/student1/.ansible.cfg) = 60
RETRY_FILES_ENABLED(/home/student1/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
RHEL 7.6
##### STEPS TO REPRODUCE
```yaml
[student1@ansible ~]$ cat junos.yml
---
- hosts: rtr3
gather_facts: false
tasks:
- name: grab info
junos_facts:
gather_subset: min
gather_network_resources: l3_interfaces
- debug:
var: ansible_network_resources
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
the payload would be returned under the documented variable `commands` rather than the undocumented `xml`, like this:
```
TASK [ensure that the IP address information is accurate] ***********************************************************************
[WARNING]: Platform linux on host rtr3 is using the discovered Python interpreter at /usr/bin/python, but future installation
of another Python interpreter could change this. See
https://docs.ansible.com/ansible/devel/reference_appendices/interpreter_discovery.html for more information.
changed: [rtr3] => changed=true
after:
- name: lo0
unit: '0'
ansible_facts:
discovered_interpreter_python: /usr/bin/python
before:
- ipv4:
- address: 10.10.10.1/24
ipv6:
- address: fc00::100/64
- address: fc00::101/64
name: lo0
unit: '0'
commands:
- <nc:interfaces xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"><nc:interface><nc:name>lo0</nc:name><nc:unit><nc:name>0</nc:name><nc:family><nc:inet><nc:address delete="delete"/></nc:inet><nc:inet6><nc:address delete="delete"/></nc:inet6></nc:family></nc:unit></nc:interface></nc:interfaces>
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [ensure that the IP address information is accurate] ***********************************************************************
[WARNING]: Platform linux on host rtr3 is using the discovered Python interpreter at /usr/bin/python, but future installation
of another Python interpreter could change this. See
https://docs.ansible.com/ansible/devel/reference_appendices/interpreter_discovery.html for more information.
changed: [rtr3] => changed=true
after:
- name: lo0
unit: '0'
ansible_facts:
discovered_interpreter_python: /usr/bin/python
before:
- ipv4:
- address: 10.10.10.1/24
ipv6:
- address: fc00::100/64
- address: fc00::101/64
name: lo0
unit: '0'
xml:
- <nc:interfaces xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0"><nc:interface><nc:name>lo0</nc:name><nc:unit><nc:name>0</nc:name><nc:family><nc:inet><nc:address delete="delete"/></nc:inet><nc:inet6><nc:address delete="delete"/></nc:inet6></nc:family></nc:unit></nc:interface></nc:interfaces>
```
|
https://github.com/ansible/ansible/issues/61773
|
https://github.com/ansible/ansible/pull/62041
|
6fb1d56fdc022cb6001539ea4bbc87d759093987
|
ff53ca76b83d151fb05ba0c69def6089dc893135
| 2019-09-04T14:10:24Z |
python
| 2019-09-11T05:36:08Z |
test/integration/targets/junos_l3_interfaces/tests/netconf/junos_l3_interfaces.yml
|
---
- name: bootstrap interfaces
junos_l3_interfaces:
config:
- name: ge-1/0/0
ipv4:
- address: 192.168.100.1/24
- address: 10.200.16.20/24
- name: ge-2/0/0
ipv4:
- address: 192.168.100.2/24
- address: 10.200.16.21/24
- name: ge-3/0/0
ipv4:
- address: 192.168.100.3/24
- address: 10.200.16.22/24
state: replaced
register: result
- assert:
that:
- result is changed
- "'<nc:address><nc:name>192.168.100.1/24</nc:name></nc:address>' in result.xml[0]"
- "'<nc:address><nc:name>10.200.16.20/24</nc:name></nc:address>' in result.xml[0]"
- "result.after[0].name == 'ge-1/0/0'"
- "result.after[0].ipv4[0]['address'] == '192.168.100.1/24'"
- "result.after[0].ipv4[1]['address'] == '10.200.16.20/24'"
- name: bootstrap interfaces (idempotent)
junos_l3_interfaces:
config:
- name: ge-1/0/0
ipv4:
- address: 192.168.100.1/24
- address: 10.200.16.20/24
- name: ge-2/0/0
ipv4:
- address: 192.168.100.2/24
- address: 10.200.16.21/24
- name: ge-3/0/0
ipv4:
- address: 192.168.100.3/24
- address: 10.200.16.22/24
state: replaced
register: result
- assert:
that:
- result is not changed
- name: Add another interface ip
junos_l3_interfaces:
config:
- name: ge-1/0/0
ipv4:
- address: 100.64.0.1/10
- address: 100.64.0.2/10
state: merged
register: result
- assert:
that:
- result is changed
- "'<nc:address><nc:name>100.64.0.1/10</nc:name></nc:address>' in result.xml[0]"
- "'<nc:address><nc:name>100.64.0.2/10</nc:name></nc:address>' in result.xml[0]"
- "result.after[0].name == 'ge-1/0/0'"
- "result.after[0].ipv4[0]['address'] == '192.168.100.1/24'"
- "result.after[0].ipv4[1]['address'] == '10.200.16.20/24'"
- "result.after[0].ipv4[2]['address'] == '100.64.0.1/10'"
- "result.after[0].ipv4[3]['address'] == '100.64.0.2/10'"
- name: Delete ge-2/0/0 interface config
junos_l3_interfaces:
config:
- name: ge-2/0/0
state: deleted
register: result
- assert:
that:
- result is changed
- "'<nc:name>ge-2/0/0</nc:name><nc:unit><nc:name>0</nc:name><nc:family><nc:inet><nc:address delete=\"delete\"/>' in result.xml[0]"
- name: Override all config
junos_l3_interfaces:
config:
- name: ge-1/0/0
ipv4:
- address: dhcp
- name: fxp0
ipv4:
- address: dhcp
state: overridden
register: result
- assert:
that:
- result is changed
- "'<nc:name>fxp0</nc:name><nc:unit><nc:name>0</nc:name><nc:family><nc:inet><nc:dhcp/></nc:inet>' in result.xml[0]"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,113 |
Imports from `ansible_collections` don't work in `conftest` modules.
|
##### SUMMARY
(converted from project card created by @webknjaz)
When a conftest module in a unit test contains something like `from ansible_collections.yum_unit_test_migration_spec.yum_collection.tests.unit.compat.mock import patch`, it will result in `E ModuleNotFoundError: No module named 'ansible_collections'`.
The workaround is to add a boilerplate `conftest.py` into tests root: https://github.com/webknjaz/fqcn_conftest_repro/commit/b07826c54e01c6965cb63aa0a8cada602a311911
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
devel
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
Repro: https://github.com/webknjaz/fqcn_conftest_repro/tree/77347551f5fdd5185b26c752074878ea37055b77
Failure demo: https://github.com/webknjaz/fqcn_conftest_repro/runs/216648897#step:5:41
Hack demo: https://github.com/webknjaz/fqcn_conftest_repro/runs/216654164#step:5:65
##### EXPECTED RESULTS
Tests pass.
##### ACTUAL RESULTS
Tests fail.
|
https://github.com/ansible/ansible/issues/62113
|
https://github.com/ansible/ansible/pull/62119
|
9b149917a63ead90f49918632cb64ea2892f50da
|
aaa6d2ecb079498eab9f7f3b32f30c89378818d4
| 2019-09-11T02:39:46Z |
python
| 2019-09-11T06:27:05Z |
test/lib/ansible_test/_data/pytest/plugins/ansible_pytest_collections.py
|
"""Enable unit testing of Ansible collections."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import sys
# set by ansible-test to a single directory, rather than a list of directories as supported by Ansible itself
ANSIBLE_COLLECTIONS_PATH = os.path.join(os.environ['ANSIBLE_COLLECTIONS_PATHS'], 'ansible_collections')
def collection_pypkgpath(self):
"""Configure the Python package path so that pytest can find our collections."""
for parent in self.parts(reverse=True):
if str(parent) == ANSIBLE_COLLECTIONS_PATH:
return parent
raise Exception('File "%s" not found in collection path "%s".' % (self.strpath, ANSIBLE_COLLECTIONS_PATH))
def pytest_configure():
"""Configure this pytest plugin."""
from ansible.utils.collection_loader import AnsibleCollectionLoader
# allow unit tests to import code from collections
sys.meta_path.insert(0, AnsibleCollectionLoader())
# noinspection PyProtectedMember
import py._path.local
# force collections unit tests to be loaded with the ansible_collections namespace
# original idea from https://stackoverflow.com/questions/50174130/how-do-i-pytest-a-project-using-pep-420-namespace-packages/50175552#50175552
# noinspection PyProtectedMember
py._path.local.LocalPath.pypkgpath = collection_pypkgpath # pylint: disable=protected-access
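
The `pypkgpath` monkey patch above can be illustrated in isolation. The sketch below uses a stand-in path class (`FakePath` and `COLLECTIONS_ROOT` are illustrative names, not part of py.path or ansible-test) to show how stopping the upward search at a fixed collections root makes every test file resolve its package path relative to that root:

```python
import os

# Hypothetical root, standing in for ANSIBLE_COLLECTIONS_PATH above.
COLLECTIONS_ROOT = '/tmp/collections/ansible_collections'

class FakePath:
    """Minimal stand-in for py.path.local, just enough for this demo."""
    def __init__(self, strpath):
        self.strpath = strpath

    def parts(self, reverse=False):
        # Return self and all ancestors, deepest-first when reverse=True,
        # mirroring py.path.local.parts().
        chain, current = [], self.strpath
        while True:
            chain.append(FakePath(current))
            parent = os.path.dirname(current)
            if parent == current:
                break
            current = parent
        return chain if reverse else list(reversed(chain))

def collection_pypkgpath(path):
    # Same search as the plugin: walk upward, stop at the collections root.
    for parent in path.parts(reverse=True):
        if parent.strpath == COLLECTIONS_ROOT:
            return parent
    raise Exception('File "%s" not found in collection path "%s".'
                    % (path.strpath, COLLECTIONS_ROOT))

test_file = FakePath(COLLECTIONS_ROOT + '/ns/coll/tests/unit/test_x.py')
root = collection_pypkgpath(test_file)
```

Because the returned package path is always the `ansible_collections` root, pytest imports the test module under its fully qualified `ansible_collections.<ns>.<coll>...` name instead of a bare module name.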
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,113 |
Imports from `ansible_collections` don't work in `conftest` modules.
|
##### SUMMARY
(converted from project card created by @webknjaz)
When a conftest module in a unit test contains something like `from ansible_collections.yum_unit_test_migration_spec.yum_collection.tests.unit.compat.mock import patch`, it results in `E ModuleNotFoundError: No module named 'ansible_collections'`.
The workaround is to add a boilerplate `conftest.py` into tests root: https://github.com/webknjaz/fqcn_conftest_repro/commit/b07826c54e01c6965cb63aa0a8cada602a311911
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
devel
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
Repro: https://github.com/webknjaz/fqcn_conftest_repro/tree/77347551f5fdd5185b26c752074878ea37055b77
Failure demo: https://github.com/webknjaz/fqcn_conftest_repro/runs/216648897#step:5:41
Hack demo: https://github.com/webknjaz/fqcn_conftest_repro/runs/216654164#step:5:65
##### EXPECTED RESULTS
Tests pass.
##### ACTUAL RESULTS
Tests fail.
|
https://github.com/ansible/ansible/issues/62113
|
https://github.com/ansible/ansible/pull/62119
|
9b149917a63ead90f49918632cb64ea2892f50da
|
aaa6d2ecb079498eab9f7f3b32f30c89378818d4
| 2019-09-11T02:39:46Z |
python
| 2019-09-11T06:27:05Z |
test/lib/ansible_test/_data/pytest/plugins/ansible_pytest_coverage.py
|
"""Monkey patch os._exit when running under coverage so we don't lose coverage data in forks, such as with `pytest --boxed`."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
def pytest_configure():
"""Configure this pytest plugin."""
try:
import coverage
except ImportError:
coverage = None
try:
coverage.Coverage
except AttributeError:
coverage = None
if not coverage:
return
import gc
import os
coverage_instances = []
for obj in gc.get_objects():
if isinstance(obj, coverage.Coverage):
coverage_instances.append(obj)
if not coverage_instances:
coverage_config = os.environ.get('COVERAGE_CONF')
if not coverage_config:
return
coverage_output = os.environ.get('COVERAGE_FILE')
if not coverage_output:
return
cov = coverage.Coverage(config_file=coverage_config)
coverage_instances.append(cov)
else:
cov = None
# noinspection PyProtectedMember
os_exit = os._exit # pylint: disable=protected-access
def coverage_exit(*args, **kwargs):
for instance in coverage_instances:
instance.stop()
instance.save()
os_exit(*args, **kwargs)
os._exit = coverage_exit # pylint: disable=protected-access
if cov:
cov.start()
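
The essential trick in the plugin above is wrapping the process-exit function so in-flight data is flushed before a hard exit. A minimal, parametrized sketch of that pattern (the names `make_flushing_exit` and `FakeCoverage` are illustrative, not ansible-test API; a stub exit function is used so the demo does not actually terminate the process):

```python
def make_flushing_exit(instances, exit_fn):
    """Wrap exit_fn so each coverage-like instance is stopped and saved
    before the process exits, mirroring the os._exit patch above."""
    def flushing_exit(*args, **kwargs):
        for instance in instances:
            instance.stop()
            instance.save()
        exit_fn(*args, **kwargs)
    return flushing_exit

class FakeCoverage:
    """Records calls instead of writing real coverage data."""
    def __init__(self, calls):
        self.calls = calls
    def stop(self):
        self.calls.append('stop')
    def save(self):
        self.calls.append('save')

calls = []
exit_stub = lambda code: calls.append(('exit', code))
patched_exit = make_flushing_exit([FakeCoverage(calls)], exit_stub)
patched_exit(0)
```

In the real plugin, `exit_fn` is the original `os._exit` and the instances are live `coverage.Coverage` objects, which is what keeps data from forked workers (e.g. `pytest --boxed`) from being lost.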
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,113 |
Imports from `ansible_collections` don't work in `conftest` modules.
|
##### SUMMARY
(converted from project card created by @webknjaz)
When a conftest module in a unit test contains something like `from ansible_collections.yum_unit_test_migration_spec.yum_collection.tests.unit.compat.mock import patch`, it results in `E ModuleNotFoundError: No module named 'ansible_collections'`.
The workaround is to add a boilerplate `conftest.py` into tests root: https://github.com/webknjaz/fqcn_conftest_repro/commit/b07826c54e01c6965cb63aa0a8cada602a311911
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
devel
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
Repro: https://github.com/webknjaz/fqcn_conftest_repro/tree/77347551f5fdd5185b26c752074878ea37055b77
Failure demo: https://github.com/webknjaz/fqcn_conftest_repro/runs/216648897#step:5:41
Hack demo: https://github.com/webknjaz/fqcn_conftest_repro/runs/216654164#step:5:65
##### EXPECTED RESULTS
Tests pass.
##### ACTUAL RESULTS
Tests fail.
|
https://github.com/ansible/ansible/issues/62113
|
https://github.com/ansible/ansible/pull/62119
|
9b149917a63ead90f49918632cb64ea2892f50da
|
aaa6d2ecb079498eab9f7f3b32f30c89378818d4
| 2019-09-11T02:39:46Z |
python
| 2019-09-11T06:27:05Z |
test/lib/ansible_test/_internal/units/__init__.py
|
"""Execute unit tests using pytest."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import sys
from ..util import (
ANSIBLE_TEST_DATA_ROOT,
display,
get_available_python_versions,
is_subdir,
SubprocessError,
REMOTE_ONLY_PYTHON_VERSIONS,
)
from ..util_common import (
intercept_command,
ResultType,
handle_layout_messages,
)
from ..ansible_util import (
ansible_environment,
check_pyyaml,
)
from ..target import (
walk_internal_targets,
walk_units_targets,
)
from ..config import (
UnitsConfig,
)
from ..coverage_util import (
coverage_context,
)
from ..data import (
data_context,
)
from ..executor import (
AllTargetsSkipped,
Delegate,
get_changes_filter,
install_command_requirements,
SUPPORTED_PYTHON_VERSIONS,
)
def command_units(args):
"""
:type args: UnitsConfig
"""
handle_layout_messages(data_context().content.unit_messages)
changes = get_changes_filter(args)
require = args.require + changes
include = walk_internal_targets(walk_units_targets(), args.include, args.exclude, require)
paths = [target.path for target in include]
remote_paths = [path for path in paths
if is_subdir(path, data_context().content.unit_module_path)
or is_subdir(path, data_context().content.unit_module_utils_path)]
if not paths:
raise AllTargetsSkipped()
if args.python and args.python in REMOTE_ONLY_PYTHON_VERSIONS and not remote_paths:
raise AllTargetsSkipped()
if args.delegate:
raise Delegate(require=changes, exclude=args.exclude)
version_commands = []
available_versions = sorted(get_available_python_versions(list(SUPPORTED_PYTHON_VERSIONS)).keys())
for version in SUPPORTED_PYTHON_VERSIONS:
# run all versions unless version given, in which case run only that version
if args.python and version != args.python_version:
continue
if not args.python and version not in available_versions:
display.warning("Skipping unit tests on Python %s due to missing interpreter." % version)
continue
if args.requirements_mode != 'skip':
install_command_requirements(args, version)
env = ansible_environment(args)
cmd = [
'pytest',
'--boxed',
'-r', 'a',
'-n', str(args.num_workers) if args.num_workers else 'auto',
'--color',
'yes' if args.color else 'no',
'-p', 'no:cacheprovider',
'-c', os.path.join(ANSIBLE_TEST_DATA_ROOT, 'pytest.ini'),
'--junit-xml', os.path.join(ResultType.JUNIT.path, 'python%s-units.xml' % version),
]
if not data_context().content.collection:
cmd.append('--durations=25')
if version != '2.6':
# added in pytest 4.5.0, which requires python 2.7+
cmd.append('--strict-markers')
plugins = []
if args.coverage:
plugins.append('ansible_pytest_coverage')
if data_context().content.collection:
plugins.append('ansible_pytest_collections')
if plugins:
env['PYTHONPATH'] += ':%s' % os.path.join(ANSIBLE_TEST_DATA_ROOT, 'pytest/plugins')
for plugin in plugins:
cmd.extend(['-p', plugin])
if args.collect_only:
cmd.append('--collect-only')
if args.verbosity:
cmd.append('-' + ('v' * args.verbosity))
if version in REMOTE_ONLY_PYTHON_VERSIONS:
test_paths = remote_paths
else:
test_paths = paths
if not test_paths:
continue
cmd.extend(test_paths)
version_commands.append((version, cmd, env))
if args.requirements_mode == 'only':
sys.exit()
for version, command, env in version_commands:
check_pyyaml(args, version)
display.info('Unit test with Python %s' % version)
try:
with coverage_context(args):
intercept_command(args, command, target_name='units', env=env, python_version=version)
except SubprocessError as ex:
# pytest exits with status code 5 when all tests are skipped, which isn't an error for our use case
if ex.status != 5:
raise
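
The plugin-wiring step in `command_units` (extend `PYTHONPATH`, then pass each plugin with `-p`) can be sketched as a small pure function. This is an illustration of the mechanism, not the ansible-test API itself; the paths below are made up:

```python
def add_pytest_plugins(cmd, env, plugins, plugin_dir):
    """Extend PYTHONPATH so plugin modules are importable, then register
    each plugin with pytest's '-p' flag (sketch of the logic above)."""
    cmd, env = list(cmd), dict(env)
    if plugins:
        env['PYTHONPATH'] = env.get('PYTHONPATH', '') + ':%s' % plugin_dir
        for plugin in plugins:
            cmd.extend(['-p', plugin])
    return cmd, env

cmd, env = add_pytest_plugins(
    ['pytest', '--boxed'],
    {'PYTHONPATH': '/ansible/lib'},
    ['ansible_pytest_coverage', 'ansible_pytest_collections'],
    '/ansible_test/_data/pytest/plugins',
)
```

Registering plugins this way (rather than via `conftest.py`) is what lets ansible-test activate collection support only when the content under test is actually a collection.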
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,973 |
Support creating LUKS2 container in luks_device module
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Ansible by default only creates LUKS1 format devices. Considering LUKS2's extra features and maturity, I think it would be beneficial to allow users to create LUKS2 devices.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
luks_device
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
For reliability, one should use either disk UUIDs or labels. UUIDs are (in my opinion) a hassle when it comes to using them in Ansible.
Enter labels. Labels can be the same across multiple hosts, and will be tied to the right disk/partition, even if sda1 suddenly becomes sdb1 after some hardware reconfiguration.
Now, the LUKS1 format, which Ansible creates by default with the `luks_device` module, does not support labels... Support was added with LUKS2.
It should be noted that, from what I can see, the module *does already support removing keys from LUKS2 format devices*.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: create LUKS2 container
luks_device:
device: "/dev/loop0"
state: "present"
keyfile: "/vault/keyfile"
format: luks2
```
|
https://github.com/ansible/ansible/issues/58973
|
https://github.com/ansible/ansible/pull/61812
|
5eb5f740838d447287ed49533c76864247f731a7
|
5b3526535c9e2f9e117e73d8fbea24b6f9c849ef
| 2019-07-11T11:43:46Z |
python
| 2019-09-11T18:45:33Z |
changelogs/fragments/58973-luks_device_add-type-option.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,973 |
Support creating LUKS2 container in luks_device module
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Ansible by default only creates LUKS1 format devices. Considering LUKS2's extra features and maturity, I think it would be beneficial to allow users to create LUKS2 devices.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
luks_device
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
For reliability, one should use either disk UUIDs or labels. UUIDs are (in my opinion) a hassle when it comes to using them in Ansible.
Enter labels. Labels can be the same across multiple hosts, and will be tied to the right disk/partition, even if sda1 suddenly becomes sdb1 after some hardware reconfiguration.
Now, the LUKS1 format, which Ansible creates by default with the `luks_device` module, does not support labels... Support was added with LUKS2.
It should be noted that, from what I can see, the module *does already support removing keys from LUKS2 format devices*.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: create LUKS2 container
luks_device:
device: "/dev/loop0"
state: "present"
keyfile: "/vault/keyfile"
format: luks2
```
|
https://github.com/ansible/ansible/issues/58973
|
https://github.com/ansible/ansible/pull/61812
|
5eb5f740838d447287ed49533c76864247f731a7
|
5b3526535c9e2f9e117e73d8fbea24b6f9c849ef
| 2019-07-11T11:43:46Z |
python
| 2019-09-11T18:45:33Z |
lib/ansible/modules/crypto/luks_device.py
|
#!/usr/bin/python
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: luks_device
short_description: Manage encrypted (LUKS) devices
version_added: "2.8"
description:
- "Module manages L(LUKS,https://en.wikipedia.org/wiki/Linux_Unified_Key_Setup)
on given device. Supports creating, destroying, opening and closing of
LUKS container and adding or removing new keys."
options:
device:
description:
- "Device to work with (e.g. C(/dev/sda1)). Needed in most cases.
Can be omitted only when I(state=closed) together with I(name)
is provided."
type: str
state:
description:
- "Desired state of the LUKS container. Based on its value creates,
destroys, opens or closes the LUKS container on a given device."
- "I(present) will create LUKS container unless already present.
Requires I(device) and I(keyfile) options to be provided."
- "I(absent) will remove existing LUKS container if it exists.
Requires I(device) or I(name) to be specified."
- "I(opened) will unlock the LUKS container. If it does not exist
it will be created first.
Requires I(device) and I(keyfile) to be specified. Use
the I(name) option to set the name of the opened container.
Otherwise the name will be generated automatically and returned
as a part of the result."
- "I(closed) will lock the LUKS container. However if the container
does not exist it will be created.
Requires I(device) and I(keyfile) options to be provided. If
container does already exist I(device) or I(name) will suffice."
type: str
default: present
choices: [present, absent, opened, closed]
name:
description:
- "Sets container name when I(state=opened). Can be used
instead of I(device) when closing the existing container
(i.e. when I(state=closed))."
type: str
keyfile:
description:
- "Used to unlock the container and needed for most
of the operations. Parameter value is the path
to the keyfile with the passphrase."
- "BEWARE that working with keyfiles in plaintext is dangerous.
Make sure that they are protected."
type: path
keysize:
description:
- "Sets the key size only if LUKS container does not exist."
type: int
version_added: '2.10'
new_keyfile:
description:
- "Adds additional key to given container on I(device).
Needs I(keyfile) option for authorization. LUKS container
supports up to 8 keys. Parameter value is the path
to the keyfile with the passphrase."
- "NOTE that adding additional keys is I(not idempotent).
A new keyslot will be used even if another keyslot already
exists for this keyfile."
- "BEWARE that working with keyfiles in plaintext is dangerous.
Make sure that they are protected."
type: path
remove_keyfile:
description:
- "Removes given key from the container on I(device). Does not
remove the keyfile from filesystem.
Parameter value is the path to the keyfile with the passphrase."
- "NOTE that removing keys is I(not idempotent). Trying to remove
a key which no longer exists results in an error."
- "NOTE that to remove the last key from a LUKS container, the
I(force_remove_last_key) option must be set to C(yes)."
- "BEWARE that working with keyfiles in plaintext is dangerous.
Make sure that they are protected."
type: path
force_remove_last_key:
description:
- "If set to C(yes), allows removing the last key from a container."
- "BEWARE that when the last key has been removed from a container,
the container can no longer be opened!"
type: bool
default: no
label:
description:
- "This option allow the user to create a LUKS2 format container
with label support, respectively to identify the container by
label on later usages."
- "Will only be used on container creation, or when I(device) is
not specified."
type: str
version_added: "2.10"
uuid:
description:
- "With this option user can identify the LUKS container by UUID."
- "Will only be used when I(device) and I(label) are not specified."
type: str
version_added: "2.10"
requirements:
- "cryptsetup"
- "wipefs (when I(state) is C(absent))"
- "lsblk"
- "blkid (when I(label) or I(uuid) options are used)"
author:
"Jan Pokorny (@japokorn)"
'''
EXAMPLES = '''
- name: create LUKS container (remains unchanged if it already exists)
luks_device:
device: "/dev/loop0"
state: "present"
keyfile: "/vault/keyfile"
- name: (create and) open the LUKS container; name it "mycrypt"
luks_device:
device: "/dev/loop0"
state: "opened"
name: "mycrypt"
keyfile: "/vault/keyfile"
- name: close the existing LUKS container "mycrypt"
luks_device:
state: "closed"
name: "mycrypt"
- name: make sure LUKS container exists and is closed
luks_device:
device: "/dev/loop0"
state: "closed"
keyfile: "/vault/keyfile"
- name: create container if it does not exist and add new key to it
luks_device:
device: "/dev/loop0"
state: "present"
keyfile: "/vault/keyfile"
new_keyfile: "/vault/keyfile2"
- name: add new key to the LUKS container (container has to exist)
luks_device:
device: "/dev/loop0"
keyfile: "/vault/keyfile"
new_keyfile: "/vault/keyfile2"
- name: remove existing key from the LUKS container
luks_device:
device: "/dev/loop0"
remove_keyfile: "/vault/keyfile2"
- name: completely remove the LUKS container and its contents
luks_device:
device: "/dev/loop0"
state: "absent"
- name: create a container with label
luks_device:
device: "/dev/loop0"
state: "present"
keyfile: "/vault/keyfile"
label: personalLabelName
- name: open the LUKS container based on label without device; name it "mycrypt"
luks_device:
label: "personalLabelName"
state: "opened"
name: "mycrypt"
keyfile: "/vault/keyfile"
- name: close container based on UUID
luks_device:
uuid: 03ecd578-fad4-4e6c-9348-842e3e8fa340
state: "closed"
name: "mycrypt"
'''
RETURN = '''
name:
description:
When I(state=opened) returns (generated or given) name
of LUKS container. Returns None if no name is supplied.
returned: success
type: str
sample: "luks-c1da9a58-2fde-4256-9d9f-6ab008b4dd1b"
'''
import os
import re
import stat
from ansible.module_utils.basic import AnsibleModule
RETURN_CODE = 0
STDOUT = 1
STDERR = 2
# used to get <luks-name> out of lsblk output in format 'crypt <luks-name>'
# regex takes care of any possible blank characters
LUKS_NAME_REGEX = re.compile(r'\s*crypt\s+([^\s]*)\s*')
# used to get </luks/device> out of lsblk output
# in format 'device: </luks/device>'
LUKS_DEVICE_REGEX = re.compile(r'\s*device:\s+([^\s]*)\s*')
class Handler(object):
def __init__(self, module):
self._module = module
self._lsblk_bin = self._module.get_bin_path('lsblk', True)
def _run_command(self, command):
return self._module.run_command(command)
def get_device_by_uuid(self, uuid):
''' Returns the device that holds the UUID passed by the user
'''
self._blkid_bin = self._module.get_bin_path('blkid', True)
uuid = self._module.params['uuid']
if uuid is None:
return None
result = self._run_command([self._blkid_bin, '--uuid', uuid])
if result[RETURN_CODE] != 0:
return None
return result[STDOUT].strip()
def get_device_by_label(self, label):
''' Returns the device that holds the label passed by the user
'''
self._blkid_bin = self._module.get_bin_path('blkid', True)
label = self._module.params['label']
if label is None:
return None
result = self._run_command([self._blkid_bin, '--label', label])
if result[RETURN_CODE] != 0:
return None
return result[STDOUT].strip()
def generate_luks_name(self, device):
''' Generate name for luks based on device UUID ('luks-<UUID>').
Raises ValueError when obtaining of UUID fails.
'''
result = self._run_command([self._lsblk_bin, '-n', device, '-o', 'UUID'])
if result[RETURN_CODE] != 0:
raise ValueError('Error while generating LUKS name for %s: %s'
% (device, result[STDERR]))
dev_uuid = result[STDOUT].strip()
return 'luks-%s' % dev_uuid
class CryptHandler(Handler):
def __init__(self, module):
super(CryptHandler, self).__init__(module)
self._cryptsetup_bin = self._module.get_bin_path('cryptsetup', True)
def get_container_name_by_device(self, device):
''' obtain LUKS container name based on the device where it is located
return None if not found
raise ValueError if lsblk command fails
'''
result = self._run_command([self._lsblk_bin, device, '-nlo', 'type,name'])
if result[RETURN_CODE] != 0:
raise ValueError('Error while obtaining LUKS name for %s: %s'
% (device, result[STDERR]))
m = LUKS_NAME_REGEX.search(result[STDOUT])
try:
name = m.group(1)
except AttributeError:
name = None
return name
def get_container_device_by_name(self, name):
''' obtain device name based on the LUKS container name
return None if not found
raise ValueError if lsblk command fails
'''
# apparently each device can have only one LUKS container on it
result = self._run_command([self._cryptsetup_bin, 'status', name])
if result[RETURN_CODE] != 0:
return None
m = LUKS_DEVICE_REGEX.search(result[STDOUT])
device = m.group(1)
return device
def is_luks(self, device):
''' check if the LUKS container does exist
'''
result = self._run_command([self._cryptsetup_bin, 'isLuks', device])
return result[RETURN_CODE] == 0
def run_luks_create(self, device, keyfile, keysize):
# create a new luks container; use batch mode to auto confirm
label = self._module.params.get('label')
options = []
if keysize is not None:
options.append('--key-size=' + str(keysize))
if label is not None:
# create luks container v2 with label
options.extend(['--type', 'luks2', '--label', label])
args = [self._cryptsetup_bin, 'luksFormat']
args.extend(options)
args.extend(['-q', device, keyfile])
result = self._run_command(args)
if result[RETURN_CODE] != 0:
raise ValueError('Error while creating LUKS on %s: %s'
% (device, result[STDERR]))
def run_luks_open(self, device, keyfile, name):
result = self._run_command([self._cryptsetup_bin, '--key-file', keyfile,
'open', '--type', 'luks', device, name])
if result[RETURN_CODE] != 0:
raise ValueError('Error while opening LUKS container on %s: %s'
% (device, result[STDERR]))
def run_luks_close(self, name):
result = self._run_command([self._cryptsetup_bin, 'close', name])
if result[RETURN_CODE] != 0:
raise ValueError('Error while closing LUKS container %s' % (name))
def run_luks_remove(self, device):
wipefs_bin = self._module.get_bin_path('wipefs', True)
name = self.get_container_name_by_device(device)
if name is not None:
self.run_luks_close(name)
result = self._run_command([wipefs_bin, '--all', device])
if result[RETURN_CODE] != 0:
raise ValueError('Error while wiping luks container %s: %s'
% (device, result[STDERR]))
def run_luks_add_key(self, device, keyfile, new_keyfile):
''' Add new key to given 'device'; authentication done using 'keyfile'
Raises ValueError when command fails
'''
result = self._run_command([self._cryptsetup_bin, 'luksAddKey', device,
new_keyfile, '--key-file', keyfile])
if result[RETURN_CODE] != 0:
raise ValueError('Error while adding new LUKS key to %s: %s'
% (device, result[STDERR]))
def run_luks_remove_key(self, device, keyfile, force_remove_last_key=False):
''' Remove key from given device
Raises ValueError when command fails
'''
if not force_remove_last_key:
result = self._run_command([self._cryptsetup_bin, 'luksDump', device])
if result[RETURN_CODE] != 0:
raise ValueError('Error while dumping LUKS header from %s'
% (device, ))
keyslot_count = 0
keyslot_area = False
keyslot_re = re.compile(r'^Key Slot [0-9]+: ENABLED')
for line in result[STDOUT].splitlines():
if line.startswith('Keyslots:'):
keyslot_area = True
elif line.startswith(' '):
# LUKS2 header dumps use human-readable indented output.
# Thus we have to look out for 'Keyslots:' and count the
# number of indented keyslot numbers.
if keyslot_area and line[2] in '0123456789':
keyslot_count += 1
elif line.startswith('\t'):
pass
elif keyslot_re.match(line):
# LUKS1 header dumps have one line per keyslot with ENABLED
# or DISABLED in them. We count such lines with ENABLED.
keyslot_count += 1
else:
keyslot_area = False
if keyslot_count < 2:
self._module.fail_json(msg="LUKS device %s has less than two active keyslots. "
"To be able to remove a key, please set "
"`force_remove_last_key` to `yes`." % device)
result = self._run_command([self._cryptsetup_bin, 'luksRemoveKey', device,
'-q', '--key-file', keyfile])
if result[RETURN_CODE] != 0:
raise ValueError('Error while removing LUKS key from %s: %s'
% (device, result[STDERR]))
class ConditionsHandler(Handler):
def __init__(self, module, crypthandler):
super(ConditionsHandler, self).__init__(module)
self._crypthandler = crypthandler
self.device = self.get_device_name()
def get_device_name(self):
device = self._module.params.get('device')
label = self._module.params.get('label')
uuid = self._module.params.get('uuid')
name = self._module.params.get('name')
if device is None and label is not None:
device = self.get_device_by_label(label)
elif device is None and uuid is not None:
device = self.get_device_by_uuid(uuid)
elif device is None and name is not None:
device = self._crypthandler.get_container_device_by_name(name)
return device
def luks_create(self):
return (self.device is not None and
self._module.params['keyfile'] is not None and
self._module.params['state'] in ('present',
'opened',
'closed') and
not self._crypthandler.is_luks(self.device))
def opened_luks_name(self):
''' If luks is already opened, return its name.
If 'name' parameter is specified and differs
from obtained value, fail.
Return None otherwise
'''
if self._module.params['state'] != 'opened':
return None
# try to obtain luks name - it may be already opened
name = self._crypthandler.get_container_name_by_device(self.device)
if name is None:
# container is not open
return None
if self._module.params['name'] is None:
# container is already opened
return name
if name != self._module.params['name']:
# the container is already open but with different name:
# suspicious. back off
self._module.fail_json(msg="LUKS container is already opened "
"under different name '%s'." % name)
# container is opened and the names match
return name
def luks_open(self):
if (self._module.params['keyfile'] is None or
self.device is None or
self._module.params['state'] != 'opened'):
# conditions for open not fulfilled
return False
name = self.opened_luks_name()
if name is None:
return True
return False
def luks_close(self):
if ((self._module.params['name'] is None and self.device is None) or
self._module.params['state'] != 'closed'):
# conditions for close not fulfilled
return False
if self.device is not None:
name = self._crypthandler.get_container_name_by_device(self.device)
# successfully getting name based on device means that luks is open
luks_is_open = name is not None
if self._module.params['name'] is not None:
self.device = self._crypthandler.get_container_device_by_name(
self._module.params['name'])
# successfully getting device based on name means that luks is open
luks_is_open = self.device is not None
return luks_is_open
def luks_add_key(self):
if (self.device is None or
self._module.params['keyfile'] is None or
self._module.params['new_keyfile'] is None):
# conditions for adding a key not fulfilled
return False
if self._module.params['state'] == 'absent':
self._module.fail_json(msg="Contradiction in setup: Asking to "
"add a key to absent LUKS.")
return True
def luks_remove_key(self):
if (self.device is None or
self._module.params['remove_keyfile'] is None):
# conditions for removing a key not fulfilled
return False
if self._module.params['state'] == 'absent':
self._module.fail_json(msg="Contradiction in setup: Asking to "
"remove a key from absent LUKS.")
return True
def luks_remove(self):
return (self.device is not None and
self._module.params['state'] == 'absent' and
self._crypthandler.is_luks(self.device))
def run_module():
# available arguments/parameters that a user can pass
module_args = dict(
state=dict(type='str', default='present', choices=['present', 'absent', 'opened', 'closed']),
device=dict(type='str'),
name=dict(type='str'),
keyfile=dict(type='path'),
new_keyfile=dict(type='path'),
remove_keyfile=dict(type='path'),
force_remove_last_key=dict(type='bool', default=False),
keysize=dict(type='int'),
label=dict(type='str'),
uuid=dict(type='str'),
)
# seed the result dict in the object
result = dict(
changed=False,
name=None
)
module = AnsibleModule(argument_spec=module_args,
supports_check_mode=True)
if module.params['device'] is not None:
try:
statinfo = os.stat(module.params['device'])
mode = statinfo.st_mode
if not stat.S_ISBLK(mode) and not stat.S_ISCHR(mode):
raise Exception('{0} is not a device'.format(module.params['device']))
except Exception as e:
module.fail_json(msg=str(e))
crypt = CryptHandler(module)
conditions = ConditionsHandler(module, crypt)
# The conditions are in order to allow more operations in one run.
# (e.g. create luks and add a key to it)
# luks create
if conditions.luks_create():
if not module.check_mode:
try:
crypt.run_luks_create(conditions.device,
module.params['keyfile'],
module.params['keysize'])
except ValueError as e:
module.fail_json(msg="luks_device error: %s" % e)
result['changed'] = True
if module.check_mode:
module.exit_json(**result)
# luks open
name = conditions.opened_luks_name()
if name is not None:
result['name'] = name
if conditions.luks_open():
name = module.params['name']
if name is None:
try:
name = crypt.generate_luks_name(conditions.device)
except ValueError as e:
module.fail_json(msg="luks_device error: %s" % e)
if not module.check_mode:
try:
crypt.run_luks_open(conditions.device,
module.params['keyfile'],
name)
except ValueError as e:
module.fail_json(msg="luks_device error: %s" % e)
result['name'] = name
result['changed'] = True
if module.check_mode:
module.exit_json(**result)
# luks close
if conditions.luks_close():
if conditions.device is not None:
try:
name = crypt.get_container_name_by_device(
conditions.device)
except ValueError as e:
module.fail_json(msg="luks_device error: %s" % e)
else:
name = module.params['name']
if not module.check_mode:
try:
crypt.run_luks_close(name)
except ValueError as e:
module.fail_json(msg="luks_device error: %s" % e)
result['name'] = name
result['changed'] = True
if module.check_mode:
module.exit_json(**result)
# luks add key
if conditions.luks_add_key():
if not module.check_mode:
try:
crypt.run_luks_add_key(conditions.device,
module.params['keyfile'],
module.params['new_keyfile'])
except ValueError as e:
module.fail_json(msg="luks_device error: %s" % e)
result['changed'] = True
if module.check_mode:
module.exit_json(**result)
# luks remove key
if conditions.luks_remove_key():
if not module.check_mode:
try:
last_key = module.params['force_remove_last_key']
crypt.run_luks_remove_key(conditions.device,
module.params['remove_keyfile'],
force_remove_last_key=last_key)
except ValueError as e:
module.fail_json(msg="luks_device error: %s" % e)
result['changed'] = True
if module.check_mode:
module.exit_json(**result)
# luks remove
if conditions.luks_remove():
if not module.check_mode:
try:
crypt.run_luks_remove(conditions.device)
except ValueError as e:
module.fail_json(msg="luks_device error: %s" % e)
result['changed'] = True
if module.check_mode:
module.exit_json(**result)
# Success - return result
module.exit_json(**result)
def main():
run_module()
if __name__ == '__main__':
main()
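
The keyslot-counting logic in `run_luks_remove_key` handles both header formats (LUKS1's per-slot `ENABLED` lines and LUKS2's indented `Keyslots:` section). Extracted as a standalone function, it can be exercised against sample `luksDump` text; the dump strings below are trimmed illustrations, not real cryptsetup output, and a `len(line) > 2` guard is added that the in-module version omits:

```python
import re

def count_active_keyslots(dump_text):
    """Count enabled keyslots in `cryptsetup luksDump` output, following
    the same parsing rules as run_luks_remove_key above (sketch)."""
    keyslot_count = 0
    keyslot_area = False
    keyslot_re = re.compile(r'^Key Slot [0-9]+: ENABLED')
    for line in dump_text.splitlines():
        if line.startswith('Keyslots:'):
            keyslot_area = True          # LUKS2: entering the keyslot section
        elif line.startswith(' '):
            # LUKS2 lists keyslots as indented numbers under 'Keyslots:'.
            if keyslot_area and len(line) > 2 and line[2] in '0123456789':
                keyslot_count += 1
        elif line.startswith('\t'):
            pass                         # deeper LUKS2 detail lines
        elif keyslot_re.match(line):
            keyslot_count += 1           # LUKS1: one 'ENABLED' line per slot
        else:
            keyslot_area = False
    return keyslot_count

luks1_dump = "Key Slot 0: ENABLED\nKey Slot 1: ENABLED\nKey Slot 2: DISABLED"
luks2_dump = "Keyslots:\n  0: luks2\n\tKey: 512 bits\n  1: luks2\nTokens:"
```

This count is what backs the `force_remove_last_key` safety check: with fewer than two active slots, removing a key would risk leaving an unopenable container.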
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,128 |
hall module: the hall.com is not available
|
##### SUMMARY
hall module: the hall.com is not available
```
19 description:
20 - "The C(hall) module connects to the U(https://hall.com) messaging API and allows you to deliver notication messages to rooms."
...
24 room_token:
25 description:
26 - "Room token provided to you by setting up the Ansible room integation on U(https://hall.com)"
...
65 HALL_API_ENDPOINT = 'https://hall.com/api/1/services/generic/%s'
```
It looks like https://hall.com is not available
```
test:~$ ping hall.com
ping: hall.com: No address associated with hostname
test:~$ ping www.hall.com
ping: www.hall.com: Name or service not known
test:~$ host hall.com
hall.com mail is handled by 10 mxa.mailgun.org.
hall.com mail is handled by 20 mxb.mailgun.org.
test:~$ host www.hall.com
Host www.hall.com not found: 3(NXDOMAIN)
```
I can't find any information about whether this service still exists.
Therefore, the module must be removed or fixed.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
```lib/ansible/modules/notification/hall.py```
##### ANSIBLE VERSION
```
2.8
```
|
https://github.com/ansible/ansible/issues/62128
|
https://github.com/ansible/ansible/pull/62152
|
5b3526535c9e2f9e117e73d8fbea24b6f9c849ef
|
8b42de29fac4093c976478ee893845c06ed4bcd6
| 2019-09-11T09:01:53Z |
python
| 2019-09-11T19:33:03Z |
lib/ansible/modules/notification/hall.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2015, Billy Kimble <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = """
module: hall
short_description: Send notification to Hall
description:
- "The C(hall) module connects to the U(https://hall.com) messaging API and allows you to deliver notification messages to rooms."
version_added: "2.0"
author: Billy Kimble (@bkimble) <[email protected]>
options:
room_token:
description:
- "Room token provided to you by setting up the Ansible room integration on U(https://hall.com)"
required: true
msg:
description:
- The message you wish to deliver as a notification
required: true
title:
description:
- The title of the message
required: true
picture:
description:
- >
The full URL to the image you wish to use for the Icon of the message. Defaults to
U(http://cdn2.hubspot.net/hub/330046/file-769078210-png/Official_Logos/ansible_logo_black_square_small.png?t=1421076128627)
required: false
"""
EXAMPLES = """
- name: Send Hall notification
hall:
room_token: <hall room integration token>
title: Nginx
msg: 'Created virtual host file on {{ inventory_hostname }}'
delegate_to: localhost
- name: Send Hall notification if EC2 servers were created.
hall:
room_token: <hall room integration token>
title: Server Creation
msg: 'Created instance {{ item.id }} of type {{ item.instance_type }}.\\nInstance can be reached at {{ item.public_ip }} in the {{ item.region }} region.'
delegate_to: localhost
when: ec2.instances|length > 0
with_items: '{{ ec2.instances }}'
"""
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.urls import fetch_url
HALL_API_ENDPOINT = 'https://hall.com/api/1/services/generic/%s'
def send_request_to_hall(module, room_token, payload):
headers = {'Content-Type': 'application/json'}
payload = module.jsonify(payload)
api_endpoint = HALL_API_ENDPOINT % (room_token)
response, info = fetch_url(module, api_endpoint, data=payload, headers=headers)
if info['status'] != 200:
secure_url = HALL_API_ENDPOINT % ('[redacted]')
module.fail_json(msg=" failed to send %s to %s: %s" % (payload, secure_url, info['msg']))
def main():
module = AnsibleModule(
argument_spec=dict(
room_token=dict(type='str', required=True),
msg=dict(type='str', required=True),
title=dict(type='str', required=True),
picture=dict(type='str',
default='http://cdn2.hubspot.net/hub/330046/file-769078210-png/Official_Logos/ansible_logo_black_square_small.png?t=1421076128627'),
)
)
room_token = module.params['room_token']
message = module.params['msg']
title = module.params['title']
picture = module.params['picture']
payload = {'title': title, 'message': message, 'picture': picture}
send_request_to_hall(module, room_token, payload)
module.exit_json(msg="OK")
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,627 |
junos_user encrypted_password should be set no_log
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The junos_user encrypted_password option does not set no_log.
So, I get "[WARNING]: Module did not set no_log for encrypted_password".
It should set no_log.
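The warning disappears once the option declares `no_log`. Below is a minimal sketch, not the actual patch, of how the `encrypted_password` entry in the module's argument spec could be marked so Ansible censors it; the small helper is purely illustrative:

```python
# Hypothetical sketch: mark encrypted_password with no_log=True so that
# Ansible censors its value in logs. Option names mirror junos_user's
# element_spec, but this is an illustration, not the real module code.
element_spec = dict(
    name=dict(),
    encrypted_password=dict(no_log=True),  # even hashed values are censored, to be safe
    sshkey=dict(),
)


def no_log_options(spec):
    """Return the option names that would be censored in module logs."""
    return sorted(name for name, opts in spec.items() if opts.get('no_log'))
```

With this spec, `no_log_options(element_spec)` reports `encrypted_password` as the single censored option, which is what the warning asks for.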
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
junos_user module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ansible --version
ansible 2.8.3
config file = /vagrant/test/ansible.cfg
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/ansible2830/lib64/python3.6/site-packages/ansible
executable location = /home/vagrant/ansible2830/bin/ansible
python version = 3.6.8 (default, May 2 2019, 20:40:44) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
$ ansible-config dump --only-changed
HOST_KEY_CHECKING(/vagrant/test/ansible.cfg) = False
RETRY_FILES_ENABLED(/vagrant/test/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
$ cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
$ uname -a
Linux centos7 3.10.0-862.14.4.el7.x86_64 #1 SMP Wed Sep 26 15:12:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
$ ansible-playbook -i inventory/inventory.ini junos_user.yml
<!--- Paste example playbooks or commands between quotes below -->
- playbook
```yaml
---
- hosts: junos
gather_facts: no
tasks:
- name: set junos user
junos_user:
name: testuser
sshkey: "{{ lookup('file', '/home/vagrant/user_keys/admin01_id_rsa.pub') }}"
role: super-user
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
$ ansible-playbook -i inventory/inventory.ini network_set_junos_user.yml
PLAY [junos] ******************************************************************************************************
TASK [set junos user] *********************************************************************************************
[WARNING]: Platform linux on host junos01 is using the discovered Python interpreter at /usr/bin/python, but
future installation of another Python interpreter could change this. See
https://docs.ansible.com/ansible/2.8/reference_appendices/interpreter_discovery.html for more information.
changed: [junos01]
PLAY RECAP ********************************************************************************************************
junos01 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
(ansible2830) [vagrant@centos7 sec5]$ ansible-playbook -i inventory/inventory.ini network_set_junos_user.yml
PLAY [junos] ******************************************************************************************************
TASK [set junos user] *********************************************************************************************
[WARNING]: Module did not set no_log for encrypted_password
[WARNING]: Platform linux on host junos01 is using the discovered Python interpreter at /usr/bin/python, but
future installation of another Python interpreter could change this. See
https://docs.ansible.com/ansible/2.8/reference_appendices/interpreter_discovery.html for more information.
changed: [junos01]
PLAY RECAP ********************************************************************************************************
junos01 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/60627
|
https://github.com/ansible/ansible/pull/62184
|
cd4882e2297e78e46b676663d9c899148a16cc4f
|
4e6270750a748ae854481e157821f51eda36ded0
| 2019-08-15T09:20:47Z |
python
| 2019-09-12T10:38:40Z |
lib/ansible/modules/network/junos/junos_user.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2017, Ansible by Red Hat, inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'network'}
DOCUMENTATION = """
---
module: junos_user
version_added: "2.3"
author: "Peter Sprygada (@privateip)"
short_description: Manage local user accounts on Juniper JUNOS devices
description:
- This module manages locally configured user accounts on remote
network devices running the JUNOS operating system. It provides
a set of arguments for creating, removing and updating locally
defined accounts
extends_documentation_fragment: junos
options:
aggregate:
description:
- The C(aggregate) argument defines a list of users to be configured
on the remote device. The list of users will be compared against
the current users and only changes will be added or removed from
the device configuration. This argument is mutually exclusive with
the name argument.
version_added: "2.4"
aliases: ['users', 'collection']
name:
description:
- The C(name) argument defines the username of the user to be created
on the system. This argument must follow appropriate usernaming
conventions for the target device running JUNOS. This argument is
mutually exclusive with the C(aggregate) argument.
full_name:
description:
- The C(full_name) argument provides the full name of the user
account to be created on the remote device. This argument accepts
any text string value.
role:
description:
- The C(role) argument defines the role of the user account on the
remote system. User accounts can have more than one role
configured.
choices: ['operator', 'read-only', 'super-user', 'unauthorized']
sshkey:
description:
- The C(sshkey) argument defines the public SSH key to be configured
for the user account on the remote system. This argument must
be a valid SSH key
encrypted_password:
description:
- The C(encrypted_password) argument set already hashed password
for the user account on the remote system.
version_added: "2.8"
purge:
description:
- The C(purge) argument instructs the module to consider the
users definition absolute. It will remove any previously configured
users on the device with the exception of the current defined
set of aggregate.
type: bool
default: 'no'
state:
description:
- The C(state) argument configures the state of the user definitions
as it relates to the device operational configuration. When set
to I(present), the user should be configured in the device active
configuration and when set to I(absent) the user should not be
in the device active configuration
default: present
choices: ['present', 'absent']
active:
description:
- Specifies whether or not the configuration is active or deactivated
type: bool
default: 'yes'
version_added: "2.4"
requirements:
- ncclient (>=v0.5.2)
notes:
- This module requires the netconf system service be enabled on
the remote device being managed.
- Tested against vSRX JUNOS version 15.1X49-D15.4, vqfx-10000 JUNOS Version 15.1X53-D60.4.
- Recommended connection is C(netconf). See L(the Junos OS Platform Options,../network/user_guide/platform_junos.html).
- This module also works with C(local) connections for legacy playbooks.
"""
EXAMPLES = """
- name: create new user account
junos_user:
name: ansible
role: super-user
sshkey: "{{ lookup('file', '~/.ssh/ansible.pub') }}"
state: present
- name: remove a user account
junos_user:
name: ansible
state: absent
- name: remove all user accounts except ansible
junos_user:
aggregate:
- name: ansible
purge: yes
- name: set user password
junos_user:
name: ansible
role: super-user
encrypted_password: "{{ 'my-password' | password_hash('sha512') }}"
state: present
- name: Create list of users
junos_user:
aggregate:
- {name: test_user1, full_name: test_user2, role: operator, state: present}
- {name: test_user2, full_name: test_user2, role: read-only, state: present}
- name: Delete list of users
junos_user:
aggregate:
- {name: test_user1, full_name: test_user2, role: operator, state: absent}
- {name: test_user2, full_name: test_user2, role: read-only, state: absent}
"""
RETURN = """
diff.prepared:
description: Configuration difference before and after applying change.
returned: when configuration is changed and diff option is enabled.
type: str
sample: >
[edit system login]
+ user test-user {
+ uid 2005;
+ class read-only;
+ }
"""
from functools import partial
from copy import deepcopy
from ansible.module_utils._text import to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.connection import ConnectionError
from ansible.module_utils.network.common.utils import remove_default_spec
from ansible.module_utils.network.junos.junos import junos_argument_spec, get_connection, tostring
from ansible.module_utils.network.junos.junos import commit_configuration, discard_changes
from ansible.module_utils.network.junos.junos import load_config, locked_config
from ansible.module_utils.six import iteritems
try:
from lxml.etree import Element, SubElement
except ImportError:
from xml.etree.ElementTree import Element, SubElement
ROLES = ['operator', 'read-only', 'super-user', 'unauthorized']
USE_PERSISTENT_CONNECTION = True
def handle_purge(module, want):
want_users = [item['name'] for item in want]
element = Element('system')
login = SubElement(element, 'login')
conn = get_connection(module)
try:
reply = conn.execute_rpc(tostring(Element('get-configuration')), ignore_warning=False)
except ConnectionError as exc:
module.fail_json(msg=to_text(exc, errors='surrogate_then_replace'))
users = reply.xpath('configuration/system/login/user/name')
if users:
for item in users:
name = item.text
if name not in want_users and name != 'root':
user = SubElement(login, 'user', {'operation': 'delete'})
SubElement(user, 'name').text = name
if element.xpath('/system/login/user/name'):
return element
def map_obj_to_ele(module, want):
element = Element('system')
login = SubElement(element, 'login')
for item in want:
if item['state'] != 'present':
if item['name'] == 'root':
module.fail_json(msg="cannot delete the 'root' account.")
operation = 'delete'
else:
operation = 'merge'
if item['name'] != 'root':
user = SubElement(login, 'user', {'operation': operation})
SubElement(user, 'name').text = item['name']
else:
user = auth = SubElement(element, 'root-authentication', {'operation': operation})
if operation == 'merge':
if item['name'] == 'root' and (not item['active'] or item['role'] or item['full_name']):
module.fail_json(msg="'root' account cannot be deactivated or be assigned a role and a full name")
if item['active']:
user.set('active', 'active')
else:
user.set('inactive', 'inactive')
if item['role']:
SubElement(user, 'class').text = item['role']
if item.get('full_name'):
SubElement(user, 'full-name').text = item['full_name']
if item.get('sshkey'):
if 'auth' not in locals():
auth = SubElement(user, 'authentication')
if 'ssh-rsa' in item['sshkey']:
ssh_rsa = SubElement(auth, 'ssh-rsa')
elif 'ssh-dss' in item['sshkey']:
ssh_rsa = SubElement(auth, 'ssh-dsa')
elif 'ecdsa-sha2' in item['sshkey']:
ssh_rsa = SubElement(auth, 'ssh-ecdsa')
elif 'ssh-ed25519' in item['sshkey']:
ssh_rsa = SubElement(auth, 'ssh-ed25519')
key = SubElement(ssh_rsa, 'name').text = item['sshkey']
if item.get('encrypted_password'):
if 'auth' not in locals():
auth = SubElement(user, 'authentication')
SubElement(auth, 'encrypted-password').text = item['encrypted_password']
return element
def get_param_value(key, item, module):
# if key doesn't exist in the item, get it from module.params
if not item.get(key):
value = module.params[key]
# if key does exist, do a type check on it to validate it
else:
value_type = module.argument_spec[key].get('type', 'str')
type_checker = module._CHECK_ARGUMENT_TYPES_DISPATCHER[value_type]
type_checker(item[key])
value = item[key]
# validate the param value (if validator func exists)
validator = globals().get('validate_%s' % key)
if all((value, validator)):
validator(value, module)
return value
def map_params_to_obj(module):
aggregate = module.params['aggregate']
if not aggregate:
if not module.params['name'] and module.params['purge']:
return list()
elif not module.params['name']:
module.fail_json(msg='missing required argument: name')
else:
collection = [{'name': module.params['name']}]
else:
collection = list()
for item in aggregate:
if not isinstance(item, dict):
collection.append({'name': item})
elif 'name' not in item:
module.fail_json(msg='missing required argument: name')
else:
collection.append(item)
objects = list()
for item in collection:
get_value = partial(get_param_value, item=item, module=module)
item.update({
'full_name': get_value('full_name'),
'role': get_value('role'),
'encrypted_password': get_value('encrypted_password'),
'sshkey': get_value('sshkey'),
'state': get_value('state'),
'active': get_value('active')
})
for key, value in iteritems(item):
# validate the param value (if validator func exists)
validator = globals().get('validate_%s' % key)
if all((value, validator)):
validator(value, module)
objects.append(item)
return objects
def main():
""" main entry point for module execution
"""
element_spec = dict(
name=dict(),
full_name=dict(),
role=dict(choices=ROLES),
encrypted_password=dict(),
sshkey=dict(),
state=dict(choices=['present', 'absent'], default='present'),
active=dict(type='bool', default=True)
)
aggregate_spec = deepcopy(element_spec)
aggregate_spec['name'] = dict(required=True)
# remove default in aggregate spec, to handle common arguments
remove_default_spec(aggregate_spec)
argument_spec = dict(
aggregate=dict(type='list', elements='dict', options=aggregate_spec, aliases=['collection', 'users']),
purge=dict(default=False, type='bool')
)
argument_spec.update(element_spec)
argument_spec.update(junos_argument_spec)
mutually_exclusive = [['aggregate', 'name']]
module = AnsibleModule(argument_spec=argument_spec,
mutually_exclusive=mutually_exclusive,
supports_check_mode=True)
warnings = list()
result = {'changed': False, 'warnings': warnings}
want = map_params_to_obj(module)
ele = map_obj_to_ele(module, want)
purge_request = None
if module.params['purge']:
purge_request = handle_purge(module, want)
with locked_config(module):
if purge_request:
load_config(module, tostring(purge_request), warnings, action='replace')
diff = load_config(module, tostring(ele), warnings, action='merge')
commit = not module.check_mode
if diff:
if commit:
commit_configuration(module)
else:
discard_changes(module)
result['changed'] = True
if module._diff:
result['diff'] = {'prepared': diff}
module.exit_json(**result)
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,731 |
any_errors_fatal does not work
|
##### SUMMARY
Even in the simplest playbook, the `any_errors_fatal: true` construct does not work
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
any_errors_fatal
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
config file = None
configured module search path = ['/Users/tennis.smith/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.8.3/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.4 (default, Jul 9 2019, 18:13:23) [Clang 10.0.1 (clang-1001.0.46.4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
RETRY_FILES_ENABLED(env: ANSIBLE_RETRY_FILES_ENABLED) = False
```
##### OS / ENVIRONMENT
Any
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
```
ansible-playbook -i <hosts> ./create_vlan.yml --extra-vars '@vlan_103.yml'
```
<!--- Paste example playbooks or commands between quotes below -->
VLAN config file:
```yaml
---
vlan_number: 103
vlan_name: test_103
ports:
n9kv1:
- Ethernet1/88
- Ethernet1/89
- Ethernet1/90
n9kv2:
- Ethernet1/99
```
Playbook:
```yaml
- name: vlan testing
hosts: switches
any_errors_fatal: true
connection: local
gather_facts: no
tasks:
- name: Build VLAN
nxos_vlan:
vlan_id: "{{ vlan_number }}"
name: "{{ vlan_name }}"
state: present
interfaces: "{{ ports[inventory_hostname] }}"
when: inventory_hostname in ports
```
##### EXPECTED RESULTS
The playbook should fail on the first error found.
##### ACTUAL RESULTS
There are 2 errors in the output and the playbook continues to the end. It should have aborted on the first error. The error itself was reported in a separate issue (#60729).
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [vlan testing] ******************************************************************************************************************
TASK [Build VLAN] ********************************************************************************************************************
skipping: [n7ksw1]
skipping: [n7ksw2]
fatal: [n95ksw2]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "msg": "switchport\r\r\n ^\r\n% Invalid command at '^' marker.\r\n\rJER-LAB-95KSW2(config-if)# "}
fatal: [n95ksw1]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "msg": "switchport\r\r\n ^\r\n% Invalid command at '^' marker.\r\n\rJER-LAB-N95KSW1(config-if)# "}
NO MORE HOSTS LEFT *******************************************************************************************************************
PLAY RECAP ***************************************************************************************************************************
n7ksw1 : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
n7ksw2 : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
n95ksw1 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
n95ksw2 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/60731
|
https://github.com/ansible/ansible/pull/62029
|
0e72cbd45114dff22dcd464529ec9918d9ecaf39
|
09f1c286e00828294802623da4a6b4880ec279a5
| 2019-08-17T14:36:21Z |
python
| 2019-09-12T20:15:02Z |
docs/docsite/rst/user_guide/playbooks_delegation.rst
|
.. _playbooks_delegation:
Delegation, Rolling Updates, and Local Actions
==============================================
.. contents:: Topics
Being designed for multi-tier deployments since the beginning, Ansible is great at doing things on one host on behalf of another, or doing local steps with reference to some remote hosts.
This in particular is very applicable when setting up continuous deployment infrastructure or zero downtime rolling updates, where you might be talking with load balancers or monitoring systems.
Additional features allow for tuning the orders in which things complete, and assigning a batch window size for how many machines to process at once during a rolling update.
This section covers all of these features. For examples of these items in use, `please see the ansible-examples repository <https://github.com/ansible/ansible-examples/>`_. There are quite a few examples of zero-downtime update procedures for different kinds of applications.
You should also consult the :ref:`module documentation<modules_by_category>` section. Modules like :ref:`ec2_elb<ec2_elb_module>`, :ref:`nagios<nagios_module>`, :ref:`bigip_pool<bigip_pool_module>`, and other :ref:`network_modules` dovetail neatly with the concepts mentioned here.
You'll also want to read up on :ref:`playbooks_reuse_roles`, as the 'pre_task' and 'post_task' concepts are the places where you would typically call these modules.
Be aware that certain tasks are impossible to delegate, e.g. `include`, `add_host`, `debug`, etc., as they always execute on the controller.
.. _rolling_update_batch_size:
Rolling Update Batch Size
`````````````````````````
By default, Ansible will try to manage all of the machines referenced in a play in parallel. For a rolling update use case, you can define how many hosts Ansible should manage at a single time by using the ``serial`` keyword::
- name: test play
hosts: webservers
serial: 2
gather_facts: False
tasks:
- name: task one
command: hostname
- name: task two
command: hostname
In the above example, if we had 4 hosts in the group 'webservers', 2
would complete the play completely before moving on to the next 2 hosts::
PLAY [webservers] ****************************************
TASK [task one] ******************************************
changed: [web2]
changed: [web1]
TASK [task two] ******************************************
changed: [web1]
changed: [web2]
PLAY [webservers] ****************************************
TASK [task one] ******************************************
changed: [web3]
changed: [web4]
TASK [task two] ******************************************
changed: [web3]
changed: [web4]
PLAY RECAP ***********************************************
web1 : ok=2 changed=2 unreachable=0 failed=0
web2 : ok=2 changed=2 unreachable=0 failed=0
web3 : ok=2 changed=2 unreachable=0 failed=0
web4 : ok=2 changed=2 unreachable=0 failed=0
The ``serial`` keyword can also be specified as a percentage, which will be applied to the total number of hosts in a
play, in order to determine the number of hosts per pass::
- name: test play
hosts: webservers
serial: "30%"
If the number of hosts does not divide equally into the number of passes, the final pass will contain the remainder.
As of Ansible 2.2, the batch sizes can be specified as a list, as follows::
- name: test play
hosts: webservers
serial:
- 1
- 5
- 10
In the above example, the first batch would contain a single host, the next would contain 5 hosts, and (if there are any hosts left),
every following batch would contain 10 hosts until all available hosts are used.
It is also possible to list multiple batch sizes as percentages::
- name: test play
hosts: webservers
serial:
- "10%"
- "20%"
- "100%"
You can also mix and match the values::
- name: test play
hosts: webservers
serial:
- 1
- 5
- "20%"
.. note::
No matter how small the percentage, the number of hosts per pass will always be 1 or greater.
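The batching rules described above (fixed sizes, percentage sizes, the last entry repeating, the remainder landing in the final pass, and the one-host minimum) can be sketched roughly as follows. This is an illustration of the documented behaviour, not Ansible's actual implementation:

```python
import math


def batch_sizes(serial, total_hosts):
    """Sketch of how a list of ``serial`` values maps to batch sizes.

    ``serial`` is a list of ints and/or percentage strings. The last
    entry repeats until all hosts are covered, percentages apply to the
    total host count, and every batch contains at least one host.
    (Illustrative only, not Ansible's code.)
    """
    batches = []
    remaining = total_hosts
    i = 0
    while remaining > 0:
        spec = serial[min(i, len(serial) - 1)]
        if isinstance(spec, str) and spec.endswith('%'):
            size = math.floor(total_hosts * int(spec.rstrip('%')) / 100)
        else:
            size = int(spec)
        size = max(1, min(size, remaining))  # at least 1 host, never more than remain
        batches.append(size)
        remaining -= size
        i += 1
    return batches
```

For example, with 26 webservers and ``serial: [1, 5, 10]`` the sketch yields batches of 1, 5, 10, and 10 hosts, matching the description above.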
.. _maximum_failure_percentage:
Maximum Failure Percentage
``````````````````````````
By default, Ansible will continue executing actions as long as there are hosts in the batch that have not yet failed. The batch size for a play is determined by the ``serial`` parameter. If ``serial`` is not set, then batch size is all the hosts specified in the ``hosts:`` field.
In some situations, such as with the rolling updates described above, it may be desirable to abort the play when a
certain threshold of failures have been reached. To achieve this, you can set a maximum failure
percentage on a play as follows::
- hosts: webservers
max_fail_percentage: 30
serial: 10
In the above example, if more than 3 of the 10 servers in the group were to fail, the rest of the play would be aborted.
.. note::
The percentage set must be exceeded, not equaled. For example, if serial were set to 4 and you wanted the task to abort
when 2 of the systems failed, the percentage should be set at 49 rather than 50.
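The "exceeded, not equaled" rule can be expressed as a one-line check. This is a sketch of the documented behaviour, not Ansible's code:

```python
def should_abort(failed, batch_size, max_fail_percentage):
    """Abort only when the failure percentage strictly exceeds the limit."""
    return failed * 100.0 / batch_size > max_fail_percentage
```

With ``serial: 4``, two failed hosts abort the play at ``max_fail_percentage: 49`` (50% > 49) but not at ``max_fail_percentage: 50`` (50% is not strictly greater than 50), which is why the note above recommends 49 rather than 50.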
.. _delegation:
Delegation
``````````
This isn't actually rolling update specific but comes up frequently in those cases.
If you want to perform a task on one host with reference to other hosts, use the 'delegate_to' keyword on a task.
This is ideal for placing nodes in a load balanced pool, or removing them. It is also very useful for controlling outage windows.
Be aware that it does not make sense to delegate all tasks; `debug`, `add_host`, `include`, etc. always get executed on the controller.
Using this with the 'serial' keyword to control the number of hosts executing at one time is also a good idea::
---
- hosts: webservers
serial: 5
tasks:
- name: take out of load balancer pool
command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
delegate_to: 127.0.0.1
- name: actual steps would go here
yum:
name: acme-web-stack
state: latest
- name: add back to load balancer pool
command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
delegate_to: 127.0.0.1
These commands will run on 127.0.0.1, which is the machine running Ansible. There is also a shorthand syntax that you can use on a per-task basis: 'local_action'. Here is the same playbook as above, but using the shorthand syntax for delegating to 127.0.0.1::
---
# ...
tasks:
- name: take out of load balancer pool
local_action: command /usr/bin/take_out_of_pool {{ inventory_hostname }}
# ...
- name: add back to load balancer pool
local_action: command /usr/bin/add_back_to_pool {{ inventory_hostname }}
A common pattern is to use a local action to call 'rsync' to recursively copy files to the managed servers.
Here is an example::
---
# ...
tasks:
- name: recursively copy files from management server to target
local_action: command rsync -a /path/to/files {{ inventory_hostname }}:/path/to/target/
Note that you must have passphrase-less SSH keys or an ssh-agent configured for this to work, otherwise rsync
will need to ask for a passphrase.
In case you have to specify more arguments you can use the following syntax::
---
# ...
tasks:
- name: Send summary mail
local_action:
module: mail
subject: "Summary Mail"
to: "{{ mail_recipient }}"
body: "{{ mail_body }}"
run_once: True
The `ansible_host` variable (`ansible_ssh_host` in 1.x or specific to ssh/paramiko plugins) reflects the host a task is delegated to.
.. _delegate_facts:
Delegated facts
```````````````
By default, any facts gathered by a delegated task are assigned to the `inventory_hostname` (the current host) instead of the host which actually produced the facts (the delegated-to host).
The directive `delegate_facts` may be set to `True` to assign the task's gathered facts to the delegated host instead of the current one::
- hosts: app_servers
tasks:
- name: gather facts from db servers
setup:
delegate_to: "{{item}}"
delegate_facts: True
loop: "{{groups['dbservers']}}"
The above will gather facts for the machines in the dbservers group and assign the facts to those machines and not to app_servers.
This way you can lookup `hostvars['dbhost1']['ansible_default_ipv4']['address']` even though dbservers were not part of the play, or left out by using `--limit`.
.. _run_once:
Run Once
````````
In some cases there may be a need to only run a task one time for a batch of hosts.
This can be achieved by configuring "run_once" on a task::
---
# ...
tasks:
# ...
- command: /opt/application/upgrade_db.py
run_once: true
# ...
This directive forces the task to attempt execution on the first host in the current batch and then applies all results and facts to all the hosts in the same batch.
This approach is similar to applying a conditional to a task such as::
- command: /opt/application/upgrade_db.py
when: inventory_hostname == webservers[0]
But the results are applied to all the hosts.
Like most tasks, this can be optionally paired with "delegate_to" to specify an individual host to execute on::
- command: /opt/application/upgrade_db.py
run_once: true
delegate_to: web01.example.org
As always with delegation, the action will be executed on the delegated host, but the information is still that of the original host in the task.
.. note::
When used together with "serial", tasks marked as "run_once" will be run on one host in *each* serial batch.
If it's crucial that the task is run only once regardless of "serial" mode, use
:code:`when: inventory_hostname == ansible_play_hosts_all[0]` construct.
.. note::
Any conditional (i.e. `when:`) will use the variables of the 'first host' to decide whether the task runs; no other hosts will be tested.
.. note::
If you want to avoid the default behaviour of setting the fact for all hosts, set `delegate_facts: True` for the specific task or block.
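These directives can be combined; as a hedged sketch (the ``dbservers`` group name is illustrative), the following gathers facts from each database server only once per play and assigns them to the delegated hosts rather than to every host in the batch::

    - name: gather facts from db servers only once
      setup:
      delegate_to: "{{item}}"
      delegate_facts: True
      run_once: true
      loop: "{{groups['dbservers']}}"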
.. _local_playbooks:
Local Playbooks
```````````````
It may be useful to run a playbook locally, rather than by connecting over SSH. This can be handy
for assuring the configuration of a system by putting a playbook in a crontab. This may also be used
to run a playbook inside an OS installer, such as an Anaconda kickstart.
To run an entire playbook locally, just set the "hosts:" line to "hosts: 127.0.0.1" and then run the playbook like so::
ansible-playbook playbook.yml --connection=local
Alternatively, a local connection can be used in a single playbook play, even if other plays in the playbook
use the default remote connection type::
- hosts: 127.0.0.1
connection: local
.. note::
If you set the connection to local and there is no ansible_python_interpreter set, modules will run under /usr/bin/python and not
under {{ ansible_playbook_python }}. Be sure to set ansible_python_interpreter: "{{ ansible_playbook_python }}" in
host_vars/localhost.yml, for example. You can avoid this issue by using ``local_action`` or ``delegate_to: localhost`` instead.
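For example, a minimal ``host_vars/localhost.yml`` implementing the interpreter override described in the note above::

    # host_vars/localhost.yml
    ansible_python_interpreter: "{{ ansible_playbook_python }}"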
.. _interrupt_execution_on_any_error:
Interrupt execution on any error
````````````````````````````````
With the ``any_errors_fatal`` option, any failure on any host in a multi-host play will be treated as fatal and Ansible will exit immediately without waiting for the other hosts.
Sometimes ``serial`` execution is unsuitable; the number of hosts is unpredictable (because of dynamic inventory) and speed is crucial (simultaneous execution is required), but all tasks must be 100% successful to continue playbook execution.
For example, consider a service located in many datacenters with some load balancers to pass traffic from users to the service. There is a deploy playbook to upgrade service deb-packages. The playbook has the stages:
- disable traffic on load balancers (must be turned off simultaneously)
- gracefully stop the service
- upgrade software (this step includes tests and starting the service)
- enable traffic on the load balancers (which should be turned on simultaneously)
The service can't be stopped with "alive" load balancers; they must be disabled first. Because of this, the second stage can't be played if any server failed in the first stage.
For datacenter "A", the playbook can be written this way::
---
- hosts: load_balancers_dc_a
any_errors_fatal: True
tasks:
- name: 'shutting down datacenter [ A ]'
command: /usr/bin/disable-dc
- hosts: frontends_dc_a
tasks:
- name: 'stopping service'
command: /usr/bin/stop-software
- name: 'updating software'
command: /usr/bin/upgrade-software
- hosts: load_balancers_dc_a
tasks:
- name: 'Starting datacenter [ A ]'
command: /usr/bin/enable-dc
In this example Ansible will start the software upgrade on the front ends only if all of the load balancers are successfully disabled.
.. seealso::
:ref:`playbooks_intro`
An introduction to playbooks
`Ansible Examples on GitHub <https://github.com/ansible/ansible-examples>`_
Many examples of full-stack deployments
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,731 |
any_errors_fatal does not work
|
##### SUMMARY
Even in the simplest playbook, the `any_errors_fatal: true` construct does not work
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
any_errors_fatal
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
config file = None
configured module search path = ['/Users/tennis.smith/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.8.3/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.4 (default, Jul 9 2019, 18:13:23) [Clang 10.0.1 (clang-1001.0.46.4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
RETRY_FILES_ENABLED(env: ANSIBLE_RETRY_FILES_ENABLED) = False
```
##### OS / ENVIRONMENT
Any
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
```
ansible-playbook -i <hosts> ./create_vlan.yml --extra-vars '@vlan_103.yml'
```
<!--- Paste example playbooks or commands between quotes below -->
VLAN config file:
```yaml
---
vlan_number: 103
vlan_name: test_103
ports:
n9kv1:
- Ethernet1/88
- Ethernet1/89
- Ethernet1/90
n9kv2:
- Ethernet1/99
```
Playbook:
```yaml
- name: vlan testing
hosts: switches
any_errors_fatal: true
connection: local
gather_facts: no
tasks:
- name: Build VLAN
nxos_vlan:
vlan_id: "{{ vlan_number }}"
name: "{{ vlan_name }}"
state: present
interfaces: "{{ ports[inventory_hostname] }}"
when: inventory_hostname in ports
```
##### EXPECTED RESULTS
The playbook should fail on the first error found.
##### ACTUAL RESULTS
There are 2 errors in the output and the playbook continues to the end. It should have aborted on the first error. The error itself was reported in a separate issue (#60729).
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [vlan testing] ******************************************************************************************************************
TASK [Build VLAN] ********************************************************************************************************************
skipping: [n7ksw1]
skipping: [n7ksw2]
fatal: [n95ksw2]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "msg": "switchport\r\r\n ^\r\n% Invalid command at '^' marker.\r\n\rJER-LAB-95KSW2(config-if)# "}
fatal: [n95ksw1]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "msg": "switchport\r\r\n ^\r\n% Invalid command at '^' marker.\r\n\rJER-LAB-N95KSW1(config-if)# "}
NO MORE HOSTS LEFT *******************************************************************************************************************
PLAY RECAP ***************************************************************************************************************************
n7ksw1 : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
n7ksw2 : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
n95ksw1 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
n95ksw2 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/60731
|
https://github.com/ansible/ansible/pull/62029
|
0e72cbd45114dff22dcd464529ec9918d9ecaf39
|
09f1c286e00828294802623da4a6b4880ec279a5
| 2019-08-17T14:36:21Z |
python
| 2019-09-12T20:15:02Z |
docs/docsite/rst/user_guide/playbooks_error_handling.rst
|
Error Handling In Playbooks
===========================
.. contents:: Topics
By default, Ansible checks the return codes of commands and modules and
fails fast, forcing an error to be dealt with unless you decide otherwise.
Sometimes a command that returns a code other than 0 isn't an error. Sometimes a command might not always
need to report that it 'changed' the remote system. This section describes how to change
the default behavior of Ansible for certain tasks so output and error handling behavior is
as desired.
.. _ignoring_failed_commands:
Ignoring Failed Commands
````````````````````````
Generally playbooks will stop executing any more steps on a host that has a task fail.
Sometimes, though, you want to continue on. To do so, write a task that looks like this::
- name: this will not be counted as a failure
command: /bin/false
ignore_errors: yes
Note that the above system only governs the failure status of the particular task,
so if you have an undefined variable or a syntax error, it will still raise an error that users will need to address.
Note that this will not prevent failures on connection or execution issues.
The task must be able to run and return a value of 'failed' for this feature to work.
.. _resetting_unreachable:
Resetting Unreachable Hosts
```````````````````````````
.. versionadded:: 2.2
Connection failures set hosts as 'UNREACHABLE', which will remove them from the list of active hosts for the run.
To recover from these issues you can use `meta: clear_host_errors` to have all currently flagged hosts reactivated,
so subsequent tasks can try to use them again.
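A minimal sketch of recovering unreachable hosts mid-play (the task names are illustrative)::

    - name: attempt to reach every host
      ping:

    - name: reactivate any hosts flagged as unreachable
      meta: clear_host_errors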
.. _handlers_and_failure:
Handlers and Failure
````````````````````
When a task fails on a host, handlers which were previously notified
will *not* be run on that host. This can lead to cases where an unrelated failure
can leave a host in an unexpected state. For example, a task could update
a configuration file and notify a handler to restart some service. If a
task later on in the same play fails, the service will not be restarted despite
the configuration change.
You can change this behavior with the ``--force-handlers`` command-line option,
or by including ``force_handlers: True`` in a play, or ``force_handlers = True``
in ansible.cfg. When handlers are forced, they will run when notified even
if a task fails on that host. (Note that certain errors could still prevent
the handler from running, such as a host becoming unreachable.)
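A minimal sketch of forcing handlers at the play level (the service, file, and group names are illustrative)::

    - hosts: webservers
      force_handlers: True
      tasks:
        - name: update configuration
          template:
            src: app.conf.j2
            dest: /etc/app.conf
          notify: restart app
      handlers:
        - name: restart app
          service:
            name: app
            state: restarted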
.. _controlling_what_defines_failure:
Controlling What Defines Failure
````````````````````````````````
Ansible lets you define what "failure" means in each task using the ``failed_when`` conditional. As with all conditionals in Ansible, lists of multiple ``failed_when`` conditions are joined with an implicit ``and``, meaning the task only fails when *all* conditions are met. If you want to trigger a failure when any of the conditions is met, you must define the conditions in a string with an explicit ``or`` operator.
You may check for failure by searching for a word or phrase in the output of a command::
- name: Fail task when the command error output prints FAILED
command: /usr/bin/example-command -x -y -z
register: command_result
failed_when: "'FAILED' in command_result.stderr"
or based on the return code::
- name: Fail task when both files are identical
raw: diff foo/file1 bar/file2
register: diff_cmd
failed_when: diff_cmd.rc == 0 or diff_cmd.rc >= 2
In previous versions of Ansible, this could be accomplished as follows::
- name: this command prints FAILED when it fails
command: /usr/bin/example-command -x -y -z
register: command_result
ignore_errors: True
- name: fail the play if the previous command did not succeed
fail:
msg: "the command failed"
when: "'FAILED' in command_result.stderr"
You can also combine multiple conditions for failure. This task will fail if both conditions are true::
- name: Check if a file exists in temp and fail task if it does
command: ls /tmp/this_should_not_be_here
register: result
failed_when:
- result.rc == 0
- '"No such" not in result.stdout'
If you want the task to fail when only one condition is satisfied, change the ``failed_when`` definition to::
failed_when: result.rc == 0 or "No such" not in result.stdout
If you have too many conditions to fit neatly into one line, you can split them into a multi-line YAML value with ``>``::
- name: example of many failed_when conditions with OR
shell: "./myBinary"
register: ret
failed_when: >
("No such file or directory" in ret.stdout) or
(ret.stderr != '') or
(ret.rc == 10)
.. _override_the_changed_result:
Overriding The Changed Result
`````````````````````````````
When a shell/command or other module runs, it will typically report
"changed" status based on whether it thinks it affected machine state.
Sometimes you will know, based on the return code
or output that it did not make any changes, and wish to override
the "changed" result such that it does not appear in report output or
does not cause handlers to fire::
tasks:
- shell: /usr/bin/billybass --mode="take me to the river"
register: bass_result
changed_when: "bass_result.rc != 2"
# this will never report 'changed' status
- shell: wall 'beep'
changed_when: False
You can also combine multiple conditions to override the "changed" result::
- command: /bin/fake_command
register: result
ignore_errors: True
changed_when:
- '"ERROR" in result.stderr'
- result.rc == 2
Aborting the play
`````````````````
Sometimes it's desirable to abort the entire play on failure, not just skip remaining tasks for a host.
The ``any_errors_fatal`` play option will stop execution of the play as soon as any task results in an error::
- hosts: somehosts
any_errors_fatal: true
roles:
- myrole
For finer-grained control, ``max_fail_percentage`` can be used to abort the run after a given percentage of hosts has failed.
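For example, a sketch that aborts the play once more than 30% of each 10-host batch has failed (the group and role names are illustrative)::

    - hosts: webservers
      max_fail_percentage: 30
      serial: 10
      roles:
        - myrole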
Using blocks
````````````
Most of what you can apply to a single task (with the exception of loops) can be applied at the :ref:`playbooks_blocks` level, which also makes it much easier to set data or directives common to the tasks.
Blocks also introduce the ability to handle errors in a way similar to exceptions in most programming languages.
Blocks only deal with the 'failed' status of a task. A bad task definition or an unreachable host is not a 'rescuable' error::
tasks:
- name: Handle the error
block:
- debug:
msg: 'I execute normally'
- name: i force a failure
command: /bin/false
- debug:
msg: 'I never execute, due to the above task failing, :-('
rescue:
- debug:
msg: 'I caught an error, can do stuff here to fix it, :-)'
This will 'revert' the failed status of the outer ``block`` task for the run and the play will continue as if it had succeeded.
See :ref:`block_error_handling` for more examples.
.. seealso::
:ref:`playbooks_intro`
An introduction to playbooks
:ref:`playbooks_best_practices`
Best practices in playbooks
:ref:`playbooks_conditionals`
Conditional statements in playbooks
:ref:`playbooks_variables`
All about variables
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,083 |
VMware: Modules with internal 'results' key in module return
|
##### SUMMARY
The following modules throw a `[WARNING]: Found internal 'results' key in module return, renamed to 'ansible_module_results'`:
* [ ] `vmware_datastore_maintenancemode`
* [ ] `vmware_host_kernel_manager`
* [ ] `vmware_host_ntp`
* [ ] `vmware_host_service_manager`
* [ ] `vmware_tag`
I've generated the list like this:
`cd lib/ansible/modules/cloud/vmware/ && grep exit_json *.py | grep results= | cut -d ':' -f 1 | cut -d '.' -f 1`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware
##### ANSIBLE VERSION
`devel`
|
https://github.com/ansible/ansible/issues/62083
|
https://github.com/ansible/ansible/pull/62161
|
041c52d629a47e34f755f6c765c213b3a7b896e1
|
1c3effe92e747eb4eef8021032a9ed8321655a97
| 2019-09-10T16:57:53Z |
python
| 2019-09-13T04:27:04Z |
changelogs/fragments/62083-vmware-internal_results.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,083 |
VMware: Modules with internal 'results' key in module return
|
##### SUMMARY
The following modules throw a `[WARNING]: Found internal 'results' key in module return, renamed to 'ansible_module_results'`:
* [ ] `vmware_datastore_maintenancemode`
* [ ] `vmware_host_kernel_manager`
* [ ] `vmware_host_ntp`
* [ ] `vmware_host_service_manager`
* [ ] `vmware_tag`
I've generated the list like this:
`cd lib/ansible/modules/cloud/vmware/ && grep exit_json *.py | grep results= | cut -d ':' -f 1 | cut -d '.' -f 1`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware
##### ANSIBLE VERSION
`devel`
|
https://github.com/ansible/ansible/issues/62083
|
https://github.com/ansible/ansible/pull/62161
|
041c52d629a47e34f755f6c765c213b3a7b896e1
|
1c3effe92e747eb4eef8021032a9ed8321655a97
| 2019-09-10T16:57:53Z |
python
| 2019-09-13T04:27:04Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
No notable changes
Modules
=======
No notable changes
Modules removed
---------------
The following modules no longer exist:
* No notable changes
Deprecation notices
-------------------
No notable changes
Noteworthy module changes
-------------------------
No notable changes
Plugins
=======
No notable changes
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,083 |
VMware: Modules with internal 'results' key in module return
|
##### SUMMARY
The following modules throw a `[WARNING]: Found internal 'results' key in module return, renamed to 'ansible_module_results'`:
* [ ] `vmware_datastore_maintenancemode`
* [ ] `vmware_host_kernel_manager`
* [ ] `vmware_host_ntp`
* [ ] `vmware_host_service_manager`
* [ ] `vmware_tag`
I've generated the list like this:
`cd lib/ansible/modules/cloud/vmware/ && grep exit_json *.py | grep results= | cut -d ':' -f 1 | cut -d '.' -f 1`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware
##### ANSIBLE VERSION
`devel`
|
https://github.com/ansible/ansible/issues/62083
|
https://github.com/ansible/ansible/pull/62161
|
041c52d629a47e34f755f6c765c213b3a7b896e1
|
1c3effe92e747eb4eef8021032a9ed8321655a97
| 2019-09-10T16:57:53Z |
python
| 2019-09-13T04:27:04Z |
lib/ansible/modules/cloud/vmware/vmware_datastore_maintenancemode.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, Abhijeet Kasurde <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: vmware_datastore_maintenancemode
short_description: Place a datastore into maintenance mode
description:
- This module can be used to manage maintenance mode of a datastore.
author:
- "Abhijeet Kasurde (@Akasurde)"
version_added: 2.6
notes:
- Tested on vSphere 5.5, 6.0 and 6.5
requirements:
- "python >= 2.6"
- PyVmomi
options:
datastore:
description:
- Name of datastore to manage.
- If C(datastore_cluster) or C(cluster_name) are not set, this parameter is required.
type: str
datastore_cluster:
description:
- Name of the datastore cluster from all child datastores to be managed.
- If C(datastore) or C(cluster_name) are not set, this parameter is required.
type: str
cluster_name:
description:
- Name of the cluster where datastore is connected to.
- If multiple datastores are connected to the given cluster, then all datastores will be managed by C(state).
- If C(datastore) or C(datastore_cluster) are not set, this parameter is required.
type: str
state:
description:
- If set to C(present), then enter datastore into maintenance mode.
- If set to C(present) and datastore is already in maintenance mode, then no action will be taken.
- If set to C(absent) and datastore is in maintenance mode, then exit maintenance mode.
- If set to C(absent) and datastore is not in maintenance mode, then no action will be taken.
choices: [ present, absent ]
default: present
required: False
type: str
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = '''
- name: Enter datastore into Maintenance Mode
vmware_datastore_maintenancemode:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
datastore: '{{ datastore_name }}'
state: present
delegate_to: localhost
- name: Enter all datastores under cluster into Maintenance Mode
vmware_datastore_maintenancemode:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
cluster_name: '{{ cluster_name }}'
state: present
delegate_to: localhost
- name: Enter all datastores under datastore cluster into Maintenance Mode
vmware_datastore_maintenancemode:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
datastore_cluster: '{{ datastore_cluster_name }}'
state: present
delegate_to: localhost
- name: Exit datastore into Maintenance Mode
vmware_datastore_maintenancemode:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
datastore: '{{ datastore_name }}'
state: absent
delegate_to: localhost
'''
RETURN = '''
results:
description: Action taken for datastore
returned: always
type: dict
sample:
'''
try:
from pyVmomi import vim
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import (PyVmomi, vmware_argument_spec, wait_for_task,
find_cluster_by_name, get_all_objs)
from ansible.module_utils._text import to_native
class VmwareDatastoreMaintenanceMgr(PyVmomi):
def __init__(self, module):
super(VmwareDatastoreMaintenanceMgr, self).__init__(module)
datastore_name = self.params.get('datastore')
cluster_name = self.params.get('cluster_name')
datastore_cluster = self.params.get('datastore_cluster')
self.datastore_objs = []
if datastore_name:
ds = self.find_datastore_by_name(datastore_name=datastore_name)
if not ds:
self.module.fail_json(msg='Failed to find datastore "%(datastore)s".' % self.params)
self.datastore_objs = [ds]
elif cluster_name:
cluster = find_cluster_by_name(self.content, cluster_name)
if not cluster:
self.module.fail_json(msg='Failed to find cluster "%(cluster_name)s".' % self.params)
self.datastore_objs = cluster.datastore
elif datastore_cluster:
datastore_cluster_obj = get_all_objs(self.content, [vim.StoragePod])
if not datastore_cluster_obj:
self.module.fail_json(msg='Failed to find datastore cluster "%(datastore_cluster)s".' % self.params)
for datastore in datastore_cluster_obj.childEntity:
self.datastore_objs.append(datastore)
else:
self.module.fail_json(msg="Please select one of 'cluster_name', 'datastore' or 'datastore_cluster'.")
self.state = self.params.get('state')
def ensure(self):
datastore_results = dict()
change_datastore_list = []
for datastore in self.datastore_objs:
changed = False
if self.state == 'present' and datastore.summary.maintenanceMode != 'normal':
datastore_results[datastore.name] = "Datastore '%s' is already in maintenance mode." % datastore.name
# skip datastores already in the desired state and keep processing the rest
continue
elif self.state == 'absent' and datastore.summary.maintenanceMode == 'normal':
datastore_results[datastore.name] = "Datastore '%s' is not in maintenance mode." % datastore.name
continue
try:
if self.state == 'present':
storage_replacement_result = datastore.DatastoreEnterMaintenanceMode()
task = storage_replacement_result.task
else:
task = datastore.DatastoreExitMaintenanceMode_Task()
success, result = wait_for_task(task)
if success:
changed = True
if self.state == 'present':
datastore_results[datastore.name] = "Datastore '%s' entered in maintenance mode." % datastore.name
else:
datastore_results[datastore.name] = "Datastore '%s' exited from maintenance mode." % datastore.name
except vim.fault.InvalidState as invalid_state:
if self.state == 'present':
msg = "Unable to enter datastore '%s' in" % datastore.name
else:
msg = "Unable to exit datastore '%s' from" % datastore.name
msg += " maintenance mode due to : %s" % to_native(invalid_state.msg)
self.module.fail_json(msg=msg)
except Exception as exc:
if self.state == 'present':
msg = "Unable to enter datastore '%s' in" % datastore.name
else:
msg = "Unable to exit datastore '%s' from" % datastore.name
msg += " maintenance mode due to generic exception : %s" % to_native(exc)
self.module.fail_json(msg=msg)
change_datastore_list.append(changed)
changed = False
if any(change_datastore_list):
changed = True
self.module.exit_json(changed=changed, results=datastore_results)
def main():
spec = vmware_argument_spec()
spec.update(dict(
datastore=dict(type='str', required=False),
cluster_name=dict(type='str', required=False),
datastore_cluster=dict(type='str', required=False),
state=dict(type='str', default='present', choices=['present', 'absent']),
))
module = AnsibleModule(
argument_spec=spec,
required_one_of=[
['datastore', 'cluster_name', 'datastore_cluster'],
],
)
datastore_maintenance_mgr = VmwareDatastoreMaintenanceMgr(module=module)
datastore_maintenance_mgr.ensure()
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,083 |
VMware: Modules with internal 'results' key in module return
|
##### SUMMARY
The following modules throw a `[WARNING]: Found internal 'results' key in module return, renamed to 'ansible_module_results'`:
* [ ] `vmware_datastore_maintenancemode`
* [ ] `vmware_host_kernel_manager`
* [ ] `vmware_host_ntp`
* [ ] `vmware_host_service_manager`
* [ ] `vmware_tag`
I've generated the list like this:
`cd lib/ansible/modules/cloud/vmware/ && grep exit_json *.py | grep results= | cut -d ':' -f 1 | cut -d '.' -f 1`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware
##### ANSIBLE VERSION
`devel`
|
https://github.com/ansible/ansible/issues/62083
|
https://github.com/ansible/ansible/pull/62161
|
041c52d629a47e34f755f6c765c213b3a7b896e1
|
1c3effe92e747eb4eef8021032a9ed8321655a97
| 2019-09-10T16:57:53Z |
python
| 2019-09-13T04:27:04Z |
lib/ansible/modules/cloud/vmware/vmware_host_kernel_manager.py
|
#!/usr/bin/python
# Copyright: (c) 2019, Aaron Longchamps, <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: vmware_host_kernel_manager
short_description: Manage kernel module options on ESXi hosts
description:
- This module can be used to manage kernel module options on ESXi hosts.
- All connected ESXi hosts in scope will be configured when specified.
- If a host is not connected at time of configuration, it will be marked as such in the output.
- Kernel module options may require a reboot to take effect which is not covered here.
- You can use M(reboot) or M(vmware_host_powerstate) module to reboot all ESXi host systems.
version_added: '2.8'
author:
- Aaron Longchamps (@alongchamps)
notes:
- Tested on vSphere 6.0
requirements:
- python >= 2.7
- PyVmomi
options:
esxi_hostname:
description:
- Name of the ESXi host to work on.
- This parameter is required if C(cluster_name) is not specified.
type: str
cluster_name:
description:
- Name of the VMware cluster to work on.
- All ESXi hosts in this cluster will be configured.
- This parameter is required if C(esxi_hostname) is not specified.
type: str
kernel_module_name:
description:
- Name of the kernel module to be configured.
required: true
type: str
kernel_module_option:
description:
- Specified configurations will be applied to the given module.
- These values are specified in key=value pairs and separated by a space when there are multiple options.
required: true
type: str
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- name: Configure IPv6 to be off via tcpip4 kernel module
vmware_host_kernel_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
esxi_hostname: '{{ esxi_hostname }}'
kernel_module_name: "tcpip4"
kernel_module_option: "ipv6=0"
- name: Using cluster_name, configure vmw_psp_rr options
vmware_host_kernel_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
cluster_name: '{{ virtual_cluster_name }}'
kernel_module_name: "vmw_psp_rr"
kernel_module_option: "maxPathsPerDevice=2"
'''
RETURN = r'''
results:
description:
- dict with information on what was changed, by ESXi host in scope.
returned: success
type: dict
sample: {
"results": {
"myhost01.example.com": {
"changed": true,
"configured_options": "ipv6=0",
"msg": "Options have been changed on the kernel module",
"original_options": "ipv6=1"
}
}
}
'''
try:
from pyVmomi import vim
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import vmware_argument_spec, PyVmomi
from ansible.module_utils._text import to_native
class VmwareKernelManager(PyVmomi):
def __init__(self, module):
self.module = module
super(VmwareKernelManager, self).__init__(module)
cluster_name = self.params.get('cluster_name', None)
esxi_host_name = self.params.get('esxi_hostname', None)
self.hosts = self.get_all_host_objs(cluster_name=cluster_name, esxi_host_name=esxi_host_name)
self.kernel_module_name = self.params.get('kernel_module_name')
self.kernel_module_option = self.params.get('kernel_module_option')
self.results = {}
if not self.hosts:
self.module.fail_json(msg="Failed to find a host system that matches the specified criteria")
# find kernel module options for a given kmod_name. If the name is not right, this will throw an exception
def get_kernel_module_option(self, host, kmod_name):
host_kernel_manager = host.configManager.kernelModuleSystem
try:
return host_kernel_manager.QueryConfiguredModuleOptionString(self.kernel_module_name)
except vim.fault.NotFound as kernel_fault:
self.module.fail_json(msg="Failed to find kernel module on host '%s'. More information: %s" % (host.name, to_native(kernel_fault.msg)))
# configure the provided kernel module with the specified options
def apply_kernel_module_option(self, host, kmod_name, kmod_option):
host_kernel_manager = host.configManager.kernelModuleSystem
if host_kernel_manager:
try:
if not self.module.check_mode:
host_kernel_manager.UpdateModuleOptionString(kmod_name, kmod_option)
except vim.fault.NotFound as kernel_fault:
self.module.fail_json(msg="Failed to find kernel module on host '%s'. More information: %s" % (host.name, to_native(kernel_fault)))
except Exception as kernel_fault:
self.module.fail_json(msg="Failed to configure kernel module for host '%s' due to: %s" % (host.name, to_native(kernel_fault)))
# evaluate our current configuration against desired options and save results
def check_host_configuration_state(self):
change_list = []
for host in self.hosts:
changed = False
msg = ""
self.results[host.name] = dict()
if host.runtime.connectionState == "connected":
host_kernel_manager = host.configManager.kernelModuleSystem
if host_kernel_manager:
# keep track of original options on the kernel module
original_options = self.get_kernel_module_option(host, self.kernel_module_name)
desired_options = self.kernel_module_option
# apply as needed, also depending on check mode
if original_options != desired_options:
changed = True
if self.module.check_mode:
msg = "Options would be changed on the kernel module"
else:
self.apply_kernel_module_option(host, self.kernel_module_name, desired_options)
msg = "Options have been changed on the kernel module"
self.results[host.name]['configured_options'] = desired_options
else:
msg = "Options are already the same"
change_list.append(changed)
self.results[host.name]['changed'] = changed
self.results[host.name]['msg'] = msg
self.results[host.name]['original_options'] = original_options
else:
msg = "No kernel module manager found on host %s - impossible to configure." % host.name
self.results[host.name]['changed'] = changed
self.results[host.name]['msg'] = msg
else:
msg = "Host %s is disconnected and cannot be changed." % host.name
self.results[host.name]['changed'] = changed
self.results[host.name]['msg'] = msg
self.module.exit_json(changed=any(change_list), results=self.results)
def main():
argument_spec = vmware_argument_spec()
# add the arguments we're going to use for this module
argument_spec.update(
cluster_name=dict(type='str', required=False),
esxi_hostname=dict(type='str', required=False),
kernel_module_name=dict(type='str', required=True),
kernel_module_option=dict(type='str', required=True),
)
# make sure we have a valid target cluster_name or esxi_hostname (not both)
# and also enable check mode
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
required_one_of=[
['cluster_name', 'esxi_hostname'],
],
mutually_exclusive=[
['cluster_name', 'esxi_hostname'],
],
)
vmware_host_config = VmwareKernelManager(module)
vmware_host_config.check_host_configuration_state()
if __name__ == '__main__':
main()
|
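The kernel-manager module above guards every write behind `self.module.check_mode` and reports "would be changed" messages instead of applying changes. A standalone sketch of that check-mode pattern (hypothetical names, no pyVmomi dependency):

```python
def apply_option(current, desired, check_mode, writer):
    """Apply `desired` only when it differs from `current`.

    Returns (changed, msg); in check mode the change is reported but
    `writer` is never called -- the same shape the module above uses.
    """
    if current == desired:
        return False, "Options are already the same"
    if check_mode:
        return True, "Options would be changed on the kernel module"
    writer(desired)
    return True, "Options have been changed on the kernel module"
```

This keeps the dry-run decision in one place, so every code path honours `--check` consistently.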
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,083 |
VMware: Modules with internal 'results' key in module return
|
##### SUMMARY
The following modules throw a `[WARNING]: Found internal 'results' key in module return, renamed to 'ansible_module_results'`:
* [ ] `vmware_datastore_maintenancemode`
* [ ] `vmware_host_kernel_manager`
* [ ] `vmware_host_ntp`
* [ ] `vmware_host_service_manager`
* [ ] `vmware_tag`
I've generated the list like this:
`cd lib/ansible/modules/cloud/vmware/ && grep exit_json *.py | grep results= | cut -d ':' -f 1 | cut -d '.' -f 1`
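The same scan can be reproduced in Python — a rough equivalent of the grep pipeline above (it matches `exit_json` and `results=` on the same line, not byte-for-byte identical to grep):

```python
from pathlib import Path

def modules_returning_results(module_dir):
    """List module names whose source calls exit_json with a results=
    argument on a single line -- a rough Python port of the grep pipe."""
    hits = set()
    for path in sorted(Path(module_dir).glob('*.py')):
        for line in path.read_text(errors='replace').splitlines():
            if 'exit_json' in line and 'results=' in line:
                hits.add(path.stem)
    return sorted(hits)
```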
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware
##### ANSIBLE VERSION
`devel`
|
https://github.com/ansible/ansible/issues/62083
|
https://github.com/ansible/ansible/pull/62161
|
041c52d629a47e34f755f6c765c213b3a7b896e1
|
1c3effe92e747eb4eef8021032a9ed8321655a97
| 2019-09-10T16:57:53Z |
python
| 2019-09-13T04:27:04Z |
lib/ansible/modules/cloud/vmware/vmware_host_ntp.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Abhijeet Kasurde <[email protected]>
# Copyright: (c) 2018, Christian Kotte <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: vmware_host_ntp
short_description: Manage NTP server configuration of an ESXi host
description:
- This module can be used to configure, add or remove NTP servers from an ESXi host.
- If C(state) is not given, the NTP servers will be configured in the exact sequence.
- User can specify an ESXi hostname or Cluster name. In case of cluster name, all ESXi hosts are updated.
version_added: '2.5'
author:
- Abhijeet Kasurde (@Akasurde)
- Christian Kotte (@ckotte)
notes:
- Tested on vSphere 6.5
requirements:
- python >= 2.6
- PyVmomi
options:
esxi_hostname:
description:
- Name of the host system to work with.
- This parameter is required if C(cluster_name) is not specified.
type: str
cluster_name:
description:
- Name of the cluster from which all host systems will be used.
- This parameter is required if C(esxi_hostname) is not specified.
type: str
ntp_servers:
description:
- "IP or FQDN of NTP server(s)."
- This accepts a list of NTP servers. For multiple servers, please look at the examples.
type: list
required: True
state:
description:
- "present: Add NTP server(s) if specified server(s) are absent, else do nothing."
- "absent: Remove NTP server(s) if specified server(s) are present, else do nothing."
- Specified NTP server(s) will be configured if C(state) isn't specified.
choices: [ present, absent ]
type: str
verbose:
description:
- Verbose output of the configuration change.
- Explains if an NTP server was added, removed, or if the NTP server sequence was changed.
type: bool
required: false
default: false
version_added: 2.8
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- name: Configure NTP servers for an ESXi Host
vmware_host_ntp:
hostname: vcenter01.example.local
username: [email protected]
password: SuperSecretPassword
esxi_hostname: esx01.example.local
ntp_servers:
- 0.pool.ntp.org
- 1.pool.ntp.org
delegate_to: localhost
- name: Set NTP servers for all ESXi Host in given Cluster
vmware_host_ntp:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
cluster_name: '{{ cluster_name }}'
state: present
ntp_servers:
- 0.pool.ntp.org
- 1.pool.ntp.org
delegate_to: localhost
- name: Set NTP servers for an ESXi Host
vmware_host_ntp:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
esxi_hostname: '{{ esxi_hostname }}'
state: present
ntp_servers:
- 0.pool.ntp.org
- 1.pool.ntp.org
delegate_to: localhost
- name: Remove NTP servers for an ESXi Host
vmware_host_ntp:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
esxi_hostname: '{{ esxi_hostname }}'
state: absent
ntp_servers:
- bad.server.ntp.org
delegate_to: localhost
'''
RETURN = r'''
results:
description: metadata about host system's NTP configuration
returned: always
type: dict
sample: {
"esx01.example.local": {
"ntp_servers_changed": ["time1.example.local", "time2.example.local", "time3.example.local", "time4.example.local"],
"ntp_servers": ["time3.example.local", "time4.example.local"],
"ntp_servers_previous": ["time1.example.local", "time2.example.local"],
},
"esx02.example.local": {
"ntp_servers_changed": ["time3.example.local"],
"ntp_servers_current": ["time1.example.local", "time2.example.local", "time3.example.local"],
"state": "present",
"ntp_servers_previous": ["time1.example.local", "time2.example.local"],
},
}
'''
try:
from pyVmomi import vim
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import vmware_argument_spec, PyVmomi
from ansible.module_utils._text import to_native
class VmwareNtpConfigManager(PyVmomi):
"""Class to manage configured NTP servers"""
def __init__(self, module):
super(VmwareNtpConfigManager, self).__init__(module)
cluster_name = self.params.get('cluster_name', None)
esxi_host_name = self.params.get('esxi_hostname', None)
self.ntp_servers = self.params.get('ntp_servers', list())
self.hosts = self.get_all_host_objs(cluster_name=cluster_name, esxi_host_name=esxi_host_name)
if not self.hosts:
self.module.fail_json(msg="Failed to find host system.")
self.results = {}
self.desired_state = self.params.get('state', None)
self.verbose = module.params.get('verbose', False)
def update_ntp_servers(self, host, ntp_servers_configured, ntp_servers_to_change, operation='overwrite'):
"""Update NTP server configuration"""
host_date_time_manager = host.configManager.dateTimeSystem
if host_date_time_manager:
# Prepare new NTP server list
if operation == 'overwrite':
new_ntp_servers = list(ntp_servers_to_change)
else:
new_ntp_servers = list(ntp_servers_configured)
if operation == 'add':
new_ntp_servers = new_ntp_servers + ntp_servers_to_change
elif operation == 'delete':
for server in ntp_servers_to_change:
if server in new_ntp_servers:
new_ntp_servers.remove(server)
# build verbose message
if self.verbose:
message = self.build_changed_message(
ntp_servers_configured,
new_ntp_servers,
ntp_servers_to_change,
operation
)
ntp_config_spec = vim.host.NtpConfig()
ntp_config_spec.server = new_ntp_servers
date_config_spec = vim.host.DateTimeConfig()
date_config_spec.ntpConfig = ntp_config_spec
try:
if not self.module.check_mode:
host_date_time_manager.UpdateDateTimeConfig(date_config_spec)
if self.verbose:
self.results[host.name]['msg'] = message
except vim.fault.HostConfigFault as config_fault:
self.module.fail_json(
msg="Failed to configure NTP for host '%s' due to: %s" %
(host.name, to_native(config_fault.msg))
)
return new_ntp_servers
def check_host_state(self):
"""Check ESXi host configuration"""
change_list = []
changed = False
for host in self.hosts:
self.results[host.name] = dict()
ntp_servers_configured, ntp_servers_to_change = self.check_ntp_servers(host=host)
# add/remove NTP servers
if self.desired_state:
self.results[host.name]['state'] = self.desired_state
if ntp_servers_to_change:
self.results[host.name]['ntp_servers_changed'] = ntp_servers_to_change
operation = 'add' if self.desired_state == 'present' else 'delete'
new_ntp_servers = self.update_ntp_servers(
host=host,
ntp_servers_configured=ntp_servers_configured,
ntp_servers_to_change=ntp_servers_to_change,
operation=operation
)
self.results[host.name]['ntp_servers_current'] = new_ntp_servers
self.results[host.name]['changed'] = True
change_list.append(True)
else:
self.results[host.name]['ntp_servers_current'] = ntp_servers_configured
if self.verbose:
self.results[host.name]['msg'] = (
"NTP servers already added" if self.desired_state == 'present'
else "NTP servers already removed"
)
self.results[host.name]['changed'] = False
change_list.append(False)
# overwrite NTP servers
else:
self.results[host.name]['ntp_servers'] = self.ntp_servers
if ntp_servers_to_change:
self.results[host.name]['ntp_servers_changed'] = self.get_differt_entries(
ntp_servers_configured,
ntp_servers_to_change
)
self.update_ntp_servers(
host=host,
ntp_servers_configured=ntp_servers_configured,
ntp_servers_to_change=ntp_servers_to_change,
operation='overwrite'
)
self.results[host.name]['changed'] = True
change_list.append(True)
else:
if self.verbose:
self.results[host.name]['msg'] = "NTP servers already configured"
self.results[host.name]['changed'] = False
change_list.append(False)
if any(change_list):
changed = True
self.module.exit_json(changed=changed, results=self.results)
def check_ntp_servers(self, host):
"""Check configured NTP servers"""
update_ntp_list = []
host_datetime_system = host.configManager.dateTimeSystem
if host_datetime_system:
ntp_servers_configured = host_datetime_system.dateTimeInfo.ntpConfig.server
# add/remove NTP servers
if self.desired_state:
for ntp_server in self.ntp_servers:
if self.desired_state == 'present' and ntp_server not in ntp_servers_configured:
update_ntp_list.append(ntp_server)
if self.desired_state == 'absent' and ntp_server in ntp_servers_configured:
update_ntp_list.append(ntp_server)
# overwrite NTP servers
else:
if ntp_servers_configured != self.ntp_servers:
for ntp_server in self.ntp_servers:
update_ntp_list.append(ntp_server)
if update_ntp_list:
self.results[host.name]['ntp_servers_previous'] = ntp_servers_configured
return ntp_servers_configured, update_ntp_list
def build_changed_message(self, ntp_servers_configured, new_ntp_servers, ntp_servers_to_change, operation):
"""Build changed message"""
check_mode = 'would be ' if self.module.check_mode else ''
if operation == 'overwrite':
# get differences
add = self.get_not_in_list_one(new_ntp_servers, ntp_servers_configured)
remove = self.get_not_in_list_one(ntp_servers_configured, new_ntp_servers)
diff_servers = list(ntp_servers_configured)
if add and remove:
for server in add:
diff_servers.append(server)
for server in remove:
diff_servers.remove(server)
if new_ntp_servers != diff_servers:
message = (
"NTP server %s %sadded and %s %sremoved and the server sequence %schanged as well" %
(self.array_to_string(add), check_mode, self.array_to_string(remove), check_mode, check_mode)
)
else:
if new_ntp_servers != ntp_servers_configured:
message = (
"NTP server %s %sreplaced with %s" %
(self.array_to_string(remove), check_mode, self.array_to_string(add))
)
else:
message = (
"NTP server %s %sremoved and %s %sadded" %
(self.array_to_string(remove), check_mode, self.array_to_string(add), check_mode)
)
elif add:
for server in add:
diff_servers.append(server)
if new_ntp_servers != diff_servers:
message = (
"NTP server %s %sadded and the server sequence %schanged as well" %
(self.array_to_string(add), check_mode, check_mode)
)
else:
message = "NTP server %s %sadded" % (self.array_to_string(add), check_mode)
elif remove:
for server in remove:
diff_servers.remove(server)
if new_ntp_servers != diff_servers:
message = (
"NTP server %s %sremoved and the server sequence %schanged as well" %
(self.array_to_string(remove), check_mode, check_mode)
)
else:
message = "NTP server %s %sremoved" % (self.array_to_string(remove), check_mode)
else:
message = "NTP server sequence %schanged" % check_mode
elif operation == 'add':
message = "NTP server %s %sadded" % (self.array_to_string(ntp_servers_to_change), check_mode)
elif operation == 'delete':
message = "NTP server %s %sremoved" % (self.array_to_string(ntp_servers_to_change), check_mode)
return message
@staticmethod
def get_not_in_list_one(list1, list2):
"""Return entries of list one that are not in list two"""
return [x for x in list1 if x not in set(list2)]
@staticmethod
def array_to_string(array):
"""Return string from array"""
if len(array) > 2:
string = (
', '.join("'{0}'".format(element) for element in array[:-1]) + ', and '
+ "'{0}'".format(str(array[-1]))
)
elif len(array) == 2:
string = ' and '.join("'{0}'".format(element) for element in array)
elif len(array) == 1:
string = "'{0}'".format(array[0])
else:
string = ''
return string
@staticmethod
def get_differt_entries(list1, list2):
"""Return entries that differ between two lists (symmetric difference)"""
return [a for a in list1 + list2 if (a not in list1) or (a not in list2)]
def main():
"""Main"""
argument_spec = vmware_argument_spec()
argument_spec.update(
cluster_name=dict(type='str', required=False),
esxi_hostname=dict(type='str', required=False),
ntp_servers=dict(type='list', required=True),
state=dict(type='str', choices=['absent', 'present']),
verbose=dict(type='bool', default=False, required=False)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_one_of=[
['cluster_name', 'esxi_hostname'],
],
supports_check_mode=True
)
vmware_host_ntp_config = VmwareNtpConfigManager(module)
vmware_host_ntp_config.check_host_state()
if __name__ == "__main__":
main()
|
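`build_changed_message()` above leans on a small order-preserving set-difference helper to work out which NTP servers were added and which were removed. Extracted as a standalone sketch, the pattern looks like this:

```python
def not_in(list1, list2):
    """Entries of list1 missing from list2, order preserved --
    mirrors the get_not_in_list_one() helper above."""
    lookup = set(list2)
    return [x for x in list1 if x not in lookup]

current = ['time1.example.local', 'time2.example.local']
desired = ['time2.example.local', 'time3.example.local']
to_add = not_in(desired, current)     # servers to add
to_remove = not_in(current, desired)  # servers to drop
```

Building the lookup as a `set` once keeps the comparison linear instead of quadratic for long server lists.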
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,083 |
VMware: Modules with internal 'results' key in module return
|
|
https://github.com/ansible/ansible/issues/62083
|
https://github.com/ansible/ansible/pull/62161
|
041c52d629a47e34f755f6c765c213b3a7b896e1
|
1c3effe92e747eb4eef8021032a9ed8321655a97
| 2019-09-10T16:57:53Z |
python
| 2019-09-13T04:27:04Z |
lib/ansible/modules/cloud/vmware/vmware_host_service_manager.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Abhijeet Kasurde <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: vmware_host_service_manager
short_description: Manage services on a given ESXi host
description:
- This module can be used to manage (start, stop, restart) services on a given ESXi host.
- If cluster_name is provided, the specified service will be managed on all ESXi hosts belonging to that cluster.
- If a specific esxi_hostname is provided, the specified service will be managed on that ESXi host only.
version_added: '2.5'
author:
- Abhijeet Kasurde (@Akasurde)
notes:
- Tested on vSphere 6.5
requirements:
- python >= 2.6
- PyVmomi
options:
cluster_name:
description:
- Name of the cluster.
- Service settings are applied to every ESXi host system in the given cluster.
- If C(esxi_hostname) is not given, this parameter is required.
type: str
esxi_hostname:
description:
- ESXi hostname.
- Service settings are applied to this ESXi host system.
- If C(cluster_name) is not given, this parameter is required.
type: str
state:
description:
- Desired state of service.
- "State values 'start' and 'present' have the same effect."
- "State values 'stop' and 'absent' have the same effect."
choices: [ absent, present, restart, start, stop ]
type: str
default: 'start'
service_policy:
description:
- Set of valid service policy strings.
- If set to C(on), the service is started when the host starts up.
- If set to C(automatic), the service runs if and only if it has open firewall ports.
- If set to C(off), the service is not started when the host starts up.
choices: [ 'automatic', 'off', 'on' ]
type: str
service_name:
description:
- Name of Service to be managed. This is a brief identifier for the service, for example, ntpd, vxsyslogd etc.
- This value should be a valid ESXi service name.
required: True
type: str
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- name: Start ntpd service setting for all ESXi Host in given Cluster
vmware_host_service_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
cluster_name: '{{ cluster_name }}'
service_name: ntpd
state: present
delegate_to: localhost
- name: Start ntpd setting for an ESXi Host
vmware_host_service_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
esxi_hostname: '{{ esxi_hostname }}'
service_name: ntpd
state: present
delegate_to: localhost
- name: Start ntpd setting for an ESXi Host with Service policy
vmware_host_service_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
esxi_hostname: '{{ esxi_hostname }}'
service_name: ntpd
service_policy: on
state: present
delegate_to: localhost
- name: Stop ntpd setting for an ESXi Host
vmware_host_service_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
esxi_hostname: '{{ esxi_hostname }}'
service_name: ntpd
state: absent
delegate_to: localhost
'''
RETURN = r'''#
'''
try:
from pyVmomi import vim, vmodl
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import vmware_argument_spec, PyVmomi
from ansible.module_utils._text import to_native
class VmwareServiceManager(PyVmomi):
def __init__(self, module):
super(VmwareServiceManager, self).__init__(module)
cluster_name = self.params.get('cluster_name', None)
esxi_host_name = self.params.get('esxi_hostname', None)
self.options = self.params.get('options', dict())
self.hosts = self.get_all_host_objs(cluster_name=cluster_name, esxi_host_name=esxi_host_name)
self.desired_state = self.params.get('state')
self.desired_policy = self.params.get('service_policy', None)
self.service_name = self.params.get('service_name')
self.results = {}
def service_ctrl(self):
changed = False
host_service_state = []
for host in self.hosts:
actual_service_state, actual_service_policy = self.check_service_state(host=host, service_name=self.service_name)
host_service_system = host.configManager.serviceSystem
if host_service_system:
changed_state = False
self.results[host.name] = dict(service_name=self.service_name,
actual_service_state='running' if actual_service_state else 'stopped',
actual_service_policy=actual_service_policy,
desired_service_policy=self.desired_policy,
desired_service_state=self.desired_state,
error='',
)
try:
if self.desired_state in ['start', 'present']:
if not actual_service_state:
if not self.module.check_mode:
host_service_system.StartService(id=self.service_name)
changed_state = True
elif self.desired_state in ['stop', 'absent']:
if actual_service_state:
if not self.module.check_mode:
host_service_system.StopService(id=self.service_name)
changed_state = True
elif self.desired_state == 'restart':
if not self.module.check_mode:
host_service_system.RestartService(id=self.service_name)
changed_state = True
if self.desired_policy:
if actual_service_policy != self.desired_policy:
if not self.module.check_mode:
host_service_system.UpdateServicePolicy(id=self.service_name,
policy=self.desired_policy)
changed_state = True
host_service_state.append(changed_state)
self.results[host.name].update(changed=changed_state)
except (vim.fault.InvalidState, vim.fault.NotFound,
vim.fault.HostConfigFault, vmodl.fault.InvalidArgument) as e:
self.results[host.name].update(changed=False,
error=to_native(e.msg))
if any(host_service_state):
changed = True
self.module.exit_json(changed=changed, results=self.results)
def check_service_state(self, host, service_name):
host_service_system = host.configManager.serviceSystem
if host_service_system:
services = host_service_system.serviceInfo.service
for service in services:
if service.key == service_name:
return service.running, service.policy
msg = "Failed to find '%s' service on host system '%s'" % (service_name, host.name)
cluster_name = self.params.get('cluster_name', None)
if cluster_name:
msg += " located on cluster '%s'" % cluster_name
msg += ", please check if you have specified a valid ESXi service name."
self.module.fail_json(msg=msg)
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
cluster_name=dict(type='str', required=False),
esxi_hostname=dict(type='str', required=False),
state=dict(type='str', default='start', choices=['absent', 'present', 'restart', 'start', 'stop']),
service_name=dict(type='str', required=True),
service_policy=dict(type='str', choices=['automatic', 'off', 'on']),
)
module = AnsibleModule(
argument_spec=argument_spec,
required_one_of=[
['cluster_name', 'esxi_hostname'],
],
supports_check_mode=True
)
vmware_host_service = VmwareServiceManager(module)
vmware_host_service.service_ctrl()
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,083 |
VMware: Modules with internal 'results' key in module return
|
|
https://github.com/ansible/ansible/issues/62083
|
https://github.com/ansible/ansible/pull/62161
|
041c52d629a47e34f755f6c765c213b3a7b896e1
|
1c3effe92e747eb4eef8021032a9ed8321655a97
| 2019-09-10T16:57:53Z |
python
| 2019-09-13T04:27:04Z |
lib/ansible/modules/cloud/vmware/vmware_tag.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, Abhijeet Kasurde <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: vmware_tag
short_description: Manage VMware tags
description:
- This module can be used to create / delete / update VMware tags.
- The tag feature was introduced in vSphere 6, so this module is not supported on earlier versions of vSphere.
- All variables and VMware object names are case sensitive.
version_added: '2.6'
author:
- Abhijeet Kasurde (@Akasurde)
notes:
- Tested on vSphere 6.5
requirements:
- python >= 2.6
- PyVmomi
- vSphere Automation SDK
options:
tag_name:
description:
- The name of tag to manage.
required: True
type: str
tag_description:
description:
- The tag description.
- This is required only if C(state) is set to C(present).
- This parameter is ignored, when C(state) is set to C(absent).
- Updating a tag allows only the description to be changed.
required: False
default: ''
type: str
category_id:
description:
- The unique ID of the category, as generated by vCenter.
- User can get this unique ID from facts module.
required: False
type: str
state:
description:
- The state of tag.
- If set to C(present) and tag does not exist, then tag is created.
- If set to C(present) and tag exists, then tag is updated.
- If set to C(absent) and tag exists, then tag is deleted.
- If set to C(absent) and tag does not exist, no action is taken.
required: False
default: 'present'
choices: [ 'present', 'absent' ]
type: str
extends_documentation_fragment: vmware_rest_client.documentation
'''
EXAMPLES = r'''
- name: Create a tag
vmware_tag:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
validate_certs: no
category_id: 'urn:vmomi:InventoryServiceCategory:e785088d-6981-4b1c-9fb8-1100c3e1f742:GLOBAL'
tag_name: Sample_Tag_0002
tag_description: Sample Description
state: present
delegate_to: localhost
- name: Update tag description
vmware_tag:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
tag_name: Sample_Tag_0002
tag_description: Some fancy description
state: present
delegate_to: localhost
- name: Delete tag
vmware_tag:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
tag_name: Sample_Tag_0002
state: absent
delegate_to: localhost
'''
RETURN = r'''
results:
description: dictionary of tag metadata
returned: on success
type: dict
sample: {
"msg": "Tag 'Sample_Tag_0002' created.",
"tag_id": "urn:vmomi:InventoryServiceTag:bff91819-f529-43c9-80ca-1c9dfda09441:GLOBAL"
}
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware_rest_client import VmwareRestClient
try:
from com.vmware.vapi.std.errors_client import Error
except ImportError:
pass
class VmwareTag(VmwareRestClient):
def __init__(self, module):
super(VmwareTag, self).__init__(module)
self.global_tags = dict()
# api_client to call APIs instead of individual service
self.tag_service = self.api_client.tagging.Tag
self.tag_name = self.params.get('tag_name')
self.get_all_tags()
self.category_service = self.api_client.tagging.Category
def ensure_state(self):
"""
Manage internal states of tags
"""
desired_state = self.params.get('state')
states = {
'present': {
'present': self.state_update_tag,
'absent': self.state_create_tag,
},
'absent': {
'present': self.state_delete_tag,
'absent': self.state_unchanged,
}
}
states[desired_state][self.check_tag_status()]()
def state_create_tag(self):
"""
Create tag
"""
tag_spec = self.tag_service.CreateSpec()
tag_spec.name = self.tag_name
tag_spec.description = self.params.get('tag_description')
category_id = self.params.get('category_id', None)
if category_id is None:
self.module.fail_json(msg="'category_id' is a required parameter when creating a tag.")
category_found = False
for category in self.category_service.list():
category_obj = self.category_service.get(category)
if category_id == category_obj.id:
category_found = True
break
if not category_found:
self.module.fail_json(msg="Unable to find category specified using 'category_id' - %s" % category_id)
tag_spec.category_id = category_id
tag_id = ''
try:
tag_id = self.tag_service.create(tag_spec)
except Error as error:
self.module.fail_json(msg="%s" % self.get_error_message(error))
if tag_id:
self.module.exit_json(changed=True,
results=dict(msg="Tag '%s' created." % tag_spec.name,
tag_id=tag_id))
self.module.exit_json(changed=False,
results=dict(msg="No tag created", tag_id=tag_id))
def state_unchanged(self):
"""
Return unchanged state
"""
self.module.exit_json(changed=False)
def state_update_tag(self):
"""
Update tag
"""
changed = False
tag_id = self.global_tags[self.tag_name]['tag_id']
results = dict(msg="Tag %s is unchanged." % self.tag_name,
tag_id=tag_id)
tag_update_spec = self.tag_service.UpdateSpec()
tag_desc = self.global_tags[self.tag_name]['tag_description']
desired_tag_desc = self.params.get('tag_description')
if tag_desc != desired_tag_desc:
tag_update_spec.description = desired_tag_desc
try:
self.tag_service.update(tag_id, tag_update_spec)
except Error as error:
self.module.fail_json(msg="%s" % self.get_error_message(error))
results['msg'] = 'Tag %s updated.' % self.tag_name
changed = True
self.module.exit_json(changed=changed, results=results)
def state_delete_tag(self):
"""
Delete tag
"""
tag_id = self.global_tags[self.tag_name]['tag_id']
try:
self.tag_service.delete(tag_id=tag_id)
except Error as error:
self.module.fail_json(msg="%s" % self.get_error_message(error))
self.module.exit_json(changed=True,
results=dict(msg="Tag '%s' deleted." % self.tag_name,
tag_id=tag_id))
def check_tag_status(self):
"""
Check if tag exists or not
Returns: 'present' if tag found, else 'absent'
"""
ret = 'present' if self.tag_name in self.global_tags else 'absent'
return ret
def get_all_tags(self):
"""
Retrieve all tag information
"""
for tag in self.tag_service.list():
tag_obj = self.tag_service.get(tag)
self.global_tags[tag_obj.name] = dict(tag_description=tag_obj.description,
tag_used_by=tag_obj.used_by,
tag_category_id=tag_obj.category_id,
tag_id=tag_obj.id
)
def main():
argument_spec = VmwareRestClient.vmware_client_argument_spec()
argument_spec.update(
tag_name=dict(type='str', required=True),
tag_description=dict(type='str', default='', required=False),
category_id=dict(type='str', required=False),
state=dict(type='str', choices=['present', 'absent'], default='present', required=False),
)
module = AnsibleModule(argument_spec=argument_spec)
vmware_tag = VmwareTag(module)
vmware_tag.ensure_state()
if __name__ == '__main__':
main()
|
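`ensure_state()` in the vmware_tag module above dispatches on a (desired state, current state) pair through a nested dict of handlers, which keeps each transition in its own method. The pattern in isolation (handler names here are hypothetical):

```python
def make_dispatcher(handlers):
    """handlers maps desired_state -> current_state -> callable,
    mirroring the nested dict used by ensure_state() above."""
    def dispatch(desired, current):
        return handlers[desired][current]()
    return dispatch

calls = []
dispatch = make_dispatcher({
    'present': {'present': lambda: calls.append('update'),
                'absent':  lambda: calls.append('create')},
    'absent':  {'present': lambda: calls.append('delete'),
                'absent':  lambda: calls.append('noop')},
})
dispatch('present', 'absent')  # tag does not exist yet, so create
```

Adding a new state means adding one key and one handler, with no if/elif chains to edit.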
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,890 |
acme_certificate contains deprecated call to be removed in 2.10
|
##### SUMMARY
acme_certificate contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/modules/crypto/acme/acme_certificate.py:1024:8: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/crypto/acme/acme_certificate.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61890
|
https://github.com/ansible/ansible/pull/61648
|
14bccef2c207584bf19132fbbf10ab2237746b9e
|
a0bec0bc327d29f446a031abc803b8d2cad1949f
| 2019-09-05T20:41:11Z |
python
| 2019-09-14T21:24:32Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
No notable changes
Modules
=======
No notable changes
Modules removed
---------------
The following modules no longer exist:
* No notable changes
Deprecation notices
-------------------
No notable changes
Noteworthy module changes
-------------------------
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
Plugins
=======
No notable changes
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,890 |
acme_certificate contains deprecated call to be removed in 2.10
|
##### SUMMARY
acme_certificate contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/modules/crypto/acme/acme_certificate.py:1024:8: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/crypto/acme/acme_certificate.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61890
|
https://github.com/ansible/ansible/pull/61648
|
14bccef2c207584bf19132fbbf10ab2237746b9e
|
a0bec0bc327d29f446a031abc803b8d2cad1949f
| 2019-09-05T20:41:11Z |
python
| 2019-09-14T21:24:32Z |
lib/ansible/modules/crypto/acme/_letsencrypt.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2016 Michael Gruener <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: acme_certificate
author: "Michael Gruener (@mgruener)"
version_added: "2.2"
short_description: Create SSL/TLS certificates with the ACME protocol
description:
- "Create and renew SSL/TLS certificates with a CA supporting the
L(ACME protocol,https://tools.ietf.org/html/rfc8555),
such as L(Let's Encrypt,https://letsencrypt.org/). The current
implementation supports the C(http-01), C(dns-01) and C(tls-alpn-01)
challenges."
- "To use this module, it has to be executed twice, either as two
different tasks in the same run or during two runs. Note that the output
of the first run needs to be recorded and passed to the second run as the
module argument C(data)."
- "Between these two tasks you have to fulfill the required steps for the
chosen challenge by whatever means necessary. For C(http-01) that means
creating the necessary challenge file on the destination webserver. For
C(dns-01) the necessary dns record has to be created. For C(tls-alpn-01)
the necessary certificate has to be created and served.
It is I(not) the responsibility of this module to perform these steps."
- "For details on how to fulfill these challenges, you might have to read through
L(the main ACME specification,https://tools.ietf.org/html/rfc8555#section-8)
and the L(TLS-ALPN-01 specification,https://tools.ietf.org/html/draft-ietf-acme-tls-alpn-05#section-3).
Also, consider the examples provided for this module."
- "The module includes experimental support for IP identifiers according to
the L(current ACME IP draft,https://tools.ietf.org/html/draft-ietf-acme-ip-05)."
notes:
- "At least one of C(dest) and C(fullchain_dest) must be specified."
- "This module includes basic account management functionality.
If you want to have more control over your ACME account, use the M(acme_account)
module and disable account management for this module using the C(modify_account)
option."
- "This module was called C(letsencrypt) before Ansible 2.6. The usage
did not change."
seealso:
- name: The Let's Encrypt documentation
description: Documentation for the Let's Encrypt Certification Authority.
Provides useful information, for example on rate limits.
link: https://letsencrypt.org/docs/
- name: Automatic Certificate Management Environment (ACME)
description: The specification of the ACME protocol (RFC 8555).
link: https://tools.ietf.org/html/rfc8555
- name: ACME TLS ALPN Challenge Extension
description: The current draft specification of the C(tls-alpn-01) challenge.
link: https://tools.ietf.org/html/draft-ietf-acme-tls-alpn-05
- module: acme_challenge_cert_helper
description: Helps preparing C(tls-alpn-01) challenges.
- module: openssl_privatekey
description: Can be used to create private keys (both for certificates and accounts).
- module: openssl_csr
description: Can be used to create a Certificate Signing Request (CSR).
- module: certificate_complete_chain
description: Allows finding the root certificate for the returned fullchain.
- module: acme_certificate_revoke
description: Allows revoking certificates.
- module: acme_account
description: Allows creating, modifying or deleting an ACME account.
- module: acme_inspect
description: Allows debugging problems.
extends_documentation_fragment:
- acme
options:
account_email:
description:
- "The email address associated with this account."
- "It will be used for certificate expiration warnings."
- "Note that when C(modify_account) is not set to C(no) and you also
used the M(acme_account) module to specify more than one contact
for your account, this module will update your account and restrict
it to the (at most one) contact email address specified here."
type: str
agreement:
description:
- "URI to a terms of service document you agree to when using the
ACME v1 service at C(acme_directory)."
- Default is latest gathered from C(acme_directory) URL.
- This option will only be used when C(acme_version) is 1.
type: str
terms_agreed:
description:
- "Boolean indicating whether you agree to the terms of service document."
- "ACME servers can require this to be true."
- This option will only be used when C(acme_version) is not 1.
type: bool
default: no
version_added: "2.5"
modify_account:
description:
- "Boolean indicating whether the module should create the account if
necessary, and update its contact data."
- "Set to C(no) if you want to use the M(acme_account) module to manage
your account instead, and to avoid accidental creation of a new account
using an old key if you changed the account key with M(acme_account)."
- "If set to C(no), C(terms_agreed) and C(account_email) are ignored."
type: bool
default: yes
version_added: "2.6"
challenge:
description: The challenge to be performed.
type: str
default: 'http-01'
choices: [ 'http-01', 'dns-01', 'tls-alpn-01' ]
csr:
description:
- "File containing the CSR for the new certificate."
- "Can be created with C(openssl req ...)."
- "The CSR may contain multiple Subject Alternate Names, but each one
will lead to an individual challenge that must be fulfilled for the
CSR to be signed."
- "I(Note): the private key used to create the CSR I(must not) be the
account key. Reusing the account key for the CSR is a bad idea from a
security point of view, and the CA should not accept such a CSR. The
ACME server should return an error in this case."
type: path
required: true
aliases: ['src']
data:
description:
- "The data to validate ongoing challenges. This must be specified for
the second run of the module only."
- "The value that must be used here will be provided by a previous use
of this module. See the examples for more details."
- "Note that for ACME v2, only the C(order_uri) entry of C(data) will
be used. For ACME v1, C(data) must be non-empty to indicate the
second stage is active; all needed data will be taken from the
CSR."
- "I(Note): the C(data) option was marked as C(no_log) up to
Ansible 2.5. From Ansible 2.6 on, it is no longer marked this way
as it caused error messages to become unusable, and C(data) does
not contain any information which can be used without access to
the account key or which is not public anyway."
type: dict
dest:
description:
- "The destination file for the certificate."
- "Required if C(fullchain_dest) is not specified."
type: path
aliases: ['cert']
fullchain_dest:
description:
- "The destination file for the full chain (i.e. certificate followed
by chain of intermediate certificates)."
- "Required if C(dest) is not specified."
type: path
version_added: 2.5
aliases: ['fullchain']
chain_dest:
description:
- If specified, the intermediate certificate will be written to this file.
type: path
version_added: 2.5
aliases: ['chain']
remaining_days:
description:
- "The number of days the certificate must have left being valid.
If C(cert_days < remaining_days), then it will be renewed.
If the certificate is not renewed, module return values will not
include C(challenge_data)."
- "To make sure that the certificate is renewed in any case, you can
use the C(force) option."
type: int
default: 10
deactivate_authzs:
description:
- "Deactivate authentication objects (authz) after issuing a certificate,
or when issuing the certificate failed."
- "Authentication objects are bound to an account key and remain valid
for a certain amount of time, and can be used to issue certificates
without having to re-authenticate the domain. This can be a security
concern."
type: bool
default: no
version_added: 2.6
force:
description:
- Enforces the execution of the challenge and validation, even if an
existing certificate is still valid for more than C(remaining_days).
- This is especially helpful when having an updated CSR e.g. with
additional domains for which a new certificate is desired.
type: bool
default: no
version_added: 2.6
retrieve_all_alternates:
description:
- "When set to C(yes), will retrieve all alternate chains offered by the ACME CA.
These will not be written to disk, but will be returned together with the main
chain as C(all_chains). See the documentation for the C(all_chains) return
value for details."
type: bool
default: no
version_added: "2.9"
'''
EXAMPLES = r'''
### Example with HTTP challenge ###
- name: Create a challenge for sample.com using an account key from a variable.
acme_certificate:
account_key_content: "{{ account_private_key }}"
csr: /etc/pki/cert/csr/sample.com.csr
dest: /etc/httpd/ssl/sample.com.crt
register: sample_com_challenge
# Alternative first step:
- name: Create a challenge for sample.com using an account key from hashi vault.
acme_certificate:
account_key_content: "{{ lookup('hashi_vault', 'secret=secret/account_private_key:value') }}"
csr: /etc/pki/cert/csr/sample.com.csr
fullchain_dest: /etc/httpd/ssl/sample.com-fullchain.crt
register: sample_com_challenge
# Alternative first step:
- name: Create a challenge for sample.com using an account key file.
acme_certificate:
account_key_src: /etc/pki/cert/private/account.key
csr: /etc/pki/cert/csr/sample.com.csr
dest: /etc/httpd/ssl/sample.com.crt
fullchain_dest: /etc/httpd/ssl/sample.com-fullchain.crt
register: sample_com_challenge
# perform the necessary steps to fulfill the challenge
# for example:
#
# - copy:
# dest: /var/www/html/{{ sample_com_challenge['challenge_data']['sample.com']['http-01']['resource'] }}
# content: "{{ sample_com_challenge['challenge_data']['sample.com']['http-01']['resource_value'] }}"
# when: sample_com_challenge is changed
- name: Let the challenge be validated and retrieve the cert and intermediate certificate
acme_certificate:
account_key_src: /etc/pki/cert/private/account.key
csr: /etc/pki/cert/csr/sample.com.csr
dest: /etc/httpd/ssl/sample.com.crt
fullchain_dest: /etc/httpd/ssl/sample.com-fullchain.crt
chain_dest: /etc/httpd/ssl/sample.com-intermediate.crt
data: "{{ sample_com_challenge }}"
### Example with DNS challenge against production ACME server ###
- name: Create a challenge for sample.com using an account key file.
acme_certificate:
account_key_src: /etc/pki/cert/private/account.key
account_email: [email protected]
src: /etc/pki/cert/csr/sample.com.csr
cert: /etc/httpd/ssl/sample.com.crt
challenge: dns-01
acme_directory: https://acme-v01.api.letsencrypt.org/directory
# Renew if the certificate is at least 30 days old
remaining_days: 60
register: sample_com_challenge
# perform the necessary steps to fulfill the challenge
# for example:
#
# - route53:
# zone: sample.com
# record: "{{ sample_com_challenge.challenge_data['sample.com']['dns-01'].record }}"
# type: TXT
# ttl: 60
# state: present
# wait: yes
# # Note: route53 requires TXT entries to be enclosed in quotes
# value: "{{ sample_com_challenge.challenge_data['sample.com']['dns-01'].resource_value | regex_replace('^(.*)$', '\"\\1\"') }}"
# when: sample_com_challenge is changed
#
# Alternative way:
#
# - route53:
# zone: sample.com
# record: "{{ item.key }}"
# type: TXT
# ttl: 60
# state: present
# wait: yes
# # Note: item.value is a list of TXT entries, and route53
# # requires every entry to be enclosed in quotes
# value: "{{ item.value | map('regex_replace', '^(.*)$', '\"\\1\"' ) | list }}"
# loop: "{{ sample_com_challenge.challenge_data_dns | dictsort }}"
# when: sample_com_challenge is changed
- name: Let the challenge be validated and retrieve the cert and intermediate certificate
acme_certificate:
account_key_src: /etc/pki/cert/private/account.key
account_email: [email protected]
src: /etc/pki/cert/csr/sample.com.csr
cert: /etc/httpd/ssl/sample.com.crt
fullchain: /etc/httpd/ssl/sample.com-fullchain.crt
chain: /etc/httpd/ssl/sample.com-intermediate.crt
challenge: dns-01
acme_directory: https://acme-v01.api.letsencrypt.org/directory
remaining_days: 60
data: "{{ sample_com_challenge }}"
when: sample_com_challenge is changed
'''
RETURN = '''
cert_days:
description: The number of days the certificate remains valid.
returned: success
type: int
challenge_data:
description:
- Per identifier / challenge type challenge data.
- Since Ansible 2.8.5, only challenges which are not yet valid are returned.
returned: changed
type: complex
contains:
resource:
description: The challenge resource that must be created for validation.
returned: changed
type: str
sample: .well-known/acme-challenge/evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA
resource_original:
description:
- The original challenge resource including type identifier for C(tls-alpn-01)
challenges.
returned: changed and challenge is C(tls-alpn-01)
type: str
sample: DNS:example.com
version_added: "2.8"
resource_value:
description:
- The value the resource has to produce for the validation.
- For C(http-01) and C(dns-01) challenges, the value can be used as-is.
- "For C(tls-alpn-01) challenges, note that this return value contains a
Base64 encoded version of the correct binary blob which has to be put
into the acmeValidation x509 extension; see
U(https://tools.ietf.org/html/draft-ietf-acme-tls-alpn-05#section-3)
for details. To do this, you might need the C(b64decode) Jinja filter
to extract the binary blob from this return value."
returned: changed
type: str
sample: IlirfxKKXA...17Dt3juxGJ-PCt92wr-oA
record:
description: The full DNS record's name for the challenge.
returned: changed and challenge is C(dns-01)
type: str
sample: _acme-challenge.example.com
version_added: "2.5"
challenge_data_dns:
description:
- List of TXT values per DNS record, in case challenge is C(dns-01).
- Since Ansible 2.8.5, only challenges which are not yet valid are returned.
returned: changed
type: dict
version_added: "2.5"
authorizations:
description: ACME authorization data.
returned: changed
type: complex
contains:
authorization:
description: ACME authorization object. See U(https://tools.ietf.org/html/rfc8555#section-7.1.4)
returned: success
type: dict
order_uri:
description: ACME order URI.
returned: changed
type: str
version_added: "2.5"
finalization_uri:
description: ACME finalization URI.
returned: changed
type: str
version_added: "2.5"
account_uri:
description: ACME account URI.
returned: changed
type: str
version_added: "2.5"
all_chains:
description:
- When I(retrieve_all_alternates) is set to C(yes), the module will query the ACME server
for alternate chains. This return value will contain a list of all chains returned,
the first entry being the main chain returned by the server.
- See L(Section 7.4.2 of RFC8555,https://tools.ietf.org/html/rfc8555#section-7.4.2) for details.
returned: when certificate was retrieved and I(retrieve_all_alternates) is set to C(yes)
type: list
contains:
cert:
description:
- The leaf certificate itself, in PEM format.
type: str
returned: always
chain:
description:
- The certificate chain, excluding the root, as concatenated PEM certificates.
type: str
returned: always
full_chain:
description:
- The certificate chain, excluding the root, but including the leaf certificate,
as concatenated PEM certificates.
type: str
returned: always
'''
from ansible.module_utils.acme import (
ModuleFailException,
write_file, nopad_b64, pem_to_der,
ACMEAccount,
HAS_CURRENT_CRYPTOGRAPHY,
cryptography_get_csr_identifiers,
openssl_get_csr_identifiers,
cryptography_get_cert_days,
set_crypto_backend,
process_links,
)
import base64
import hashlib
import locale
import os
import re
import textwrap
import time
import urllib
from datetime import datetime
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_bytes
from ansible.module_utils.compat import ipaddress as compat_ipaddress
def get_cert_days(module, cert_file):
'''
Return the days the certificate in cert_file remains valid and -1
if the file was not found. If cert_file contains more than one
certificate, only the first one will be considered.
'''
if HAS_CURRENT_CRYPTOGRAPHY:
return cryptography_get_cert_days(module, cert_file)
if not os.path.exists(cert_file):
return -1
openssl_bin = module.get_bin_path('openssl', True)
openssl_cert_cmd = [openssl_bin, "x509", "-in", cert_file, "-noout", "-text"]
dummy, out, dummy = module.run_command(openssl_cert_cmd, check_rc=True, encoding=None)
try:
not_after_str = re.search(r"\s+Not After\s*:\s+(.*)", out.decode('utf8')).group(1)
not_after = datetime.fromtimestamp(time.mktime(time.strptime(not_after_str, '%b %d %H:%M:%S %Y %Z')))
except AttributeError:
raise ModuleFailException("No 'Not after' date found in {0}".format(cert_file))
except ValueError:
raise ModuleFailException("Failed to parse 'Not after' date of {0}".format(cert_file))
now = datetime.utcnow()
return (not_after - now).days
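The OpenSSL fallback in `get_cert_days` can be exercised in isolation. A minimal sketch, assuming a `Not After` line in the form produced by `openssl x509 -noout -text` (the sample line and date below are illustrative, not taken from a real certificate):

```python
import re
import time
from datetime import datetime

# Sample line mimicking `openssl x509 -in cert.pem -noout -text` output
sample = "            Not After : Sep 14 21:24:32 2029 GMT"

# Same regex and strptime format string as the fallback above
not_after_str = re.search(r"\s+Not After\s*:\s+(.*)", sample).group(1)
not_after = datetime.fromtimestamp(
    time.mktime(time.strptime(not_after_str, '%b %d %H:%M:%S %Y %Z')))

# Remaining validity in whole days, negative once the date has passed
days_left = (not_after - datetime.utcnow()).days
print(not_after.year)
```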
class ACMEClient(object):
'''
ACME client class. Uses an ACME account object and a CSR to
start and validate ACME challenges and download the respective
certificates.
'''
def __init__(self, module):
self.module = module
self.version = module.params['acme_version']
self.challenge = module.params['challenge']
self.csr = module.params['csr']
self.dest = module.params.get('dest')
self.fullchain_dest = module.params.get('fullchain_dest')
self.chain_dest = module.params.get('chain_dest')
self.account = ACMEAccount(module)
self.directory = self.account.directory
self.data = module.params['data']
self.authorizations = None
self.cert_days = -1
self.order_uri = self.data.get('order_uri') if self.data else None
self.finalize_uri = None
# Make sure account exists
modify_account = module.params['modify_account']
if modify_account or self.version > 1:
contact = []
if module.params['account_email']:
contact.append('mailto:' + module.params['account_email'])
created, account_data = self.account.setup_account(
contact,
agreement=module.params.get('agreement'),
terms_agreed=module.params.get('terms_agreed'),
allow_creation=modify_account,
)
if account_data is None:
raise ModuleFailException(msg='Account does not exist or is deactivated.')
updated = False
if not created and account_data and modify_account:
updated, account_data = self.account.update_account(account_data, contact)
self.changed = created or updated
else:
# This happens if modify_account is False and the ACME v1
# protocol is used. In this case, we do not call setup_account()
# to avoid accidental creation of an account. This is OK
# since for ACME v1, the account URI is not needed to send a
# signed ACME request.
pass
if not os.path.exists(self.csr):
raise ModuleFailException("CSR %s not found" % (self.csr))
self._openssl_bin = module.get_bin_path('openssl', True)
# Extract list of identifiers from CSR
self.identifiers = self._get_csr_identifiers()
def _get_csr_identifiers(self):
'''
Parse the CSR and return the list of requested identifiers
'''
if HAS_CURRENT_CRYPTOGRAPHY:
return cryptography_get_csr_identifiers(self.module, self.csr)
else:
return openssl_get_csr_identifiers(self._openssl_bin, self.module, self.csr)
def _add_or_update_auth(self, identifier_type, identifier, auth):
'''
Add or update the given authorization in the global authorizations list.
Return True if the auth was updated/added and False if no change was
necessary.
'''
if self.authorizations.get(identifier_type + ':' + identifier) == auth:
return False
self.authorizations[identifier_type + ':' + identifier] = auth
return True
def _new_authz_v1(self, identifier_type, identifier):
'''
Create a new authorization for the given identifier.
Return the authorization object of the new authorization
https://tools.ietf.org/html/draft-ietf-acme-acme-02#section-6.4
'''
if self.account.uri is None:
return
new_authz = {
"resource": "new-authz",
"identifier": {"type": identifier_type, "value": identifier},
}
result, info = self.account.send_signed_request(self.directory['new-authz'], new_authz)
if info['status'] not in [200, 201]:
raise ModuleFailException("Error requesting challenges: CODE: {0} RESULT: {1}".format(info['status'], result))
else:
result['uri'] = info['location']
return result
def _get_challenge_data(self, auth, identifier_type, identifier):
'''
Returns a dict with the data for all proposed (and supported) challenges
of the given authorization.
'''
data = {}
# no need to choose a specific challenge here as this module
# is not responsible for fulfilling the challenges. Calculate
# and return the required information for each challenge.
for challenge in auth['challenges']:
challenge_type = challenge['type']
token = re.sub(r"[^A-Za-z0-9_\-]", "_", challenge['token'])
keyauthorization = self.account.get_keyauthorization(token)
if challenge_type == 'http-01':
# https://tools.ietf.org/html/rfc8555#section-8.3
resource = '.well-known/acme-challenge/' + token
data[challenge_type] = {'resource': resource, 'resource_value': keyauthorization}
elif challenge_type == 'dns-01':
if identifier_type != 'dns':
continue
# https://tools.ietf.org/html/rfc8555#section-8.4
resource = '_acme-challenge'
value = nopad_b64(hashlib.sha256(to_bytes(keyauthorization)).digest())
record = (resource + identifier[1:]) if identifier.startswith('*.') else (resource + '.' + identifier)
data[challenge_type] = {'resource': resource, 'resource_value': value, 'record': record}
elif challenge_type == 'tls-alpn-01':
# https://tools.ietf.org/html/draft-ietf-acme-tls-alpn-05#section-3
if identifier_type == 'ip':
# IPv4/IPv6 address: use reverse mapping (RFC1034, RFC3596)
resource = compat_ipaddress.ip_address(identifier).reverse_pointer
if not resource.endswith('.'):
resource += '.'
else:
resource = identifier
value = base64.b64encode(hashlib.sha256(to_bytes(keyauthorization)).digest())
data[challenge_type] = {'resource': resource, 'resource_original': identifier_type + ':' + identifier, 'resource_value': value}
else:
continue
return data
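The C(dns-01) branch above can be reproduced standalone. In this sketch, `nopad_b64` is re-implemented to match the helper imported from `ansible.module_utils.acme`, and the key authorization string is a made-up placeholder (real values are `<token>.<account key thumbprint>`):

```python
import base64
import hashlib

def nopad_b64(data):
    # base64url without trailing '=' padding, like the module_utils helper
    return base64.urlsafe_b64encode(data).decode('utf8').rstrip('=')

# Placeholder key authorization, for illustration only
keyauthorization = 'evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ.nP1qzpXGymHBrUEepN'

# TXT record value: unpadded base64url of the SHA-256 of the key authorization
value = nopad_b64(hashlib.sha256(keyauthorization.encode('utf8')).digest())

# Record name: a wildcard identifier shares the record of its base domain
for identifier in ('example.com', '*.example.com'):
    resource = '_acme-challenge'
    record = (resource + identifier[1:]) if identifier.startswith('*.') \
        else (resource + '.' + identifier)
    print(record)  # both iterations print _acme-challenge.example.com
```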
def _fail_challenge(self, identifier_type, identifier, auth, error):
'''
Aborts with a specific error for a challenge.
'''
error_details = ''
# multiple challenges could have failed at this point, gather error
# details for all of them before failing
for challenge in auth['challenges']:
if challenge['status'] == 'invalid':
error_details += ' CHALLENGE: {0}'.format(challenge['type'])
if 'error' in challenge:
error_details += ' DETAILS: {0};'.format(challenge['error']['detail'])
else:
error_details += ';'
raise ModuleFailException("{0}: {1}".format(error.format(identifier_type + ':' + identifier), error_details))
def _validate_challenges(self, identifier_type, identifier, auth):
'''
Validate the authorization provided in the auth dict. Returns True
when the validation was successful and False when it was not.
'''
for challenge in auth['challenges']:
if self.challenge != challenge['type']:
continue
uri = challenge['uri'] if self.version == 1 else challenge['url']
challenge_response = {}
if self.version == 1:
token = re.sub(r"[^A-Za-z0-9_\-]", "_", challenge['token'])
keyauthorization = self.account.get_keyauthorization(token)
challenge_response["resource"] = "challenge"
challenge_response["keyAuthorization"] = keyauthorization
result, info = self.account.send_signed_request(uri, challenge_response)
if info['status'] not in [200, 202]:
raise ModuleFailException("Error validating challenge: CODE: {0} RESULT: {1}".format(info['status'], result))
status = ''
while status not in ['valid', 'invalid', 'revoked']:
result, dummy = self.account.get_request(auth['uri'])
result['uri'] = auth['uri']
if self._add_or_update_auth(identifier_type, identifier, result):
self.changed = True
# https://tools.ietf.org/html/draft-ietf-acme-acme-02#section-6.1.2
# "status (required, string): ...
# If this field is missing, then the default value is "pending"."
if self.version == 1 and 'status' not in result:
status = 'pending'
else:
status = result['status']
time.sleep(2)
if status == 'invalid':
self._fail_challenge(identifier_type, identifier, result, 'Authorization for {0} returned invalid')
return status == 'valid'
def _finalize_cert(self):
'''
Create a new certificate based on the csr.
Return the certificate object as dict
https://tools.ietf.org/html/rfc8555#section-7.4
'''
csr = pem_to_der(self.csr)
new_cert = {
"csr": nopad_b64(csr),
}
result, info = self.account.send_signed_request(self.finalize_uri, new_cert)
if info['status'] not in [200]:
raise ModuleFailException("Error new cert: CODE: {0} RESULT: {1}".format(info['status'], result))
status = result['status']
while status not in ['valid', 'invalid']:
time.sleep(2)
result, dummy = self.account.get_request(self.order_uri)
status = result['status']
if status != 'valid':
raise ModuleFailException("Error new cert: CODE: {0} STATUS: {1} RESULT: {2}".format(info['status'], status, result))
return result['certificate']
def _der_to_pem(self, der_cert):
'''
Convert the DER format certificate in der_cert to a PEM format
certificate and return it.
'''
return """-----BEGIN CERTIFICATE-----\n{0}\n-----END CERTIFICATE-----\n""".format(
"\n".join(textwrap.wrap(base64.b64encode(der_cert).decode('utf8'), 64)))
def _download_cert(self, url):
'''
Download and parse the certificate chain.
https://tools.ietf.org/html/rfc8555#section-7.4.2
'''
content, info = self.account.get_request(url, parse_json_result=False, headers={'Accept': 'application/pem-certificate-chain'})
if not content or not info['content-type'].startswith('application/pem-certificate-chain'):
raise ModuleFailException("Cannot download certificate chain from {0}: {1} (headers: {2})".format(url, content, info))
cert = None
chain = []
# Parse data
lines = content.decode('utf-8').splitlines(True)
current = []
for line in lines:
if line.strip():
current.append(line)
if line.startswith('-----END CERTIFICATE-----'):
if cert is None:
cert = ''.join(current)
else:
chain.append(''.join(current))
current = []
alternates = []
def f(link, relation):
if relation == 'up':
# Process link-up headers if there was no chain in reply
if not chain:
chain_result, chain_info = self.account.get_request(link, parse_json_result=False)
if chain_info['status'] in [200, 201]:
chain.append(self._der_to_pem(chain_result))
elif relation == 'alternate':
alternates.append(link)
process_links(info, f)
if cert is None or current:
raise ModuleFailException("Failed to parse certificate chain download from {0}: {1} (headers: {2})".format(url, content, info))
return {'cert': cert, 'chain': chain, 'alternates': alternates}
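The PEM-splitting loop in `_download_cert` can be tried on its own. A sketch using a fabricated `application/pem-certificate-chain` body (the `LEAF` and `INTERMEDIATE` contents are placeholders, not real Base64 data):

```python
# Fabricated two-certificate chain body, placeholders instead of real data
content = (
    "-----BEGIN CERTIFICATE-----\nLEAF\n-----END CERTIFICATE-----\n"
    "-----BEGIN CERTIFICATE-----\nINTERMEDIATE\n-----END CERTIFICATE-----\n"
)

cert = None
chain = []
current = []
# splitlines(True) keeps the newlines so blocks can be re-joined verbatim
for line in content.splitlines(True):
    if line.strip():
        current.append(line)
        if line.startswith('-----END CERTIFICATE-----'):
            if cert is None:
                cert = ''.join(current)          # first block is the leaf
            else:
                chain.append(''.join(current))   # the rest form the chain
            current = []

print('LEAF' in cert, len(chain))  # True 1
```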
def _new_cert_v1(self):
'''
Create a new certificate based on the CSR (ACME v1 protocol).
Return the certificate object as dict
https://tools.ietf.org/html/draft-ietf-acme-acme-02#section-6.5
'''
csr = pem_to_der(self.csr)
new_cert = {
"resource": "new-cert",
"csr": nopad_b64(csr),
}
result, info = self.account.send_signed_request(self.directory['new-cert'], new_cert)
chain = []
def f(link, relation):
if relation == 'up':
chain_result, chain_info = self.account.get_request(link, parse_json_result=False)
if chain_info['status'] in [200, 201]:
del chain[:]  # list.clear() does not exist on Python 2; reset the list in place
chain.append(self._der_to_pem(chain_result))
process_links(info, f)
if info['status'] not in [200, 201]:
raise ModuleFailException("Error new cert: CODE: {0} RESULT: {1}".format(info['status'], result))
else:
return {'cert': self._der_to_pem(result), 'uri': info['location'], 'chain': chain}
def _new_order_v2(self):
'''
Start a new certificate order (ACME v2 protocol).
https://tools.ietf.org/html/rfc8555#section-7.4
'''
identifiers = []
for identifier_type, identifier in self.identifiers:
identifiers.append({
'type': identifier_type,
'value': identifier,
})
new_order = {
"identifiers": identifiers
}
result, info = self.account.send_signed_request(self.directory['newOrder'], new_order)
if info['status'] not in [201]:
raise ModuleFailException("Error new order: CODE: {0} RESULT: {1}".format(info['status'], result))
for auth_uri in result['authorizations']:
auth_data, dummy = self.account.get_request(auth_uri)
auth_data['uri'] = auth_uri
identifier_type = auth_data['identifier']['type']
identifier = auth_data['identifier']['value']
if auth_data.get('wildcard', False):
identifier = '*.{0}'.format(identifier)
self.authorizations[identifier_type + ':' + identifier] = auth_data
self.order_uri = info['location']
self.finalize_uri = result['finalize']
def is_first_step(self):
'''
Return True if this is the first execution of this module, i.e. if a
sufficient data object from a first run has not been provided.
'''
if self.data is None:
return True
if self.version == 1:
# As soon as self.data is a non-empty object, we are in the second stage.
return not self.data
else:
# We are in the second stage if data.order_uri is given (which has been
# stored in self.order_uri by the constructor).
return self.order_uri is None
def start_challenges(self):
'''
Create new authorizations for all identifiers of the CSR (ACME v1),
or start a new order (ACME v2).
'''
self.authorizations = {}
if self.version == 1:
for identifier_type, identifier in self.identifiers:
if identifier_type != 'dns':
raise ModuleFailException('ACME v1 only supports DNS identifiers!')
for identifier_type, identifier in self.identifiers:
new_auth = self._new_authz_v1(identifier_type, identifier)
self._add_or_update_auth(identifier_type, identifier, new_auth)
else:
self._new_order_v2()
self.changed = True
def get_challenges_data(self):
'''
Get challenge details for the chosen challenge type.
Return a tuple of generic challenge details, and specialized DNS challenge details.
'''
# Get general challenge data
data = {}
for type_identifier, auth in self.authorizations.items():
identifier_type, identifier = type_identifier.split(':', 1)
auth = self.authorizations[type_identifier]
# Skip valid authentications: their challenges are already valid
# and do not need to be returned
if auth['status'] == 'valid':
continue
# We drop the type from the key to preserve backwards compatibility
data[identifier] = self._get_challenge_data(auth, identifier_type, identifier)
# Get DNS challenge data
data_dns = {}
if self.challenge == 'dns-01':
for identifier, challenges in data.items():
if self.challenge in challenges:
values = data_dns.get(challenges[self.challenge]['record'])
if values is None:
values = []
data_dns[challenges[self.challenge]['record']] = values
values.append(challenges[self.challenge]['resource_value'])
return data, data_dns
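The wildcard case is why `data_dns` maps a record name to a list: `*.example.com` and `example.com` validate against the same `_acme-challenge` record and so need two TXT values under one name. A minimal sketch of that grouping (helper name is illustrative):

```python
def group_dns_challenges(challenge_data):
    """Group dns-01 TXT values by DNS record name.

    challenge_data maps identifier -> {challenge_type: details}, as
    built in get_challenges_data(); identifiers whose record names
    collide (wildcard plus base domain) share one list of TXT values.
    """
    data_dns = {}
    for challenges in challenge_data.values():
        details = challenges.get('dns-01')
        if details is None:
            continue
        data_dns.setdefault(details['record'], []).append(details['resource_value'])
    return data_dns
```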
def finish_challenges(self):
'''
Verify challenges for all identifiers of the CSR.
'''
self.authorizations = {}
# Step 1: obtain challenge information
if self.version == 1:
# For ACME v1, we attempt to create new authzs. Existing ones
# will be returned instead.
for identifier_type, identifier in self.identifiers:
new_auth = self._new_authz_v1(identifier_type, identifier)
self._add_or_update_auth(identifier_type, identifier, new_auth)
else:
# For ACME v2, we obtain the order object by fetching the
# order URI, and extract the information from there.
result, info = self.account.get_request(self.order_uri)
if not result:
raise ModuleFailException("Cannot download order from {0}: {1} (headers: {2})".format(self.order_uri, result, info))
if info['status'] not in [200]:
raise ModuleFailException("Error on downloading order: CODE: {0} RESULT: {1}".format(info['status'], result))
for auth_uri in result['authorizations']:
auth_data, dummy = self.account.get_request(auth_uri)
auth_data['uri'] = auth_uri
identifier_type = auth_data['identifier']['type']
identifier = auth_data['identifier']['value']
if auth_data.get('wildcard', False):
identifier = '*.{0}'.format(identifier)
self.authorizations[identifier_type + ':' + identifier] = auth_data
self.finalize_uri = result['finalize']
# Step 2: validate challenges
for type_identifier, auth in self.authorizations.items():
if auth['status'] == 'pending':
identifier_type, identifier = type_identifier.split(':', 1)
self._validate_challenges(identifier_type, identifier, auth)
def get_certificate(self):
'''
Request a new certificate and write it to the destination file.
First verifies whether all authorizations are valid; if not, aborts
with an error.
'''
for identifier_type, identifier in self.identifiers:
auth = self.authorizations.get(identifier_type + ':' + identifier)
if auth is None:
raise ModuleFailException('Found no authorization information for "{0}"!'.format(identifier_type + ':' + identifier))
if 'status' not in auth:
self._fail_challenge(identifier_type, identifier, auth, 'Authorization for {0} returned no status')
if auth['status'] != 'valid':
self._fail_challenge(identifier_type, identifier, auth, 'Authorization for {0} returned status ' + str(auth['status']))
if self.version == 1:
cert = self._new_cert_v1()
else:
cert_uri = self._finalize_cert()
cert = self._download_cert(cert_uri)
if self.module.params['retrieve_all_alternates']:
alternate_chains = []
for alternate in cert['alternates']:
try:
alt_cert = self._download_cert(alternate)
except ModuleFailException as e:
self.module.warn('Error while downloading alternative certificate {0}: {1}'.format(alternate, e))
continue
alternate_chains.append(alt_cert)
self.all_chains = []
def _append_all_chains(cert_data):
self.all_chains.append(dict(
cert=cert_data['cert'].encode('utf8'),
chain=("\n".join(cert_data.get('chain', []))).encode('utf8'),
full_chain=(cert_data['cert'] + "\n".join(cert_data.get('chain', []))).encode('utf8'),
))
_append_all_chains(cert)
for alt_chain in alternate_chains:
_append_all_chains(alt_chain)
if cert['cert'] is not None:
pem_cert = cert['cert']
chain = [link for link in cert.get('chain', [])]
if self.dest and write_file(self.module, self.dest, pem_cert.encode('utf8')):
self.cert_days = get_cert_days(self.module, self.dest)
self.changed = True
if self.fullchain_dest and write_file(self.module, self.fullchain_dest, (pem_cert + "\n".join(chain)).encode('utf8')):
self.cert_days = get_cert_days(self.module, self.fullchain_dest)
self.changed = True
if self.chain_dest and write_file(self.module, self.chain_dest, ("\n".join(chain)).encode('utf8')):
self.changed = True
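The three `write_file()` payloads above differ only in how the leaf certificate and the intermediate chain are concatenated; a sketch of that assembly (hypothetical helper mirroring the join logic):

```python
def assemble_outputs(pem_cert, chain):
    """Build the three file payloads written by get_certificate().

    pem_cert is the leaf certificate in PEM format and chain is the
    list of intermediate certificates; the module joins them with
    newlines before encoding and writing each destination file.
    """
    return {
        'cert': pem_cert,
        'fullchain': pem_cert + "\n".join(chain),
        'chain': "\n".join(chain),
    }
```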
def deactivate_authzs(self):
'''
        Deactivates all valid authzs. Does not raise exceptions.
https://community.letsencrypt.org/t/authorization-deactivation/19860/2
https://tools.ietf.org/html/rfc8555#section-7.5.2
'''
authz_deactivate = {
'status': 'deactivated'
}
if self.version == 1:
authz_deactivate['resource'] = 'authz'
if self.authorizations:
for identifier_type, identifier in self.identifiers:
auth = self.authorizations.get(identifier_type + ':' + identifier)
if auth is None or auth.get('status') != 'valid':
continue
try:
result, info = self.account.send_signed_request(auth['uri'], authz_deactivate)
if 200 <= info['status'] < 300 and result.get('status') == 'deactivated':
auth['status'] = 'deactivated'
except Exception as dummy:
# Ignore errors on deactivating authzs
pass
if auth.get('status') != 'deactivated':
self.module.warn(warning='Could not deactivate authz object {0}.'.format(auth['uri']))
def main():
module = AnsibleModule(
argument_spec=dict(
account_key_src=dict(type='path', aliases=['account_key']),
account_key_content=dict(type='str', no_log=True),
account_uri=dict(type='str'),
modify_account=dict(type='bool', default=True),
acme_directory=dict(type='str', default='https://acme-staging.api.letsencrypt.org/directory'),
acme_version=dict(type='int', default=1, choices=[1, 2]),
validate_certs=dict(default=True, type='bool'),
account_email=dict(type='str'),
agreement=dict(type='str'),
terms_agreed=dict(type='bool', default=False),
challenge=dict(type='str', default='http-01', choices=['http-01', 'dns-01', 'tls-alpn-01']),
csr=dict(type='path', required=True, aliases=['src']),
data=dict(type='dict'),
dest=dict(type='path', aliases=['cert']),
fullchain_dest=dict(type='path', aliases=['fullchain']),
chain_dest=dict(type='path', aliases=['chain']),
remaining_days=dict(type='int', default=10),
deactivate_authzs=dict(type='bool', default=False),
force=dict(type='bool', default=False),
retrieve_all_alternates=dict(type='bool', default=False),
select_crypto_backend=dict(type='str', default='auto', choices=['auto', 'openssl', 'cryptography']),
),
required_one_of=(
['account_key_src', 'account_key_content'],
['dest', 'fullchain_dest'],
),
mutually_exclusive=(
['account_key_src', 'account_key_content'],
),
supports_check_mode=True,
)
if module._name == 'letsencrypt':
module.deprecate("The 'letsencrypt' module is being renamed 'acme_certificate'", version='2.10')
set_crypto_backend(module)
# AnsibleModule() changes the locale, so change it back to C because we rely on time.strptime() when parsing certificate dates.
module.run_command_environ_update = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C', LC_CTYPE='C')
locale.setlocale(locale.LC_ALL, 'C')
if not module.params.get('validate_certs'):
module.warn(warning='Disabling certificate validation for communications with ACME endpoint. ' +
'This should only be done for testing against a local ACME server for ' +
'development purposes, but *never* for production purposes.')
try:
if module.params.get('dest'):
cert_days = get_cert_days(module, module.params['dest'])
else:
cert_days = get_cert_days(module, module.params['fullchain_dest'])
if module.params['force'] or cert_days < module.params['remaining_days']:
# If checkmode is active, base the changed state solely on the status
# of the certificate file as all other actions (accessing an account, checking
# the authorization status...) would lead to potential changes of the current
# state
if module.check_mode:
module.exit_json(changed=True, authorizations={}, challenge_data={}, cert_days=cert_days)
else:
client = ACMEClient(module)
client.cert_days = cert_days
other = dict()
if client.is_first_step():
# First run: start challenges / start new order
client.start_challenges()
else:
# Second run: finish challenges, and get certificate
try:
client.finish_challenges()
client.get_certificate()
if module.params['retrieve_all_alternates']:
other['all_chains'] = client.all_chains
finally:
if module.params['deactivate_authzs']:
client.deactivate_authzs()
data, data_dns = client.get_challenges_data()
auths = dict()
for k, v in client.authorizations.items():
# Remove "type:" from key
auths[k.split(':', 1)[1]] = v
module.exit_json(
changed=client.changed,
authorizations=auths,
finalize_uri=client.finalize_uri,
order_uri=client.order_uri,
account_uri=client.account.uri,
challenge_data=data,
challenge_data_dns=data_dns,
cert_days=client.cert_days,
**other
)
else:
module.exit_json(changed=False, cert_days=cert_days)
except ModuleFailException as e:
e.do_fail(module)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,890 |
acme_certificate contains deprecated call to be removed in 2.10
|
##### SUMMARY
acme_certificate contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/modules/crypto/acme/acme_certificate.py:1024:8: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/crypto/acme/acme_certificate.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
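The check that produced this report can be approximated with a small scan over the module source. This is an illustrative sketch, not the actual `ansible-deprecated-version` implementation in ansible-test:

```python
import re

# Matches deprecate(...) calls that carry a version='X.Y' keyword.
DEPRECATED_CALL = re.compile(r"""\.deprecate\(.*version=['"]([0-9.]+)['"]""")

def find_expired_deprecations(source_lines, current_version):
    """Return (line_number, version) pairs for deprecate() calls whose
    removal version is at or below the current release."""
    current = tuple(int(p) for p in current_version.split('.'))
    hits = []
    for lineno, line in enumerate(source_lines, 1):
        match = DEPRECATED_CALL.search(line)
        if match:
            version = tuple(int(p) for p in match.group(1).split('.'))
            if version <= current:
                hits.append((lineno, match.group(1)))
    return hits
```

Against the 2.10 development branch, the `version='2.10'` call in acme_certificate.py trips this check, which is why the deprecated `letsencrypt` alias had to be removed.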
|
https://github.com/ansible/ansible/issues/61890
|
https://github.com/ansible/ansible/pull/61648
|
14bccef2c207584bf19132fbbf10ab2237746b9e
|
a0bec0bc327d29f446a031abc803b8d2cad1949f
| 2019-09-05T20:41:11Z |
python
| 2019-09-14T21:24:32Z |
lib/ansible/modules/crypto/acme/_letsencrypt.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2016 Michael Gruener <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: acme_certificate
author: "Michael Gruener (@mgruener)"
version_added: "2.2"
short_description: Create SSL/TLS certificates with the ACME protocol
description:
- "Create and renew SSL/TLS certificates with a CA supporting the
L(ACME protocol,https://tools.ietf.org/html/rfc8555),
such as L(Let's Encrypt,https://letsencrypt.org/). The current
implementation supports the C(http-01), C(dns-01) and C(tls-alpn-01)
challenges."
  - "To use this module, it has to be executed twice, either as two
    different tasks in the same run or during two runs. Note that the output
of the first run needs to be recorded and passed to the second run as the
module argument C(data)."
- "Between these two tasks you have to fulfill the required steps for the
chosen challenge by whatever means necessary. For C(http-01) that means
creating the necessary challenge file on the destination webserver. For
    C(dns-01) the necessary DNS record has to be created. For C(tls-alpn-01)
the necessary certificate has to be created and served.
It is I(not) the responsibility of this module to perform these steps."
- "For details on how to fulfill these challenges, you might have to read through
L(the main ACME specification,https://tools.ietf.org/html/rfc8555#section-8)
and the L(TLS-ALPN-01 specification,https://tools.ietf.org/html/draft-ietf-acme-tls-alpn-05#section-3).
Also, consider the examples provided for this module."
- "The module includes experimental support for IP identifiers according to
the L(current ACME IP draft,https://tools.ietf.org/html/draft-ietf-acme-ip-05)."
notes:
- "At least one of C(dest) and C(fullchain_dest) must be specified."
- "This module includes basic account management functionality.
If you want to have more control over your ACME account, use the M(acme_account)
module and disable account management for this module using the C(modify_account)
option."
- "This module was called C(letsencrypt) before Ansible 2.6. The usage
did not change."
seealso:
- name: The Let's Encrypt documentation
description: Documentation for the Let's Encrypt Certification Authority.
Provides useful information for example on rate limits.
link: https://letsencrypt.org/docs/
- name: Automatic Certificate Management Environment (ACME)
description: The specification of the ACME protocol (RFC 8555).
link: https://tools.ietf.org/html/rfc8555
- name: ACME TLS ALPN Challenge Extension
description: The current draft specification of the C(tls-alpn-01) challenge.
link: https://tools.ietf.org/html/draft-ietf-acme-tls-alpn-05
- module: acme_challenge_cert_helper
description: Helps preparing C(tls-alpn-01) challenges.
- module: openssl_privatekey
description: Can be used to create private keys (both for certificates and accounts).
- module: openssl_csr
description: Can be used to create a Certificate Signing Request (CSR).
- module: certificate_complete_chain
    description: Allows finding the root certificate for the returned fullchain.
- module: acme_certificate_revoke
    description: Allows revoking certificates.
- module: acme_account
    description: Allows creating, modifying or deleting an ACME account.
- module: acme_inspect
    description: Allows debugging problems.
extends_documentation_fragment:
- acme
options:
account_email:
description:
- "The email address associated with this account."
- "It will be used for certificate expiration warnings."
- "Note that when C(modify_account) is not set to C(no) and you also
used the M(acme_account) module to specify more than one contact
for your account, this module will update your account and restrict
it to the (at most one) contact email address specified here."
type: str
agreement:
description:
- "URI to a terms of service document you agree to when using the
ACME v1 service at C(acme_directory)."
- Default is latest gathered from C(acme_directory) URL.
- This option will only be used when C(acme_version) is 1.
type: str
terms_agreed:
description:
- "Boolean indicating whether you agree to the terms of service document."
- "ACME servers can require this to be true."
- This option will only be used when C(acme_version) is not 1.
type: bool
default: no
version_added: "2.5"
modify_account:
description:
- "Boolean indicating whether the module should create the account if
necessary, and update its contact data."
- "Set to C(no) if you want to use the M(acme_account) module to manage
your account instead, and to avoid accidental creation of a new account
using an old key if you changed the account key with M(acme_account)."
- "If set to C(no), C(terms_agreed) and C(account_email) are ignored."
type: bool
default: yes
version_added: "2.6"
challenge:
description: The challenge to be performed.
type: str
default: 'http-01'
choices: [ 'http-01', 'dns-01', 'tls-alpn-01' ]
csr:
description:
- "File containing the CSR for the new certificate."
- "Can be created with C(openssl req ...)."
      - "The CSR may contain multiple Subject Alternative Names, but each one
will lead to an individual challenge that must be fulfilled for the
CSR to be signed."
- "I(Note): the private key used to create the CSR I(must not) be the
account key. This is a bad idea from a security point of view, and
the CA should not accept the CSR. The ACME server should return an
error in this case."
type: path
required: true
aliases: ['src']
data:
description:
- "The data to validate ongoing challenges. This must be specified for
the second run of the module only."
- "The value that must be used here will be provided by a previous use
of this module. See the examples for more details."
- "Note that for ACME v2, only the C(order_uri) entry of C(data) will
be used. For ACME v1, C(data) must be non-empty to indicate the
second stage is active; all needed data will be taken from the
CSR."
- "I(Note): the C(data) option was marked as C(no_log) up to
Ansible 2.5. From Ansible 2.6 on, it is no longer marked this way
        as it causes error messages to become unusable, and C(data) does
not contain any information which can be used without having
access to the account key or which are not public anyway."
type: dict
dest:
description:
- "The destination file for the certificate."
- "Required if C(fullchain_dest) is not specified."
type: path
aliases: ['cert']
fullchain_dest:
description:
- "The destination file for the full chain (i.e. certificate followed
by chain of intermediate certificates)."
- "Required if C(dest) is not specified."
type: path
version_added: 2.5
aliases: ['fullchain']
chain_dest:
description:
- If specified, the intermediate certificate will be written to this file.
type: path
version_added: 2.5
aliases: ['chain']
remaining_days:
description:
- "The number of days the certificate must have left being valid.
If C(cert_days < remaining_days), then it will be renewed.
If the certificate is not renewed, module return values will not
include C(challenge_data)."
- "To make sure that the certificate is renewed in any case, you can
use the C(force) option."
type: int
default: 10
deactivate_authzs:
description:
      - "Deactivate authorization objects (authz) after issuing a certificate,
or when issuing the certificate failed."
      - "Authorization objects are bound to an account key and remain valid
for a certain amount of time, and can be used to issue certificates
without having to re-authenticate the domain. This can be a security
concern."
type: bool
default: no
version_added: 2.6
force:
description:
- Enforces the execution of the challenge and validation, even if an
existing certificate is still valid for more than C(remaining_days).
- This is especially helpful when having an updated CSR e.g. with
additional domains for which a new certificate is desired.
type: bool
default: no
version_added: 2.6
retrieve_all_alternates:
description:
- "When set to C(yes), will retrieve all alternate chains offered by the ACME CA.
These will not be written to disk, but will be returned together with the main
chain as C(all_chains). See the documentation for the C(all_chains) return
value for details."
type: bool
default: no
version_added: "2.9"
'''
EXAMPLES = r'''
### Example with HTTP challenge ###
- name: Create a challenge for sample.com using an account key from a variable.
acme_certificate:
account_key_content: "{{ account_private_key }}"
csr: /etc/pki/cert/csr/sample.com.csr
dest: /etc/httpd/ssl/sample.com.crt
register: sample_com_challenge
# Alternative first step:
- name: Create a challenge for sample.com using an account key from hashi vault.
acme_certificate:
account_key_content: "{{ lookup('hashi_vault', 'secret=secret/account_private_key:value') }}"
csr: /etc/pki/cert/csr/sample.com.csr
fullchain_dest: /etc/httpd/ssl/sample.com-fullchain.crt
register: sample_com_challenge
# Alternative first step:
- name: Create a challenge for sample.com using an account key file.
acme_certificate:
account_key_src: /etc/pki/cert/private/account.key
csr: /etc/pki/cert/csr/sample.com.csr
dest: /etc/httpd/ssl/sample.com.crt
fullchain_dest: /etc/httpd/ssl/sample.com-fullchain.crt
register: sample_com_challenge
# perform the necessary steps to fulfill the challenge
# for example:
#
# - copy:
# dest: /var/www/html/{{ sample_com_challenge['challenge_data']['sample.com']['http-01']['resource'] }}
# content: "{{ sample_com_challenge['challenge_data']['sample.com']['http-01']['resource_value'] }}"
# when: sample_com_challenge is changed
- name: Let the challenge be validated and retrieve the cert and intermediate certificate
acme_certificate:
account_key_src: /etc/pki/cert/private/account.key
csr: /etc/pki/cert/csr/sample.com.csr
dest: /etc/httpd/ssl/sample.com.crt
fullchain_dest: /etc/httpd/ssl/sample.com-fullchain.crt
chain_dest: /etc/httpd/ssl/sample.com-intermediate.crt
data: "{{ sample_com_challenge }}"
### Example with DNS challenge against production ACME server ###
- name: Create a challenge for sample.com using an account key file.
acme_certificate:
account_key_src: /etc/pki/cert/private/account.key
account_email: [email protected]
src: /etc/pki/cert/csr/sample.com.csr
cert: /etc/httpd/ssl/sample.com.crt
challenge: dns-01
acme_directory: https://acme-v01.api.letsencrypt.org/directory
# Renew if the certificate is at least 30 days old
remaining_days: 60
register: sample_com_challenge
# perform the necessary steps to fulfill the challenge
# for example:
#
# - route53:
# zone: sample.com
# record: "{{ sample_com_challenge.challenge_data['sample.com']['dns-01'].record }}"
# type: TXT
# ttl: 60
# state: present
# wait: yes
# # Note: route53 requires TXT entries to be enclosed in quotes
# value: "{{ sample_com_challenge.challenge_data['sample.com']['dns-01'].resource_value | regex_replace('^(.*)$', '\"\\1\"') }}"
# when: sample_com_challenge is changed
#
# Alternative way:
#
# - route53:
# zone: sample.com
# record: "{{ item.key }}"
# type: TXT
# ttl: 60
# state: present
# wait: yes
# # Note: item.value is a list of TXT entries, and route53
# # requires every entry to be enclosed in quotes
# value: "{{ item.value | map('regex_replace', '^(.*)$', '\"\\1\"' ) | list }}"
# loop: "{{ sample_com_challenge.challenge_data_dns | dictsort }}"
# when: sample_com_challenge is changed
- name: Let the challenge be validated and retrieve the cert and intermediate certificate
acme_certificate:
account_key_src: /etc/pki/cert/private/account.key
account_email: [email protected]
src: /etc/pki/cert/csr/sample.com.csr
cert: /etc/httpd/ssl/sample.com.crt
fullchain: /etc/httpd/ssl/sample.com-fullchain.crt
chain: /etc/httpd/ssl/sample.com-intermediate.crt
challenge: dns-01
acme_directory: https://acme-v01.api.letsencrypt.org/directory
remaining_days: 60
data: "{{ sample_com_challenge }}"
when: sample_com_challenge is changed
'''
RETURN = '''
cert_days:
description: The number of days the certificate remains valid.
returned: success
type: int
challenge_data:
description:
- Per identifier / challenge type challenge data.
- Since Ansible 2.8.5, only challenges which are not yet valid are returned.
returned: changed
type: complex
contains:
resource:
description: The challenge resource that must be created for validation.
returned: changed
type: str
sample: .well-known/acme-challenge/evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA
resource_original:
description:
- The original challenge resource including type identifier for C(tls-alpn-01)
challenges.
returned: changed and challenge is C(tls-alpn-01)
type: str
sample: DNS:example.com
version_added: "2.8"
resource_value:
description:
- The value the resource has to produce for the validation.
- For C(http-01) and C(dns-01) challenges, the value can be used as-is.
- "For C(tls-alpn-01) challenges, note that this return value contains a
Base64 encoded version of the correct binary blob which has to be put
into the acmeValidation x509 extension; see
U(https://tools.ietf.org/html/draft-ietf-acme-tls-alpn-05#section-3)
for details. To do this, you might need the C(b64decode) Jinja filter
to extract the binary blob from this return value."
returned: changed
type: str
sample: IlirfxKKXA...17Dt3juxGJ-PCt92wr-oA
record:
description: The full DNS record's name for the challenge.
returned: changed and challenge is C(dns-01)
type: str
sample: _acme-challenge.example.com
version_added: "2.5"
challenge_data_dns:
description:
- List of TXT values per DNS record, in case challenge is C(dns-01).
- Since Ansible 2.8.5, only challenges which are not yet valid are returned.
returned: changed
type: dict
version_added: "2.5"
authorizations:
description: ACME authorization data.
returned: changed
type: complex
contains:
authorization:
description: ACME authorization object. See U(https://tools.ietf.org/html/rfc8555#section-7.1.4)
returned: success
type: dict
order_uri:
description: ACME order URI.
returned: changed
type: str
version_added: "2.5"
finalization_uri:
description: ACME finalization URI.
returned: changed
type: str
version_added: "2.5"
account_uri:
description: ACME account URI.
returned: changed
type: str
version_added: "2.5"
all_chains:
description:
- When I(retrieve_all_alternates) is set to C(yes), the module will query the ACME server
for alternate chains. This return value will contain a list of all chains returned,
the first entry being the main chain returned by the server.
- See L(Section 7.4.2 of RFC8555,https://tools.ietf.org/html/rfc8555#section-7.4.2) for details.
returned: when certificate was retrieved and I(retrieve_all_alternates) is set to C(yes)
type: list
contains:
cert:
description:
- The leaf certificate itself, in PEM format.
type: str
returned: always
chain:
description:
- The certificate chain, excluding the root, as concatenated PEM certificates.
type: str
returned: always
full_chain:
description:
- The certificate chain, excluding the root, but including the leaf certificate,
as concatenated PEM certificates.
type: str
returned: always
'''
from ansible.module_utils.acme import (
ModuleFailException,
write_file, nopad_b64, pem_to_der,
ACMEAccount,
HAS_CURRENT_CRYPTOGRAPHY,
cryptography_get_csr_identifiers,
openssl_get_csr_identifiers,
cryptography_get_cert_days,
set_crypto_backend,
process_links,
)
import base64
import hashlib
import locale
import os
import re
import textwrap
import time
import urllib
from datetime import datetime
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_bytes
from ansible.module_utils.compat import ipaddress as compat_ipaddress
def get_cert_days(module, cert_file):
'''
Return the days the certificate in cert_file remains valid and -1
if the file was not found. If cert_file contains more than one
certificate, only the first one will be considered.
'''
if HAS_CURRENT_CRYPTOGRAPHY:
return cryptography_get_cert_days(module, cert_file)
if not os.path.exists(cert_file):
return -1
openssl_bin = module.get_bin_path('openssl', True)
openssl_cert_cmd = [openssl_bin, "x509", "-in", cert_file, "-noout", "-text"]
dummy, out, dummy = module.run_command(openssl_cert_cmd, check_rc=True, encoding=None)
try:
not_after_str = re.search(r"\s+Not After\s*:\s+(.*)", out.decode('utf8')).group(1)
not_after = datetime.fromtimestamp(time.mktime(time.strptime(not_after_str, '%b %d %H:%M:%S %Y %Z')))
except AttributeError:
raise ModuleFailException("No 'Not after' date found in {0}".format(cert_file))
except ValueError:
raise ModuleFailException("Failed to parse 'Not after' date of {0}".format(cert_file))
now = datetime.utcnow()
return (not_after - now).days
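The regex and `strptime` format used in the OpenSSL fallback path can be exercised on their own; a sketch (assumes the C locale that main() sets, since `%b` parses English month abbreviations):

```python
import re
import time
from datetime import datetime

def parse_not_after(openssl_text):
    """Extract the 'Not After' timestamp from `openssl x509 -text`
    output, using the same regex and strptime format as
    get_cert_days()."""
    match = re.search(r"\s+Not After\s*:\s+(.*)", openssl_text)
    if match is None:
        raise ValueError('no Not After date found')
    return datetime.fromtimestamp(time.mktime(
        time.strptime(match.group(1), '%b %d %H:%M:%S %Y %Z')))
```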
class ACMEClient(object):
'''
ACME client class. Uses an ACME account object and a CSR to
start and validate ACME challenges and download the respective
certificates.
'''
def __init__(self, module):
self.module = module
self.version = module.params['acme_version']
self.challenge = module.params['challenge']
self.csr = module.params['csr']
self.dest = module.params.get('dest')
self.fullchain_dest = module.params.get('fullchain_dest')
self.chain_dest = module.params.get('chain_dest')
self.account = ACMEAccount(module)
self.directory = self.account.directory
self.data = module.params['data']
self.authorizations = None
self.cert_days = -1
self.order_uri = self.data.get('order_uri') if self.data else None
self.finalize_uri = None
# Make sure account exists
modify_account = module.params['modify_account']
if modify_account or self.version > 1:
contact = []
if module.params['account_email']:
contact.append('mailto:' + module.params['account_email'])
created, account_data = self.account.setup_account(
contact,
agreement=module.params.get('agreement'),
terms_agreed=module.params.get('terms_agreed'),
allow_creation=modify_account,
)
if account_data is None:
raise ModuleFailException(msg='Account does not exist or is deactivated.')
updated = False
if not created and account_data and modify_account:
updated, account_data = self.account.update_account(account_data, contact)
self.changed = created or updated
else:
# This happens if modify_account is False and the ACME v1
# protocol is used. In this case, we do not call setup_account()
# to avoid accidental creation of an account. This is OK
# since for ACME v1, the account URI is not needed to send a
# signed ACME request.
pass
if not os.path.exists(self.csr):
raise ModuleFailException("CSR %s not found" % (self.csr))
self._openssl_bin = module.get_bin_path('openssl', True)
# Extract list of identifiers from CSR
self.identifiers = self._get_csr_identifiers()
def _get_csr_identifiers(self):
'''
Parse the CSR and return the list of requested identifiers
'''
if HAS_CURRENT_CRYPTOGRAPHY:
return cryptography_get_csr_identifiers(self.module, self.csr)
else:
return openssl_get_csr_identifiers(self._openssl_bin, self.module, self.csr)
def _add_or_update_auth(self, identifier_type, identifier, auth):
'''
Add or update the given authorization in the global authorizations list.
Return True if the auth was updated/added and False if no change was
necessary.
'''
if self.authorizations.get(identifier_type + ':' + identifier) == auth:
return False
self.authorizations[identifier_type + ':' + identifier] = auth
return True
def _new_authz_v1(self, identifier_type, identifier):
'''
Create a new authorization for the given identifier.
Return the authorization object of the new authorization
https://tools.ietf.org/html/draft-ietf-acme-acme-02#section-6.4
'''
if self.account.uri is None:
return
new_authz = {
"resource": "new-authz",
"identifier": {"type": identifier_type, "value": identifier},
}
result, info = self.account.send_signed_request(self.directory['new-authz'], new_authz)
if info['status'] not in [200, 201]:
raise ModuleFailException("Error requesting challenges: CODE: {0} RESULT: {1}".format(info['status'], result))
else:
result['uri'] = info['location']
return result
def _get_challenge_data(self, auth, identifier_type, identifier):
'''
Returns a dict with the data for all proposed (and supported) challenges
of the given authorization.
'''
data = {}
# no need to choose a specific challenge here as this module
# is not responsible for fulfilling the challenges. Calculate
# and return the required information for each challenge.
for challenge in auth['challenges']:
challenge_type = challenge['type']
token = re.sub(r"[^A-Za-z0-9_\-]", "_", challenge['token'])
keyauthorization = self.account.get_keyauthorization(token)
if challenge_type == 'http-01':
# https://tools.ietf.org/html/rfc8555#section-8.3
resource = '.well-known/acme-challenge/' + token
data[challenge_type] = {'resource': resource, 'resource_value': keyauthorization}
elif challenge_type == 'dns-01':
if identifier_type != 'dns':
continue
# https://tools.ietf.org/html/rfc8555#section-8.4
resource = '_acme-challenge'
value = nopad_b64(hashlib.sha256(to_bytes(keyauthorization)).digest())
record = (resource + identifier[1:]) if identifier.startswith('*.') else (resource + '.' + identifier)
data[challenge_type] = {'resource': resource, 'resource_value': value, 'record': record}
elif challenge_type == 'tls-alpn-01':
# https://tools.ietf.org/html/draft-ietf-acme-tls-alpn-05#section-3
if identifier_type == 'ip':
# IPv4/IPv6 address: use reverse mapping (RFC1034, RFC3596)
resource = compat_ipaddress.ip_address(identifier).reverse_pointer
if not resource.endswith('.'):
resource += '.'
else:
resource = identifier
value = base64.b64encode(hashlib.sha256(to_bytes(keyauthorization)).digest())
data[challenge_type] = {'resource': resource, 'resource_original': identifier_type + ':' + identifier, 'resource_value': value}
else:
continue
return data
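Per RFC 8555 section 8.4, the dns-01 TXT value computed above is the unpadded URL-safe Base64 of the SHA-256 digest of the key authorization. A standalone sketch of that branch, with a local stand-in for the module's `nopad_b64` helper:

```python
import base64
import hashlib

def dns01_txt_value(keyauthorization):
    """Compute the dns-01 TXT record value for a key authorization
    string, matching nopad_b64(sha256(...)) in _get_challenge_data()."""
    digest = hashlib.sha256(keyauthorization.encode('utf8')).digest()
    return base64.urlsafe_b64encode(digest).decode('ascii').rstrip('=')

def dns01_record(identifier):
    """Build the full record name; a wildcard shares its base domain's
    record, so '*.' is simply dropped before prefixing."""
    resource = '_acme-challenge'
    if identifier.startswith('*.'):
        return resource + identifier[1:]
    return resource + '.' + identifier
```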
def _fail_challenge(self, identifier_type, identifier, auth, error):
'''
Aborts with a specific error for a challenge.
'''
error_details = ''
# multiple challenges could have failed at this point, gather error
# details for all of them before failing
for challenge in auth['challenges']:
if challenge['status'] == 'invalid':
error_details += ' CHALLENGE: {0}'.format(challenge['type'])
if 'error' in challenge:
error_details += ' DETAILS: {0};'.format(challenge['error']['detail'])
else:
error_details += ';'
raise ModuleFailException("{0}: {1}".format(error.format(identifier_type + ':' + identifier), error_details))
def _validate_challenges(self, identifier_type, identifier, auth):
'''
Validate the authorization provided in the auth dict. Returns True
when the validation was successful and False when it was not.
'''
for challenge in auth['challenges']:
if self.challenge != challenge['type']:
continue
uri = challenge['uri'] if self.version == 1 else challenge['url']
challenge_response = {}
if self.version == 1:
token = re.sub(r"[^A-Za-z0-9_\-]", "_", challenge['token'])
keyauthorization = self.account.get_keyauthorization(token)
challenge_response["resource"] = "challenge"
challenge_response["keyAuthorization"] = keyauthorization
result, info = self.account.send_signed_request(uri, challenge_response)
if info['status'] not in [200, 202]:
raise ModuleFailException("Error validating challenge: CODE: {0} RESULT: {1}".format(info['status'], result))
status = ''
while status not in ['valid', 'invalid', 'revoked']:
result, dummy = self.account.get_request(auth['uri'])
result['uri'] = auth['uri']
if self._add_or_update_auth(identifier_type, identifier, result):
self.changed = True
# https://tools.ietf.org/html/draft-ietf-acme-acme-02#section-6.1.2
# "status (required, string): ...
# If this field is missing, then the default value is "pending"."
if self.version == 1 and 'status' not in result:
status = 'pending'
else:
status = result['status']
time.sleep(2)
if status == 'invalid':
self._fail_challenge(identifier_type, identifier, result, 'Authorization for {0} returned invalid')
return status == 'valid'
def _finalize_cert(self):
'''
Create a new certificate based on the csr.
Return the certificate object as dict
https://tools.ietf.org/html/rfc8555#section-7.4
'''
csr = pem_to_der(self.csr)
new_cert = {
"csr": nopad_b64(csr),
}
result, info = self.account.send_signed_request(self.finalize_uri, new_cert)
if info['status'] not in [200]:
raise ModuleFailException("Error new cert: CODE: {0} RESULT: {1}".format(info['status'], result))
status = result['status']
while status not in ['valid', 'invalid']:
time.sleep(2)
result, dummy = self.account.get_request(self.order_uri)
status = result['status']
if status != 'valid':
raise ModuleFailException("Error new cert: CODE: {0} STATUS: {1} RESULT: {2}".format(info['status'], status, result))
return result['certificate']
def _der_to_pem(self, der_cert):
'''
Convert the DER format certificate in der_cert to a PEM format
certificate and return it.
'''
return """-----BEGIN CERTIFICATE-----\n{0}\n-----END CERTIFICATE-----\n""".format(
"\n".join(textwrap.wrap(base64.b64encode(der_cert).decode('utf8'), 64)))
def _download_cert(self, url):
'''
Download and parse the certificate chain.
https://tools.ietf.org/html/rfc8555#section-7.4.2
'''
content, info = self.account.get_request(url, parse_json_result=False, headers={'Accept': 'application/pem-certificate-chain'})
if not content or not info['content-type'].startswith('application/pem-certificate-chain'):
raise ModuleFailException("Cannot download certificate chain from {0}: {1} (headers: {2})".format(url, content, info))
cert = None
chain = []
# Parse data
lines = content.decode('utf-8').splitlines(True)
current = []
for line in lines:
if line.strip():
current.append(line)
if line.startswith('-----END CERTIFICATE-----'):
if cert is None:
cert = ''.join(current)
else:
chain.append(''.join(current))
current = []
alternates = []
def f(link, relation):
if relation == 'up':
# Process link-up headers if there was no chain in reply
if not chain:
chain_result, chain_info = self.account.get_request(link, parse_json_result=False)
if chain_info['status'] in [200, 201]:
chain.append(self._der_to_pem(chain_result))
elif relation == 'alternate':
alternates.append(link)
process_links(info, f)
if cert is None or current:
raise ModuleFailException("Failed to parse certificate chain download from {0}: {1} (headers: {2})".format(url, content, info))
return {'cert': cert, 'chain': chain, 'alternates': alternates}
def _new_cert_v1(self):
'''
Create a new certificate based on the CSR (ACME v1 protocol).
Return the certificate object as dict
https://tools.ietf.org/html/draft-ietf-acme-acme-02#section-6.5
'''
csr = pem_to_der(self.csr)
new_cert = {
"resource": "new-cert",
"csr": nopad_b64(csr),
}
result, info = self.account.send_signed_request(self.directory['new-cert'], new_cert)
chain = []
def f(link, relation):
if relation == 'up':
chain_result, chain_info = self.account.get_request(link, parse_json_result=False)
if chain_info['status'] in [200, 201]:
chain.clear()
chain.append(self._der_to_pem(chain_result))
process_links(info, f)
if info['status'] not in [200, 201]:
raise ModuleFailException("Error new cert: CODE: {0} RESULT: {1}".format(info['status'], result))
else:
return {'cert': self._der_to_pem(result), 'uri': info['location'], 'chain': chain}
def _new_order_v2(self):
'''
Start a new certificate order (ACME v2 protocol).
https://tools.ietf.org/html/rfc8555#section-7.4
'''
identifiers = []
for identifier_type, identifier in self.identifiers:
identifiers.append({
'type': identifier_type,
'value': identifier,
})
new_order = {
"identifiers": identifiers
}
result, info = self.account.send_signed_request(self.directory['newOrder'], new_order)
if info['status'] not in [201]:
raise ModuleFailException("Error new order: CODE: {0} RESULT: {1}".format(info['status'], result))
for auth_uri in result['authorizations']:
auth_data, dummy = self.account.get_request(auth_uri)
auth_data['uri'] = auth_uri
identifier_type = auth_data['identifier']['type']
identifier = auth_data['identifier']['value']
if auth_data.get('wildcard', False):
identifier = '*.{0}'.format(identifier)
self.authorizations[identifier_type + ':' + identifier] = auth_data
self.order_uri = info['location']
self.finalize_uri = result['finalize']
def is_first_step(self):
'''
Return True if this is the first execution of this module, i.e. if a
sufficient data object from a first run has not been provided.
'''
if self.data is None:
return True
if self.version == 1:
# As soon as self.data is a non-empty object, we are in the second stage.
return not self.data
else:
# We are in the second stage if data.order_uri is given (which has been
# stored in self.order_uri by the constructor).
return self.order_uri is None
def start_challenges(self):
'''
Create new authorizations for all identifiers of the CSR,
respectively start a new order for ACME v2.
'''
self.authorizations = {}
if self.version == 1:
for identifier_type, identifier in self.identifiers:
if identifier_type != 'dns':
raise ModuleFailException('ACME v1 only supports DNS identifiers!')
for identifier_type, identifier in self.identifiers:
new_auth = self._new_authz_v1(identifier_type, identifier)
self._add_or_update_auth(identifier_type, identifier, new_auth)
else:
self._new_order_v2()
self.changed = True
def get_challenges_data(self):
'''
Get challenge details for the chosen challenge type.
Return a tuple of generic challenge details, and specialized DNS challenge details.
'''
# Get general challenge data
data = {}
for type_identifier, auth in self.authorizations.items():
identifier_type, identifier = type_identifier.split(':', 1)
auth = self.authorizations[type_identifier]
# Skip valid authorizations: their challenges are already valid
# and do not need to be returned
if auth['status'] == 'valid':
continue
# We drop the type from the key to preserve backwards compatibility
data[identifier] = self._get_challenge_data(auth, identifier_type, identifier)
# Get DNS challenge data
data_dns = {}
if self.challenge == 'dns-01':
for identifier, challenges in data.items():
if self.challenge in challenges:
values = data_dns.get(challenges[self.challenge]['record'])
if values is None:
values = []
data_dns[challenges[self.challenge]['record']] = values
values.append(challenges[self.challenge]['resource_value'])
return data, data_dns
def finish_challenges(self):
'''
Verify challenges for all identifiers of the CSR.
'''
self.authorizations = {}
# Step 1: obtain challenge information
if self.version == 1:
# For ACME v1, we attempt to create new authzs. Existing ones
# will be returned instead.
for identifier_type, identifier in self.identifiers:
new_auth = self._new_authz_v1(identifier_type, identifier)
self._add_or_update_auth(identifier_type, identifier, new_auth)
else:
# For ACME v2, we obtain the order object by fetching the
# order URI, and extract the information from there.
result, info = self.account.get_request(self.order_uri)
if not result:
raise ModuleFailException("Cannot download order from {0}: {1} (headers: {2})".format(self.order_uri, result, info))
if info['status'] not in [200]:
raise ModuleFailException("Error on downloading order: CODE: {0} RESULT: {1}".format(info['status'], result))
for auth_uri in result['authorizations']:
auth_data, dummy = self.account.get_request(auth_uri)
auth_data['uri'] = auth_uri
identifier_type = auth_data['identifier']['type']
identifier = auth_data['identifier']['value']
if auth_data.get('wildcard', False):
identifier = '*.{0}'.format(identifier)
self.authorizations[identifier_type + ':' + identifier] = auth_data
self.finalize_uri = result['finalize']
# Step 2: validate challenges
for type_identifier, auth in self.authorizations.items():
if auth['status'] == 'pending':
identifier_type, identifier = type_identifier.split(':', 1)
self._validate_challenges(identifier_type, identifier, auth)
def get_certificate(self):
'''
Request a new certificate and write it to the destination file.
First verifies whether all authorizations are valid; if not, aborts
with an error.
'''
for identifier_type, identifier in self.identifiers:
auth = self.authorizations.get(identifier_type + ':' + identifier)
if auth is None:
raise ModuleFailException('Found no authorization information for "{0}"!'.format(identifier_type + ':' + identifier))
if 'status' not in auth:
self._fail_challenge(identifier_type, identifier, auth, 'Authorization for {0} returned no status')
if auth['status'] != 'valid':
self._fail_challenge(identifier_type, identifier, auth, 'Authorization for {0} returned status ' + str(auth['status']))
if self.version == 1:
cert = self._new_cert_v1()
else:
cert_uri = self._finalize_cert()
cert = self._download_cert(cert_uri)
if self.module.params['retrieve_all_alternates']:
alternate_chains = []
for alternate in cert['alternates']:
try:
alt_cert = self._download_cert(alternate)
except ModuleFailException as e:
self.module.warn('Error while downloading alternative certificate {0}: {1}'.format(alternate, e))
continue
alternate_chains.append(alt_cert)
self.all_chains = []
def _append_all_chains(cert_data):
self.all_chains.append(dict(
cert=cert_data['cert'].encode('utf8'),
chain=("\n".join(cert_data.get('chain', []))).encode('utf8'),
full_chain=(cert_data['cert'] + "\n".join(cert_data.get('chain', []))).encode('utf8'),
))
_append_all_chains(cert)
for alt_chain in alternate_chains:
_append_all_chains(alt_chain)
if cert['cert'] is not None:
pem_cert = cert['cert']
chain = [link for link in cert.get('chain', [])]
if self.dest and write_file(self.module, self.dest, pem_cert.encode('utf8')):
self.cert_days = get_cert_days(self.module, self.dest)
self.changed = True
if self.fullchain_dest and write_file(self.module, self.fullchain_dest, (pem_cert + "\n".join(chain)).encode('utf8')):
self.cert_days = get_cert_days(self.module, self.fullchain_dest)
self.changed = True
if self.chain_dest and write_file(self.module, self.chain_dest, ("\n".join(chain)).encode('utf8')):
self.changed = True
def deactivate_authzs(self):
'''
Deactivates all valid authz's. Does not raise exceptions.
https://community.letsencrypt.org/t/authorization-deactivation/19860/2
https://tools.ietf.org/html/rfc8555#section-7.5.2
'''
authz_deactivate = {
'status': 'deactivated'
}
if self.version == 1:
authz_deactivate['resource'] = 'authz'
if self.authorizations:
for identifier_type, identifier in self.identifiers:
auth = self.authorizations.get(identifier_type + ':' + identifier)
if auth is None or auth.get('status') != 'valid':
continue
try:
result, info = self.account.send_signed_request(auth['uri'], authz_deactivate)
if 200 <= info['status'] < 300 and result.get('status') == 'deactivated':
auth['status'] = 'deactivated'
except Exception as dummy:
# Ignore errors on deactivating authzs
pass
if auth.get('status') != 'deactivated':
self.module.warn(warning='Could not deactivate authz object {0}.'.format(auth['uri']))
def main():
module = AnsibleModule(
argument_spec=dict(
account_key_src=dict(type='path', aliases=['account_key']),
account_key_content=dict(type='str', no_log=True),
account_uri=dict(type='str'),
modify_account=dict(type='bool', default=True),
acme_directory=dict(type='str', default='https://acme-staging.api.letsencrypt.org/directory'),
acme_version=dict(type='int', default=1, choices=[1, 2]),
validate_certs=dict(default=True, type='bool'),
account_email=dict(type='str'),
agreement=dict(type='str'),
terms_agreed=dict(type='bool', default=False),
challenge=dict(type='str', default='http-01', choices=['http-01', 'dns-01', 'tls-alpn-01']),
csr=dict(type='path', required=True, aliases=['src']),
data=dict(type='dict'),
dest=dict(type='path', aliases=['cert']),
fullchain_dest=dict(type='path', aliases=['fullchain']),
chain_dest=dict(type='path', aliases=['chain']),
remaining_days=dict(type='int', default=10),
deactivate_authzs=dict(type='bool', default=False),
force=dict(type='bool', default=False),
retrieve_all_alternates=dict(type='bool', default=False),
select_crypto_backend=dict(type='str', default='auto', choices=['auto', 'openssl', 'cryptography']),
),
required_one_of=(
['account_key_src', 'account_key_content'],
['dest', 'fullchain_dest'],
),
mutually_exclusive=(
['account_key_src', 'account_key_content'],
),
supports_check_mode=True,
)
if module._name == 'letsencrypt':
module.deprecate("The 'letsencrypt' module is being renamed 'acme_certificate'", version='2.10')
set_crypto_backend(module)
# AnsibleModule() changes the locale, so change it back to C because we rely on time.strptime() when parsing certificate dates.
module.run_command_environ_update = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C', LC_CTYPE='C')
locale.setlocale(locale.LC_ALL, 'C')
if not module.params.get('validate_certs'):
module.warn(warning='Disabling certificate validation for communications with ACME endpoint. ' +
'This should only be done for testing against a local ACME server for ' +
'development purposes, but *never* for production purposes.')
try:
if module.params.get('dest'):
cert_days = get_cert_days(module, module.params['dest'])
else:
cert_days = get_cert_days(module, module.params['fullchain_dest'])
if module.params['force'] or cert_days < module.params['remaining_days']:
# If checkmode is active, base the changed state solely on the status
# of the certificate file as all other actions (accessing an account, checking
# the authorization status...) would lead to potential changes of the current
# state
if module.check_mode:
module.exit_json(changed=True, authorizations={}, challenge_data={}, cert_days=cert_days)
else:
client = ACMEClient(module)
client.cert_days = cert_days
other = dict()
if client.is_first_step():
# First run: start challenges / start new order
client.start_challenges()
else:
# Second run: finish challenges, and get certificate
try:
client.finish_challenges()
client.get_certificate()
if module.params['retrieve_all_alternates']:
other['all_chains'] = client.all_chains
finally:
if module.params['deactivate_authzs']:
client.deactivate_authzs()
data, data_dns = client.get_challenges_data()
auths = dict()
for k, v in client.authorizations.items():
# Remove "type:" from key
auths[k.split(':', 1)[1]] = v
module.exit_json(
changed=client.changed,
authorizations=auths,
finalize_uri=client.finalize_uri,
order_uri=client.order_uri,
account_uri=client.account.uri,
challenge_data=data,
challenge_data_dns=data_dns,
cert_days=client.cert_days,
**other
)
else:
module.exit_json(changed=False, cert_days=cert_days)
except ModuleFailException as e:
e.do_fail(module)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,890 |
acme_certificate contains deprecated call to be removed in 2.10
|
##### SUMMARY
acme_certificate contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/modules/crypto/acme/acme_certificate.py:1024:8: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
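The sanity check that produced this error scans module source for deprecation calls whose target version has already been reached. A simplified, regex-based illustration of the idea (the real check in ansible-test is considerably more robust):

```python
import re

CURRENT_VERSION = (2, 10)  # assumed release under test

def find_stale_deprecations(source):
    # Flag deprecate()/deprecated() calls whose version argument is at or
    # below CURRENT_VERSION, i.e. the deprecated code should be gone by now.
    pattern = re.compile(r"\.deprecate\w*\([^)]*version='(\d+)\.(\d+)'")
    stale = []
    for lineno, line in enumerate(source.splitlines(), 1):
        m = pattern.search(line)
        if m and (int(m.group(1)), int(m.group(2))) <= CURRENT_VERSION:
            stale.append((lineno, m.group(0)))
    return stale

code = "module.deprecate('msg', version='2.10')"
print(find_stale_deprecations(code))  # one entry, for line 1
```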
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/crypto/acme/acme_certificate.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61890
|
https://github.com/ansible/ansible/pull/61648
|
14bccef2c207584bf19132fbbf10ab2237746b9e
|
a0bec0bc327d29f446a031abc803b8d2cad1949f
| 2019-09-05T20:41:11Z |
python
| 2019-09-14T21:24:32Z |
lib/ansible/modules/crypto/acme/acme_certificate.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2016 Michael Gruener <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: acme_certificate
author: "Michael Gruener (@mgruener)"
version_added: "2.2"
short_description: Create SSL/TLS certificates with the ACME protocol
description:
- "Create and renew SSL/TLS certificates with a CA supporting the
L(ACME protocol,https://tools.ietf.org/html/rfc8555),
such as L(Let's Encrypt,https://letsencrypt.org/). The current
implementation supports the C(http-01), C(dns-01) and C(tls-alpn-01)
challenges."
- "To use this module, it has to be executed twice. Either as two
different tasks in the same run or during two runs. Note that the output
of the first run needs to be recorded and passed to the second run as the
module argument C(data)."
- "Between these two tasks you have to fulfill the required steps for the
chosen challenge by whatever means necessary. For C(http-01) that means
creating the necessary challenge file on the destination webserver. For
C(dns-01) the necessary dns record has to be created. For C(tls-alpn-01)
the necessary certificate has to be created and served.
It is I(not) the responsibility of this module to perform these steps."
- "For details on how to fulfill these challenges, you might have to read through
L(the main ACME specification,https://tools.ietf.org/html/rfc8555#section-8)
and the L(TLS-ALPN-01 specification,https://tools.ietf.org/html/draft-ietf-acme-tls-alpn-05#section-3).
Also, consider the examples provided for this module."
- "The module includes experimental support for IP identifiers according to
the L(current ACME IP draft,https://tools.ietf.org/html/draft-ietf-acme-ip-05)."
notes:
- "At least one of C(dest) and C(fullchain_dest) must be specified."
- "This module includes basic account management functionality.
If you want to have more control over your ACME account, use the M(acme_account)
module and disable account management for this module using the C(modify_account)
option."
- "This module was called C(letsencrypt) before Ansible 2.6. The usage
did not change."
seealso:
- name: The Let's Encrypt documentation
description: Documentation for the Let's Encrypt Certification Authority.
Provides useful information for example on rate limits.
link: https://letsencrypt.org/docs/
- name: Automatic Certificate Management Environment (ACME)
description: The specification of the ACME protocol (RFC 8555).
link: https://tools.ietf.org/html/rfc8555
- name: ACME TLS ALPN Challenge Extension
description: The current draft specification of the C(tls-alpn-01) challenge.
link: https://tools.ietf.org/html/draft-ietf-acme-tls-alpn-05
- module: acme_challenge_cert_helper
description: Helps preparing C(tls-alpn-01) challenges.
- module: openssl_privatekey
description: Can be used to create private keys (both for certificates and accounts).
- module: openssl_csr
description: Can be used to create a Certificate Signing Request (CSR).
- module: certificate_complete_chain
description: Can be used to find the root certificate for the returned fullchain.
- module: acme_certificate_revoke
description: Can be used to revoke certificates.
- module: acme_account
description: Can be used to create, modify or delete an ACME account.
- module: acme_inspect
description: Can be used to debug problems.
extends_documentation_fragment:
- acme
options:
account_email:
description:
- "The email address associated with this account."
- "It will be used for certificate expiration warnings."
- "Note that when C(modify_account) is not set to C(no) and you also
used the M(acme_account) module to specify more than one contact
for your account, this module will update your account and restrict
it to the (at most one) contact email address specified here."
type: str
agreement:
description:
- "URI to a terms of service document you agree to when using the
ACME v1 service at C(acme_directory)."
- Default is the latest one gathered from the C(acme_directory) URL.
- This option will only be used when C(acme_version) is 1.
type: str
terms_agreed:
description:
- "Boolean indicating whether you agree to the terms of service document."
- "ACME servers can require this to be true."
- This option will only be used when C(acme_version) is not 1.
type: bool
default: no
version_added: "2.5"
modify_account:
description:
- "Boolean indicating whether the module should create the account if
necessary, and update its contact data."
- "Set to C(no) if you want to use the M(acme_account) module to manage
your account instead, and to avoid accidental creation of a new account
using an old key if you changed the account key with M(acme_account)."
- "If set to C(no), C(terms_agreed) and C(account_email) are ignored."
type: bool
default: yes
version_added: "2.6"
challenge:
description: The challenge to be performed.
type: str
default: 'http-01'
choices: [ 'http-01', 'dns-01', 'tls-alpn-01' ]
csr:
description:
- "File containing the CSR for the new certificate."
- "Can be created with C(openssl req ...)."
- "The CSR may contain multiple Subject Alternate Names, but each one
will lead to an individual challenge that must be fulfilled for the
CSR to be signed."
- "I(Note): the private key used to create the CSR I(must not) be the
account key. This is a bad idea from a security point of view, and
the CA should not accept the CSR. The ACME server should return an
error in this case."
type: path
required: true
aliases: ['src']
data:
description:
- "The data to validate ongoing challenges. This must be specified for
the second run of the module only."
- "The value that must be used here will be provided by a previous use
of this module. See the examples for more details."
- "Note that for ACME v2, only the C(order_uri) entry of C(data) will
be used. For ACME v1, C(data) must be non-empty to indicate the
second stage is active; all needed data will be taken from the
CSR."
- "I(Note): the C(data) option was marked as C(no_log) up to
Ansible 2.5. From Ansible 2.6 on, it is no longer marked this way
as it causes error messages to become unusable, and C(data) does
not contain any information which can be used without having
access to the account key or which are not public anyway."
type: dict
dest:
description:
- "The destination file for the certificate."
- "Required if C(fullchain_dest) is not specified."
type: path
aliases: ['cert']
fullchain_dest:
description:
- "The destination file for the full chain (i.e. certificate followed
by chain of intermediate certificates)."
- "Required if C(dest) is not specified."
type: path
version_added: 2.5
aliases: ['fullchain']
chain_dest:
description:
- If specified, the intermediate certificate will be written to this file.
type: path
version_added: 2.5
aliases: ['chain']
remaining_days:
description:
- "The number of days the certificate must have left being valid.
If C(cert_days < remaining_days), then it will be renewed.
If the certificate is not renewed, module return values will not
include C(challenge_data)."
- "To make sure that the certificate is renewed in any case, you can
use the C(force) option."
type: int
default: 10
deactivate_authzs:
description:
- "Deactivate authentication objects (authz) after issuing a certificate,
or when issuing the certificate failed."
- "Authentication objects are bound to an account key and remain valid
for a certain amount of time, and can be used to issue certificates
without having to re-authenticate the domain. This can be a security
concern."
type: bool
default: no
version_added: 2.6
force:
description:
- Enforces the execution of the challenge and validation, even if an
existing certificate is still valid for more than C(remaining_days).
- This is especially helpful when having an updated CSR e.g. with
additional domains for which a new certificate is desired.
type: bool
default: no
version_added: 2.6
retrieve_all_alternates:
description:
- "When set to C(yes), will retrieve all alternate chains offered by the ACME CA.
These will not be written to disk, but will be returned together with the main
chain as C(all_chains). See the documentation for the C(all_chains) return
value for details."
type: bool
default: no
version_added: "2.9"
'''
EXAMPLES = r'''
### Example with HTTP challenge ###
- name: Create a challenge for sample.com using an account key from a variable.
acme_certificate:
account_key_content: "{{ account_private_key }}"
csr: /etc/pki/cert/csr/sample.com.csr
dest: /etc/httpd/ssl/sample.com.crt
register: sample_com_challenge
# Alternative first step:
- name: Create a challenge for sample.com using an account key from HashiCorp Vault.
acme_certificate:
account_key_content: "{{ lookup('hashi_vault', 'secret=secret/account_private_key:value') }}"
csr: /etc/pki/cert/csr/sample.com.csr
fullchain_dest: /etc/httpd/ssl/sample.com-fullchain.crt
register: sample_com_challenge
# Alternative first step:
- name: Create a challenge for sample.com using an account key file.
acme_certificate:
account_key_src: /etc/pki/cert/private/account.key
csr: /etc/pki/cert/csr/sample.com.csr
dest: /etc/httpd/ssl/sample.com.crt
fullchain_dest: /etc/httpd/ssl/sample.com-fullchain.crt
register: sample_com_challenge
# perform the necessary steps to fulfill the challenge
# for example:
#
# - copy:
# dest: /var/www/html/{{ sample_com_challenge['challenge_data']['sample.com']['http-01']['resource'] }}
# content: "{{ sample_com_challenge['challenge_data']['sample.com']['http-01']['resource_value'] }}"
# when: sample_com_challenge is changed
- name: Let the challenge be validated and retrieve the cert and intermediate certificate
acme_certificate:
account_key_src: /etc/pki/cert/private/account.key
csr: /etc/pki/cert/csr/sample.com.csr
dest: /etc/httpd/ssl/sample.com.crt
fullchain_dest: /etc/httpd/ssl/sample.com-fullchain.crt
chain_dest: /etc/httpd/ssl/sample.com-intermediate.crt
data: "{{ sample_com_challenge }}"
### Example with DNS challenge against production ACME server ###
- name: Create a challenge for sample.com using an account key file.
acme_certificate:
account_key_src: /etc/pki/cert/private/account.key
account_email: [email protected]
src: /etc/pki/cert/csr/sample.com.csr
cert: /etc/httpd/ssl/sample.com.crt
challenge: dns-01
acme_directory: https://acme-v01.api.letsencrypt.org/directory
# Renew if the certificate is at least 30 days old
remaining_days: 60
register: sample_com_challenge
# perform the necessary steps to fulfill the challenge
# for example:
#
# - route53:
# zone: sample.com
# record: "{{ sample_com_challenge.challenge_data['sample.com']['dns-01'].record }}"
# type: TXT
# ttl: 60
# state: present
# wait: yes
# # Note: route53 requires TXT entries to be enclosed in quotes
# value: "{{ sample_com_challenge.challenge_data['sample.com']['dns-01'].resource_value | regex_replace('^(.*)$', '\"\\1\"') }}"
# when: sample_com_challenge is changed
#
# Alternative way:
#
# - route53:
# zone: sample.com
# record: "{{ item.key }}"
# type: TXT
# ttl: 60
# state: present
# wait: yes
# # Note: item.value is a list of TXT entries, and route53
# # requires every entry to be enclosed in quotes
# value: "{{ item.value | map('regex_replace', '^(.*)$', '\"\\1\"' ) | list }}"
# loop: "{{ sample_com_challenge.challenge_data_dns | dictsort }}"
# when: sample_com_challenge is changed
- name: Let the challenge be validated and retrieve the cert and intermediate certificate
acme_certificate:
account_key_src: /etc/pki/cert/private/account.key
account_email: [email protected]
src: /etc/pki/cert/csr/sample.com.csr
cert: /etc/httpd/ssl/sample.com.crt
fullchain: /etc/httpd/ssl/sample.com-fullchain.crt
chain: /etc/httpd/ssl/sample.com-intermediate.crt
challenge: dns-01
acme_directory: https://acme-v01.api.letsencrypt.org/directory
remaining_days: 60
data: "{{ sample_com_challenge }}"
when: sample_com_challenge is changed
'''
RETURN = '''
cert_days:
description: The number of days the certificate remains valid.
returned: success
type: int
challenge_data:
description:
- Challenge data, per identifier and challenge type.
- Since Ansible 2.8.5, only challenges which are not yet valid are returned.
returned: changed
type: complex
contains:
resource:
description: The challenge resource that must be created for validation.
returned: changed
type: str
sample: .well-known/acme-challenge/evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA
resource_original:
description:
- The original challenge resource including type identifier for C(tls-alpn-01)
challenges.
returned: changed and challenge is C(tls-alpn-01)
type: str
sample: DNS:example.com
version_added: "2.8"
resource_value:
description:
- The value the resource has to produce for the validation.
- For C(http-01) and C(dns-01) challenges, the value can be used as-is.
- "For C(tls-alpn-01) challenges, note that this return value contains a
Base64 encoded version of the correct binary blob which has to be put
into the acmeValidation x509 extension; see
U(https://tools.ietf.org/html/draft-ietf-acme-tls-alpn-05#section-3)
for details. To do this, you might need the C(b64decode) Jinja filter
to extract the binary blob from this return value."
returned: changed
type: str
sample: IlirfxKKXA...17Dt3juxGJ-PCt92wr-oA
record:
description: The full DNS record's name for the challenge.
returned: changed and challenge is C(dns-01)
type: str
sample: _acme-challenge.example.com
version_added: "2.5"
challenge_data_dns:
description:
- List of TXT values per DNS record, in case challenge is C(dns-01).
- Since Ansible 2.8.5, only challenges which are not yet valid are returned.
returned: changed
type: dict
version_added: "2.5"
authorizations:
description: ACME authorization data.
returned: changed
type: complex
contains:
authorization:
description: ACME authorization object. See U(https://tools.ietf.org/html/rfc8555#section-7.1.4)
returned: success
type: dict
order_uri:
description: ACME order URI.
returned: changed
type: str
version_added: "2.5"
finalization_uri:
description: ACME finalization URI.
returned: changed
type: str
version_added: "2.5"
account_uri:
description: ACME account URI.
returned: changed
type: str
version_added: "2.5"
all_chains:
description:
- When I(retrieve_all_alternates) is set to C(yes), the module will query the ACME server
for alternate chains. This return value will contain a list of all chains returned,
the first entry being the main chain returned by the server.
- See L(Section 7.4.2 of RFC8555,https://tools.ietf.org/html/rfc8555#section-7.4.2) for details.
returned: when certificate was retrieved and I(retrieve_all_alternates) is set to C(yes)
type: list
contains:
cert:
description:
- The leaf certificate itself, in PEM format.
type: str
returned: always
chain:
description:
- The certificate chain, excluding the root, as concatenated PEM certificates.
type: str
returned: always
full_chain:
description:
- The certificate chain, excluding the root, but including the leaf certificate,
as concatenated PEM certificates.
type: str
returned: always
'''
from ansible.module_utils.acme import (
ModuleFailException,
write_file, nopad_b64, pem_to_der,
ACMEAccount,
HAS_CURRENT_CRYPTOGRAPHY,
cryptography_get_csr_identifiers,
openssl_get_csr_identifiers,
cryptography_get_cert_days,
set_crypto_backend,
process_links,
)
import base64
import hashlib
import locale
import os
import re
import textwrap
import time
import urllib
from datetime import datetime
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_bytes
from ansible.module_utils.compat import ipaddress as compat_ipaddress
def get_cert_days(module, cert_file):
'''
Return the days the certificate in cert_file remains valid and -1
if the file was not found. If cert_file contains more than one
certificate, only the first one will be considered.
'''
if HAS_CURRENT_CRYPTOGRAPHY:
return cryptography_get_cert_days(module, cert_file)
if not os.path.exists(cert_file):
return -1
openssl_bin = module.get_bin_path('openssl', True)
openssl_cert_cmd = [openssl_bin, "x509", "-in", cert_file, "-noout", "-text"]
dummy, out, dummy = module.run_command(openssl_cert_cmd, check_rc=True, encoding=None)
try:
not_after_str = re.search(r"\s+Not After\s*:\s+(.*)", out.decode('utf8')).group(1)
not_after = datetime.fromtimestamp(time.mktime(time.strptime(not_after_str, '%b %d %H:%M:%S %Y %Z')))
except AttributeError:
raise ModuleFailException("No 'Not after' date found in {0}".format(cert_file))
except ValueError:
raise ModuleFailException("Failed to parse 'Not after' date of {0}".format(cert_file))
now = datetime.utcnow()
return (not_after - now).days
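The fallback branch above leans on `time.strptime()` under the C locale to parse openssl's human-readable expiry date (hence the locale pinning in `main()`). A standalone sketch of that round trip — the sample date is invented for illustration, and the naive local-time conversion can shift the hour across DST, a quirk the module's own code inherits as well:

```python
import time
from datetime import datetime

# Parse an openssl-style "Not After" date the way get_cert_days() does.
# The sample string is invented; openssl emits this format under the C locale.
not_after_str = "Sep 15 20:45:43 2029 GMT"
not_after = datetime.fromtimestamp(
    time.mktime(time.strptime(not_after_str, '%b %d %H:%M:%S %Y %Z')))
days_left = (not_after - datetime.utcnow()).days
```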
class ACMEClient(object):
'''
ACME client class. Uses an ACME account object and a CSR to
start and validate ACME challenges and download the respective
certificates.
'''
def __init__(self, module):
self.module = module
self.version = module.params['acme_version']
self.challenge = module.params['challenge']
self.csr = module.params['csr']
self.dest = module.params.get('dest')
self.fullchain_dest = module.params.get('fullchain_dest')
self.chain_dest = module.params.get('chain_dest')
self.account = ACMEAccount(module)
self.directory = self.account.directory
self.data = module.params['data']
self.authorizations = None
self.cert_days = -1
self.order_uri = self.data.get('order_uri') if self.data else None
self.finalize_uri = None
# Make sure account exists
modify_account = module.params['modify_account']
if modify_account or self.version > 1:
contact = []
if module.params['account_email']:
contact.append('mailto:' + module.params['account_email'])
created, account_data = self.account.setup_account(
contact,
agreement=module.params.get('agreement'),
terms_agreed=module.params.get('terms_agreed'),
allow_creation=modify_account,
)
if account_data is None:
raise ModuleFailException(msg='Account does not exist or is deactivated.')
updated = False
if not created and account_data and modify_account:
updated, account_data = self.account.update_account(account_data, contact)
self.changed = created or updated
else:
# This happens if modify_account is False and the ACME v1
# protocol is used. In this case, we do not call setup_account()
# to avoid accidental creation of an account. This is OK
# since for ACME v1, the account URI is not needed to send a
# signed ACME request.
pass
if not os.path.exists(self.csr):
raise ModuleFailException("CSR %s not found" % (self.csr))
self._openssl_bin = module.get_bin_path('openssl', True)
# Extract list of identifiers from CSR
self.identifiers = self._get_csr_identifiers()
def _get_csr_identifiers(self):
'''
Parse the CSR and return the list of requested identifiers
'''
if HAS_CURRENT_CRYPTOGRAPHY:
return cryptography_get_csr_identifiers(self.module, self.csr)
else:
return openssl_get_csr_identifiers(self._openssl_bin, self.module, self.csr)
def _add_or_update_auth(self, identifier_type, identifier, auth):
'''
Add or update the given authorization in the global authorizations list.
Return True if the auth was updated/added and False if no change was
necessary.
'''
if self.authorizations.get(identifier_type + ':' + identifier) == auth:
return False
self.authorizations[identifier_type + ':' + identifier] = auth
return True
def _new_authz_v1(self, identifier_type, identifier):
'''
Create a new authorization for the given identifier.
Return the authorization object of the new authorization
https://tools.ietf.org/html/draft-ietf-acme-acme-02#section-6.4
'''
if self.account.uri is None:
return
new_authz = {
"resource": "new-authz",
"identifier": {"type": identifier_type, "value": identifier},
}
result, info = self.account.send_signed_request(self.directory['new-authz'], new_authz)
if info['status'] not in [200, 201]:
raise ModuleFailException("Error requesting challenges: CODE: {0} RESULT: {1}".format(info['status'], result))
else:
result['uri'] = info['location']
return result
def _get_challenge_data(self, auth, identifier_type, identifier):
'''
Returns a dict with the data for all proposed (and supported) challenges
of the given authorization.
'''
data = {}
# no need to choose a specific challenge here as this module
# is not responsible for fulfilling the challenges. Calculate
# and return the required information for each challenge.
for challenge in auth['challenges']:
challenge_type = challenge['type']
token = re.sub(r"[^A-Za-z0-9_\-]", "_", challenge['token'])
keyauthorization = self.account.get_keyauthorization(token)
if challenge_type == 'http-01':
# https://tools.ietf.org/html/rfc8555#section-8.3
resource = '.well-known/acme-challenge/' + token
data[challenge_type] = {'resource': resource, 'resource_value': keyauthorization}
elif challenge_type == 'dns-01':
if identifier_type != 'dns':
continue
# https://tools.ietf.org/html/rfc8555#section-8.4
resource = '_acme-challenge'
value = nopad_b64(hashlib.sha256(to_bytes(keyauthorization)).digest())
record = (resource + identifier[1:]) if identifier.startswith('*.') else (resource + '.' + identifier)
data[challenge_type] = {'resource': resource, 'resource_value': value, 'record': record}
elif challenge_type == 'tls-alpn-01':
# https://tools.ietf.org/html/draft-ietf-acme-tls-alpn-05#section-3
if identifier_type == 'ip':
# IPv4/IPv6 address: use reverse mapping (RFC1034, RFC3596)
resource = compat_ipaddress.ip_address(identifier).reverse_pointer
if not resource.endswith('.'):
resource += '.'
else:
resource = identifier
value = base64.b64encode(hashlib.sha256(to_bytes(keyauthorization)).digest())
data[challenge_type] = {'resource': resource, 'resource_original': identifier_type + ':' + identifier, 'resource_value': value}
else:
continue
return data
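The dns-01 branch above reduces to base64url(SHA-256(key authorization)) with the `=` padding stripped, per RFC 8555 section 8.4. A self-contained sketch — the key authorization string below is an invented sample, not a real token/thumbprint pair:

```python
import base64
import hashlib

def nopad_b64(data):
    # URL-safe base64 without '=' padding, as JOSE/ACME encoding requires
    return base64.urlsafe_b64encode(data).decode('utf8').rstrip('=')

def dns01_txt_value(keyauthorization):
    # TXT record value for a dns-01 challenge: base64url(SHA-256(keyauth))
    return nopad_b64(hashlib.sha256(keyauthorization.encode('utf8')).digest())

# '<token>.<account key thumbprint>' -- invented sample value
txt = dns01_txt_value("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ.nP1qzpXGymHBrUEBZUGySvkJRRE")
```

A SHA-256 digest is 32 bytes, so the stripped base64url value is always 43 characters long.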
def _fail_challenge(self, identifier_type, identifier, auth, error):
'''
Aborts with a specific error for a challenge.
'''
error_details = ''
# multiple challenges could have failed at this point, gather error
# details for all of them before failing
for challenge in auth['challenges']:
if challenge['status'] == 'invalid':
error_details += ' CHALLENGE: {0}'.format(challenge['type'])
if 'error' in challenge:
error_details += ' DETAILS: {0};'.format(challenge['error']['detail'])
else:
error_details += ';'
raise ModuleFailException("{0}: {1}".format(error.format(identifier_type + ':' + identifier), error_details))
def _validate_challenges(self, identifier_type, identifier, auth):
'''
Validate the authorization provided in the auth dict. Returns True
when the validation was successful and False when it was not.
'''
for challenge in auth['challenges']:
if self.challenge != challenge['type']:
continue
uri = challenge['uri'] if self.version == 1 else challenge['url']
challenge_response = {}
if self.version == 1:
token = re.sub(r"[^A-Za-z0-9_\-]", "_", challenge['token'])
keyauthorization = self.account.get_keyauthorization(token)
challenge_response["resource"] = "challenge"
challenge_response["keyAuthorization"] = keyauthorization
result, info = self.account.send_signed_request(uri, challenge_response)
if info['status'] not in [200, 202]:
raise ModuleFailException("Error validating challenge: CODE: {0} RESULT: {1}".format(info['status'], result))
status = ''
while status not in ['valid', 'invalid', 'revoked']:
result, dummy = self.account.get_request(auth['uri'])
result['uri'] = auth['uri']
if self._add_or_update_auth(identifier_type, identifier, result):
self.changed = True
# https://tools.ietf.org/html/draft-ietf-acme-acme-02#section-6.1.2
# "status (required, string): ...
# If this field is missing, then the default value is "pending"."
if self.version == 1 and 'status' not in result:
status = 'pending'
else:
status = result['status']
time.sleep(2)
if status == 'invalid':
self._fail_challenge(identifier_type, identifier, result, 'Authorization for {0} returned invalid')
return status == 'valid'
def _finalize_cert(self):
'''
Create a new certificate based on the csr.
Return the certificate object as dict
https://tools.ietf.org/html/rfc8555#section-7.4
'''
csr = pem_to_der(self.csr)
new_cert = {
"csr": nopad_b64(csr),
}
result, info = self.account.send_signed_request(self.finalize_uri, new_cert)
if info['status'] not in [200]:
raise ModuleFailException("Error new cert: CODE: {0} RESULT: {1}".format(info['status'], result))
status = result['status']
while status not in ['valid', 'invalid']:
time.sleep(2)
result, dummy = self.account.get_request(self.order_uri)
status = result['status']
if status != 'valid':
raise ModuleFailException("Error new cert: CODE: {0} STATUS: {1} RESULT: {2}".format(info['status'], status, result))
return result['certificate']
def _der_to_pem(self, der_cert):
'''
Convert the DER format certificate in der_cert to a PEM format
certificate and return it.
'''
return """-----BEGIN CERTIFICATE-----\n{0}\n-----END CERTIFICATE-----\n""".format(
"\n".join(textwrap.wrap(base64.b64encode(der_cert).decode('utf8'), 64)))
def _download_cert(self, url):
'''
Download and parse the certificate chain.
https://tools.ietf.org/html/rfc8555#section-7.4.2
'''
content, info = self.account.get_request(url, parse_json_result=False, headers={'Accept': 'application/pem-certificate-chain'})
if not content or not info['content-type'].startswith('application/pem-certificate-chain'):
raise ModuleFailException("Cannot download certificate chain from {0}: {1} (headers: {2})".format(url, content, info))
cert = None
chain = []
# Parse data
lines = content.decode('utf-8').splitlines(True)
current = []
for line in lines:
if line.strip():
current.append(line)
if line.startswith('-----END CERTIFICATE-----'):
if cert is None:
cert = ''.join(current)
else:
chain.append(''.join(current))
current = []
alternates = []
def f(link, relation):
if relation == 'up':
# Process link-up headers if there was no chain in reply
if not chain:
chain_result, chain_info = self.account.get_request(link, parse_json_result=False)
if chain_info['status'] in [200, 201]:
chain.append(self._der_to_pem(chain_result))
elif relation == 'alternate':
alternates.append(link)
process_links(info, f)
if cert is None or current:
raise ModuleFailException("Failed to parse certificate chain download from {0}: {1} (headers: {2})".format(url, content, info))
return {'cert': cert, 'chain': chain, 'alternates': alternates}
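The PEM-splitting loop in `_download_cert()` can be exercised on its own: the first complete PEM block is treated as the leaf certificate and every following block joins the chain. A minimal sketch with placeholder certificate bodies (not real certificates):

```python
def split_pem_chain(content):
    # First complete PEM block is the leaf certificate; the rest form the chain.
    cert, chain, current = None, [], []
    for line in content.splitlines(True):
        if line.strip():
            current.append(line)
        if line.startswith('-----END CERTIFICATE-----'):
            block = ''.join(current)
            if cert is None:
                cert = block
            else:
                chain.append(block)
            current = []
    return cert, chain

# Fake two-certificate download body (placeholder contents, not real certs)
body = ("-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----\n"
        "-----BEGIN CERTIFICATE-----\nBBBB\n-----END CERTIFICATE-----\n")
leaf, chain = split_pem_chain(body)
```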
def _new_cert_v1(self):
'''
Create a new certificate based on the CSR (ACME v1 protocol).
Return the certificate object as dict
https://tools.ietf.org/html/draft-ietf-acme-acme-02#section-6.5
'''
csr = pem_to_der(self.csr)
new_cert = {
"resource": "new-cert",
"csr": nopad_b64(csr),
}
result, info = self.account.send_signed_request(self.directory['new-cert'], new_cert)
chain = []
def f(link, relation):
if relation == 'up':
chain_result, chain_info = self.account.get_request(link, parse_json_result=False)
if chain_info['status'] in [200, 201]:
chain.clear()
chain.append(self._der_to_pem(chain_result))
process_links(info, f)
if info['status'] not in [200, 201]:
raise ModuleFailException("Error new cert: CODE: {0} RESULT: {1}".format(info['status'], result))
else:
return {'cert': self._der_to_pem(result), 'uri': info['location'], 'chain': chain}
def _new_order_v2(self):
'''
Start a new certificate order (ACME v2 protocol).
https://tools.ietf.org/html/rfc8555#section-7.4
'''
identifiers = []
for identifier_type, identifier in self.identifiers:
identifiers.append({
'type': identifier_type,
'value': identifier,
})
new_order = {
"identifiers": identifiers
}
result, info = self.account.send_signed_request(self.directory['newOrder'], new_order)
if info['status'] not in [201]:
raise ModuleFailException("Error new order: CODE: {0} RESULT: {1}".format(info['status'], result))
for auth_uri in result['authorizations']:
auth_data, dummy = self.account.get_request(auth_uri)
auth_data['uri'] = auth_uri
identifier_type = auth_data['identifier']['type']
identifier = auth_data['identifier']['value']
if auth_data.get('wildcard', False):
identifier = '*.{0}'.format(identifier)
self.authorizations[identifier_type + ':' + identifier] = auth_data
self.order_uri = info['location']
self.finalize_uri = result['finalize']
def is_first_step(self):
'''
Return True if this is the first execution of this module, i.e. if a
sufficient data object from a first run has not been provided.
'''
if self.data is None:
return True
if self.version == 1:
# As soon as self.data is a non-empty object, we are in the second stage.
return not self.data
else:
# We are in the second stage if data.order_uri is given (which has been
# stored in self.order_uri by the constructor).
return self.order_uri is None
def start_challenges(self):
'''
Create new authorizations for all identifiers of the CSR,
respectively start a new order for ACME v2.
'''
self.authorizations = {}
if self.version == 1:
for identifier_type, identifier in self.identifiers:
if identifier_type != 'dns':
raise ModuleFailException('ACME v1 only supports DNS identifiers!')
for identifier_type, identifier in self.identifiers:
new_auth = self._new_authz_v1(identifier_type, identifier)
self._add_or_update_auth(identifier_type, identifier, new_auth)
else:
self._new_order_v2()
self.changed = True
def get_challenges_data(self):
'''
Get challenge details for the chosen challenge type.
Return a tuple of generic challenge details, and specialized DNS challenge details.
'''
# Get general challenge data
data = {}
for type_identifier, auth in self.authorizations.items():
identifier_type, identifier = type_identifier.split(':', 1)
auth = self.authorizations[type_identifier]
# Skip valid authentications: their challenges are already valid
# and do not need to be returned
if auth['status'] == 'valid':
continue
# We drop the type from the key to preserve backwards compatibility
data[identifier] = self._get_challenge_data(auth, identifier_type, identifier)
# Get DNS challenge data
data_dns = {}
if self.challenge == 'dns-01':
for identifier, challenges in data.items():
if self.challenge in challenges:
values = data_dns.get(challenges[self.challenge]['record'])
if values is None:
values = []
data_dns[challenges[self.challenge]['record']] = values
values.append(challenges[self.challenge]['resource_value'])
return data, data_dns
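A wildcard identifier and its base domain share the same `_acme-challenge` record, which is why the dns-01 data is grouped into a list of TXT values per record name rather than one value per identifier. A pared-down sketch of that grouping with invented values:

```python
def group_txt_values(challenge_data):
    # challenge_data maps identifier -> {'record': ..., 'resource_value': ...},
    # a simplified stand-in for what get_challenges_data() builds per identifier.
    data_dns = {}
    for challenges in challenge_data.values():
        data_dns.setdefault(challenges['record'], []).append(
            challenges['resource_value'])
    return data_dns

sample = {
    'example.com': {'record': '_acme-challenge.example.com', 'resource_value': 'v1'},
    '*.example.com': {'record': '_acme-challenge.example.com', 'resource_value': 'v2'},
}
grouped = group_txt_values(sample)
```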
def finish_challenges(self):
'''
Verify challenges for all identifiers of the CSR.
'''
self.authorizations = {}
# Step 1: obtain challenge information
if self.version == 1:
# For ACME v1, we attempt to create new authzs. Existing ones
# will be returned instead.
for identifier_type, identifier in self.identifiers:
new_auth = self._new_authz_v1(identifier_type, identifier)
self._add_or_update_auth(identifier_type, identifier, new_auth)
else:
# For ACME v2, we obtain the order object by fetching the
# order URI, and extract the information from there.
result, info = self.account.get_request(self.order_uri)
if not result:
raise ModuleFailException("Cannot download order from {0}: {1} (headers: {2})".format(self.order_uri, result, info))
if info['status'] not in [200]:
raise ModuleFailException("Error on downloading order: CODE: {0} RESULT: {1}".format(info['status'], result))
for auth_uri in result['authorizations']:
auth_data, dummy = self.account.get_request(auth_uri)
auth_data['uri'] = auth_uri
identifier_type = auth_data['identifier']['type']
identifier = auth_data['identifier']['value']
if auth_data.get('wildcard', False):
identifier = '*.{0}'.format(identifier)
self.authorizations[identifier_type + ':' + identifier] = auth_data
self.finalize_uri = result['finalize']
# Step 2: validate challenges
for type_identifier, auth in self.authorizations.items():
if auth['status'] == 'pending':
identifier_type, identifier = type_identifier.split(':', 1)
self._validate_challenges(identifier_type, identifier, auth)
def get_certificate(self):
'''
Request a new certificate and write it to the destination file.
First verifies whether all authorizations are valid; if not, aborts
with an error.
'''
for identifier_type, identifier in self.identifiers:
auth = self.authorizations.get(identifier_type + ':' + identifier)
if auth is None:
raise ModuleFailException('Found no authorization information for "{0}"!'.format(identifier_type + ':' + identifier))
if 'status' not in auth:
self._fail_challenge(identifier_type, identifier, auth, 'Authorization for {0} returned no status')
if auth['status'] != 'valid':
self._fail_challenge(identifier_type, identifier, auth, 'Authorization for {0} returned status ' + str(auth['status']))
if self.version == 1:
cert = self._new_cert_v1()
else:
cert_uri = self._finalize_cert()
cert = self._download_cert(cert_uri)
if self.module.params['retrieve_all_alternates']:
alternate_chains = []
for alternate in cert['alternates']:
try:
alt_cert = self._download_cert(alternate)
except ModuleFailException as e:
self.module.warn('Error while downloading alternative certificate {0}: {1}'.format(alternate, e))
continue
alternate_chains.append(alt_cert)
self.all_chains = []
def _append_all_chains(cert_data):
self.all_chains.append(dict(
cert=cert_data['cert'].encode('utf8'),
chain=("\n".join(cert_data.get('chain', []))).encode('utf8'),
full_chain=(cert_data['cert'] + "\n".join(cert_data.get('chain', []))).encode('utf8'),
))
_append_all_chains(cert)
for alt_chain in alternate_chains:
_append_all_chains(alt_chain)
if cert['cert'] is not None:
pem_cert = cert['cert']
chain = [link for link in cert.get('chain', [])]
if self.dest and write_file(self.module, self.dest, pem_cert.encode('utf8')):
self.cert_days = get_cert_days(self.module, self.dest)
self.changed = True
if self.fullchain_dest and write_file(self.module, self.fullchain_dest, (pem_cert + "\n".join(chain)).encode('utf8')):
self.cert_days = get_cert_days(self.module, self.fullchain_dest)
self.changed = True
if self.chain_dest and write_file(self.module, self.chain_dest, ("\n".join(chain)).encode('utf8')):
self.changed = True
def deactivate_authzs(self):

'''
        Deactivates all valid authzs. Does not raise exceptions.
https://community.letsencrypt.org/t/authorization-deactivation/19860/2
https://tools.ietf.org/html/rfc8555#section-7.5.2
'''
authz_deactivate = {
'status': 'deactivated'
}
if self.version == 1:
authz_deactivate['resource'] = 'authz'
if self.authorizations:
for identifier_type, identifier in self.identifiers:
auth = self.authorizations.get(identifier_type + ':' + identifier)
if auth is None or auth.get('status') != 'valid':
continue
try:
result, info = self.account.send_signed_request(auth['uri'], authz_deactivate)
if 200 <= info['status'] < 300 and result.get('status') == 'deactivated':
auth['status'] = 'deactivated'
except Exception as dummy:
# Ignore errors on deactivating authzs
pass
if auth.get('status') != 'deactivated':
self.module.warn(warning='Could not deactivate authz object {0}.'.format(auth['uri']))
def main():
module = AnsibleModule(
argument_spec=dict(
account_key_src=dict(type='path', aliases=['account_key']),
account_key_content=dict(type='str', no_log=True),
account_uri=dict(type='str'),
modify_account=dict(type='bool', default=True),
acme_directory=dict(type='str', default='https://acme-staging.api.letsencrypt.org/directory'),
acme_version=dict(type='int', default=1, choices=[1, 2]),
validate_certs=dict(default=True, type='bool'),
account_email=dict(type='str'),
agreement=dict(type='str'),
terms_agreed=dict(type='bool', default=False),
challenge=dict(type='str', default='http-01', choices=['http-01', 'dns-01', 'tls-alpn-01']),
csr=dict(type='path', required=True, aliases=['src']),
data=dict(type='dict'),
dest=dict(type='path', aliases=['cert']),
fullchain_dest=dict(type='path', aliases=['fullchain']),
chain_dest=dict(type='path', aliases=['chain']),
remaining_days=dict(type='int', default=10),
deactivate_authzs=dict(type='bool', default=False),
force=dict(type='bool', default=False),
retrieve_all_alternates=dict(type='bool', default=False),
select_crypto_backend=dict(type='str', default='auto', choices=['auto', 'openssl', 'cryptography']),
),
required_one_of=(
['account_key_src', 'account_key_content'],
['dest', 'fullchain_dest'],
),
mutually_exclusive=(
['account_key_src', 'account_key_content'],
),
supports_check_mode=True,
)
if module._name == 'letsencrypt':
module.deprecate("The 'letsencrypt' module is being renamed 'acme_certificate'", version='2.10')
set_crypto_backend(module)
# AnsibleModule() changes the locale, so change it back to C because we rely on time.strptime() when parsing certificate dates.
module.run_command_environ_update = dict(LANG='C', LC_ALL='C', LC_MESSAGES='C', LC_CTYPE='C')
locale.setlocale(locale.LC_ALL, 'C')
if not module.params.get('validate_certs'):
module.warn(warning='Disabling certificate validation for communications with ACME endpoint. ' +
'This should only be done for testing against a local ACME server for ' +
'development purposes, but *never* for production purposes.')
try:
if module.params.get('dest'):
cert_days = get_cert_days(module, module.params['dest'])
else:
cert_days = get_cert_days(module, module.params['fullchain_dest'])
if module.params['force'] or cert_days < module.params['remaining_days']:
# If checkmode is active, base the changed state solely on the status
# of the certificate file as all other actions (accessing an account, checking
# the authorization status...) would lead to potential changes of the current
# state
if module.check_mode:
module.exit_json(changed=True, authorizations={}, challenge_data={}, cert_days=cert_days)
else:
client = ACMEClient(module)
client.cert_days = cert_days
other = dict()
if client.is_first_step():
# First run: start challenges / start new order
client.start_challenges()
else:
# Second run: finish challenges, and get certificate
try:
client.finish_challenges()
client.get_certificate()
if module.params['retrieve_all_alternates']:
other['all_chains'] = client.all_chains
finally:
if module.params['deactivate_authzs']:
client.deactivate_authzs()
data, data_dns = client.get_challenges_data()
auths = dict()
for k, v in client.authorizations.items():
# Remove "type:" from key
auths[k.split(':', 1)[1]] = v
module.exit_json(
changed=client.changed,
authorizations=auths,
finalize_uri=client.finalize_uri,
order_uri=client.order_uri,
account_uri=client.account.uri,
challenge_data=data,
challenge_data_dns=data_dns,
cert_days=client.cert_days,
**other
)
else:
module.exit_json(changed=False, cert_days=cert_days)
except ModuleFailException as e:
e.do_fail(module)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,976 |
win_group doesn't support check mode
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Support for running in check mode was added to `win_group` way back in #21384, but the `supports_check_mode` flag was never enabled, so the engine skips the entire module anyway.
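The engine-side skip described here can be illustrated with a simplified stand-in (this is not actual Ansible engine code, just the gate it implies): a module that never advertises check-mode support is skipped outright under `--check`, no matter how carefully its body handles check mode internally.

```python
def dispatch(check_mode, supports_check_mode):
    # Simplified stand-in for the engine's gate: without the flag, the module
    # body never runs in check mode and Ansible reports it as skipped.
    if check_mode and not supports_check_mode:
        return {"changed": False, "skipped": True,
                "msg": "remote module does not support check mode"}
    return {"changed": True, "skipped": False}
```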
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
win_group
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.5.3
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/usr/share/ansible']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_CALLBACK_WHITELIST(/etc/ansible/ansible.cfg) = ['datadog_callback', 'profile_tasks']
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 15
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = [u'/etc/ansible/hosts']
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible.log
DEFAULT_MODULE_PATH(/etc/ansible/ansible.cfg) = [u'/usr/share/ansible']
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- win_group:
    name: my_group
check_mode: yes
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
(normal output of what would or wouldn't have been changed)
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
skipping: [host] => {
"changed": false,
"msg": "remote module does not support check mode"
}
```
|
https://github.com/ansible/ansible/issues/61976
|
https://github.com/ansible/ansible/pull/61977
|
a0bec0bc327d29f446a031abc803b8d2cad1949f
|
d361ff412282fe9b4c701aba85a7b4846a29ee21
| 2019-09-08T20:37:48Z |
python
| 2019-09-15T20:45:43Z |
lib/ansible/modules/windows/win_group.ps1
|
#!powershell
# Copyright: (c) 2014, Chris Hoffman <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#Requires -Module Ansible.ModuleUtils.Legacy
$params = Parse-Args $args;
$check_mode = Get-AnsibleParam -obj $params -name "_ansible_check_mode" -type "bool" -default $false
$name = Get-AnsibleParam -obj $params -name "name" -type "str" -failifempty $true
$state = Get-AnsibleParam -obj $params -name "state" -type "str" -default "present" -validateset "present","absent"
$description = Get-AnsibleParam -obj $params -name "description" -type "str"
$result = @{
changed = $false
}
$adsi = [ADSI]"WinNT://$env:COMPUTERNAME"
$group = $adsi.Children | Where-Object {$_.SchemaClassName -eq 'group' -and $_.Name -eq $name }
try {
If ($state -eq "present") {
If (-not $group) {
If (-not $check_mode) {
$group = $adsi.Create("Group", $name)
$group.SetInfo()
}
$result.changed = $true
}
If ($null -ne $description) {
        If (-not $group.description -or $group.description -ne $description) {
$group.description = $description
If (-not $check_mode) {
$group.SetInfo()
}
$result.changed = $true
}
}
}
ElseIf ($state -eq "absent" -and $group) {
If (-not $check_mode) {
$adsi.delete("Group", $group.Name.Value)
}
$result.changed = $true
}
}
catch {
Fail-Json $result $_.Exception.Message
}
Exit-Json $result
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,463 |
win_find failed to check some files, these files were ignored and will not be part of the result output
|
##### SUMMARY
It appears that large folders break win_find when the size is too large to fit inside a System.Int32.
Further, win_find breaks as soon as a warning is encountered, causing it to no longer process folders.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_find.ps1
##### ANSIBLE VERSION
2.8.0
##### STEPS TO REPRODUCE
```
---
- name: Search for folders
win_find:
paths: C:\ProgramData
file_type: directory
```
##### ADDITIONAL INFO
Example of using the Get-FileStat function on a large folder.
```
PS C:\Windows\system32> $file = Get-Item -LiteralPath C:\ProgramData\Anaconda2 -Force
PS C:\Windows\system32> Get-FileStat -file $file
Cannot convert value "4444553383" to type "System.Int32". Error: "Value was either too large or too small for an Int32."
At line:186 char:13
    [int]$specified_size = $matches[1]
    CategoryInfo          : InvalidArgument: (:) [], ParentContainsErrorRecordException
    FullyQualifiedErrorId : InvalidCastFromStringToInteger
```
##### ACTUAL RESULTS
```
win_find failed to check some files, these files were ignored and will not be part of the result output
```
##### RECOMMENDED CHANGES
```
Function Assert-Size($info) {
$valid_match = $true
if ($null -ne $size) {
$bytes_per_unit = @{'b'=1; 'k'=1024; 'm'=1024*1024; 'g'=1024*1024*1024; 't'=1024*1024*1024*1024}
$size_pattern = '^(-?\d+)(b|k|m|g|t)?$'
$match = $size -match $size_pattern
if ($match) {
[int]$specified_size = $matches[1]
```
...................
TO
```
Function Assert-Size($info) {
$valid_match = $true
if ($null -ne $size) {
$bytes_per_unit = @{'b'=1; 'k'=1024; 'm'=1024*1024; 'g'=1024*1024*1024; 't'=1024*1024*1024*1024}
$size_pattern = '^(-?\d+)(b|k|m|g|t)?$'
$match = $size -match $size_pattern
if ($match) {
[int64]$specified_size = $matches[1]
```
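The failing value from the report makes the bounds check plain; a quick Python sanity check (the folder size is the byte count from the error message above):

```python
INT32_MAX = 2**31 - 1        # upper bound of PowerShell's [int] (System.Int32)
INT64_MAX = 2**63 - 1        # upper bound of [int64] (System.Int64)

folder_size = 4444553383     # byte count from the error message in the report

int32_fits = folder_size <= INT32_MAX   # False: the [int] cast throws
int64_fits = folder_size <= INT64_MAX   # True: [int64] holds it comfortably
```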
|
https://github.com/ansible/ansible/issues/58463
|
https://github.com/ansible/ansible/pull/58466
|
d361ff412282fe9b4c701aba85a7b4846a29ee21
|
8def67939dbd5dbba84fe160f3ad187c76ebe63a
| 2019-06-27T15:32:52Z |
python
| 2019-09-15T23:04:59Z |
changelogs/fragments/58466-FIX_win_find-Bug-Get-FileStat_fails_on_large_files.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,463 |
win_find failed to check some files, these files were ignored and will not be part of the result output
|
##### SUMMARY
It appears that large folders break win_find when the size is too large to fit inside a System.Int32.
Further, win_find breaks as soon as a warning is encountered, causing it to no longer process folders.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_find.ps1
##### ANSIBLE VERSION
2.8.0
##### STEPS TO REPRODUCE
```
---
- name: Search for folders
win_find:
paths: C:\ProgramData
file_type: directory
```
##### ADDITIONAL INFO
Example of using the Get-FileStat function on a large folder.
```
PS C:\Windows\system32> $file = Get-Item -LiteralPath C:\ProgramData\Anaconda2 -Force
PS C:\Windows\system32> Get-FileStat -file $file
Cannot convert value "4444553383" to type "System.Int32". Error: "Value was either too large or too small for an Int32."
At line:186 char:13
    [int]$specified_size = $matches[1]
    CategoryInfo          : InvalidArgument: (:) [], ParentContainsErrorRecordException
    FullyQualifiedErrorId : InvalidCastFromStringToInteger
```
##### ACTUAL RESULTS
```
win_find failed to check some files, these files were ignored and will not be part of the result output
```
##### RECOMMENDED CHANGES
```
Function Assert-Size($info) {
$valid_match = $true
if ($null -ne $size) {
$bytes_per_unit = @{'b'=1; 'k'=1024; 'm'=1024*1024; 'g'=1024*1024*1024; 't'=1024*1024*1024*1024}
$size_pattern = '^(-?\d+)(b|k|m|g|t)?$'
$match = $size -match $size_pattern
if ($match) {
[int]$specified_size = $matches[1]
```
...................
TO
```
Function Assert-Size($info) {
$valid_match = $true
if ($null -ne $size) {
$bytes_per_unit = @{'b'=1; 'k'=1024; 'm'=1024*1024; 'g'=1024*1024*1024; 't'=1024*1024*1024*1024}
$size_pattern = '^(-?\d+)(b|k|m|g|t)?$'
$match = $size -match $size_pattern
if ($match) {
[int64]$specified_size = $matches[1]
```
|
https://github.com/ansible/ansible/issues/58463
|
https://github.com/ansible/ansible/pull/58466
|
d361ff412282fe9b4c701aba85a7b4846a29ee21
|
8def67939dbd5dbba84fe160f3ad187c76ebe63a
| 2019-06-27T15:32:52Z |
python
| 2019-09-15T23:04:59Z |
lib/ansible/modules/windows/win_find.ps1
|
#!powershell
# Copyright: (c) 2016, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#Requires -Module Ansible.ModuleUtils.Legacy
$ErrorActionPreference = "Stop"
$params = Parse-Args -arguments $args -supports_check_mode $true
$_remote_tmp = Get-AnsibleParam $params "_ansible_remote_tmp" -type "path" -default $env:TMP
$paths = Get-AnsibleParam -obj $params -name 'paths' -failifempty $true
$age = Get-AnsibleParam -obj $params -name 'age'
$age_stamp = Get-AnsibleParam -obj $params -name 'age_stamp' -default 'mtime' -ValidateSet 'mtime','ctime','atime'
$file_type = Get-AnsibleParam -obj $params -name 'file_type' -default 'file' -ValidateSet 'file','directory'
$follow = Get-AnsibleParam -obj $params -name 'follow' -type "bool" -default $false
$hidden = Get-AnsibleParam -obj $params -name 'hidden' -type "bool" -default $false
$patterns = Get-AnsibleParam -obj $params -name 'patterns' -aliases "regex","regexp"
$recurse = Get-AnsibleParam -obj $params -name 'recurse' -type "bool" -default $false
$size = Get-AnsibleParam -obj $params -name 'size'
$use_regex = Get-AnsibleParam -obj $params -name 'use_regex' -type "bool" -default $false
$get_checksum = Get-AnsibleParam -obj $params -name 'get_checksum' -type "bool" -default $true
$checksum_algorithm = Get-AnsibleParam -obj $params -name 'checksum_algorithm' -default 'sha1' -ValidateSet 'md5', 'sha1', 'sha256', 'sha384', 'sha512'
$result = @{
files = @()
examined = 0
matched = 0
changed = $false
}
# C# code to determine link target, copied from http://chrisbensen.blogspot.com.au/2010/06/getfinalpathnamebyhandle.html
$symlink_util = @"
using System;
using System.Text;
using Microsoft.Win32.SafeHandles;
using System.ComponentModel;
using System.Runtime.InteropServices;
namespace Ansible.Command {
public class SymLinkHelper {
private const int FILE_SHARE_WRITE = 2;
private const int CREATION_DISPOSITION_OPEN_EXISTING = 3;
private const int FILE_FLAG_BACKUP_SEMANTICS = 0x02000000;
[DllImport("kernel32.dll", EntryPoint = "GetFinalPathNameByHandleW", CharSet = CharSet.Unicode, SetLastError = true)]
public static extern int GetFinalPathNameByHandle(IntPtr handle, [In, Out] StringBuilder path, int bufLen, int flags);
[DllImport("kernel32.dll", EntryPoint = "CreateFileW", CharSet = CharSet.Unicode, SetLastError = true)]
public static extern SafeFileHandle CreateFile(string lpFileName, int dwDesiredAccess,
int dwShareMode, IntPtr SecurityAttributes, int dwCreationDisposition, int dwFlagsAndAttributes, IntPtr hTemplateFile);
public static string GetSymbolicLinkTarget(System.IO.DirectoryInfo symlink) {
SafeFileHandle directoryHandle = CreateFile(symlink.FullName, 0, 2, System.IntPtr.Zero, CREATION_DISPOSITION_OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, System.IntPtr.Zero);
if(directoryHandle.IsInvalid)
throw new Win32Exception(Marshal.GetLastWin32Error());
StringBuilder path = new StringBuilder(512);
int size = GetFinalPathNameByHandle(directoryHandle.DangerousGetHandle(), path, path.Capacity, 0);
if (size<0)
throw new Win32Exception(Marshal.GetLastWin32Error());
// The remarks section of GetFinalPathNameByHandle mentions the return being prefixed with "\\?\"
// More information about "\\?\" here -> http://msdn.microsoft.com/en-us/library/aa365247(v=VS.85).aspx
if (path[0] == '\\' && path[1] == '\\' && path[2] == '?' && path[3] == '\\')
return path.ToString().Substring(4);
else
return path.ToString();
}
}
}
"@
$original_tmp = $env:TMP
$env:TMP = $_remote_tmp
Add-Type -TypeDefinition $symlink_util
$env:TMP = $original_tmp
Function Assert-Age($info) {
$valid_match = $true
if ($null -ne $age) {
$seconds_per_unit = @{'s'=1; 'm'=60; 'h'=3600; 'd'=86400; 'w'=604800}
$seconds_pattern = '^(-?\d+)(s|m|h|d|w)?$'
$match = $age -match $seconds_pattern
if ($match) {
[int]$specified_seconds = $matches[1]
if ($null -eq $matches[2]) {
$chosen_unit = 's'
} else {
$chosen_unit = $matches[2]
}
$abs_seconds = $specified_seconds * ($seconds_per_unit.$chosen_unit)
$epoch = New-Object -Type DateTime -ArgumentList 1970, 1, 1, 0, 0, 0, 0
if ($age_stamp -eq 'mtime') {
$age_comparison = $epoch.AddSeconds($info.lastwritetime)
} elseif ($age_stamp -eq 'ctime') {
$age_comparison = $epoch.AddSeconds($info.creationtime)
} elseif ($age_stamp -eq 'atime') {
$age_comparison = $epoch.AddSeconds($info.lastaccesstime)
}
if ($specified_seconds -ge 0) {
$start_date = (Get-Date).AddSeconds($abs_seconds * -1)
if ($age_comparison -gt $start_date) {
$valid_match = $false
}
} else {
$start_date = (Get-Date).AddSeconds($abs_seconds)
if ($age_comparison -lt $start_date) {
$valid_match = $false
}
}
} else {
throw "failed to process age for file $($info.FullName)"
}
}
$valid_match
}
Function Assert-FileType($info) {
$valid_match = $true
if ($file_type -eq 'directory' -and $info.isdir -eq $false) {
$valid_match = $false
}
if ($file_type -eq 'file' -and $info.isdir -eq $true) {
$valid_match = $false
}
$valid_match
}
Function Assert-Hidden($info) {
$valid_match = $true
if ($hidden -eq $true -and $info.ishidden -eq $false) {
$valid_match = $false
}
if ($hidden -eq $false -and $info.ishidden -eq $true) {
$valid_match = $false
}
$valid_match
}
Function Assert-Pattern($info) {
$valid_match = $false
if ($null -ne $patterns) {
foreach ($pattern in $patterns) {
if ($use_regex -eq $true) {
# Use -match for regex matching
if ($info.filename -match $pattern) {
$valid_match = $true
}
} else {
# Use -like for wildcard matching
if ($info.filename -like $pattern) {
$valid_match = $true
}
}
}
} else {
$valid_match = $true
}
$valid_match
}
Function Assert-Size($info) {
$valid_match = $true
if ($null -ne $size) {
$bytes_per_unit = @{'b'=1; 'k'=1024; 'm'=1024*1024; 'g'=1024*1024*1024; 't'=1024*1024*1024*1024}
$size_pattern = '^(-?\d+)(b|k|m|g|t)?$'
$match = $size -match $size_pattern
if ($match) {
[int]$specified_size = $matches[1]
if ($null -eq $matches[2]) {
$chosen_byte = 'b'
} else {
$chosen_byte = $matches[2]
}
$abs_size = $specified_size * ($bytes_per_unit.$chosen_byte)
if ($specified_size -ge 0) {
if ($info.size -lt $abs_size) {
$valid_match = $false
}
} else {
if ($info.size -gt $abs_size * -1) {
$valid_match = $false
}
}
} else {
throw "failed to process size for file $($info.FullName)"
}
}
$valid_match
}
Function Assert-FileStat($info) {
$age_match = Assert-Age -info $info
$file_type_match = Assert-FileType -info $info
$hidden_match = Assert-Hidden -info $info
$pattern_match = Assert-Pattern -info $info
$size_match = Assert-Size -info $info
if ($age_match -and $file_type_match -and $hidden_match -and $pattern_match -and $size_match) {
$info
} else {
$false
}
}
Function Get-FileStat($file) {
$epoch = New-Object -Type DateTime -ArgumentList 1970, 1, 1, 0, 0, 0, 0
$access_control = $file.GetAccessControl()
$attributes = @()
foreach ($attribute in ($file.Attributes -split ',')) {
$attributes += $attribute.Trim()
}
$file_stat = @{
isreadonly = $attributes -contains 'ReadOnly'
ishidden = $attributes -contains 'Hidden'
isarchive = $attributes -contains 'Archive'
attributes = $file.Attributes.ToString()
owner = $access_control.Owner
lastwritetime = (New-TimeSpan -Start $epoch -End $file.LastWriteTime).TotalSeconds
creationtime = (New-TimeSpan -Start $epoch -End $file.CreationTime).TotalSeconds
lastaccesstime = (New-TimeSpan -Start $epoch -End $file.LastAccessTime).TotalSeconds
path = $file.FullName
filename = $file.Name
}
$islnk = $false
$isdir = $false
$isshared = $false
if ($attributes -contains 'ReparsePoint') {
# TODO: Find a way to differentiate between soft and junction links
$islnk = $true
$isdir = $true
# Try and get the symlink source, can result in failure if link is broken
try {
$lnk_source = [Ansible.Command.SymLinkHelper]::GetSymbolicLinkTarget($file)
$file_stat.lnk_source = $lnk_source
} catch {}
} elseif ($file.PSIsContainer) {
$isdir = $true
$share_info = Get-CIMInstance -Class Win32_Share -Filter "Path='$($file.Fullname -replace '\\', '\\')'"
if ($null -ne $share_info) {
$isshared = $true
$file_stat.sharename = $share_info.Name
}
# only get the size of a directory if there are files (not directories) inside the folder
# Get-ChildItem -LiteralPath does not work properly on older OS', use .NET instead
$dir_files = @()
try {
$dir_files = $file.EnumerateFiles("*", [System.IO.SearchOption]::AllDirectories)
} catch [System.IO.DirectoryNotFoundException] { # Broken ReparsePoint/Symlink, cannot enumerate
} catch [System.UnauthorizedAccessException] {} # No ListDirectory permissions, Get-ChildItem ignored this
$size = 0
foreach ($dir_file in $dir_files) {
$size += $dir_file.Length
}
$file_stat.size = $size
} else {
$file_stat.size = $file.length
$file_stat.extension = $file.Extension
if ($get_checksum) {
try {
$checksum = Get-FileChecksum -path $path -algorithm $checksum_algorithm
$file_stat.checksum = $checksum
} catch {
throw "failed to get checksum for file $($file.FullName)"
}
}
}
$file_stat.islnk = $islnk
$file_stat.isdir = $isdir
$file_stat.isshared = $isshared
Assert-FileStat -info $file_stat
}
Function Get-FilesInFolder($path) {
$items = @()
# Get-ChildItem -LiteralPath can bomb out on older OS', use .NET instead
$dir = New-Object -TypeName System.IO.DirectoryInfo -ArgumentList $path
$dir_files = @()
try {
$dir_files = $dir.EnumerateFileSystemInfos("*", [System.IO.SearchOption]::TopDirectoryOnly)
} catch [System.IO.DirectoryNotFoundException] { # Broken ReparsePoint/Symlink, cannot enumerate
} catch [System.UnauthorizedAccessException] {} # No ListDirectory permissions, Get-ChildItem ignored this
foreach ($item in $dir_files) {
if ($item -is [System.IO.DirectoryInfo] -and $recurse) {
if (($item.Attributes -like '*ReparsePoint*' -and $follow) -or ($item.Attributes -notlike '*ReparsePoint*')) {
# File is a link and we want to follow a link OR file is not a link
$items += $item.FullName
$items += Get-FilesInFolder -path $item.FullName
} else {
# File is a link but we don't want to follow a link
$items += $item.FullName
}
} else {
$items += $item.FullName
}
}
$items
}
$paths_to_check = @()
foreach ($path in $paths) {
if (Test-Path -LiteralPath $path) {
if ((Get-Item -LiteralPath $path -Force).PSIsContainer) {
$paths_to_check += Get-FilesInFolder -path $path
} else {
Fail-Json $result "Argument path $path is a file not a directory"
}
} else {
Fail-Json $result "Argument path $path does not exist cannot get information on"
}
}
$paths_to_check = $paths_to_check | Select-Object -Unique | Sort-Object
foreach ($path in $paths_to_check) {
try {
$file = Get-Item -LiteralPath $path -Force
$info = Get-FileStat -file $file
} catch {
Add-Warning -obj $result -message "win_find failed to check some files, these files were ignored and will not be part of the result output"
break
}
$new_examined = $result.examined + 1
$result.examined = $new_examined
if ($info -ne $false) {
$files = $result.Files
$files += $info
$new_matched = $result.matched + 1
$result.matched = $new_matched
$result.files = $files
}
}
Exit-Json $result
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,511 |
win_find module not returning deduplicated files
|
##### SUMMARY
Using the win_find module against a Windows share, it does not return some files, in this case .exe files.
The .exe files are deduplicated; looking at their properties in Windows Explorer, they appear as:
'Size: 100MB'
'Size on disk: 0 bytes'
win_find successfully returns other (non-deduplicated) files from the same directory, and it reports the correct number of files in the directory (a count that includes the deduplicated .exe files).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_find.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /root/playbooks/ansible-venv/lib/python2.7/site-packages/ansible
executable location = /root/playbooks/ansible-venv/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 12:19:05) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ansible Server: RHEL7
Target Host: Win20126
Share Server: Win2016
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Get list of exes from share with deduplication
debugger: always
win_find:
paths: \\server\share\
patterns: '*.exe' # When uncommented this returns no files from the share.
# When commented it returns everything but the deduplicated files (exes in this case)
# The debug output then shows the list of files found plus; "examined": 29, "matched": 13
# Still missing the exe files from the output...
user_name: user
user_password: pass
register: list_of_exes
become: true
become_method: runas
vars:
ansible_become_user: '{{ ansible_user }}'
ansible_become_pass: '{{ ansible_password }}'
- debug:
var: list_of_exes
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Would expect the results to include the deduplicated files
```
TASK [debug] ********************************************************************************************************
ok: [169.254.0.1] => {
"list_of_exes": {
"changed": false,
"examined": 29,
"failed": false,
"files": [...],
"matched": 13
}
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Playbook executes ok but does not return the deduplicated files.
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [debug] ********************************************************************************************************
ok: [169.254.0.1] => {
"list_of_exes": {
"changed": false,
"examined": 29,
"failed": false,
"files": [],
"matched": 0
}
}
```
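A likely explanation, based on the win_find source included in this dump: Windows Data Deduplication stores optimized file data behind reparse points, and Get-FileStat treats any entry carrying the ReparsePoint attribute as a link and a directory, so a deduplicated .exe no longer matches `file_type: file`. A minimal Python sketch of that classification logic (the attribute strings and helper are illustrative, not the module's API):

```python
def classify(attributes, file_type='file'):
    """Mirror win_find's Get-FileStat branch: any ReparsePoint entry is
    treated as a link *and* a directory, so it stops matching
    file_type 'file' and is dropped from the results."""
    attrs = [a.strip() for a in attributes.split(',')]
    isdir = 'ReparsePoint' in attrs or 'Directory' in attrs
    matched = (file_type == 'directory') == isdir
    return {'isdir': isdir, 'matched': matched}

# A normal exe matches, while a deduplicated one (whose attributes
# include ReparsePoint) is misclassified as a directory and dropped.
print(classify('Archive'))
print(classify('Archive, SparseFile, ReparsePoint'))
```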
|
https://github.com/ansible/ansible/issues/58511
|
https://github.com/ansible/ansible/pull/58680
|
8def67939dbd5dbba84fe160f3ad187c76ebe63a
|
99796dfa87731b5dc210e7649bfe09664c99ffcd
| 2019-06-28T16:17:37Z |
python
| 2019-09-16T00:02:05Z |
changelogs/fragments/win_find-fix-ignore-of-deduped-files.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,511 |
win_find module not returning deduplicated files
|
##### SUMMARY
Using the win_find module against a Windows share, it does not return some files, in this case .exe files.
The .exe files are deduplicated; looking at their properties in Windows Explorer, they appear as:
'Size: 100MB'
'Size on disk: 0 bytes'
win_find successfully returns other (non-deduplicated) files from the same directory, and it reports the correct number of files in the directory (a count that includes the deduplicated .exe files).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_find.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /root/playbooks/ansible-venv/lib/python2.7/site-packages/ansible
executable location = /root/playbooks/ansible-venv/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 12:19:05) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ansible Server: RHEL7
Target Host: Win20126
Share Server: Win2016
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Get list of exes from share with deduplication
debugger: always
win_find:
paths: \\server\share\
patterns: '*.exe' # When uncommented this returns no files from the share.
# When commented it returns everything but the deduplicated files (exes in this case)
# The debug output then shows the list of files found plus; "examined": 29, "matched": 13
# Still missing the exe files from the output...
user_name: user
user_password: pass
register: list_of_exes
become: true
become_method: runas
vars:
ansible_become_user: '{{ ansible_user }}'
ansible_become_pass: '{{ ansible_password }}'
- debug:
var: list_of_exes
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Would expect the results to include the deduplicated files
```
TASK [debug] ********************************************************************************************************
ok: [169.254.0.1] => {
"list_of_exes": {
"changed": false,
"examined": 29,
"failed": false,
"files": [...],
"matched": 13
}
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Playbook executes ok but does not return the deduplicated files.
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [debug] ********************************************************************************************************
ok: [169.254.0.1] => {
"list_of_exes": {
"changed": false,
"examined": 29,
"failed": false,
"files": [],
"matched": 0
}
}
```
|
https://github.com/ansible/ansible/issues/58511
|
https://github.com/ansible/ansible/pull/58680
|
8def67939dbd5dbba84fe160f3ad187c76ebe63a
|
99796dfa87731b5dc210e7649bfe09664c99ffcd
| 2019-06-28T16:17:37Z |
python
| 2019-09-16T00:02:05Z |
lib/ansible/modules/windows/win_find.ps1
|
#!powershell
# Copyright: (c) 2016, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#Requires -Module Ansible.ModuleUtils.Legacy
$ErrorActionPreference = "Stop"
$params = Parse-Args -arguments $args -supports_check_mode $true
$_remote_tmp = Get-AnsibleParam $params "_ansible_remote_tmp" -type "path" -default $env:TMP
$paths = Get-AnsibleParam -obj $params -name 'paths' -failifempty $true
$age = Get-AnsibleParam -obj $params -name 'age'
$age_stamp = Get-AnsibleParam -obj $params -name 'age_stamp' -default 'mtime' -ValidateSet 'mtime','ctime','atime'
$file_type = Get-AnsibleParam -obj $params -name 'file_type' -default 'file' -ValidateSet 'file','directory'
$follow = Get-AnsibleParam -obj $params -name 'follow' -type "bool" -default $false
$hidden = Get-AnsibleParam -obj $params -name 'hidden' -type "bool" -default $false
$patterns = Get-AnsibleParam -obj $params -name 'patterns' -aliases "regex","regexp"
$recurse = Get-AnsibleParam -obj $params -name 'recurse' -type "bool" -default $false
$size = Get-AnsibleParam -obj $params -name 'size'
$use_regex = Get-AnsibleParam -obj $params -name 'use_regex' -type "bool" -default $false
$get_checksum = Get-AnsibleParam -obj $params -name 'get_checksum' -type "bool" -default $true
$checksum_algorithm = Get-AnsibleParam -obj $params -name 'checksum_algorithm' -default 'sha1' -ValidateSet 'md5', 'sha1', 'sha256', 'sha384', 'sha512'
$result = @{
files = @()
examined = 0
matched = 0
changed = $false
}
# C# code to determine link target, copied from http://chrisbensen.blogspot.com.au/2010/06/getfinalpathnamebyhandle.html
$symlink_util = @"
using System;
using System.Text;
using Microsoft.Win32.SafeHandles;
using System.ComponentModel;
using System.Runtime.InteropServices;
namespace Ansible.Command {
public class SymLinkHelper {
private const int FILE_SHARE_WRITE = 2;
private const int CREATION_DISPOSITION_OPEN_EXISTING = 3;
private const int FILE_FLAG_BACKUP_SEMANTICS = 0x02000000;
[DllImport("kernel32.dll", EntryPoint = "GetFinalPathNameByHandleW", CharSet = CharSet.Unicode, SetLastError = true)]
public static extern int GetFinalPathNameByHandle(IntPtr handle, [In, Out] StringBuilder path, int bufLen, int flags);
[DllImport("kernel32.dll", EntryPoint = "CreateFileW", CharSet = CharSet.Unicode, SetLastError = true)]
public static extern SafeFileHandle CreateFile(string lpFileName, int dwDesiredAccess,
int dwShareMode, IntPtr SecurityAttributes, int dwCreationDisposition, int dwFlagsAndAttributes, IntPtr hTemplateFile);
public static string GetSymbolicLinkTarget(System.IO.DirectoryInfo symlink) {
SafeFileHandle directoryHandle = CreateFile(symlink.FullName, 0, 2, System.IntPtr.Zero, CREATION_DISPOSITION_OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, System.IntPtr.Zero);
if(directoryHandle.IsInvalid)
throw new Win32Exception(Marshal.GetLastWin32Error());
StringBuilder path = new StringBuilder(512);
int size = GetFinalPathNameByHandle(directoryHandle.DangerousGetHandle(), path, path.Capacity, 0);
if (size<0)
throw new Win32Exception(Marshal.GetLastWin32Error());
// The remarks section of GetFinalPathNameByHandle mentions the return being prefixed with "\\?\"
// More information about "\\?\" here -> http://msdn.microsoft.com/en-us/library/aa365247(v=VS.85).aspx
if (path[0] == '\\' && path[1] == '\\' && path[2] == '?' && path[3] == '\\')
return path.ToString().Substring(4);
else
return path.ToString();
}
}
}
"@
$original_tmp = $env:TMP
$env:TMP = $_remote_tmp
Add-Type -TypeDefinition $symlink_util
$env:TMP = $original_tmp
Function Assert-Age($info) {
$valid_match = $true
if ($null -ne $age) {
$seconds_per_unit = @{'s'=1; 'm'=60; 'h'=3600; 'd'=86400; 'w'=604800}
$seconds_pattern = '^(-?\d+)(s|m|h|d|w)?$'
$match = $age -match $seconds_pattern
if ($match) {
[int]$specified_seconds = $matches[1]
if ($null -eq $matches[2]) {
$chosen_unit = 's'
} else {
$chosen_unit = $matches[2]
}
$abs_seconds = $specified_seconds * ($seconds_per_unit.$chosen_unit)
$epoch = New-Object -Type DateTime -ArgumentList 1970, 1, 1, 0, 0, 0, 0
if ($age_stamp -eq 'mtime') {
$age_comparison = $epoch.AddSeconds($info.lastwritetime)
} elseif ($age_stamp -eq 'ctime') {
$age_comparison = $epoch.AddSeconds($info.creationtime)
} elseif ($age_stamp -eq 'atime') {
$age_comparison = $epoch.AddSeconds($info.lastaccesstime)
}
if ($specified_seconds -ge 0) {
$start_date = (Get-Date).AddSeconds($abs_seconds * -1)
if ($age_comparison -gt $start_date) {
$valid_match = $false
}
} else {
$start_date = (Get-Date).AddSeconds($abs_seconds)
if ($age_comparison -lt $start_date) {
$valid_match = $false
}
}
} else {
throw "failed to process age for file $($info.FullName)"
}
}
$valid_match
}
Function Assert-FileType($info) {
$valid_match = $true
if ($file_type -eq 'directory' -and $info.isdir -eq $false) {
$valid_match = $false
}
if ($file_type -eq 'file' -and $info.isdir -eq $true) {
$valid_match = $false
}
$valid_match
}
Function Assert-Hidden($info) {
$valid_match = $true
if ($hidden -eq $true -and $info.ishidden -eq $false) {
$valid_match = $false
}
if ($hidden -eq $false -and $info.ishidden -eq $true) {
$valid_match = $false
}
$valid_match
}
Function Assert-Pattern($info) {
$valid_match = $false
if ($null -ne $patterns) {
foreach ($pattern in $patterns) {
if ($use_regex -eq $true) {
# Use -match for regex matching
if ($info.filename -match $pattern) {
$valid_match = $true
}
} else {
# Use -like for wildcard matching
if ($info.filename -like $pattern) {
$valid_match = $true
}
}
}
} else {
$valid_match = $true
}
$valid_match
}
Function Assert-Size($info) {
$valid_match = $true
if ($null -ne $size) {
$bytes_per_unit = @{'b'=1; 'k'=1024; 'm'=1024*1024; 'g'=1024*1024*1024; 't'=1024*1024*1024*1024}
$size_pattern = '^(-?\d+)(b|k|m|g|t)?$'
$match = $size -match $size_pattern
if ($match) {
[int64]$specified_size = $matches[1]
if ($null -eq $matches[2]) {
$chosen_byte = 'b'
} else {
$chosen_byte = $matches[2]
}
$abs_size = $specified_size * ($bytes_per_unit.$chosen_byte)
if ($specified_size -ge 0) {
if ($info.size -lt $abs_size) {
$valid_match = $false
}
} else {
if ($info.size -gt $abs_size * -1) {
$valid_match = $false
}
}
} else {
throw "failed to process size for file $($info.FullName)"
}
}
$valid_match
}
Function Assert-FileStat($info) {
$age_match = Assert-Age -info $info
$file_type_match = Assert-FileType -info $info
$hidden_match = Assert-Hidden -info $info
$pattern_match = Assert-Pattern -info $info
$size_match = Assert-Size -info $info
if ($age_match -and $file_type_match -and $hidden_match -and $pattern_match -and $size_match) {
$info
} else {
$false
}
}
Function Get-FileStat($file) {
$epoch = New-Object -Type DateTime -ArgumentList 1970, 1, 1, 0, 0, 0, 0
$access_control = $file.GetAccessControl()
$attributes = @()
foreach ($attribute in ($file.Attributes -split ',')) {
$attributes += $attribute.Trim()
}
$file_stat = @{
isreadonly = $attributes -contains 'ReadOnly'
ishidden = $attributes -contains 'Hidden'
isarchive = $attributes -contains 'Archive'
attributes = $file.Attributes.ToString()
owner = $access_control.Owner
lastwritetime = (New-TimeSpan -Start $epoch -End $file.LastWriteTime).TotalSeconds
creationtime = (New-TimeSpan -Start $epoch -End $file.CreationTime).TotalSeconds
lastaccesstime = (New-TimeSpan -Start $epoch -End $file.LastAccessTime).TotalSeconds
path = $file.FullName
filename = $file.Name
}
$islnk = $false
$isdir = $false
$isshared = $false
if ($attributes -contains 'ReparsePoint') {
# TODO: Find a way to differentiate between soft and junction links
$islnk = $true
$isdir = $true
# Try and get the symlink source, can result in failure if link is broken
try {
$lnk_source = [Ansible.Command.SymLinkHelper]::GetSymbolicLinkTarget($file)
$file_stat.lnk_source = $lnk_source
} catch {}
} elseif ($file.PSIsContainer) {
$isdir = $true
$share_info = Get-CIMInstance -Class Win32_Share -Filter "Path='$($file.Fullname -replace '\\', '\\')'"
if ($null -ne $share_info) {
$isshared = $true
$file_stat.sharename = $share_info.Name
}
# only get the size of a directory if there are files (not directories) inside the folder
# Get-ChildItem -LiteralPath does not work properly on older OS', use .NET instead
$dir_files = @()
try {
$dir_files = $file.EnumerateFiles("*", [System.IO.SearchOption]::AllDirectories)
} catch [System.IO.DirectoryNotFoundException] { # Broken ReparsePoint/Symlink, cannot enumerate
} catch [System.UnauthorizedAccessException] {} # No ListDirectory permissions, Get-ChildItem ignored this
$size = 0
foreach ($dir_file in $dir_files) {
$size += $dir_file.Length
}
$file_stat.size = $size
} else {
$file_stat.size = $file.length
$file_stat.extension = $file.Extension
if ($get_checksum) {
try {
$checksum = Get-FileChecksum -path $path -algorithm $checksum_algorithm
$file_stat.checksum = $checksum
} catch {
throw "failed to get checksum for file $($file.FullName)"
}
}
}
$file_stat.islnk = $islnk
$file_stat.isdir = $isdir
$file_stat.isshared = $isshared
Assert-FileStat -info $file_stat
}
Function Get-FilesInFolder($path) {
$items = @()
# Get-ChildItem -LiteralPath can bomb out on older OS', use .NET instead
$dir = New-Object -TypeName System.IO.DirectoryInfo -ArgumentList $path
$dir_files = @()
try {
$dir_files = $dir.EnumerateFileSystemInfos("*", [System.IO.SearchOption]::TopDirectoryOnly)
} catch [System.IO.DirectoryNotFoundException] { # Broken ReparsePoint/Symlink, cannot enumerate
} catch [System.UnauthorizedAccessException] {} # No ListDirectory permissions, Get-ChildItem ignored this
foreach ($item in $dir_files) {
if ($item -is [System.IO.DirectoryInfo] -and $recurse) {
if (($item.Attributes -like '*ReparsePoint*' -and $follow) -or ($item.Attributes -notlike '*ReparsePoint*')) {
# File is a link and we want to follow a link OR file is not a link
$items += $item.FullName
$items += Get-FilesInFolder -path $item.FullName
} else {
# File is a link but we don't want to follow a link
$items += $item.FullName
}
} else {
$items += $item.FullName
}
}
$items
}
$paths_to_check = @()
foreach ($path in $paths) {
if (Test-Path -LiteralPath $path) {
if ((Get-Item -LiteralPath $path -Force).PSIsContainer) {
$paths_to_check += Get-FilesInFolder -path $path
} else {
Fail-Json $result "Argument path $path is a file not a directory"
}
} else {
Fail-Json $result "Argument path $path does not exist cannot get information on"
}
}
$paths_to_check = $paths_to_check | Select-Object -Unique | Sort-Object
foreach ($path in $paths_to_check) {
try {
$file = Get-Item -LiteralPath $path -Force
$info = Get-FileStat -file $file
} catch {
Add-Warning -obj $result -message "win_find failed to check some files, these files were ignored and will not be part of the result output"
break
}
$new_examined = $result.examined + 1
$result.examined = $new_examined
if ($info -ne $false) {
$files = $result.Files
$files += $info
$new_matched = $result.matched + 1
$result.matched = $new_matched
$result.files = $files
}
}
Exit-Json $result
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,511 |
win_find module not returning deduplicated files
|
##### SUMMARY
Using the win_find module against a Windows share, it does not return some files, in this case .exe files.
The .exe files are deduplicated; looking at their properties in Windows Explorer, they appear as:
'Size: 100MB'
'Size on disk: 0 bytes'
win_find successfully returns other (non-deduplicated) files from the same directory, and it reports the correct number of files in the directory (a count that includes the deduplicated .exe files).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_find.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /root/playbooks/ansible-venv/lib/python2.7/site-packages/ansible
executable location = /root/playbooks/ansible-venv/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 12:19:05) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ansible Server: RHEL7
Target Host: Win20126
Share Server: Win2016
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Get list of exes from share with deduplication
debugger: always
win_find:
paths: \\server\share\
patterns: '*.exe' # When uncommented this returns no files from the share.
# When commented it returns everything but the deduplicated files (exes in this case)
# The debug output then shows the list of files found plus; "examined": 29, "matched": 13
# Still missing the exe files from the output...
user_name: user
user_password: pass
register: list_of_exes
become: true
become_method: runas
vars:
ansible_become_user: '{{ ansible_user }}'
ansible_become_pass: '{{ ansible_password }}'
- debug:
var: list_of_exes
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Would expect the results to include the deduplicated files
```
TASK [debug] ********************************************************************************************************
ok: [169.254.0.1] => {
"list_of_exes": {
"changed": false,
"examined": 29,
"failed": false,
"files": [...],
"matched": 13
}
}
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Playbook executes ok but does not return the deduplicated files.
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [debug] ********************************************************************************************************
ok: [169.254.0.1] => {
"list_of_exes": {
"changed": false,
"examined": 29,
"failed": false,
"files": [],
"matched": 0
}
}
```
|
https://github.com/ansible/ansible/issues/58511
|
https://github.com/ansible/ansible/pull/58680
|
8def67939dbd5dbba84fe160f3ad187c76ebe63a
|
99796dfa87731b5dc210e7649bfe09664c99ffcd
| 2019-06-28T16:17:37Z |
python
| 2019-09-16T00:02:05Z |
lib/ansible/modules/windows/win_find.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2016, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# this is a windows documentation stub. actual code lives in the .ps1
# file of the same name
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: win_find
version_added: "2.3"
short_description: Return a list of files based on specific criteria
description:
- Return a list of files based on specified criteria.
- Multiple criteria are AND'd together.
- For non-Windows targets, use the M(find) module instead.
options:
age:
description:
- Select files or folders whose age is equal to or greater than
the specified time.
- Use a negative age to find files equal to or less than
the specified time.
- You can choose seconds, minutes, hours, days or weeks
by specifying the first letter of any of
those words (e.g., "2s", "10d", "1w").
type: str
age_stamp:
description:
- Choose the file property against which we compare C(age).
- The default attribute we compare with is the last modification time.
type: str
choices: [ atime, ctime, mtime ]
default: mtime
checksum_algorithm:
description:
- Algorithm to determine the checksum of a file.
- Will throw an error if the host is unable to use specified algorithm.
type: str
choices: [ md5, sha1, sha256, sha384, sha512 ]
default: sha1
file_type:
description: Type of file to search for.
type: str
choices: [ directory, file ]
default: file
follow:
description:
- Set this to C(yes) to follow symlinks in the path.
- This needs to be used in conjunction with C(recurse).
type: bool
default: no
get_checksum:
description:
- Whether to return a checksum of the file in the return info (default sha1),
use C(checksum_algorithm) to change from the default.
type: bool
default: yes
hidden:
description: Set this to include hidden files or folders.
type: bool
default: no
paths:
description:
- List of paths of directories to search for files or folders in.
- This can be supplied as a single path or a list of paths.
type: list
required: yes
patterns:
description:
- One or more (powershell or regex) patterns to compare filenames with.
- The type of pattern matching is controlled by C(use_regex) option.
- The patterns restrict the list of files or folders to be returned based on the filenames.
- For a file to be matched it only has to match with one pattern in a list provided.
type: list
aliases: [ "regex", "regexp" ]
recurse:
description:
- Will recursively descend into the directory looking for files or folders.
type: bool
default: no
size:
description:
- Select files or folders whose size is equal to or greater than the specified size.
- Use a negative value to find files equal to or less than the specified size.
- You can specify the size with a suffix of the byte type i.e. kilo = k, mega = m...
- Size is not evaluated for symbolic links.
type: str
use_regex:
description:
- Will set patterns to run as a regex check if set to C(yes).
type: bool
default: no
author:
- Jordan Borean (@jborean93)
'''
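The suffix conventions documented for C(age) and C(size) above can be illustrated with a small parser. This is a hedged Python sketch of the documented behavior only, not the module's actual (PowerShell) implementation; the unit tables and function names are my own.

```python
# Illustrative parser for the documented suffix forms: durations such as
# "2s", "10d", "1w" and byte sizes such as "1k", "1m", "1g". A leading
# minus sign selects the "equal to or less than" comparison, so it must
# be preserved in the parsed value.

DURATION_UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}
SIZE_UNITS = {"b": 1, "k": 1024, "m": 1024 ** 2, "g": 1024 ** 3, "t": 1024 ** 4}

def parse_with_suffix(value, units):
    """Turn a string like '10d' or '-1g' into an integer in base units."""
    value = value.strip().lower()
    if value and value[-1] in units:
        return int(value[:-1]) * units[value[-1]]
    return int(value)  # bare number: already seconds / bytes

def parse_age(value):
    return parse_with_suffix(value, DURATION_UNITS)

def parse_size(value):
    return parse_with_suffix(value, SIZE_UNITS)
```

The same helper serves both options because only the unit table differs.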
EXAMPLES = r'''
- name: Find files in path
win_find:
paths: D:\Temp
- name: Find hidden files in path
win_find:
paths: D:\Temp
hidden: yes
- name: Find files in multiple paths
win_find:
paths:
- C:\Temp
- D:\Temp
- name: Find files in directory while searching recursively
win_find:
paths: D:\Temp
recurse: yes
- name: Find files in directory while following symlinks
win_find:
paths: D:\Temp
recurse: yes
follow: yes
- name: Find files with .log and .out extension using powershell wildcards
win_find:
paths: D:\Temp
patterns: [ '*.log', '*.out' ]
- name: Find files in path based on regex pattern
win_find:
paths: D:\Temp
patterns: out_\d{8}-\d{6}.log
- name: Find files older than 1 day
win_find:
paths: D:\Temp
age: 86400
- name: Find files older than 1 day based on create time
win_find:
paths: D:\Temp
age: 86400
age_stamp: ctime
- name: Find files older than 1 day with unit syntax
win_find:
paths: D:\Temp
age: 1d
- name: Find files newer than 1 hour
win_find:
paths: D:\Temp
age: -3600
- name: Find files newer than 1 hour with unit syntax
win_find:
paths: D:\Temp
age: -1h
- name: Find files larger than 1MB
win_find:
paths: D:\Temp
size: 1048576
- name: Find files larger than 1GB with unit syntax
win_find:
paths: D:\Temp
size: 1g
- name: Find files smaller than 1MB
win_find:
paths: D:\Temp
size: -1048576
- name: Find files smaller than 1GB with unit syntax
win_find:
paths: D:\Temp
size: -1g
- name: Find folders/symlinks in multiple paths
win_find:
paths:
- C:\Temp
- D:\Temp
file_type: directory
- name: Find files and return SHA256 checksum of files found
win_find:
paths: C:\Temp
get_checksum: yes
checksum_algorithm: sha256
- name: Find files and do not return the checksum
win_find:
paths: C:\Temp
get_checksum: no
'''
RETURN = r'''
examined:
description: The number of files/folders that were checked.
returned: always
type: int
sample: 10
matched:
description: The number of files/folders that match the criteria.
returned: always
type: int
sample: 2
files:
description: Information on the files/folders that match the criteria returned as a list of dictionary elements
for each file matched. The entries are sorted by the path value alphabetically.
returned: success
type: complex
contains:
attributes:
description: attributes of the file at path in raw form.
returned: success, path exists
type: str
sample: "Archive, Hidden"
checksum:
description: The checksum of a file based on checksum_algorithm specified.
returned: success, path exists, path is a file, get_checksum == True
type: str
sample: 09cb79e8fc7453c84a07f644e441fd81623b7f98
creationtime:
description: The create time of the file represented in seconds since epoch.
returned: success, path exists
type: float
sample: 1477984205.15
extension:
description: The extension of the file at path.
returned: success, path exists, path is a file
type: str
sample: ".ps1"
filename:
description: The name of the file.
returned: success, path exists
type: str
sample: temp
isarchive:
description: If the path is ready for archiving or not.
returned: success, path exists
type: bool
sample: true
isdir:
description: If the path is a directory or not.
returned: success, path exists
type: bool
sample: true
ishidden:
description: If the path is hidden or not.
returned: success, path exists
type: bool
sample: true
islnk:
description: If the path is a symbolic link or junction or not.
returned: success, path exists
type: bool
sample: true
isreadonly:
description: If the path is read only or not.
returned: success, path exists
type: bool
sample: true
isshared:
description: If the path is shared or not.
returned: success, path exists
type: bool
sample: true
lastaccesstime:
description: The last access time of the file represented in seconds since epoch.
returned: success, path exists
type: float
sample: 1477984205.15
lastwritetime:
description: The last modification time of the file represented in seconds since epoch.
returned: success, path exists
type: float
sample: 1477984205.15
lnk_source:
description: The target of the symbolic link, will return null if not a link or the link is broken.
returned: success, path exists, path is a symbolic link
type: str
sample: C:\temp
owner:
description: The owner of the file.
returned: success, path exists
type: str
sample: BUILTIN\Administrators
path:
description: The full absolute path to the file.
returned: success, path exists
type: str
sample: C:\Temp\file.txt
sharename:
description: The name of share if folder is shared.
returned: success, path exists, path is a directory and isshared == True
type: str
sample: file-share
size:
description: The size in bytes of a file or folder.
returned: success, path exists, path is not a link
type: int
sample: 1024
'''
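As a usage illustration, here is a hedged Python sketch of post-processing a result shaped like the RETURN block above (for example, a registered `win_find` result). The field names come from the documentation; the sample payload values are invented.

```python
import datetime

# Invented sample payload with the documented shape of a win_find result.
result = {
    "examined": 29,
    "matched": 2,
    "files": [
        {"path": r"C:\Temp\b.log", "size": 2048, "lastwritetime": 1477984205.15},
        {"path": r"C:\Temp\a.log", "size": 1024, "lastwritetime": 1477984100.0},
    ],
}

# Entries are documented as sorted by path; sort defensively anyway.
files = sorted(result["files"], key=lambda f: f["path"])
total_size = sum(f["size"] for f in files)

# Timestamps are documented as seconds since the epoch, so they convert
# directly to datetime objects.
newest = max(files, key=lambda f: f["lastwritetime"])
newest_when = datetime.datetime.fromtimestamp(
    newest["lastwritetime"], tz=datetime.timezone.utc
)
```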
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,302 |
win_format fails on second run
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When using the win_format module a run of the playbook will fail if the filesystem has files in it. The module is failing with the error: `Force format must be specified to format non-pristine volumes`. Because the module fails it cannot be used in an idempotent manner. This prevents a playbook from running successfully if there are files in a previously formatted filesystem.
I don't see why the module cares if there are files present in the filesystem if all of the format options (fstype, allocation_unit_size, etc.) are the same.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
win_format
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /home/bevans/Documents/consulting_engagements/mt_bank/2019-06_ansible_windows/ansible.cfg
configured module search path = ['/home/bevans/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.3 (default, May 11 2019, 00:38:04) [GCC 9.1.1 20190503 (Red Hat 9.1.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS is Windows 10
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. Run the playbook to format an unformatted partition
2. Create a file on the newly formatted partition
3. Re-run the playbook
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: all
gather_facts: false
tasks:
- win_partition:
disk_number: 1
drive_letter: F
partition_size: -1
- win_format:
drive_letter: F
file_system: NTFS
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The first run of the playbook results in the task being changed. On the second run the task should be ok.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The first run of the playbook successfully formatted the partition, but the second run (after files have been created on the filesystem) fails.
<!--- Paste verbatim command output between quotes -->
```paste below
$ ansible-playbook t3.yml
PLAY [win10] *********************************************************************************************************************************
TASK [win_partition] *************************************************************************************************************************
changed: [win10]
TASK [win_format] ****************************************************************************************************************************
changed: [win10]
PLAY RECAP ***********************************************************************************************************************************
win10 : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$ ansible-playbook t3.yml
PLAY [win10] *********************************************************************************************************************************
TASK [win_partition] *************************************************************************************************************************
ok: [win10]
TASK [win_format] ****************************************************************************************************************************
fatal: [win10]: FAILED! => {"changed": false, "msg": "Force format must be specified to format non-pristine volumes"}
PLAY RECAP ***********************************************************************************************************************************
win10 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/58302
|
https://github.com/ansible/ansible/pull/59819
|
c7662d8b2f0d1fd954b8026442af3ef1497e99d5
|
74a3eec1d9afc3911a6c41efb11cf6f8a3474796
| 2019-06-24T20:06:58Z |
python
| 2019-09-16T02:45:44Z |
changelogs/fragments/win_format-Idem-not-working-if-file-exist-but-same-fs.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,302 |
win_format fails on second run
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When using the win_format module a run of the playbook will fail if the filesystem has files in it. The module is failing with the error: `Force format must be specified to format non-pristine volumes`. Because the module fails it cannot be used in an idempotent manner. This prevents a playbook from running successfully if there are files in a previously formatted filesystem.
I don't see why the module cares if there are files present in the filesystem if all of the format options (fstype, allocation_unit_size, etc.) are the same.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
win_format
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /home/bevans/Documents/consulting_engagements/mt_bank/2019-06_ansible_windows/ansible.cfg
configured module search path = ['/home/bevans/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.3 (default, May 11 2019, 00:38:04) [GCC 9.1.1 20190503 (Red Hat 9.1.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS is Windows 10
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. Run the playbook to format an unformatted partition
2. Create a file on the newly formatted partition
3. Re-run the playbook
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: all
gather_facts: false
tasks:
- win_partition:
disk_number: 1
drive_letter: F
partition_size: -1
- win_format:
drive_letter: F
file_system: NTFS
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The first run of the playbook results in the task being changed. On the second run the task should be ok.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The first run of the playbook successfully formatted the partition, but the second run (after files have been created on the filesystem) fails.
<!--- Paste verbatim command output between quotes -->
```paste below
$ ansible-playbook t3.yml
PLAY [win10] *********************************************************************************************************************************
TASK [win_partition] *************************************************************************************************************************
changed: [win10]
TASK [win_format] ****************************************************************************************************************************
changed: [win10]
PLAY RECAP ***********************************************************************************************************************************
win10 : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$ ansible-playbook t3.yml
PLAY [win10] *********************************************************************************************************************************
TASK [win_partition] *************************************************************************************************************************
ok: [win10]
TASK [win_format] ****************************************************************************************************************************
fatal: [win10]: FAILED! => {"changed": false, "msg": "Force format must be specified to format non-pristine volumes"}
PLAY RECAP ***********************************************************************************************************************************
win10 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/58302
|
https://github.com/ansible/ansible/pull/59819
|
c7662d8b2f0d1fd954b8026442af3ef1497e99d5
|
74a3eec1d9afc3911a6c41efb11cf6f8a3474796
| 2019-06-24T20:06:58Z |
python
| 2019-09-16T02:45:44Z |
lib/ansible/modules/windows/win_format.ps1
|
#!powershell
# Copyright: (c) 2019, Varun Chopra (@chopraaa) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#AnsibleRequires -CSharpUtil Ansible.Basic
#AnsibleRequires -OSVersion 6.2
Set-StrictMode -Version 2
$ErrorActionPreference = "Stop"
$spec = @{
options = @{
drive_letter = @{ type = "str" }
path = @{ type = "str" }
label = @{ type = "str" }
new_label = @{ type = "str" }
file_system = @{ type = "str"; choices = "ntfs", "refs", "exfat", "fat32", "fat" }
allocation_unit_size = @{ type = "int" }
large_frs = @{ type = "bool" }
full = @{ type = "bool"; default = $false }
compress = @{ type = "bool" }
integrity_streams = @{ type = "bool" }
force = @{ type = "bool"; default = $false }
}
mutually_exclusive = @(
,@('drive_letter', 'path', 'label')
)
required_one_of = @(
,@('drive_letter', 'path', 'label')
)
supports_check_mode = $true
}
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)
$drive_letter = $module.Params.drive_letter
$path = $module.Params.path
$label = $module.Params.label
$new_label = $module.Params.new_label
$file_system = $module.Params.file_system
$allocation_unit_size = $module.Params.allocation_unit_size
$large_frs = $module.Params.large_frs
$full_format = $module.Params.full
$compress_volume = $module.Params.compress
$integrity_streams = $module.Params.integrity_streams
$force_format = $module.Params.force
# Some pre-checks
if ($null -ne $drive_letter -and $drive_letter -notmatch "^[a-zA-Z]$") {
$module.FailJson("The parameter drive_letter should be a single character A-Z")
}
if ($integrity_streams -eq $true -and $file_system -ne "refs") {
$module.FailJson("Integrity streams can be enabled only on ReFS volumes. You specified: $($file_system)")
}
if ($compress_volume -eq $true) {
if ($file_system -eq "ntfs") {
if ($null -ne $allocation_unit_size -and $allocation_unit_size -gt 4096) {
$module.FailJson("NTFS compression is not supported for allocation unit sizes above 4096")
}
}
else {
$module.FailJson("Compression can be enabled only on NTFS volumes. You specified: $($file_system)")
}
}
function Get-AnsibleVolume {
param(
$DriveLetter,
$Path,
$Label
)
if ($null -ne $DriveLetter) {
try {
$volume = Get-Volume -DriveLetter $DriveLetter
} catch {
$module.FailJson("There was an error retrieving the volume using drive_letter $($DriveLetter): $($_.Exception.Message)", $_)
}
}
elseif ($null -ne $Path) {
try {
$volume = Get-Volume -Path $Path
} catch {
$module.FailJson("There was an error retrieving the volume using path $($Path): $($_.Exception.Message)", $_)
}
}
elseif ($null -ne $Label) {
try {
$volume = Get-Volume -FileSystemLabel $Label
} catch {
$module.FailJson("There was an error retrieving the volume using label $($Label): $($_.Exception.Message)", $_)
}
}
else {
$module.FailJson("Unable to locate volume: drive_letter, path and label were not specified")
}
return $volume
}
function Format-AnsibleVolume {
param(
$Path,
$Label,
$FileSystem,
$Full,
$UseLargeFRS,
$Compress,
$SetIntegrityStreams
)
$parameters = @{
Path = $Path
Full = $Full
}
if ($null -ne $UseLargeFRS) {
$parameters.Add("UseLargeFRS", $UseLargeFRS)
}
if ($null -ne $SetIntegrityStreams) {
$parameters.Add("SetIntegrityStreams", $SetIntegrityStreams)
}
if ($null -ne $Compress){
$parameters.Add("Compress", $Compress)
}
if ($null -ne $Label) {
$parameters.Add("NewFileSystemLabel", $Label)
}
if ($null -ne $FileSystem) {
$parameters.Add("FileSystem", $FileSystem)
}
Format-Volume @parameters -Confirm:$false | Out-Null
}
$ansible_volume = Get-AnsibleVolume -DriveLetter $drive_letter -Path $path -Label $label
$ansible_file_system = $ansible_volume.FileSystem
$ansible_volume_size = $ansible_volume.Size
$ansible_partition = Get-Partition -Volume $ansible_volume
$pristine = $false
foreach ($access_path in $ansible_partition.AccessPaths) {
if ($access_path -ne $Path) {
$files_in_volume = (Get-ChildItem -LiteralPath $access_path -ErrorAction SilentlyContinue | Measure-Object).Count
if (-not $force_format -and $files_in_volume -gt 0) {
$module.FailJson("Force format must be specified to format non-pristine volumes")
} else {
if (-not $force_format -and
$null -ne $file_system -and
-not [string]::IsNullOrEmpty($ansible_file_system) -and
$file_system -ne $ansible_file_system) {
$module.FailJson("Force format must be specified since target file system: $($file_system) is different from the current file system of the volume: $($ansible_file_system.ToLower())")
} else {
$pristine = $true
}
}
}
}
if ($force_format) {
if (-not $module.CheckMode) {
Format-AnsibleVolume -Path $ansible_volume.Path -Full $full_format -Label $new_label -FileSystem $file_system -SetIntegrityStreams $integrity_streams -UseLargeFRS $large_frs -Compress $compress_volume
}
$module.Result.changed = $true
}
else {
if ($pristine) {
if ($null -eq $new_label) {
$new_label = $ansible_volume.FileSystemLabel
}
# Conditions for formatting
if ($ansible_volume_size -eq 0 -or
$ansible_volume.FileSystemLabel -ne $new_label) {
if (-not $module.CheckMode) {
Format-AnsibleVolume -Path $ansible_volume.Path -Full $full_format -Label $new_label -FileSystem $file_system -SetIntegrityStreams $integrity_streams -UseLargeFRS $large_frs -Compress $compress_volume
}
$module.Result.changed = $true
}
}
}
$module.ExitJson()
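The format/no-format decision implemented above can be restated compactly. This is an illustrative Python summary of the PowerShell branching (force always formats; otherwise only a pristine volume that has no file system yet, or that needs a new label, is formatted) — it is not code from the module.

```python
def should_format(force, pristine, volume_size, current_label, new_label):
    """Mirror the module's decision: force wins; otherwise only pristine
    volumes are touched, and only when unformatted (size reported as 0)
    or when the requested label differs from the current one."""
    if force:
        return True
    if not pristine:
        return False
    # new_label defaults to the existing label when not supplied.
    effective_label = current_label if new_label is None else new_label
    return volume_size == 0 or current_label != effective_label
```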
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,302 |
win_format fails on second run
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When using the win_format module a run of the playbook will fail if the filesystem has files in it. The module is failing with the error: `Force format must be specified to format non-pristine volumes`. Because the module fails it cannot be used in an idempotent manner. This prevents a playbook from running successfully if there are files in a previously formatted filesystem.
I don't see why the module cares if there are files present in the filesystem if all of the format options (fstype, allocation_unit_size, etc.) are the same.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
win_format
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /home/bevans/Documents/consulting_engagements/mt_bank/2019-06_ansible_windows/ansible.cfg
configured module search path = ['/home/bevans/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.3 (default, May 11 2019, 00:38:04) [GCC 9.1.1 20190503 (Red Hat 9.1.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS is Windows 10
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. Run the playbook to format an unformatted partition
2. Create a file on the newly formatted partition
3. Re-run the playbook
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: all
gather_facts: false
tasks:
- win_partition:
disk_number: 1
drive_letter: F
partition_size: -1
- win_format:
drive_letter: F
file_system: NTFS
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The first run of the playbook results in the task being changed. On the second run the task should be ok.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The first run of the playbook successfully formatted the partition, but the second run (after files have been created on the filesystem) fails.
<!--- Paste verbatim command output between quotes -->
```paste below
$ ansible-playbook t3.yml
PLAY [win10] *********************************************************************************************************************************
TASK [win_partition] *************************************************************************************************************************
changed: [win10]
TASK [win_format] ****************************************************************************************************************************
changed: [win10]
PLAY RECAP ***********************************************************************************************************************************
win10 : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$ ansible-playbook t3.yml
PLAY [win10] *********************************************************************************************************************************
TASK [win_partition] *************************************************************************************************************************
ok: [win10]
TASK [win_format] ****************************************************************************************************************************
fatal: [win10]: FAILED! => {"changed": false, "msg": "Force format must be specified to format non-pristine volumes"}
PLAY RECAP ***********************************************************************************************************************************
win10 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/58302
|
https://github.com/ansible/ansible/pull/59819
|
c7662d8b2f0d1fd954b8026442af3ef1497e99d5
|
74a3eec1d9afc3911a6c41efb11cf6f8a3474796
| 2019-06-24T20:06:58Z |
python
| 2019-09-16T02:45:44Z |
test/integration/targets/win_format/tasks/tests.yml
|
---
- win_shell: $AnsiPart = Get-Partition -DriveLetter T; $AnsiVol = Get-Volume -DriveLetter T; "$($AnsiPart.Size),$($AnsiVol.Size)"
register: shell_result
- name: Assert volume size is 0 for pristine volume
assert:
that:
- shell_result.stdout | trim == "2096037888,0"
- name: Get partition access path
win_shell: (Get-Partition -DriveLetter T).AccessPaths[1]
register: shell_partition_result
- name: Try to format using mutually exclusive parameters
win_format:
drive_letter: T
path: "{{ shell_partition_result.stdout | trim }}"
register: format_mutex_result
ignore_errors: True
- assert:
that:
- format_mutex_result is failed
- 'format_mutex_result.msg == "parameters are mutually exclusive: drive_letter, path, label"'
- name: Fully format volume and assign label (check)
win_format:
drive_letter: T
new_label: Formatted
full: True
register: format_result_check
check_mode: True
- win_shell: $AnsiPart = Get-Partition -DriveLetter T; $AnsiVol = Get-Volume -DriveLetter T; "$($AnsiPart.Size),$($AnsiVol.Size),$($AnsiVol.FileSystemLabel)"
register: formatted_value_result_check
- name: Fully format volume and assign label
win_format:
drive_letter: T
new_label: Formatted
full: True
register: format_result
- win_shell: $AnsiPart = Get-Partition -DriveLetter T; $AnsiVol = Get-Volume -DriveLetter T; "$($AnsiPart.Size),$($AnsiVol.Size),$($AnsiVol.FileSystemLabel)"
register: formatted_value_result
- assert:
that:
- format_result_check is changed
- format_result is changed
- formatted_value_result_check.stdout | trim == "2096037888,0,"
- formatted_value_result.stdout | trim == "2096037888,2096033792,Formatted"
- name: Format NTFS volume with integrity streams enabled
win_format:
path: "{{ shell_partition_result.stdout | trim }}"
file_system: ntfs
integrity_streams: True
ignore_errors: True
register: ntfs_integrity_streams
- assert:
that:
- ntfs_integrity_streams is failed
- 'ntfs_integrity_streams.msg == "Integrity streams can be enabled only on ReFS volumes. You specified: ntfs"'
- name: Format volume (require force_format for specifying different file system)
win_format:
path: "{{ shell_partition_result.stdout | trim }}"
file_system: fat32
ignore_errors: True
register: require_force_format
- assert:
that:
- require_force_format is failed
- 'require_force_format.msg == "Force format must be specified since target file system: fat32 is different from the current file system of the volume: ntfs"'
- name: Format volume (forced) (check)
win_format:
path: "{{ shell_partition_result.stdout | trim }}"
file_system: refs
force: True
check_mode: True
ignore_errors: True
register: not_pristine_forced_check
- name: Format volume (forced)
win_format:
path: "{{ shell_partition_result.stdout | trim }}"
file_system: refs
force: True
register: not_pristine_forced
- name: Format volume (forced) (idempotence will not work)
win_format:
path: "{{ shell_partition_result.stdout | trim }}"
file_system: refs
force: True
register: not_pristine_forced_idem_fails
- name: Format volume (idempotence)
win_format:
path: "{{ shell_partition_result.stdout | trim }}"
file_system: refs
register: not_pristine_forced_idem
- assert:
that:
- not_pristine_forced_check is changed
- not_pristine_forced is changed
- not_pristine_forced_idem_fails is changed
- not_pristine_forced_idem is not changed
- name: Add a file
win_file:
path: T:\path\to\directory
state: directory
register: add_file_to_volume
- name: Format volume with file inside without force
win_format:
path: "{{ shell_partition_result.stdout | trim }}"
register: format_volume_without_force
ignore_errors: True
- name: Format volume with file inside with force
win_format:
path: "{{ shell_partition_result.stdout | trim }}"
force: True
register: format_volume_with_force
- assert:
that:
- add_file_to_volume is changed
- format_volume_without_force is failed
- 'format_volume_without_force.msg == "Force format must be specified to format non-pristine volumes"'
- format_volume_with_force is changed
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,513 |
hostvars not set with implicit localhost
|
##### SUMMARY
Consider the following playbook:
```yaml
- hosts: localhost
connection: local
tasks:
- debug:
msg: '{{ hostvars.values() | first }}'
```
If I run it with an explicitly empty inventory like
```yaml
all:
hosts: {}
```
I get the following (where I see the warning about implicit localhost)
```
$ ./ansible-env/bin/ansible-playbook -i inventory.yaml ./test.yaml
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] *******************************************************************
TASK [Gathering Facts] *************************************************************
ok: [localhost]
TASK [debug] ***********************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: No first item, sequence was empty.\n\nThe error appears to be in '/home/iwienand/tmp/test.yaml': line 5, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - debug:\n ^ here\n"}
```
However, if I change the inventory to
```yaml
all:
hosts:
localhost:
vars:
ansible_connection: local
```
(or, indeed run it via ```$ ./ansible-env/bin/ansible-playbook -i 'localhost,' ./test.yaml```) then accessing ```hostvars``` works OK.
Ergo, it seems that ```hostvars``` isn't defined when using the implicit ```localhost```? I did not see that explicitly mentioned in https://docs.ansible.com/ansible/latest/inventory/implicit_localhost.html but perhaps that is what "not targetable via any group" means?
I'd be happy for advice if this is unexpected behaviour, or perhaps I could clarify the documentation further.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
##### ANSIBLE VERSION
```
ansible 2.8.4
config file = None
configured module search path = ['/home/iwienand/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/iwienand/tmp/ansible-env/lib/python3.7/site-packages/ansible
executable location = ./ansible-env/bin/ansible
python version = 3.7.4 (default, Jul 9 2019, 16:32:37) [GCC 9.1.1 20190503 (Red Hat 9.1.1-1)]
```
|
https://github.com/ansible/ansible/issues/61513
|
https://github.com/ansible/ansible/pull/61956
|
7a36606574168777e1e41708cdc257ef6ad18569
|
b1afb37ac92645c7a2017047fa21f4ce1e2020d8
| 2019-08-29T06:05:47Z |
python
| 2019-09-16T18:00:29Z |
docs/docsite/rst/inventory/implicit_localhost.rst
|
:orphan:
.. _implicit_localhost:
Implicit 'localhost'
====================
When you try to reference a ``localhost`` and you don't have it defined in inventory, Ansible will create an implicit one for you.::
- hosts: all
tasks:
- name: check that i have log file for all hosts on my local machine
stat: path=/var/log/hosts/{{inventory_hostname}}.log
delegate_to: localhost
In a case like this (or ``local_action``) when Ansible needs to contact a 'localhost' but you did not supply one, we create one for you. This host is defined with specific connection variables equivalent to this in an inventory::
...
hosts:
localhost:
vars:
ansible_connection: local
ansible_python_interpreter: "{{ansible_playbook_python}}"
This ensures that the proper connection and Python are used to execute your tasks locally.
You can override the built-in implicit version by creating a ``localhost`` host entry in your inventory. At that point, all implicit behaviors are ignored; the ``localhost`` in inventory is treated just like any other host. Group and host vars will apply, including connection vars, which includes the ``ansible_python_interpreter`` setting. This will also affect ``delegate_to: localhost`` and ``local_action``, the latter being an alias to the former.
.. note::
- This host is not targetable via any group, however it will use vars from ``host_vars`` and from the 'all' group.
- The ``inventory_file`` and ``inventory_dir`` magic variables are not available for the implicit localhost as they are dependent on **each inventory host**.
- This implicit host also gets triggered by using ``127.0.0.1`` or ``::1`` as they are the IPv4 and IPv6 representations of 'localhost'.
- Even though there are many ways to create it, there will only ever be ONE implicit localhost, using the name first used to create it.
- Having ``connection: local`` does NOT trigger an implicit localhost, you are just changing the connection for the ``inventory_hostname``.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,396 |
ios_l3_interfaces can't do round trip (gather_facts->data model->push back to device) b/c of cidr / notation
|
##### SUMMARY
When you gather facts for ios_l3_interfaces,
it reports the full dotted subnet mask (e.g. 255.255.255.0 instead of /24):
```
l3_interfaces:
- ipv4:
- address: 192.168.1.101 255.255.255.0
name: loopback0
- ipv4:
- address: 10.1.1.101 255.255.255.0
name: loopback1
```
but if you push this gathered data model back to the device, it fails with:
```
msg: address format is <ipv4 address>/<mask>, got invalid format 10.10.10.1 255.255.255.0
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ios_l3_interfaces
##### ANSIBLE VERSION
```paste below
ansible 2.9.0.dev0
config file = /home/student1/.ansible.cfg
configured module search path = [u'/home/student1/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/student1/.local/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
```paste below
DEFAULT_HOST_LIST(/home/student1/.ansible.cfg) = [u'/home/student1/networking-workshop/lab_inventory/hosts']
DEFAULT_STDOUT_CALLBACK(/home/student1/.ansible.cfg) = yaml
DEFAULT_TIMEOUT(/home/student1/.ansible.cfg) = 60
DEPRECATION_WARNINGS(/home/student1/.ansible.cfg) = False
HOST_KEY_CHECKING(/home/student1/.ansible.cfg) = False
PERSISTENT_COMMAND_TIMEOUT(/home/student1/.ansible.cfg) = 60
PERSISTENT_CONNECT_TIMEOUT(/home/student1/.ansible.cfg) = 60
RETRY_FILES_ENABLED(/home/student1/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
```
NAME="Red Hat Enterprise Linux Server"
VERSION="7.6 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.6"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.6 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.6:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.6
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.6"
Red Hat Enterprise Linux Server release 7.6 (Maipo)
Red Hat Enterprise Linux Server release 7.6 (Maipo)
```
##### STEPS TO REPRODUCE
```yaml
---
- hosts: rtr1
gather_facts: false
tasks:
- name: grab info
ios_facts:
gather_subset: min
gather_network_resources: l3_interfaces
register: facts_for_sean
- name: print interface_info
debug:
msg: "{{ansible_network_resources}}"
- name: print facts
debug:
var: facts_for_sean
```
which results in
```
l3_interfaces:
- ipv4:
- address: 192.168.1.101 255.255.255.0
name: loopback0
- ipv4:
- address: 10.1.1.101 255.255.255.0
name: loopback1
- ipv4:
- address: 10.10.10.1 255.255.255.0
ipv6:
- address: fc00::100/64
- address: fc00::101/64
name: loopback100
- ipv4:
- address: dhcp
name: GigabitEthernet1
```
Now try to merge those:
```
- ios_l3_interfaces:
config: "{{config}}"
state: merged
```
you will get something like
```
TASK [ios_l3_interfaces] ***********************************************************************************************
fatal: [rtr1]: FAILED! => changed=false
msg: address format is <ipv4 address>/<mask>, got invalid format 10.10.10.1 255.255.255.0
```
##### EXPECTED RESULTS
The merge task should pass.
##### ACTUAL RESULTS
The task fails; the module only accepts the `/24` prefix notation.
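The fix boils down to converting between the two mask notations. A minimal standalone sketch (stdlib `ipaddress` only, not the actual Ansible helpers; function names are hypothetical) shows the round trip between the gathered `<ip> <mask>` form and the `<ip>/<prefix>` form the module expects:

```python
import ipaddress

def netmask_to_prefix(address):
    """Convert 'ip mask' (as gathered from the device) to 'ip/prefix' CIDR form."""
    ip, mask = address.split()
    return '{0}/{1}'.format(ip, ipaddress.ip_network('0.0.0.0/{0}'.format(mask)).prefixlen)

def prefix_to_netmask(address):
    """Convert 'ip/prefix' (as the module expects) back to 'ip mask' form."""
    ip, prefix = address.split('/')
    return '{0} {1}'.format(ip, ipaddress.ip_network('0.0.0.0/{0}'.format(prefix)).netmask)
```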
|
https://github.com/ansible/ansible/issues/61396
|
https://github.com/ansible/ansible/pull/61642
|
037401b6e0d5c500774e2a259909e6fa7e3f16e2
|
f9fd1f3626170a4370335648bba056a8d4913242
| 2019-08-27T16:48:01Z |
python
| 2019-09-17T08:10:32Z |
lib/ansible/module_utils/network/ios/utils/utils.py
|
#
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# utils
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.six import iteritems
from ansible.module_utils.network.common.utils import is_masklen, to_netmask
def remove_command_from_config_list(interface, cmd, commands):
# To delete the passed config
if interface not in commands:
commands.insert(0, interface)
commands.append('no %s' % cmd)
return commands
def add_command_to_config_list(interface, cmd, commands):
# To set the passed config
if interface not in commands:
commands.insert(0, interface)
commands.append(cmd)
def dict_to_set(sample_dict):
# Generate a set with passed dictionary for comparison
test_dict = dict()
if isinstance(sample_dict, dict):
for k, v in iteritems(sample_dict):
if v is not None:
if isinstance(v, list):
if isinstance(v[0], dict):
li = []
for each in v:
for key, value in iteritems(each):
if isinstance(value, list):
each[key] = tuple(value)
li.append(tuple(iteritems(each)))
v = tuple(li)
else:
v = tuple(v)
elif isinstance(v, dict):
li = []
for key, value in iteritems(v):
if isinstance(value, list):
v[key] = tuple(value)
li.extend(tuple(iteritems(v)))
v = tuple(li)
test_dict.update({k: v})
return_set = set(tuple(iteritems(test_dict)))
else:
return_set = set(sample_dict)
return return_set
def filter_dict_having_none_value(want, have):
# Generate dict with have dict value which is None in want dict
test_dict = dict()
test_key_dict = dict()
name = want.get('name')
if name:
test_dict['name'] = name
diff_ip = False
want_ip = ''
for k, v in iteritems(want):
if isinstance(v, dict):
for key, value in iteritems(v):
if value is None:
dict_val = have.get(k).get(key)
test_key_dict.update({key: dict_val})
test_dict.update({k: test_key_dict})
if isinstance(v, list):
for key, value in iteritems(v[0]):
if value is None:
dict_val = have.get(k).get(key)
test_key_dict.update({key: dict_val})
test_dict.update({k: test_key_dict})
# below conditions checks are added to check if
# secondary IP is configured, if yes then delete
# the already configured IP if want and have IP
# is different else if it's same no need to delete
for each in v:
if each.get('secondary'):
want_ip = each.get('address').split('/')
have_ip = have.get('ipv4')
if len(want_ip) > 1 and have_ip and have_ip[0].get('secondary'):
have_ip = have_ip[0]['address'].split(' ')[0]
if have_ip != want_ip[0]:
diff_ip = True
if each.get('secondary') and diff_ip is True:
test_key_dict.update({'secondary': True})
test_dict.update({'ipv4': test_key_dict})
if v is None:
val = have.get(k)
test_dict.update({k: val})
return test_dict
def remove_duplicate_interface(commands):
# Remove duplicate interface from commands
set_cmd = []
for each in commands:
if 'interface' in each:
if each not in set_cmd:
set_cmd.append(each)
else:
set_cmd.append(each)
return set_cmd
def validate_ipv4(value, module):
if value:
address = value.split('/')
if len(address) != 2:
module.fail_json(msg='address format is <ipv4 address>/<mask>, got invalid format {0}'.format(value))
if not is_masklen(address[1]):
module.fail_json(msg='invalid value for mask: {0}, mask should be in range 0-32'.format(address[1]))
def validate_ipv6(value, module):
if value:
address = value.split('/')
if len(address) != 2:
module.fail_json(msg='address format is <ipv6 address>/<mask>, got invalid format {0}'.format(value))
else:
if not 0 <= int(address[1]) <= 128:
module.fail_json(msg='invalid value for mask: {0}, mask should be in range 0-128'.format(address[1]))
def validate_n_expand_ipv4(module, want):
# Check if input IPV4 is valid IP and expand IPV4 with its subnet mask
ip_addr_want = want.get('address')
validate_ipv4(ip_addr_want, module)
ip = ip_addr_want.split('/')
if len(ip) == 2:
ip_addr_want = '{0} {1}'.format(ip[0], to_netmask(ip[1]))
return ip_addr_want
def normalize_interface(name):
"""Return the normalized interface name
"""
if not name:
return
def _get_number(name):
digits = ''
for char in name:
if char.isdigit() or char in '/.':
digits += char
return digits
if name.lower().startswith('gi'):
if_type = 'GigabitEthernet'
elif name.lower().startswith('te'):
if_type = 'TenGigabitEthernet'
elif name.lower().startswith('fa'):
if_type = 'FastEthernet'
elif name.lower().startswith('fo'):
if_type = 'FortyGigabitEthernet'
elif name.lower().startswith('long'):
if_type = 'LongReachEthernet'
elif name.lower().startswith('et'):
if_type = 'Ethernet'
elif name.lower().startswith('vl'):
if_type = 'Vlan'
elif name.lower().startswith('lo'):
if_type = 'loopback'
elif name.lower().startswith('po'):
if_type = 'Port-channel'
elif name.lower().startswith('nv'):
if_type = 'nve'
elif name.lower().startswith('twe'):
if_type = 'TwentyFiveGigE'
elif name.lower().startswith('hu'):
if_type = 'HundredGigE'
else:
if_type = None
number_list = name.split(' ')
if len(number_list) == 2:
number = number_list[-1].strip()
else:
number = _get_number(name)
if if_type:
proper_interface = if_type + number
else:
proper_interface = name
return proper_interface
def get_interface_type(interface):
"""Gets the type of interface
"""
if interface.upper().startswith('GI'):
return 'GigabitEthernet'
elif interface.upper().startswith('TE'):
return 'TenGigabitEthernet'
elif interface.upper().startswith('FA'):
return 'FastEthernet'
elif interface.upper().startswith('FO'):
return 'FortyGigabitEthernet'
elif interface.upper().startswith('LON'):
return 'LongReachEthernet'
elif interface.upper().startswith('ET'):
return 'Ethernet'
elif interface.upper().startswith('VL'):
return 'Vlan'
elif interface.upper().startswith('LO'):
return 'loopback'
elif interface.upper().startswith('PO'):
return 'Port-channel'
elif interface.upper().startswith('NV'):
return 'nve'
elif interface.upper().startswith('TWE'):
return 'TwentyFiveGigE'
elif interface.upper().startswith('HU'):
return 'HundredGigE'
else:
return 'unknown'
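To illustrate why the gathered `<ip> <mask>` form trips `validate_n_expand_ipv4` above, here is a simplified standalone re-implementation (stdlib `ipaddress` standing in for `to_netmask`; it raises `ValueError` instead of calling `module.fail_json`):

```python
import ipaddress

def expand_ipv4(value):
    # Mirror validate_n_expand_ipv4: only '<ip>/<prefix>' is accepted,
    # and it is expanded to '<ip> <dotted mask>' for the device config.
    address = value.split('/')
    if len(address) != 2:
        raise ValueError('address format is <ipv4 address>/<mask>, got invalid format {0}'.format(value))
    if not 0 <= int(address[1]) <= 32:
        raise ValueError('invalid value for mask: {0}, mask should be in range 0-32'.format(address[1]))
    return '{0} {1}'.format(address[0], ipaddress.ip_network('0.0.0.0/{0}'.format(address[1])).netmask)
```

A space-separated address, as returned by fact gathering, has no `/`, so the split yields a single element and the validation fails.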
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,869 |
eos_system : configure domain_list using platform agnostic module random failure
|
##### SUMMARY
This is to help track a random failure we are seeing in eos testing.
https://object-storage-ca-ymq-1.vexxhost.net/v1/a0b4156a37f9453eb4ec7db5422272df/ansible_79/61779/2a3bc59e39230ad945d42da08a8aad3f91dee027/third-party-check/ansible-test-network-integration-eos-python37/31d1710/controller/ara-report/result/493a2986-7709-41bc-bbe7-5df954edb93a/
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
eos_system
##### ANSIBLE VERSION
```paste below
```
##### CONFIGURATION
```paste below
```
##### OS / ENVIRONMENT
zuul.ansible.com
##### STEPS TO REPRODUCE
```yaml
```
##### EXPECTED RESULTS
##### ACTUAL RESULTS
```paste below
Traceback (most recent call last):
File "/home/zuul/.ansible/tmp/ansible-local-8450zztissx4/ansible-tmp-1567664814.2179358-150200870655143/AnsiballZ_eos_system.py", line 116, in <module>
_ansiballz_main()
File "/home/zuul/.ansible/tmp/ansible-local-8450zztissx4/ansible-tmp-1567664814.2179358-150200870655143/AnsiballZ_eos_system.py", line 108, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/zuul/.ansible/tmp/ansible-local-8450zztissx4/ansible-tmp-1567664814.2179358-150200870655143/AnsiballZ_eos_system.py", line 54, in invoke_module
runpy.run_module(mod_name='ansible.modules.network.eos.eos_system', init_globals=None, run_name='__main__', alter_sys=False)
File "/usr/lib64/python3.7/runpy.py", line 208, in run_module
return _run_code(code, {}, init_globals, run_name, mod_spec)
File "/usr/lib64/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/ansible_eos_system_payload_lj5nlld1/ansible_eos_system_payload.zip/ansible/modules/network/eos/eos_system.py", line 339, in <module>
File "/tmp/ansible_eos_system_payload_lj5nlld1/ansible_eos_system_payload.zip/ansible/modules/network/eos/eos_system.py", line 322, in main
File "/tmp/ansible_eos_system_payload_lj5nlld1/ansible_eos_system_payload.zip/ansible/modules/network/eos/eos_system.py", line 260, in map_config_to_obj
File "/tmp/ansible_eos_system_payload_lj5nlld1/ansible_eos_system_payload.zip/ansible/module_utils/network/eos/eos.py", line 632, in get_config
File "/tmp/ansible_eos_system_payload_lj5nlld1/ansible_eos_system_payload.zip/ansible/module_utils/network/eos/eos.py", line 107, in get_connection
File "/tmp/ansible_eos_system_payload_lj5nlld1/ansible_eos_system_payload.zip/ansible/module_utils/connection.py", line 185, in __rpc__
ansible.module_utils.connection.ConnectionError: the JSON object must be str, bytes or bytearray, not list
```
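The last frame shows `json.loads` receiving a list rather than a string — i.e. the RPC layer handed over an already list-shaped response. The exception text is easy to reproduce with the stdlib alone (the helper name here is hypothetical, not part of the plugin):

```python
import json

def load_rpc_response(response_data):
    # json.loads accepts only str/bytes/bytearray; anything else raises
    # TypeError, which the connection layer surfaces as a ConnectionError.
    try:
        return json.loads(response_data)
    except TypeError as exc:
        return 'ConnectionError: {0}'.format(exc)
```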
|
https://github.com/ansible/ansible/issues/61869
|
https://github.com/ansible/ansible/pull/62350
|
5cd3be9129c81388c213bcd6b5de7741b9eee8c4
|
84d9b3e58986d70ca64539e816e342a9d7d0b888
| 2019-09-05T16:17:32Z |
python
| 2019-09-17T14:00:19Z |
lib/ansible/plugins/httpapi/eos.py
|
# (c) 2018 Red Hat Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
---
author: Ansible Networking Team
httpapi: eos
short_description: Use eAPI to run command on eos platform
description:
- This eos plugin provides low level abstraction api's for
sending and receiving CLI commands with eos network devices.
version_added: "2.6"
options:
eos_use_sessions:
type: int
default: 1
description:
- Specifies if sessions should be used on remote host or not
env:
- name: ANSIBLE_EOS_USE_SESSIONS
vars:
- name: ansible_eos_use_sessions
version_added: '2.8'
"""
import json
from ansible.module_utils._text import to_text
from ansible.module_utils.connection import ConnectionError
from ansible.module_utils.network.common.utils import to_list
from ansible.plugins.httpapi import HttpApiBase
OPTIONS = {
'format': ['text', 'json'],
'diff_match': ['line', 'strict', 'exact', 'none'],
'diff_replace': ['line', 'block', 'config'],
'output': ['text', 'json']
}
class HttpApi(HttpApiBase):
def __init__(self, *args, **kwargs):
super(HttpApi, self).__init__(*args, **kwargs)
self._device_info = None
self._session_support = None
def supports_sessions(self):
use_session = self.get_option('eos_use_sessions')
try:
use_session = int(use_session)
except ValueError:
pass
if not bool(use_session):
self._session_support = False
else:
if self._session_support:
return self._session_support
response = self.send_request('show configuration sessions')
self._session_support = 'error' not in response
return self._session_support
def send_request(self, data, **message_kwargs):
data = to_list(data)
if self._become:
self.connection.queue_message('vvvv', 'firing event: on_become')
data.insert(0, {"cmd": "enable", "input": self._become_pass})
output = message_kwargs.get('output', 'text')
request = request_builder(data, output)
headers = {'Content-Type': 'application/json-rpc'}
response, response_data = self.connection.send('/command-api', request, headers=headers, method='POST')
try:
response_data = json.loads(to_text(response_data.getvalue()))
except ValueError:
raise ConnectionError('Response was not valid JSON, got {0}'.format(
to_text(response_data.getvalue())
))
results = handle_response(response_data)
if self._become:
results = results[1:]
if len(results) == 1:
results = results[0]
return results
def get_device_info(self):
if self._device_info:
return self._device_info
device_info = {}
device_info['network_os'] = 'eos'
reply = self.send_request('show version | json')
data = json.loads(reply)
device_info['network_os_version'] = data['version']
device_info['network_os_model'] = data['modelName']
reply = self.send_request('show hostname | json')
data = json.loads(reply)
device_info['network_os_hostname'] = data['hostname']
self._device_info = device_info
return self._device_info
def get_device_operations(self):
return {
'supports_diff_replace': True,
'supports_commit': bool(self.supports_sessions()),
'supports_rollback': False,
'supports_defaults': False,
'supports_onbox_diff': bool(self.supports_sessions()),
'supports_commit_comment': False,
'supports_multiline_delimiter': False,
'supports_diff_match': True,
'supports_diff_ignore_lines': True,
'supports_generate_diff': not bool(self.supports_sessions()),
'supports_replace': bool(self.supports_sessions()),
}
def get_capabilities(self):
result = {}
result['rpc'] = []
result['device_info'] = self.get_device_info()
result['device_operations'] = self.get_device_operations()
result.update(OPTIONS)
result['network_api'] = 'eapi'
return json.dumps(result)
def handle_response(response):
if 'error' in response:
error = response['error']
error_text = []
for data in error['data']:
error_text.extend(data.get('errors', []))
error_text = '\n'.join(error_text) or error['message']
raise ConnectionError(error_text, code=error['code'])
results = []
for result in response['result']:
if 'messages' in result:
results.append(result['messages'][0])
elif 'output' in result:
results.append(result['output'].strip())
else:
results.append(json.dumps(result))
return results
def request_builder(commands, output, reqid=None):
params = dict(version=1, cmds=commands, format=output)
return json.dumps(dict(jsonrpc='2.0', id=reqid, method='runCmds', params=params))
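For reference, the envelope produced by `request_builder` above is a plain JSON-RPC 2.0 `runCmds` call; a standalone copy of the same logic (no Ansible imports) makes the wire format easy to inspect:

```python
import json

def request_builder(commands, output, reqid=None):
    # Same logic as the plugin helper: wrap the command list in a
    # JSON-RPC 2.0 'runCmds' request for eAPI.
    params = dict(version=1, cmds=commands, format=output)
    return json.dumps(dict(jsonrpc='2.0', id=reqid, method='runCmds', params=params))

request = json.loads(request_builder(['show version'], 'json'))
```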
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,414 |
hcloud_floating_ip_info: test fails (two floating IPs, one expected)
|
##### SUMMARY
`hcloud_floating_ip_info` functional test fails with the following error:
```
01:02 ok: [testhost] => {
01:02 "changed": false,
01:02 "hcloud_floating_ip_info": [
01:02 {
01:02 "description": "always-there-floating-ip",
01:02 "home_location": "fsn1",
01:02 "id": "62227",
01:02 "ip": "116.202.3.68",
01:02 "labels": {
01:02 "key": "value"
01:02 },
01:02 "server": "None",
01:02 "type": "ipv4"
01:02 },
01:02 {
01:02 "description": "another-floating-ip",
01:02 "home_location": "nbg1",
01:02 "id": "96926",
01:02 "ip": "94.130.190.147",
01:02 "labels": {},
01:02 "server": "None",
01:02 "type": "ipv4"
01:02 }
01:02 ],
01:02 "invocation": {
01:02 "module_args": {
01:02 "api_token": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
01:02 "endpoint": "https://api.hetzner.cloud/v1",
01:02 "id": null,
01:02 "label_selector": null
01:02 }
01:02 }
01:02 }
01:02
01:02 TASK [hcloud_floating_ip_info : verify test gather hcloud floating ip infos in check mode] ***
01:02 task path: /root/.ansible/test/tmp/hcloud_floating_ip_info-kl5v09bd-ÅÑŚÌβŁÈ/test/integration/targets/hcloud_floating_ip_info/tasks/main.yml:7
01:03 fatal: [testhost]: FAILED! => {
01:03 "assertion": "hcloud_floating_ips.hcloud_floating_ip_info| list | count == 1",
01:03 "changed": false,
01:03 "evaluated_to": false,
01:03 "msg": "Assertion failed"
01:03 }
```
See:
- https://app.shippable.com/github/ansible/ansible/runs/143452/145/console
- https://app.shippable.com/github/ansible/ansible/runs/143452/146/console
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
hcloud_floating_ip_info
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
|
https://github.com/ansible/ansible/issues/62414
|
https://github.com/ansible/ansible/pull/62494
|
97b15e9f0c4ddf419ca597e1b6769c29053bc06f
|
d5449eed11cc65035c0b127324f3bfa385c5c652
| 2019-09-17T13:09:54Z |
python
| 2019-09-18T11:55:05Z |
test/integration/targets/hcloud_floating_ip_info/aliases
|
cloud/hcloud
shippable/hcloud/group1
disabled # See: https://github.com/ansible/ansible/issues/62414
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,530 |
equals error message
|
##### SUMMARY
Fix the handling of the error code returned when the "equals" command is not found, so that the relevant error message is output.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
check_point
##### ANSIBLE VERSION
```paste below
```
##### CONFIGURATION
```paste below
```
##### OS / ENVIRONMENT
##### STEPS TO REPRODUCE
Run any Check Point module on a Check Point machine that does not have the "equals" command.
```yaml
```
##### EXPECTED RESULTS
Relevant hotfix is not installed on Check Point server. See sk114661 on Check Point Support Center.
##### ACTUAL RESULTS
The module does not fail; no relevant error message is shown.
```paste below
```
|
https://github.com/ansible/ansible/issues/62530
|
https://github.com/ansible/ansible/pull/62529
|
2232232b4525336ebebcb4cee683095a48ac2775
|
55f285a384e5b787e93927bce568363bd91e47d8
| 2019-09-18T15:47:22Z |
python
| 2019-09-18T16:00:42Z |
lib/ansible/module_utils/network/checkpoint/checkpoint.py
|
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# (c) 2018 Red Hat Inc.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
from __future__ import (absolute_import, division, print_function)
import time
from ansible.module_utils.connection import Connection
checkpoint_argument_spec_for_objects = dict(
auto_publish_session=dict(type='bool'),
wait_for_task=dict(type='bool', default=True),
state=dict(type='str', choices=['present', 'absent'], default='present'),
version=dict(type='str')
)
checkpoint_argument_spec_for_facts = dict(
version=dict(type='str')
)
checkpoint_argument_spec_for_commands = dict(
wait_for_task=dict(type='bool', default=True),
version=dict(type='str')
)
delete_params = ['name', 'uid', 'layer', 'exception-group-name', 'layer', 'rule-name']
# send the request to checkpoint
def send_request(connection, version, url, payload=None):
code, response = connection.send_request('/web_api/' + version + url, payload)
return code, response
# get the payload from the user parameters
def is_checkpoint_param(parameter):
if parameter == 'auto_publish_session' or \
parameter == 'state' or \
parameter == 'wait_for_task' or \
parameter == 'version':
return False
return True
# build the payload from the parameters which has value (not None), and they are parameter of checkpoint API as well
def get_payload_from_parameters(params):
payload = {}
for parameter in params:
parameter_value = params[parameter]
if parameter_value and is_checkpoint_param(parameter):
if isinstance(parameter_value, dict):
payload[parameter.replace("_", "-")] = get_payload_from_parameters(parameter_value)
elif isinstance(parameter_value, list) and len(parameter_value) != 0 and isinstance(parameter_value[0], dict):
payload_list = []
for element_dict in parameter_value:
payload_list.append(get_payload_from_parameters(element_dict))
payload[parameter.replace("_", "-")] = payload_list
else:
payload[parameter.replace("_", "-")] = parameter_value
return payload
# wait for task
def wait_for_task(module, version, connection, task_id):
task_id_payload = {'task-id': task_id}
task_complete = False
current_iteration = 0
max_num_iterations = 300
# As long as there is a task in progress
while not task_complete and current_iteration < max_num_iterations:
current_iteration += 1
# Check the status of the task
code, response = send_request(connection, version, 'show-task', task_id_payload)
attempts_counter = 0
while code != 200:
if attempts_counter < 5:
attempts_counter += 1
time.sleep(2)
code, response = send_request(connection, version, 'show-task', task_id_payload)
else:
response['message'] = "ERROR: Failed to handle asynchronous tasks as synchronous, tasks result is" \
" undefined.\n" + response['message']
module.fail_json(msg=response)
# Count the number of tasks that are not in-progress
completed_tasks = 0
for task in response['tasks']:
if task['status'] == 'failed':
module.fail_json(msg='Task {0} with task id {1} failed. Look at the logs for more details'
.format(task['task-name'], task['task-id']))
if task['status'] == 'in progress':
break
completed_tasks += 1
# Are we done? check if all tasks are completed
if completed_tasks == len(response["tasks"]):
task_complete = True
else:
time.sleep(2) # Wait for two seconds
if not task_complete:
module.fail_json(msg="ERROR: Timeout.\nTask-id: {0}.".format(task_id_payload['task-id']))
# handle publish command, and wait for it to end if the user asked so
def handle_publish(module, connection, version):
if module.params['auto_publish_session']:
publish_code, publish_response = send_request(connection, version, 'publish')
if publish_code != 200:
module.fail_json(msg=publish_response)
if module.params['wait_for_task']:
wait_for_task(module, version, connection, publish_response['task-id'])
# handle a command
def api_command(module, command):
payload = get_payload_from_parameters(module.params)
connection = Connection(module._socket_path)
# if the user specified a version, add it to the URL
version = ('v' + module.params['version'] + '/') if module.params.get('version') else ''
code, response = send_request(connection, version, command, payload)
result = {'changed': True}
if code == 200:
if module.params['wait_for_task']:
if 'task-id' in response:
wait_for_task(module, version, connection, response['task-id'])
elif 'tasks' in response:
for task_id in response['tasks']:
wait_for_task(module, version, connection, task_id)
result[command] = response
else:
module.fail_json(msg='Checkpoint device returned error {0} with message {1}'.format(code, response))
return result
# handle api call facts
def api_call_facts(module, api_call_object, api_call_object_plural_version):
payload = get_payload_from_parameters(module.params)
connection = Connection(module._socket_path)
# if the user specified a version, add it to the URL
version = ('v' + module.params['version'] + '/') if module.params['version'] else ''
# if there is neither name nor uid, the API command will be in its plural version (e.g. show-hosts instead of show-host)
if payload.get("name") is None and payload.get("uid") is None:
api_call_object = api_call_object_plural_version
code, response = send_request(connection, version, 'show-' + api_call_object, payload)
if code != 200:
module.fail_json(msg='Checkpoint device returned error {0} with message {1}'.format(code, response))
result = {api_call_object: response}
return result
# handle api call
def api_call(module, api_call_object):
payload = get_payload_from_parameters(module.params)
connection = Connection(module._socket_path)
result = {'changed': False}
if module.check_mode:
return result
# if the user specified a version, add it to the URL
version = ('v' + module.params['version'] + '/') if module.params.get('version') else ''
payload_for_equals = {'type': api_call_object, 'params': payload}
equals_code, equals_response = send_request(connection, version, 'equals', payload_for_equals)
result['checkpoint_session_uid'] = connection.get_session_uid()
# if code is 400 (bad request) or 500 (internal error) - fail
if equals_code == 400 or equals_code == 500:
module.fail_json(msg=equals_response)
if equals_code == 404 and equals_response['code'] == 'generic_err_command_not_found':
module.fail_json(msg='Relevant hotfix is not installed on Check Point server. See sk114661 on Check Point Support Center.')
if module.params['state'] == 'present':
if equals_code == 200:
if not equals_response['equals']:
code, response = send_request(connection, version, 'set-' + api_call_object, payload)
if code != 200:
module.fail_json(msg=response)
handle_publish(module, connection, version)
result['changed'] = True
result[api_call_object] = response
else:
# objects are equals and there is no need for set request
pass
elif equals_code == 404:
code, response = send_request(connection, version, 'add-' + api_call_object, payload)
if code != 200:
module.fail_json(msg=response)
handle_publish(module, connection, version)
result['changed'] = True
result[api_call_object] = response
elif module.params['state'] == 'absent':
if equals_code == 200:
payload_for_delete = get_copy_payload_with_some_params(payload, delete_params)
code, response = send_request(connection, version, 'delete-' + api_call_object, payload_for_delete)
if code != 200:
module.fail_json(msg=response)
handle_publish(module, connection, version)
result['changed'] = True
elif equals_code == 404:
# no need to delete because the object does not exist
pass
return result
# get the position in integer format
def get_number_from_position(payload, connection, version):
if 'position' in payload:
position = payload['position']
else:
return None
# This code is relevant if we decide to support 'top' and 'bottom' in position
# position_number = None
# # if position is not int, convert it to int. There are several cases: "top"
# if position == 'top':
# position_number = 1
# elif position == 'bottom':
# payload_for_show_access_rulebase = {'name': payload['layer'], 'limit': 0}
# code, response = send_request(connection, version, 'show-access-rulebase', payload_for_show_access_rulebase)
# position_number = response['total']
# elif isinstance(position, str):
# # here position is a number in format str (e.g. "5" and not 5)
# position_number = int(position)
# else:
# # here position suppose to be int
# position_number = position
#
# return position_number
return int(position)
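The commented-out branch above sketches how `'top'` and `'bottom'` could be resolved. A standalone version of that idea, with a stubbed rulebase size standing in for the `total` field a real `show-access-rulebase` call would return:

```python
def resolve_position(position, rulebase_total):
    # Resolve a position spec to a 1-based integer index.
    # 'top' -> 1, 'bottom' -> last rule; rulebase_total stands in for the
    # 'total' field a real show-access-rulebase request would return.
    if position == 'top':
        return 1
    if position == 'bottom':
        return rulebase_total
    return int(position)  # handles both 5 and "5"

print(resolve_position('top', 120))     # 1
print(resolve_position('bottom', 120))  # 120
print(resolve_position('7', 120))       # 7
```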
# check whether the 'position' param (if the user supplied it) matches between the object and the user input
def is_equals_with_position_param(payload, connection, version, api_call_object):
position_number = get_number_from_position(payload, connection, version)
# if there is no position param, they are vacuously equal
if position_number is None:
return True
payload_for_show_access_rulebase = {'name': payload['layer'], 'offset': position_number - 1, 'limit': 1}
rulebase_command = 'show-' + api_call_object.split('-')[0] + '-rulebase'
# if it's threat-exception, we change a little the payload and the command
if api_call_object == 'threat-exception':
payload_for_show_access_rulebase['rule-name'] = payload['rule-name']
rulebase_command = 'show-threat-rule-exception-rulebase'
code, response = send_request(connection, version, rulebase_command, payload_for_show_access_rulebase)
# if true, there is no rule at the position the user requested, so return False; when we then try to set
# the rule, the API server will throw the relevant error
if response['total'] < position_number:
return False
rule = response['rulebase'][0]
while 'rulebase' in rule:
rule = rule['rulebase'][0]
# if the existing rule and the user-input rule have the same name, their positions are equal, so return True.
# There cannot be another rule with this name, because otherwise the 'equals' command would have failed
if rule['name'] == payload['name']:
return True
else:
return False
# get copy of the payload without some of the params
def get_copy_payload_without_some_params(payload, params_to_remove):
copy_payload = dict(payload)
for param in params_to_remove:
if param in copy_payload:
del copy_payload[param]
return copy_payload
# get copy of the payload with only some of the params
def get_copy_payload_with_some_params(payload, params_to_insert):
copy_payload = {}
for param in params_to_insert:
if param in payload:
copy_payload[param] = payload[param]
return copy_payload
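These two helpers are simple dict filters. A standalone sketch of their behavior (the sample payload is invented for illustration):

```python
def without(payload, params_to_remove):
    # copy of the payload without the listed params
    copy_payload = dict(payload)
    for param in params_to_remove:
        copy_payload.pop(param, None)
    return copy_payload

def only(payload, params_to_insert):
    # copy of the payload with only the listed params
    return {p: payload[p] for p in params_to_insert if p in payload}

sample = {'name': 'rule1', 'layer': 'Network', 'position': 5, 'action': 'Accept'}
print(without(sample, ['action', 'position']))  # {'name': 'rule1', 'layer': 'Network'}
print(only(sample, ['name', 'uid', 'layer']))   # {'name': 'rule1', 'layer': 'Network'}
```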
# check equality across all the params, including action and position
def is_equals_with_all_params(payload, connection, version, api_call_object, is_access_rule):
if is_access_rule and 'action' in payload:
payload_for_show = get_copy_payload_with_some_params(payload, ['name', 'uid', 'layer'])
code, response = send_request(connection, version, 'show-' + api_call_object, payload_for_show)
exist_action = response['action']['name']
if exist_action != payload['action']:
return False
if not is_equals_with_position_param(payload, connection, version, api_call_object):
return False
return True
# handle api call for rule
def api_call_for_rule(module, api_call_object):
    is_access_rule = 'access' in api_call_object
payload = get_payload_from_parameters(module.params)
connection = Connection(module._socket_path)
result = {'changed': False}
if module.check_mode:
return result
# if the user specified a version, add it to the URL
version = ('v' + module.params['version'] + '/') if module.params.get('version') else ''
if is_access_rule:
copy_payload_without_some_params = get_copy_payload_without_some_params(payload, ['action', 'position'])
else:
copy_payload_without_some_params = get_copy_payload_without_some_params(payload, ['position'])
payload_for_equals = {'type': api_call_object, 'params': copy_payload_without_some_params}
equals_code, equals_response = send_request(connection, version, 'equals', payload_for_equals)
result['checkpoint_session_uid'] = connection.get_session_uid()
# if code is 400 (bad request) or 500 (internal error) - fail
if equals_code == 400 or equals_code == 500:
module.fail_json(msg=equals_response)
if module.params['state'] == 'present':
if equals_code == 200:
if equals_response['equals']:
if not is_equals_with_all_params(payload, connection, version, api_call_object, is_access_rule):
equals_response['equals'] = False
if not equals_response['equals']:
# if the user supplied 'position' and the 'set' command is needed, rename the param to 'new-position'
if 'position' in payload:
payload['new-position'] = payload['position']
del payload['position']
code, response = send_request(connection, version, 'set-' + api_call_object, payload)
if code != 200:
module.fail_json(msg=response)
handle_publish(module, connection, version)
result['changed'] = True
result[api_call_object] = response
else:
# objects are equals and there is no need for set request
pass
elif equals_code == 404:
code, response = send_request(connection, version, 'add-' + api_call_object, payload)
if code != 200:
module.fail_json(msg=response)
handle_publish(module, connection, version)
result['changed'] = True
result[api_call_object] = response
elif module.params['state'] == 'absent':
if equals_code == 200:
payload_for_delete = get_copy_payload_with_some_params(payload, delete_params)
code, response = send_request(connection, version, 'delete-' + api_call_object, payload_for_delete)
if code != 200:
module.fail_json(msg=response)
handle_publish(module, connection, version)
result['changed'] = True
elif equals_code == 404:
# no need to delete because the object does not exist
pass
return result
# handle api call facts for rule
def api_call_facts_for_rule(module, api_call_object, api_call_object_plural_version):
payload = get_payload_from_parameters(module.params)
connection = Connection(module._socket_path)
# if the user specified a version, add it to the URL
version = ('v' + module.params['version'] + '/') if module.params['version'] else ''
# if there is no layer, the API command will be in its plural version
if payload.get("layer") is None:
api_call_object = api_call_object_plural_version
code, response = send_request(connection, version, 'show-' + api_call_object, payload)
if code != 200:
module.fail_json(msg='Checkpoint device returned error {0} with message {1}'.format(code, response))
result = {api_call_object: response}
return result
# The code from here to EOF will be deprecated when Rikis' modules are deprecated
checkpoint_argument_spec = dict(auto_publish_session=dict(type='bool', default=True),
policy_package=dict(type='str', default='standard'),
auto_install_policy=dict(type='bool', default=True),
targets=dict(type='list')
)
def publish(connection, uid=None):
payload = None
if uid:
payload = {'uid': uid}
connection.send_request('/web_api/publish', payload)
def discard(connection, uid=None):
payload = None
if uid:
payload = {'uid': uid}
connection.send_request('/web_api/discard', payload)
def install_policy(connection, policy_package, targets):
payload = {'policy-package': policy_package,
'targets': targets}
connection.send_request('/web_api/install-policy', payload)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,367 |
Deprecated variables do not point to the same data
|
##### SUMMARY
`ansible_become_pass` and `ansible_become_password` do the same thing, but if you set one variable somewhere in your files, and then the other variable somewhere else, it is not overridden.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
variables
##### ANSIBLE VERSION
```paste below
2.7.5
2.8.5
```
##### CONFIGURATION
```paste below
```
##### OS / ENVIRONMENT
Linux
##### STEPS TO REPRODUCE
Try to set `ansible_become_pass` and `ansible_become_password` and debug them. They hold different values even though they are supposed to do the same thing. This probably applies to tons of similar variables.
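The mechanism can be sketched generically. If each alias is resolved independently through the precedence chain, instead of being normalized to one canonical name first, the two names can end up with different values. This is illustrative only and is not Ansible's actual variable-resolution code:

```python
# Illustrative only: two precedence layers, each defining a different alias.
group_vars = {'ansible_become_pass': 'old-secret'}
host_vars = {'ansible_become_password': 'new-secret'}

def naive_lookup(name):
    # Looks up exactly the name that was asked for -- aliases never merge,
    # so each name keeps its own value.
    for layer in (host_vars, group_vars):
        if name in layer:
            return layer[name]

def normalized_lookup(name):
    # Normalize every alias to one canonical key before applying precedence,
    # so both names always resolve to the same (highest-precedence) value.
    aliases = {'ansible_become_pass': 'ansible_become_password'}
    canonical = aliases.get(name, name)
    for layer in (host_vars, group_vars):
        for key in layer:
            if aliases.get(key, key) == canonical:
                return layer[key]

print(naive_lookup('ansible_become_pass'))       # old-secret  (stale!)
print(naive_lookup('ansible_become_password'))   # new-secret
print(normalized_lookup('ansible_become_pass'))  # new-secret
```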
##### EXPECTED RESULTS
Both names should resolve to the same value, since they are aliases and the variable names change regularly between releases.
##### ACTUAL RESULTS
`ansible_become_pass` and `ansible_become_password` do the same thing, but if you set one variable somewhere in your files and the other variable somewhere else, it is not overridden.
I ended up wasting 4 days of incomprehension, asking for support and wasting other people's time too.
|
https://github.com/ansible/ansible/issues/62367
|
https://github.com/ansible/ansible/pull/62462
|
a4a216640fc7efe436f3f09808500b52ff6a63cd
|
d16ee65ecd3b7abfddaf7d7b6dc8d0da36093cbe
| 2019-09-16T19:55:02Z |
python
| 2019-09-18T19:53:16Z |
docs/docsite/rst/user_guide/become.rst
|
.. _become:
**********************************
Understanding Privilege Escalation
**********************************
Ansible can use existing privilege escalation systems to allow a user to execute tasks as another.
.. contents:: Topics
Become
======
Ansible allows you to 'become' another user, different from the user that logged into the machine (remote user). This is done using existing privilege escalation tools such as `sudo`, `su`, `pfexec`, `doas`, `pbrun`, `dzdo`, `ksu`, `runas`, `machinectl` and others.
.. note:: Prior to version 1.9, Ansible mostly allowed the use of `sudo` and a limited use of `su` to allow a login/remote user to become a different user and execute tasks and create resources with the second user's permissions. As of Ansible version 1.9, `become` supersedes the old sudo/su, while still being backwards compatible. This new implementation also makes it easier to add other privilege escalation tools, including `pbrun` (Powerbroker), `pfexec`, `dzdo` (Centrify), and others.
.. note:: Become vars and directives are independent. For example, setting ``become_user`` does not set ``become``.
Directives
==========
These can be set from play to task level, but are overridden by connection variables as they can be host specific.
become
set to ``yes`` to activate privilege escalation.
become_user
set to user with desired privileges — the user you `become`, NOT the user you log in as. Does NOT imply ``become: yes``, to allow it to be set at host level.
become_method
(at play or task level) overrides the default method set in ansible.cfg, set to use any of the :ref:`become_plugins`.
become_flags
(at play or task level) permit the use of specific flags for the tasks or role. One common use is to change the user to nobody when the shell is set to no login. Added in Ansible 2.2.
For example, to manage a system service (which requires ``root`` privileges) when connected as a non-``root`` user (this takes advantage of the fact that the default value of ``become_user`` is ``root``)::
- name: Ensure the httpd service is running
service:
name: httpd
state: started
become: yes
To run a command as the ``apache`` user::
- name: Run a command as the apache user
command: somecommand
become: yes
become_user: apache
To do something as the ``nobody`` user when the shell is nologin::
- name: Run a command as nobody
command: somecommand
become: yes
become_method: su
become_user: nobody
become_flags: '-s /bin/sh'
Connection variables
--------------------
Each allows you to set an option per group and/or host; these are normally defined in inventory but can be used as normal variables.
ansible_become
equivalent of the ``become`` directive; decides whether privilege escalation is used or not.
ansible_become_method
which privilege escalation method should be used
ansible_become_user
set the user you become through privilege escalation; does not imply ``ansible_become: yes``
ansible_become_password
set the privilege escalation password. See :ref:`playbooks_vault` for details on how to avoid having secrets in plain text
For example, if you want to run all tasks as ``root`` on a server named ``webserver``, but you can only connect as the ``manager`` user, you could use an inventory entry like this::
webserver ansible_user=manager ansible_become=yes
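The same connection variables can also live in ``group_vars``. For example, a sketch that sets the escalation password from a vaulted variable (the variable name ``vaulted_become_password`` is illustrative, not a built-in):

.. code-block:: yaml

    ansible_become: yes
    ansible_become_method: sudo
    ansible_become_password: "{{ vaulted_become_password }}"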
Command line options
--------------------
--ask-become-pass, -K
ask for privilege escalation password; does not imply become will be used. Note that this password will be used for all hosts.
--become, -b
run operations with become (no password implied)
--become-method=BECOME_METHOD
privilege escalation method to use (default=sudo),
valid choices: [ sudo | su | pbrun | pfexec | doas | dzdo | ksu | runas | machinectl ]
--become-user=BECOME_USER
run operations as this user (default=root), does not imply --become/-b
For those from pre-1.9, sudo and su still work!
------------------------------------------------
Old playbooks do not need to be changed: even though they are deprecated, the sudo and su directives, variables, and options
will continue to work. It is recommended to move to become, as they may be retired at some point.
You cannot mix directives on the same object (become and sudo), though; Ansible will complain if you try to.
Become will default to using the old sudo/su configs and variables if they exist, but will override them if you specify any of the new ones.
Limitations
-----------
Although privilege escalation is mostly intuitive, there are a few limitations
on how it works. Users should be aware of these to avoid surprises.
Becoming an Unprivileged User
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Ansible 2.0.x and below has a limitation with regards to becoming an
unprivileged user that can be a security risk if users are not aware of it.
Ansible modules are executed on the remote machine by first substituting the
parameters into the module file, then copying the file to the remote machine,
and finally executing it there.
Everything is fine if the module file is executed without using ``become``,
when the ``become_user`` is root, or when the connection to the remote machine
is made as root. In these cases the module file is created with permissions
that only allow reading by the user and root.
The problem occurs when the ``become_user`` is an unprivileged user. Ansible
2.0.x and below make the module file world readable in this case, as the module
file is written as the user that Ansible connects as, but the file needs to
be readable by the user Ansible is set to ``become``.
.. note:: In Ansible 2.1, this window is further narrowed: If the connection
is made as a privileged user (root), then Ansible 2.1 and above will use
chown to set the file's owner to the unprivileged user being switched to.
This means both the user making the connection and the user being switched
to via ``become`` must be unprivileged in order to trigger this problem.
If any of the parameters passed to the module are sensitive in nature, then
those pieces of data are located in a world readable module file for the
duration of the Ansible module execution. Once the module is done executing,
Ansible will delete the temporary file. If you trust the client machines then
there's no problem here. If you do not trust the client machines then this is
a potential danger.
Ways to resolve this include:
* Use `pipelining`. When pipelining is enabled, Ansible doesn't save the
module to a temporary file on the client. Instead it pipes the module to
the remote python interpreter's stdin. Pipelining does not work for
python modules involving file transfer (for example: :ref:`copy <copy_module>`,
:ref:`fetch <fetch_module>`, :ref:`template <template_module>`), or for non-python modules.
* (Available in Ansible 2.1) Install POSIX.1e filesystem acl support on the
managed host. If the temporary directory on the remote host is mounted with
POSIX acls enabled and the :command:`setfacl` tool is in the remote ``PATH``
then Ansible will use POSIX acls to share the module file with the second
unprivileged user instead of having to make the file readable by everyone.
* Don't perform an action on the remote machine by becoming an unprivileged
user. Temporary files are protected by UNIX file permissions when you
``become`` root or do not use ``become``. In Ansible 2.1 and above, UNIX
file permissions are also secure if you make the connection to the managed
machine as root and then use ``become`` to an unprivileged account.
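For example, pipelining can be switched on in :file:`ansible.cfg` (shown here for the ssh connection; adjust to your environment):

.. code-block:: ini

    [ssh_connection]
    pipelining = True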
.. warning:: Although the Solaris ZFS filesystem has filesystem ACLs, the ACLs
are not POSIX.1e filesystem acls (they are NFSv4 ACLs instead). Ansible
cannot use these ACLs to manage its temp file permissions so you may have
to resort to ``allow_world_readable_tmpfiles`` if the remote machines use ZFS.
.. versionchanged:: 2.1
In addition to the additional means of doing this securely, Ansible 2.1 also
makes it harder to unknowingly do this insecurely. Whereas in Ansible 2.0.x
and below, Ansible will silently allow the insecure behaviour if it was unable
to find another way to share the files with the unprivileged user, in Ansible
2.1 and above Ansible defaults to issuing an error if it can't do this
securely. If you can't make any of the changes above to resolve the problem,
and you decide that the machine you're running on is secure enough for the
modules you want to run there to be world readable, you can turn on
``allow_world_readable_tmpfiles`` in the :file:`ansible.cfg` file. Setting
``allow_world_readable_tmpfiles`` will change this from an error into
a warning and allow the task to run as it did prior to 2.1.
Connection Plugin Support
^^^^^^^^^^^^^^^^^^^^^^^^^
Privilege escalation methods must also be supported by the connection plugin
used. Most connection plugins will warn if they do not support become. Some
will just ignore it as they always run as root (jail, chroot, etc).
Only one method may be enabled per host
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Methods cannot be chained. You cannot use ``sudo /bin/su -`` to become a user,
you need to have privileges to run the command as that user in sudo or be able
to su directly to it (the same for pbrun, pfexec or other supported methods).
Can't limit escalation to certain commands
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Privilege escalation permissions have to be general. Ansible does not always
use a specific command to do something but runs modules (code) from
a temporary file name which changes every time. If you have '/sbin/service'
or '/bin/chmod' as the allowed commands this will fail with ansible as those
paths won't match with the temporary file that ansible creates to run the
module.
Environment variables populated by pam_systemd
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For most Linux distributions using ``systemd`` as their init, the default
methods used by ``become`` do not open a new "session", in the sense of
systemd. Because the ``pam_systemd`` module will not fully initialize a new
session, you might have surprises compared to a normal session opened through
ssh: some environment variables set by ``pam_systemd``, most notably
``XDG_RUNTIME_DIR``, are not populated for the new user and instead inherited
or just emptied.
This might cause trouble when trying to invoke systemd commands that depend on
``XDG_RUNTIME_DIR`` to access the bus:
.. code-block:: console
$ echo $XDG_RUNTIME_DIR
$ systemctl --user status
Failed to connect to bus: Permission denied
To force ``become`` to open a new systemd session that goes through
``pam_systemd``, you can use ``become_method: machinectl``.
For more information, see `this systemd issue
<https://github.com/systemd/systemd/issues/825#issuecomment-127917622>`_.
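For example, a task sketch (user name and command are illustrative):

.. code-block:: yaml

    - name: Check the user systemd session over the bus
      command: systemctl --user status
      become: yes
      become_method: machinectl
      become_user: myuser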
.. _become_network:
Become and Networks
===================
As of version 2.6, Ansible supports ``become`` for privilege escalation (entering ``enable`` mode or privileged EXEC mode) on all :ref:`Ansible-maintained platforms<network_supported>` that support ``enable`` mode: ``eos``, ``ios``, and ``nxos``. Using ``become`` replaces the ``authorize`` and ``auth_pass`` options in a ``provider`` dictionary.
You must set the connection type to either ``connection: network_cli`` or ``connection: httpapi`` to use ``become`` for privilege escalation on network devices. Check the :ref:`platform_options` and :ref:`network_modules` documentation for details.
You can use escalated privileges on only the specific tasks that need them, on an entire play, or on all plays. Adding ``become: yes`` and ``become_method: enable`` instructs Ansible to enter ``enable`` mode before executing the task, play, or playbook where those parameters are set.
If you see this error message, the task that generated it requires ``enable`` mode to succeed:
.. code-block:: console
Invalid input (privileged mode required)
To set ``enable`` mode for a specific task, add ``become`` at the task level:
.. code-block:: yaml
- name: Gather facts (eos)
eos_facts:
gather_subset:
- "!hardware"
become: yes
become_method: enable
To set enable mode for all tasks in a single play, add ``become`` at the play level:
.. code-block:: yaml
- hosts: eos-switches
become: yes
become_method: enable
tasks:
- name: Gather facts (eos)
eos_facts:
gather_subset:
- "!hardware"
Setting enable mode for all tasks
---------------------------------
Often you wish for all tasks in all plays to run using privilege mode; that is best achieved by using ``group_vars``:
**group_vars/eos.yml**
.. code-block:: yaml
ansible_connection: network_cli
ansible_network_os: eos
ansible_user: myuser
ansible_become: yes
ansible_become_method: enable
Passwords for enable mode
^^^^^^^^^^^^^^^^^^^^^^^^^
If you need a password to enter ``enable`` mode, you can specify it in one of two ways:
* providing the :option:`--ask-become-pass <ansible-playbook --ask-become-pass>` command line option
* setting the ``ansible_become_password`` connection variable
.. warning::
As a reminder, passwords should never be stored in plain text. For information on encrypting your passwords and other secrets with Ansible Vault, see :ref:`vault`.
authorize and auth_pass
-----------------------
Ansible still supports ``enable`` mode with ``connection: local`` for legacy playbooks. To enter ``enable`` mode with ``connection: local``, use the module options ``authorize`` and ``auth_pass``:
.. code-block:: yaml
- hosts: eos-switches
ansible_connection: local
tasks:
- name: Gather facts (eos)
eos_facts:
gather_subset:
- "!hardware"
provider:
authorize: yes
auth_pass: "{{ secret_auth_pass }}"
We recommend updating your playbooks to use ``become`` for network-device ``enable`` mode consistently. The use of ``authorize`` and ``provider`` dictionaries will be deprecated in the future. Check the :ref:`platform_options` and :ref:`network_modules` documentation for details.
.. _become_windows:
Become and Windows
==================
Since Ansible 2.3, ``become`` can be used on Windows hosts through the
``runas`` method. Become on Windows uses the same inventory setup and
invocation arguments as ``become`` on a non-Windows host, so the setup and
variable names are the same as what is defined in this document.
While ``become`` can be used to assume the identity of another user, there are other uses for
it with Windows hosts. One important use is to bypass some of the
limitations that are imposed when running on WinRM, such as constrained network
delegation or accessing forbidden system calls like the WUA API. You can use
``become`` with the same user as ``ansible_user`` to bypass these limitations
and run commands that are not normally accessible in a WinRM session.
Administrative Rights
---------------------
Many tasks in Windows require administrative privileges to complete. When using
the ``runas`` become method, Ansible will attempt to run the module with the
full privileges that are available to the remote user. If it fails to elevate
the user token, it will continue to use the limited token during execution.
A user must have the ``SeDebugPrivilege`` to run a become process with elevated
privileges. This privilege is assigned to Administrators by default. If the
debug privilege is not available, the become process will run with a limited
set of privileges and groups.
To determine the type of token that Ansible was able to get, run the following
task::
- win_whoami:
become: yes
The output will look similar to the following:
.. code-block:: ansible-output
ok: [windows] => {
"account": {
"account_name": "vagrant-domain",
"domain_name": "DOMAIN",
"sid": "S-1-5-21-3088887838-4058132883-1884671576-1105",
"type": "User"
},
"authentication_package": "Kerberos",
"changed": false,
"dns_domain_name": "DOMAIN.LOCAL",
"groups": [
{
"account_name": "Administrators",
"attributes": [
"Mandatory",
"Enabled by default",
"Enabled",
"Owner"
],
"domain_name": "BUILTIN",
"sid": "S-1-5-32-544",
"type": "Alias"
},
{
"account_name": "INTERACTIVE",
"attributes": [
"Mandatory",
"Enabled by default",
"Enabled"
],
"domain_name": "NT AUTHORITY",
"sid": "S-1-5-4",
"type": "WellKnownGroup"
},
],
"impersonation_level": "SecurityAnonymous",
"label": {
"account_name": "High Mandatory Level",
"domain_name": "Mandatory Label",
"sid": "S-1-16-12288",
"type": "Label"
},
"login_domain": "DOMAIN",
"login_time": "2018-11-18T20:35:01.9696884+00:00",
"logon_id": 114196830,
"logon_server": "DC01",
"logon_type": "Interactive",
"privileges": {
"SeBackupPrivilege": "disabled",
"SeChangeNotifyPrivilege": "enabled-by-default",
"SeCreateGlobalPrivilege": "enabled-by-default",
"SeCreatePagefilePrivilege": "disabled",
"SeCreateSymbolicLinkPrivilege": "disabled",
"SeDebugPrivilege": "enabled",
"SeDelegateSessionUserImpersonatePrivilege": "disabled",
"SeImpersonatePrivilege": "enabled-by-default",
"SeIncreaseBasePriorityPrivilege": "disabled",
"SeIncreaseQuotaPrivilege": "disabled",
"SeIncreaseWorkingSetPrivilege": "disabled",
"SeLoadDriverPrivilege": "disabled",
"SeManageVolumePrivilege": "disabled",
"SeProfileSingleProcessPrivilege": "disabled",
"SeRemoteShutdownPrivilege": "disabled",
"SeRestorePrivilege": "disabled",
"SeSecurityPrivilege": "disabled",
"SeShutdownPrivilege": "disabled",
"SeSystemEnvironmentPrivilege": "disabled",
"SeSystemProfilePrivilege": "disabled",
"SeSystemtimePrivilege": "disabled",
"SeTakeOwnershipPrivilege": "disabled",
"SeTimeZonePrivilege": "disabled",
"SeUndockPrivilege": "disabled"
},
"rights": [
"SeNetworkLogonRight",
"SeBatchLogonRight",
"SeInteractiveLogonRight",
"SeRemoteInteractiveLogonRight"
],
"token_type": "TokenPrimary",
"upn": "[email protected]",
"user_flags": []
}
Under the ``label`` key, the ``account_name`` entry determines whether the user
has Administrative rights. Here are the labels that can be returned and what
they represent:
* ``Medium``: Ansible failed to get an elevated token and ran under a limited
token. Only a subset of the privileges assigned to the user are available during
the module execution and the user does not have administrative rights.
* ``High``: An elevated token was used and all the privileges assigned to the
user are available during the module execution.
* ``System``: The ``NT AUTHORITY\System`` account is used and has the highest
level of privileges available.
The output will also show the list of privileges that have been granted to the
user. When the privilege value is ``disabled``, the privilege is assigned to
the logon token but has not been enabled. In most scenarios these privileges
are automatically enabled when required.
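For example, you can check up front that an elevated token was retrieved by
registering the ``win_whoami`` output and asserting on the label. This is only
a sketch; the accepted label names below correspond to the ``High`` and
``System`` labels described above and may need adjusting for your environment:

.. code-block:: yaml

    - name: check the current token elevation
      win_whoami:
      become: yes
      register: whoami_out

    - name: fail early if the become token is not elevated
      assert:
        that:
          - whoami_out.label.account_name in ['High Mandatory Level', 'System Mandatory Level']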
If running on a version of Ansible that is older than 2.5 or the normal
``runas`` escalation process fails, an elevated token can be retrieved by:
* Set the ``become_user`` to ``System`` which has full control over the
operating system.
* Grant ``SeTcbPrivilege`` to the user Ansible connects with on
WinRM. ``SeTcbPrivilege`` is a high-level privilege that grants
full control over the operating system. No user is given this privilege by
default, and care should be taken if you grant this privilege to a user or group.
For more information on this privilege, please see
`Act as part of the operating system <https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn221957(v=ws.11)>`_.
You can use the below task to set this privilege on a Windows host::
- name: grant the ansible user the SeTcbPrivilege right
win_user_right:
name: SeTcbPrivilege
users: '{{ansible_user}}'
action: add
* Turn UAC off on the host and reboot before trying to become the user. UAC is
a security protocol that is designed to run accounts with the
``least privilege`` principle. You can turn UAC off by running the following
tasks::
- name: turn UAC off
win_regedit:
path: HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\system
name: EnableLUA
data: 0
type: dword
state: present
register: uac_result
- name: reboot after disabling UAC
win_reboot:
when: uac_result is changed
.. Note:: Granting the ``SeTcbPrivilege`` or turning UAC off can open Windows
security vulnerabilities, and care should be taken if either step is used.
Local Service Accounts
----------------------
Prior to Ansible version 2.5, ``become`` only worked with a local or domain
user account. Local service accounts like ``System`` or ``NetworkService``
could not be used as ``become_user`` in these older versions. This restriction
has been lifted since the 2.5 release of Ansible. The three service accounts
that can be set under ``become_user`` are:
* System
* NetworkService
* LocalService
Because local service accounts do not have passwords, the
``ansible_become_password`` parameter is not required and is ignored if
specified.
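For example, the following task runs a module as the local ``System`` account;
note that no ``ansible_become_password`` is set:

.. code-block:: yaml

    - name: run a command as the local System account
      win_whoami:
      become: yes
      become_method: runas
      become_user: System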
Become without setting a Password
---------------------------------
As of Ansible 2.8, ``become`` can be used to become a local or domain account
without requiring a password for that account. For this method to work, the
following requirements must be met:
* The connection user has the ``SeDebugPrivilege`` privilege assigned
* The connection user is part of the ``BUILTIN\Administrators`` group
* The ``become_user`` has either the ``SeBatchLogonRight`` or ``SeNetworkLogonRight`` user right
Using become without a password is achieved in one of two different methods:
* Duplicating an existing logon session's token if the account is already logged on
* Using S4U to generate a logon token that is valid on the remote host only
In the first scenario, the become process is spawned from another logon of that
user account, such as an existing RDP or console logon, but such a session is
not guaranteed to exist when Ansible runs. This is similar to the
``Run only when user is logged on`` option for a Scheduled Task.
In the case where another logon of the become account does not exist, S4U is
used to create a new logon and run the module through that. This is similar to
the ``Run whether user is logged on or not`` with the ``Do not store password``
option for a Scheduled Task. In this scenario, the become process will not be
able to access any network resources like a normal WinRM process.
To make a distinction between using become with no password and becoming an
account that has no password, make sure to keep ``ansible_become_password`` as
undefined or set ``ansible_become_password:``.
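As a sketch, the following becomes a domain account without supplying its
password (the account name is a placeholder, and the connection user must meet
the requirements listed above):

.. code-block:: yaml

    - name: become an account without supplying its password
      win_whoami:
      become: yes
      become_method: runas
      become_user: DOMAIN\user
      # ansible_become_password is deliberately left undefined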
.. Note:: Because there are no guarantees an existing token will exist for a
user when Ansible runs, there is a high chance the become process will only
have access to local resources. Use become with a password if the task needs
to access network resources.
Accounts without a Password
---------------------------
.. Warning:: As a general security best practice, you should avoid allowing accounts without passwords.
Ansible can be used to become an account that does not have a password (like the
``Guest`` account). To become an account without a password, set up the
variables like normal but set ``ansible_become_password: ''``.
Before become can work on an account like this, the local policy
`Accounts: Limit local account use of blank passwords to console logon only <https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj852174(v=ws.11)>`_
must be disabled. This can either be done through a Group Policy Object (GPO)
or with this Ansible task:
.. code-block:: yaml
- name: allow blank password on become
win_regedit:
path: HKLM:\SYSTEM\CurrentControlSet\Control\Lsa
name: LimitBlankPasswordUse
data: 0
type: dword
state: present
.. Note:: This is only for accounts that do not have a password. You still need
to set the account's password under ``ansible_become_password`` if the
become_user has a password.
Become Flags
------------
Ansible 2.5 adds the ``become_flags`` parameter to the ``runas`` become method.
This parameter can be set using the ``become_flags`` task directive or set in
Ansible's configuration using ``ansible_become_flags``. The two valid values
that are initially supported for this parameter are ``logon_type`` and
``logon_flags``.
.. Note:: These flags should only be set when becoming a normal user account, not a local service account like LocalSystem.
The key ``logon_type`` sets the type of logon operation to perform. The value
can be set to one of the following:
* ``interactive``: The default logon type. The process will be run under a
context that is the same as when running a process locally. This bypasses all
WinRM restrictions and is the recommended method to use.
* ``batch``: Runs the process under a batch context that is similar to a
scheduled task with a password set. This should bypass most WinRM
restrictions and is useful if the ``become_user`` is not allowed to log on
interactively.
* ``new_credentials``: Runs under the same credentials as the calling user, but
outbound connections are run under the context of the ``become_user`` and
``become_password``, similar to ``runas.exe /netonly``. The ``logon_flags``
flag should also be set to ``netcredentials_only``. Use this flag if
the process needs to access a network resource (like an SMB share) using a
different set of credentials.
* ``network``: Runs the process under a network context without any cached
credentials. This results in the same type of logon session as running a
normal WinRM process without credential delegation, and operates under the same
restrictions.
* ``network_cleartext``: Like the ``network`` logon type, but instead caches
the credentials so it can access network resources. This is the same type of
logon session as running a normal WinRM process with credential delegation.
For more information, see
`dwLogonType <https://docs.microsoft.com/en-gb/windows/desktop/api/winbase/nf-winbase-logonusera>`_.
The ``logon_flags`` key specifies how Windows will log the user on when creating
the new process. The value can be set to none, one, or multiple of the following:
* ``with_profile``: The default logon flag set. The process will load the
user's profile in the ``HKEY_USERS`` registry key to ``HKEY_CURRENT_USER``.
* ``netcredentials_only``: The process will use the same token as the caller
but will use the ``become_user`` and ``become_password`` when accessing a remote
resource. This is useful in inter-domain scenarios where there is no trust
relationship, and should be used with the ``new_credentials`` ``logon_type``.
By default, ``logon_flags=with_profile`` is set. If the profile should not be
loaded, set ``logon_flags=``; if the profile should be loaded together with
``netcredentials_only``, set ``logon_flags=with_profile,netcredentials_only``.
For more information, see `dwLogonFlags <https://docs.microsoft.com/en-gb/windows/desktop/api/winbase/nf-winbase-createprocesswithtokenw>`_.
Here are some examples of how to use ``become_flags`` with Windows tasks:
.. code-block:: yaml
- name: copy a file from a fileshare with custom credentials
win_copy:
src: \\server\share\data\file.txt
dest: C:\temp\file.txt
remote_src: yes
vars:
ansible_become: yes
ansible_become_method: runas
ansible_become_user: DOMAIN\user
ansible_become_password: Password01
ansible_become_flags: logon_type=new_credentials logon_flags=netcredentials_only
- name: run a command under a batch logon
win_whoami:
become: yes
become_flags: logon_type=batch
- name: run a command and not load the user profile
win_whoami:
become: yes
become_flags: logon_flags=
Limitations
-----------
Be aware of the following limitations with ``become`` on Windows:
* Running a task with ``async`` and ``become`` on Windows Server 2008, 2008 R2
and Windows 7 only works when using Ansible 2.7 or newer.
* By default, the become user logs on with an interactive session, so it must
have the right to do so on the Windows host. If it does not inherit the
``SeAllowLogOnLocally`` privilege or inherits the ``SeDenyLogOnLocally``
privilege, the become process will fail. Either add the privilege or set the
``logon_type`` flag to change the logon type used.
* Prior to Ansible version 2.3, become only worked when
``ansible_winrm_transport`` was either ``basic`` or ``credssp``. This
restriction has been lifted since the 2.4 release of Ansible for all hosts
except Windows Server 2008 (non R2 version).
* The Secondary Logon service ``seclogon`` must be running to use ``ansible_become_method: runas``.
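If the become user is denied interactive logon, one workaround sketched below
is to switch to a batch logon with ``become_flags`` (this assumes the user
holds ``SeBatchLogonRight``; the account name is a placeholder):

.. code-block:: yaml

    - name: use a batch logon when interactive logon is denied
      win_whoami:
      become: yes
      become_user: DOMAIN\user
      become_flags: logon_type=batch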
.. seealso::
`Mailing List <https://groups.google.com/forum/#!forum/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups
`webchat.freenode.net <https://webchat.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,562 |
RepresenterError with looped lookups in YAML callback
|
##### SUMMARY
Loading template files in a loop leads to RepresenterErrors using YAML callback
Note that in the real world, this can manifest itself in the task erroring out, although I can't reproduce that in a more minimal test case as yet - I hope that the below information is enough to eliminate the underlying issue.
Other notes:
* If I remove the trailing `-`, the task succeeds. But I added the trailing dash to counter #49579
* This works fine if I use the default stdout callback
* If there is only one element in the loop, we don't see the problem(!)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/plugins/callback/yaml.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible-playbook 2.9.0b1
config file = /Users/will/tmp/ansible/2.9.0b1/ansible.cfg
configured module search path = [u'/Users/will/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/will/src/opensource/ansible/lib/ansible
executable location = /Users/will/src/opensource/ansible/bin/ansible-playbook
python version = 2.7.16 (default, Sep 2 2019, 11:59:44) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.46.4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_STDOUT_CALLBACK(/Users/will/tmp/ansible/2.9.0b1/ansible.cfg) = yaml
```
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
vars:
kube_resource_manifests_from_files: []
kube_resource_manifest_files:
- a.yml
- b.yml
tasks:
- name: create manifests list
set_fact:
kube_resource_manifests_from_files: >-
{{ kube_resource_manifests_from_files + lookup('template', item)| from_yaml_all | list }}
loop: "{{ kube_resource_manifest_files }}"
```
echo "hello: world" > a.yml
echo "hello: world" > b.yml
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
As per 2.8.5
```
$ ansible-playbook play.yml -vvv -e app=xyz
ansible-playbook 2.8.5
config file = /Users/will/tmp/ansible/2.9.0b1/ansible.cfg
configured module search path = [u'/Users/will/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/will/src/opensource/ansible/lib/ansible
executable location = /Users/will/src/opensource/ansible/bin/ansible-playbook
python version = 2.7.16 (default, Sep 2 2019, 11:59:44) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.46.4)]
Using /Users/will/tmp/ansible/2.9.0b1/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAYBOOK: play.yml ****************************************************************************************************************************************************************************************************************************
1 plays in play.yml
PLAY [localhost] ******************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************************************************************************************************************
task path: /Users/will/tmp/ansible/2.9.0b1/play.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: will
<127.0.0.1> EXEC /bin/sh -c 'echo ~will && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/will/.ansible/tmp/ansible-tmp-1568854487.54-231484033082638 `" && echo ansible-tmp-1568854487.54-231484033082638="` echo /Users/will/.ansible/tmp/ansible-tmp-1568854487.54-231484033082638 `" ) && sleep 0'
Using module file /Users/will/src/opensource/ansible/lib/ansible/modules/system/setup.py
<127.0.0.1> PUT /Users/will/.ansible/tmp/ansible-local-66982kKOKoR/tmpZAoBok TO /Users/will/.ansible/tmp/ansible-tmp-1568854487.54-231484033082638/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/will/.ansible/tmp/ansible-tmp-1568854487.54-231484033082638/ /Users/will/.ansible/tmp/ansible-tmp-1568854487.54-231484033082638/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python@2/bin/python2.7 /Users/will/.ansible/tmp/ansible-tmp-1568854487.54-231484033082638/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/will/.ansible/tmp/ansible-tmp-1568854487.54-231484033082638/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
META: ran handlers
TASK [create manifests list] ******************************************************************************************************************************************************************************************************************
task path: /Users/will/tmp/ansible/2.9.0b1/play.yml:10
ok: [localhost] => (item=a.yml) => changed=false
ansible_facts:
kube_resource_manifests_from_files:
- hello: world
ansible_loop_var: item
item: a.yml
ok: [localhost] => (item=b.yml) => changed=false
ansible_facts:
kube_resource_manifests_from_files:
- hello: world
- hello: world
ansible_loop_var: item
item: b.yml
META: ran handlers
META: ran handlers
PLAY RECAP ************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
$ ansible-playbook play.yml -vvv -e app=xyz
ansible-playbook 2.9.0b1
config file = /Users/will/tmp/ansible/2.9.0b1/ansible.cfg
configured module search path = [u'/Users/will/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/will/src/opensource/ansible/lib/ansible
executable location = /Users/will/src/opensource/ansible/bin/ansible-playbook
python version = 2.7.16 (default, Sep 2 2019, 11:59:44) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.46.4)]
Using /Users/will/tmp/ansible/2.9.0b1/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAYBOOK: play.yml ****************************************************************************************************************************************************************************************************************************
1 plays in play.yml
PLAY [localhost] ******************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************************************************************************************************************
task path: /Users/will/tmp/ansible/2.9.0b1/play.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: will
<127.0.0.1> EXEC /bin/sh -c 'echo ~will && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302 `" && echo ansible-tmp-1568854348.48-81455736502302="` echo /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302 `" ) && sleep 0'
Using module file /Users/will/src/opensource/ansible/lib/ansible/modules/system/setup.py
<127.0.0.1> PUT /Users/will/.ansible/tmp/ansible-local-66662BWCIQr/tmp6yajej TO /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302/ /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python@2/bin/python2.7 /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
META: ran handlers
TASK [create manifests list] ******************************************************************************************************************************************************************************************************************
task path: /Users/will/tmp/ansible/2.9.0b1/play.yml:10
[WARNING]: Failure using method (v2_runner_item_on_ok) in callback plugin (<ansible.plugins.callback.yaml.CallbackModule object at 0x106873210>): cannot represent an object: world
Callback Exception:
File "/Users/will/src/opensource/ansible/lib/ansible/executor/task_queue_manager.py", line 323, in send_callback
method(*new_args, **kwargs)
File "/Users/will/src/opensource/ansible/lib/ansible/plugins/callback/default.py", line 320, in v2_runner_item_on_ok
msg += " => %s" % self._dump_results(result._result)
File "/Users/will/src/opensource/ansible/lib/ansible/plugins/callback/yaml.py", line 123, in _dump_results
dumped += to_text(yaml.dump(abridged_result, allow_unicode=True, width=1000, Dumper=AnsibleDumper, default_flow_style=False))
File "/usr/local/lib/python2.7/site-packages/yaml/__init__.py", line 202, in dump
return dump_all([data], stream, Dumper=Dumper, **kwds)
File "/usr/local/lib/python2.7/site-packages/yaml/__init__.py", line 190, in dump_all
dumper.represent(data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 28, in represent
node = self.represent_data(data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 57, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 225, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 123, in represent_mapping
node_value = self.represent_data(item_value)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 57, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 225, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 123, in represent_mapping
node_value = self.represent_data(item_value)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 57, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 217, in represent_list
return self.represent_sequence(u'tag:yaml.org,2002:seq', data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 101, in represent_sequence
node_item = self.represent_data(item)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 57, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 225, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 123, in represent_mapping
node_value = self.represent_data(item_value)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 67, in represent_data
node = self.yaml_representers[None](self, data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 249, in represent_undefined
raise RepresenterError("cannot represent an object: %s" % data)
ok: [localhost] => (item=b.yml) => changed=false
ansible_facts:
kube_resource_manifests_from_files:
- hello: world
- hello: world
ansible_loop_var: item
item: b.yml
META: ran handlers
META: ran handlers
PLAY RECAP ************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/62562
|
https://github.com/ansible/ansible/pull/62598
|
c8e220a62e85dc817a17d0e1037c14a7a551cdff
|
4cc4c44dd00aa07d29afc92e9d5bad473d5ca98d
| 2019-09-19T00:58:29Z |
python
| 2019-09-19T18:27:48Z |
changelogs/fragments/62598-AnsibleDumper-representer.yaml
| |
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAYBOOK: play.yml ****************************************************************************************************************************************************************************************************************************
1 plays in play.yml
PLAY [localhost] ******************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************************************************************************************************************
task path: /Users/will/tmp/ansible/2.9.0b1/play.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: will
<127.0.0.1> EXEC /bin/sh -c 'echo ~will && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302 `" && echo ansible-tmp-1568854348.48-81455736502302="` echo /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302 `" ) && sleep 0'
Using module file /Users/will/src/opensource/ansible/lib/ansible/modules/system/setup.py
<127.0.0.1> PUT /Users/will/.ansible/tmp/ansible-local-66662BWCIQr/tmp6yajej TO /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302/ /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python@2/bin/python2.7 /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
META: ran handlers
TASK [create manifests list] ******************************************************************************************************************************************************************************************************************
task path: /Users/will/tmp/ansible/2.9.0b1/play.yml:10
[WARNING]: Failure using method (v2_runner_item_on_ok) in callback plugin (<ansible.plugins.callback.yaml.CallbackModule object at 0x106873210>): cannot represent an object: world
Callback Exception:
File "/Users/will/src/opensource/ansible/lib/ansible/executor/task_queue_manager.py", line 323, in send_callback
method(*new_args, **kwargs)
File "/Users/will/src/opensource/ansible/lib/ansible/plugins/callback/default.py", line 320, in v2_runner_item_on_ok
msg += " => %s" % self._dump_results(result._result)
File "/Users/will/src/opensource/ansible/lib/ansible/plugins/callback/yaml.py", line 123, in _dump_results
dumped += to_text(yaml.dump(abridged_result, allow_unicode=True, width=1000, Dumper=AnsibleDumper, default_flow_style=False))
File "/usr/local/lib/python2.7/site-packages/yaml/__init__.py", line 202, in dump
return dump_all([data], stream, Dumper=Dumper, **kwds)
File "/usr/local/lib/python2.7/site-packages/yaml/__init__.py", line 190, in dump_all
dumper.represent(data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 28, in represent
node = self.represent_data(data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 57, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 225, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 123, in represent_mapping
node_value = self.represent_data(item_value)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 57, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 225, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 123, in represent_mapping
node_value = self.represent_data(item_value)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 57, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 217, in represent_list
return self.represent_sequence(u'tag:yaml.org,2002:seq', data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 101, in represent_sequence
node_item = self.represent_data(item)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 57, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 225, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 123, in represent_mapping
node_value = self.represent_data(item_value)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 67, in represent_data
node = self.yaml_representers[None](self, data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 249, in represent_undefined
raise RepresenterError("cannot represent an object: %s" % data)
ok: [localhost] => (item=b.yml) => changed=false
ansible_facts:
kube_resource_manifests_from_files:
- hello: world
- hello: world
ansible_loop_var: item
item: b.yml
META: ran handlers
META: ran handlers
PLAY RECAP ************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/62562
|
https://github.com/ansible/ansible/pull/62598
|
c8e220a62e85dc817a17d0e1037c14a7a551cdff
|
4cc4c44dd00aa07d29afc92e9d5bad473d5ca98d
| 2019-09-19T00:58:29Z |
python
| 2019-09-19T18:27:48Z |
lib/ansible/parsing/yaml/dumper.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import yaml
from ansible.module_utils.six import PY3
from ansible.parsing.yaml.objects import AnsibleUnicode, AnsibleSequence, AnsibleMapping, AnsibleVaultEncryptedUnicode
from ansible.utils.unsafe_proxy import AnsibleUnsafeText
from ansible.vars.hostvars import HostVars, HostVarsVars
class AnsibleDumper(yaml.SafeDumper):
'''
A simple stub class that allows us to add representers
for our overridden object types.
'''
pass
def represent_hostvars(self, data):
return self.represent_dict(dict(data))
# Note: only want to represent the encrypted data
def represent_vault_encrypted_unicode(self, data):
return self.represent_scalar(u'!vault', data._ciphertext.decode(), style='|')
if PY3:
represent_unicode = yaml.representer.SafeRepresenter.represent_str
else:
represent_unicode = yaml.representer.SafeRepresenter.represent_unicode
AnsibleDumper.add_representer(
AnsibleUnicode,
represent_unicode,
)
AnsibleDumper.add_representer(
AnsibleUnsafeText,
represent_unicode,
)
AnsibleDumper.add_representer(
HostVars,
represent_hostvars,
)
AnsibleDumper.add_representer(
HostVarsVars,
represent_hostvars,
)
AnsibleDumper.add_representer(
AnsibleSequence,
yaml.representer.SafeRepresenter.represent_list,
)
AnsibleDumper.add_representer(
AnsibleMapping,
yaml.representer.SafeRepresenter.represent_dict,
)
AnsibleDumper.add_representer(
AnsibleVaultEncryptedUnicode,
represent_vault_encrypted_unicode,
)
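
The `add_representer()` calls in `dumper.py` above exist because PyYAML's representer dispatch is keyed on the *exact* type of the value being dumped: a subclass of `str` or `dict` that was never registered falls through to the "cannot represent an object" error seen in this issue's traceback. The following is a stdlib-only toy sketch of that dispatch mechanism (class and handler names are illustrative, not PyYAML APIs):

```python
# Toy illustration of exact-type representer dispatch: handlers are keyed
# by type, and an unregistered subclass falls through to an error -- which
# is why AnsibleDumper must register AnsibleUnicode, HostVars, etc.

class ToyDumper:
    representers = {}  # exact type -> handler, shared registry

    @classmethod
    def add_representer(cls, data_type, handler):
        cls.representers[data_type] = handler

    def represent(self, data):
        # Exact-type lookup, analogous to yaml_representers[type(data)]
        handler = self.representers.get(type(data))
        if handler is None:
            raise TypeError("cannot represent an object: %r" % (data,))
        return handler(self, data)

ToyDumper.add_representer(str, lambda self, s: "!!str %s" % s)

class TaggedText(str):  # stands in for a subclass like AnsibleUnicode
    pass

d = ToyDumper()
print(d.represent("plain"))          # exact type str is registered

try:
    d.represent(TaggedText("oops"))  # subclass is NOT registered -> error
except TypeError as exc:
    print(exc)

# The fix mirrors dumper.py above: register the subclass explicitly.
ToyDumper.add_representer(TaggedText, lambda self, s: "!!str %s" % s)
print(d.represent(TaggedText("ok")))
```

In real PyYAML the lookup is slightly richer (it also walks the MRO for multi-representers), but the exact-type-first behavior is what makes explicit registration of every Ansible wrapper type necessary.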
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,562 |
RepresenterError with looped lookups in YAML callback
|
##### SUMMARY
Loading template files in a loop leads to RepresenterErrors using YAML callback
Note that in the real world, this can manifest itself in the task erroring out, although I can't reproduce that in a more minimal test case as yet - I hope that the below information is enough to eliminate the underlying issue.
Other notes:
* If I remove the trailing `-`, the task succeeds. But I added the trailing dash to counter #49579
* This works fine if I use the default stdout callback
* If there is only one element in the loop, we don't see the problem(!)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/plugins/callback/yaml.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible-playbook 2.9.0b1
config file = /Users/will/tmp/ansible/2.9.0b1/ansible.cfg
configured module search path = [u'/Users/will/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/will/src/opensource/ansible/lib/ansible
executable location = /Users/will/src/opensource/ansible/bin/ansible-playbook
python version = 2.7.16 (default, Sep 2 2019, 11:59:44) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.46.4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_STDOUT_CALLBACK(/Users/will/tmp/ansible/2.9.0b1/ansible.cfg) = yaml
```
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
vars:
kube_resource_manifests_from_files: []
kube_resource_manifest_files:
- a.yml
- b.yml
tasks:
- name: create manifests list
set_fact:
kube_resource_manifests_from_files: >-
{{ kube_resource_manifests_from_files + lookup('template', item)| from_yaml_all | list }}
loop: "{{ kube_resource_manifest_files }}"
```
echo "hello: world" > a.yml
echo "hello: world" > b.yml
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
As per 2.8.5
```
$ ansible-playbook play.yml -vvv -e app=xyz
ansible-playbook 2.8.5
config file = /Users/will/tmp/ansible/2.9.0b1/ansible.cfg
configured module search path = [u'/Users/will/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/will/src/opensource/ansible/lib/ansible
executable location = /Users/will/src/opensource/ansible/bin/ansible-playbook
python version = 2.7.16 (default, Sep 2 2019, 11:59:44) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.46.4)]
Using /Users/will/tmp/ansible/2.9.0b1/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass it's verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAYBOOK: play.yml ****************************************************************************************************************************************************************************************************************************
1 plays in play.yml
PLAY [localhost] ******************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************************************************************************************************************
task path: /Users/will/tmp/ansible/2.9.0b1/play.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: will
<127.0.0.1> EXEC /bin/sh -c 'echo ~will && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/will/.ansible/tmp/ansible-tmp-1568854487.54-231484033082638 `" && echo ansible-tmp-1568854487.54-231484033082638="` echo /Users/will/.ansible/tmp/ansible-tmp-1568854487.54-231484033082638 `" ) && sleep 0'
Using module file /Users/will/src/opensource/ansible/lib/ansible/modules/system/setup.py
<127.0.0.1> PUT /Users/will/.ansible/tmp/ansible-local-66982kKOKoR/tmpZAoBok TO /Users/will/.ansible/tmp/ansible-tmp-1568854487.54-231484033082638/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/will/.ansible/tmp/ansible-tmp-1568854487.54-231484033082638/ /Users/will/.ansible/tmp/ansible-tmp-1568854487.54-231484033082638/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python@2/bin/python2.7 /Users/will/.ansible/tmp/ansible-tmp-1568854487.54-231484033082638/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/will/.ansible/tmp/ansible-tmp-1568854487.54-231484033082638/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
META: ran handlers
TASK [create manifests list] ******************************************************************************************************************************************************************************************************************
task path: /Users/will/tmp/ansible/2.9.0b1/play.yml:10
ok: [localhost] => (item=a.yml) => changed=false
ansible_facts:
kube_resource_manifests_from_files:
- hello: world
ansible_loop_var: item
item: a.yml
ok: [localhost] => (item=b.yml) => changed=false
ansible_facts:
kube_resource_manifests_from_files:
- hello: world
- hello: world
ansible_loop_var: item
item: b.yml
META: ran handlers
META: ran handlers
PLAY RECAP ************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
$ ansible-playbook play.yml -vvv -e app=xyz
ansible-playbook 2.9.0b1
config file = /Users/will/tmp/ansible/2.9.0b1/ansible.cfg
configured module search path = [u'/Users/will/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/will/src/opensource/ansible/lib/ansible
executable location = /Users/will/src/opensource/ansible/bin/ansible-playbook
python version = 2.7.16 (default, Sep 2 2019, 11:59:44) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.46.4)]
Using /Users/will/tmp/ansible/2.9.0b1/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAYBOOK: play.yml ****************************************************************************************************************************************************************************************************************************
1 plays in play.yml
PLAY [localhost] ******************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************************************************************************************************************
task path: /Users/will/tmp/ansible/2.9.0b1/play.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: will
<127.0.0.1> EXEC /bin/sh -c 'echo ~will && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302 `" && echo ansible-tmp-1568854348.48-81455736502302="` echo /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302 `" ) && sleep 0'
Using module file /Users/will/src/opensource/ansible/lib/ansible/modules/system/setup.py
<127.0.0.1> PUT /Users/will/.ansible/tmp/ansible-local-66662BWCIQr/tmp6yajej TO /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302/ /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/opt/python@2/bin/python2.7 /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/will/.ansible/tmp/ansible-tmp-1568854348.48-81455736502302/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
META: ran handlers
TASK [create manifests list] ******************************************************************************************************************************************************************************************************************
task path: /Users/will/tmp/ansible/2.9.0b1/play.yml:10
[WARNING]: Failure using method (v2_runner_item_on_ok) in callback plugin (<ansible.plugins.callback.yaml.CallbackModule object at 0x106873210>): cannot represent an object: world
Callback Exception:
File "/Users/will/src/opensource/ansible/lib/ansible/executor/task_queue_manager.py", line 323, in send_callback
method(*new_args, **kwargs)
File "/Users/will/src/opensource/ansible/lib/ansible/plugins/callback/default.py", line 320, in v2_runner_item_on_ok
msg += " => %s" % self._dump_results(result._result)
File "/Users/will/src/opensource/ansible/lib/ansible/plugins/callback/yaml.py", line 123, in _dump_results
dumped += to_text(yaml.dump(abridged_result, allow_unicode=True, width=1000, Dumper=AnsibleDumper, default_flow_style=False))
File "/usr/local/lib/python2.7/site-packages/yaml/__init__.py", line 202, in dump
return dump_all([data], stream, Dumper=Dumper, **kwds)
File "/usr/local/lib/python2.7/site-packages/yaml/__init__.py", line 190, in dump_all
dumper.represent(data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 28, in represent
node = self.represent_data(data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 57, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 225, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 123, in represent_mapping
node_value = self.represent_data(item_value)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 57, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 225, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 123, in represent_mapping
node_value = self.represent_data(item_value)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 57, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 217, in represent_list
return self.represent_sequence(u'tag:yaml.org,2002:seq', data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 101, in represent_sequence
node_item = self.represent_data(item)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 57, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 225, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 123, in represent_mapping
node_value = self.represent_data(item_value)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 67, in represent_data
node = self.yaml_representers[None](self, data)
File "/usr/local/lib/python2.7/site-packages/yaml/representer.py", line 249, in represent_undefined
raise RepresenterError("cannot represent an object: %s" % data)
ok: [localhost] => (item=b.yml) => changed=false
ansible_facts:
kube_resource_manifests_from_files:
- hello: world
- hello: world
ansible_loop_var: item
item: b.yml
META: ran handlers
META: ran handlers
PLAY RECAP ************************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/62562
|
https://github.com/ansible/ansible/pull/62598
|
c8e220a62e85dc817a17d0e1037c14a7a551cdff
|
4cc4c44dd00aa07d29afc92e9d5bad473d5ca98d
| 2019-09-19T00:58:29Z |
python
| 2019-09-19T18:27:48Z |
test/units/parsing/yaml/test_dumper.py
|
# coding: utf-8
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import io
from units.compat import unittest
from ansible.parsing import vault
from ansible.parsing.yaml import dumper, objects
from ansible.parsing.yaml.loader import AnsibleLoader
from units.mock.yaml_helper import YamlTestUtils
from units.mock.vault_helper import TextVaultSecret
class TestAnsibleDumper(unittest.TestCase, YamlTestUtils):
def setUp(self):
self.vault_password = "hunter42"
vault_secret = TextVaultSecret(self.vault_password)
self.vault_secrets = [('vault_secret', vault_secret)]
self.good_vault = vault.VaultLib(self.vault_secrets)
self.vault = self.good_vault
self.stream = self._build_stream()
self.dumper = dumper.AnsibleDumper
def _build_stream(self, yaml_text=None):
text = yaml_text or u''
stream = io.StringIO(text)
return stream
def _loader(self, stream):
return AnsibleLoader(stream, vault_secrets=self.vault.secrets)
def test(self):
plaintext = 'This is a string we are going to encrypt.'
avu = objects.AnsibleVaultEncryptedUnicode.from_plaintext(plaintext, vault=self.vault,
secret=vault.match_secrets(self.vault_secrets, ['vault_secret'])[0][1])
yaml_out = self._dump_string(avu, dumper=self.dumper)
stream = self._build_stream(yaml_out)
loader = self._loader(stream)
data_from_yaml = loader.get_single_data()
self.assertEqual(plaintext, data_from_yaml.data)
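
The test above follows a generic round-trip pattern: dump a custom object with a serializer that knows how to tag it, feed the serialized text back through a loader, and assert the payload survives. A stdlib-only sketch of the same pattern, using `json` as a stand-in for YAML (all names here are illustrative, not Ansible or PyYAML APIs):

```python
# Round-trip test pattern: custom type -> tagged serialized form -> custom
# type again, with an equality assertion on the payload at the end.
import io
import json

class Secret:
    """Stands in for a vault-encrypted value with a .data payload."""
    def __init__(self, data):
        self.data = data

def dump_secret(obj):
    # Serializer hook: emit a tagged mapping, loosely like a '!vault' scalar.
    if isinstance(obj, Secret):
        return {"!secret": obj.data}
    raise TypeError(type(obj).__name__)

def load_secret(mapping):
    # Loader hook: rebuild the custom type from its tagged form.
    if set(mapping) == {"!secret"}:
        return Secret(mapping["!secret"])
    return mapping

plaintext = "This is a string we are going to round-trip."
stream = io.StringIO()
json.dump(Secret(plaintext), stream, default=dump_secret)

stream.seek(0)
restored = json.load(stream, object_hook=load_secret)
assert restored.data == plaintext
print("round-trip ok")
```

The unit test does the same with `AnsibleDumper` and `AnsibleLoader`: the assertion on `data_from_yaml.data` only passes if both the representer (dump side) and the constructor (load side) agree on the `!vault` tag.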
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,938 |
mongod_replicaset: test fails with Wait for mongod to start responding port={{ item }}
|
##### SUMMARY
Time to time, mongodb fails to start and the functional test fails because of that.
e.g:
- https://app.shippable.com/github/ansible/ansible/runs/142156/73/tests
- https://app.shippable.com/github/ansible/ansible/runs/143055/73/tests
- https://app.shippable.com/github/ansible/ansible/runs/143055/67/tests
```
mongodb_replicaset-v2r_u7pa / /root/.ansible/test/tmp/mongodb_replicaset-nirs74ib-ÅÑŚÌβŁÈ/test/integration/targets/mongodb_replicaset/tasks/mongod_replicaset.yml:41 / [testhost] testhost: mongodb_replicaset : Wait for mongod to start responding port={{ item }}
failure: All items completed
```
```yaml
{
"changed": false,
"msg": "All items completed",
"results": [
{
"ansible_loop_var": "item",
"changed": false,
"elapsed": 300,
"failed": true,
"invocation": {
"module_args": {
"active_connection_states": [
"ESTABLISHED",
"FIN_WAIT1",
"FIN_WAIT2",
"SYN_RECV",
"SYN_SENT",
"TIME_WAIT"
],
"connect_timeout": 5,
"delay": 0,
"exclude_hosts": null,
"host": "127.0.0.1",
"msg": null,
"path": null,
"port": 3001,
"search_regex": null,
"sleep": 1,
"state": "started",
"timeout": 300
}
},
"item": 3001,
"msg": "Timeout when waiting for 127.0.0.1:3001"
},
{
"ansible_loop_var": "item",
"changed": false,
"elapsed": 0,
"failed": false,
"invocation": {
"module_args": {
"active_connection_states": [
"ESTABLISHED",
"FIN_WAIT1",
"FIN_WAIT2",
"SYN_RECV",
"SYN_SENT",
"TIME_WAIT"
],
"connect_timeout": 5,
"delay": 0,
"exclude_hosts": null,
"host": "127.0.0.1",
"msg": null,
"path": null,
"port": 3002,
"search_regex": null,
"sleep": 1,
"state": "started",
"timeout": 300
}
},
"item": 3002,
"match_groupdict": {},
"match_groups": [],
"path": null,
"port": 3002,
"search_regex": null,
"state": "started"
},
{
"ansible_loop_var": "item",
"changed": false,
"elapsed": 0,
"failed": false,
"invocation": {
"module_args": {
"active_connection_states": [
"ESTABLISHED",
"FIN_WAIT1",
"FIN_WAIT2",
"SYN_RECV",
"SYN_SENT",
"TIME_WAIT"
],
"connect_timeout": 5,
"delay": 0,
"exclude_hosts": null,
"host": "127.0.0.1",
"msg": null,
"path": null,
"port": 3003,
"search_regex": null,
"sleep": 1,
"state": "started",
"timeout": 300
}
},
"item": 3003,
"match_groupdict": {},
"match_groups": [],
"path": null,
"port": 3003,
"search_regex": null,
"state": "started"
}
]
}
```
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
mongod_replicaset
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
|
https://github.com/ansible/ansible/issues/61938
|
https://github.com/ansible/ansible/pull/62627
|
85eba9d86084f083b70a808e2c5a4fc28427ab17
|
cee55ab7187df23ffa3c577ab1cbe15334cb38fa
| 2019-09-06T15:40:59Z |
python
| 2019-09-19T21:19:48Z |
test/integration/targets/mongodb_replicaset/tasks/main.yml
|
# test code for the mongodb_replicaset module
# (c) 2019, Rhys Campbell <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# ============================================================
- name: Ensure tests home exists
file:
path: "{{ remote_tmp_dir }}/tests"
state: directory
- include_tasks: mongod_teardown.yml
- set_fact:
current_replicaset: "{{ mongodb_replicaset1 }}"
- include_tasks: mongod_replicaset.yml
# test with yaml list
- name: Create replicaset with module
mongodb_replicaset:
login_user: admin
login_password: secret
login_host: "localhost"
login_port: 3001
login_database: "admin"
replica_set: "{{ mongodb_replicaset1 }}"
heartbeat_timeout_secs: 1
election_timeout_millis: 1000
members:
- "localhost:3001"
- "localhost:3002"
- "localhost:3003"
- name: Ensure is_primary script exists on host
copy:
src: js/is_primary.js
dest: "{{ remote_tmp_dir }}/tests/is_primary.js"
- name: Get replicaset info
command: mongo admin --eval "rs.status()" --port 3001
register: mongo_output
- name: Assert replicaset name is in mongo_output
assert:
that:
- "mongo_output.changed == true"
- "'{{ mongodb_replicaset1 }}' in mongo_output.stdout"
- "'localhost:3001' in mongo_output.stdout"
- "'localhost:3002' in mongo_output.stdout"
- "'localhost:3003' in mongo_output.stdout"
- name: Add mongodb admin user
mongodb_user:
login_host: localhost
login_port: 3001
replica_set: "{{ mongodb_replicaset1 }}"
database: admin
name: "{{ mongodb_admin_user }}"
password: "{{ mongodb_admin_password }}"
roles: ["root"]
state: present
register: mongo_admin_user
when: test_mongo_auth
- name: Murder all mongod processes
shell: pkill -{{ kill_signal }} mongod;
- name: Getting pids for mongod
pids:
name: mongod
register: pids_of_mongod
- name: Wait for all mongod processes to exit
  wait_for:
    path: "/proc/{{ item }}/status"
    state: absent
  with_items: "{{ pids_of_mongod.pids }}"
- set_fact:
current_replicaset: "{{ mongodb_replicaset1 }}"
- set_fact:
mongod_auth: true
- name: Execute mongod script to restart with auth enabled
include_tasks: mongod_replicaset.yml
- name: Validate replicaset previously created
mongodb_replicaset:
login_user: "{{ mongodb_admin_user }}"
login_password: "{{ mongodb_admin_password }}"
login_host: "localhost"
login_port: 3001
login_database: "admin"
replica_set: "{{ mongodb_replicaset1 }}"
election_timeout_millis: 1000
members:
- "localhost:3001"
- "localhost:3002"
- "localhost:3003"
register: mongodb_replicaset
- name: Assert replicaset name has not changed
assert:
that: mongodb_replicaset.changed == False
- name: Test with bad password
mongodb_replicaset:
login_user: "{{ mongodb_admin_user }}"
login_password: XXXXXXXXXXXXXXXX
login_host: "localhost"
login_port: 3001
login_database: "admin"
replica_set: "{{ mongodb_replicaset1 }}"
election_timeout_millis: 1000
members:
- "localhost:3001"
- "localhost:3002"
- "localhost:3003"
register: mongodb_replicaset_bad_pw
ignore_errors: True
- name: Assert login failed
assert:
that:
- "mongodb_replicaset_bad_pw.rc == 1"
- "'Authentication failed' in mongodb_replicaset_bad_pw.module_stderr"
#############################################################
- include_tasks: mongod_teardown.yml
- set_fact:
current_replicaset: "{{ mongodb_replicaset2 }}"
- set_fact:
mongod_auth: false
- name: Execute mongod script to restart with auth disabled
include_tasks: mongod_replicaset.yml
# Test with python style list
- name: Create replicaset with module
mongodb_replicaset:
login_user: admin
login_password: secret
login_host: "localhost"
login_port: 3001
login_database: "admin"
replica_set: "{{ mongodb_replicaset2 }}"
members: [ "localhost:3001", "localhost:3002", "localhost:3003" ]
election_timeout_millis: 1000
heartbeat_timeout_secs: 1
- name: Get replicaset info
command: mongo admin --eval "rs.status()" --port 3001
register: mongo_output
- name: Assert replicaset name is in mongo_output
assert:
that:
- "mongo_output.changed == true"
- "'{{ mongodb_replicaset2 }}' in mongo_output.stdout"
- "'localhost:3001' in mongo_output.stdout"
- "'localhost:3002' in mongo_output.stdout"
- "'localhost:3003' in mongo_output.stdout"
#############################################################
- include_tasks: mongod_teardown.yml
- set_fact:
current_replicaset: "{{ mongodb_replicaset3 }}"
- set_fact:
mongod_auth: false
- name: Launch mongod processes
include_tasks: mongod_replicaset.yml
# Test with csv string
- name: Create replicaset with module
mongodb_replicaset:
login_user: admin
login_password: secret
login_host: "localhost"
login_port: 3001
login_database: "admin"
replica_set: "{{ mongodb_replicaset3 }}"
members: "localhost:3001,localhost:3002,localhost:3003"
election_timeout_millis: 1000
- name: Get replicaset info
command: mongo admin --eval "rs.status()" --port 3001
register: mongo_output
- name: Assert replicaset name is in mongo_output
assert:
that:
- "mongo_output.changed == true"
- "'{{ mongodb_replicaset3 }}' in mongo_output.stdout"
- "'localhost:3001' in mongo_output.stdout"
- "'localhost:3002' in mongo_output.stdout"
- "'localhost:3003' in mongo_output.stdout"
#############################################################
- include_tasks: mongod_teardown.yml
- set_fact:
current_replicaset: "{{ mongodb_replicaset4 }}"
- set_fact:
mongod_auth: false
- name: Launch mongod processes
include_tasks: mongod_replicaset.yml
# Test with arbiter_at_index
- name: Create replicaset with module
mongodb_replicaset:
login_user: admin
login_password: secret
login_host: "localhost"
login_port: 3001
login_database: "admin"
arbiter_at_index: 2
replica_set: "{{ mongodb_replicaset4 }}"
members: "localhost:3001,localhost:3002,localhost:3003"
election_timeout_millis: 1000
- name: Ensure host reaches primary before proceeding 3001
command: mongo admin --port 3001 "{{ remote_tmp_dir }}/tests/is_primary.js"
- name: Get replicaset info
command: mongo admin --eval "rs.status()" --port 3001
register: mongo_output
- name: Assert replicaset name is in mongo_output
assert:
that:
- "mongo_output.changed == true"
- "'{{ mongodb_replicaset4 }}' in mongo_output.stdout"
- "'localhost:3001' in mongo_output.stdout"
- "'localhost:3002' in mongo_output.stdout"
- "'localhost:3003' in mongo_output.stdout"
- "'ARBITER' in mongo_output.stdout"
#############################################################
- include_tasks: mongod_teardown.yml
- set_fact:
current_replicaset: "{{ mongodb_replicaset5 }}"
- set_fact:
mongod_auth: false
- name: Launch mongod processes
include_tasks: mongod_replicaset.yml
# Test with chainingAllowed
- name: Create replicaset with module
mongodb_replicaset:
login_user: admin
login_password: secret
login_host: "localhost"
login_port: 3001
login_database: "admin"
chaining_allowed: no
replica_set: "{{ mongodb_replicaset5 }}"
election_timeout_millis: 1000
members:
- localhost:3001
- localhost:3002
- localhost:3003
- name: Get replicaset info
command: mongo admin --eval "rs.conf()" --port 3001
register: mongo_output
- name: Assert replicaset name is in mongo_output
assert:
that:
- "mongo_output.changed == true"
- "'{{ mongodb_replicaset5 }}' in mongo_output.stdout"
- "'localhost:3001' in mongo_output.stdout"
- "'localhost:3002' in mongo_output.stdout"
- "'localhost:3003' in mongo_output.stdout"
- "'chainingAllowed\" : false,' in mongo_output.stdout"
#############################################################
- include_tasks: mongod_teardown.yml
- set_fact:
current_replicaset: "{{ mongodb_replicaset6 }}"
- set_fact:
mongodb_nodes: [ 3001, 3002, 3003, 3004, 3005]
- set_fact:
mongod_auth: false
- name: Launch mongod processes
include_tasks: mongod_replicaset.yml
# Test with 5 mongod processes
- name: Create replicaset with module
mongodb_replicaset:
login_user: admin
login_password: secret
login_host: "localhost"
login_port: 3001
login_database: "admin"
replica_set: "{{ mongodb_replicaset6 }}"
election_timeout_millis: 1000
members:
- localhost:3001
- localhost:3002
- localhost:3003
- localhost:3004
- localhost:3005
- name: Get replicaset info
command: mongo admin --eval "rs.conf()" --port 3001
register: mongo_output
- name: Assert replicaset name is in mongo_output
assert:
that:
- "mongo_output.changed == true"
- "'{{ mongodb_replicaset6 }}' in mongo_output.stdout"
- "'localhost:3001' in mongo_output.stdout"
- "'localhost:3002' in mongo_output.stdout"
- "'localhost:3003' in mongo_output.stdout"
- "'localhost:3004' in mongo_output.stdout"
- "'localhost:3005' in mongo_output.stdout"
#############################################################
- include_tasks: mongod_teardown.yml
- set_fact:
current_replicaset: "{{ mongodb_replicaset7 }}"
- set_fact:
mongod_auth: false
- set_fact:
mongodb_nodes: [ 3001, 3002, 3003 ]
- name: Launch mongod processes
include_tasks: mongod_replicaset.yml
# Test with electionTimeoutMillis
- name: Create replicaset with module
mongodb_replicaset:
login_user: admin
login_password: secret
login_host: "localhost"
login_port: 3001
login_database: "admin"
election_timeout_millis: 9999
replica_set: "{{ mongodb_replicaset7 }}"
members:
- localhost:3001
- localhost:3002
- localhost:3003
- name: Get replicaset info
command: mongo admin --eval "rs.conf()" --port 3001
register: mongo_output
- name: Assert replicaset name is in mongo_output
assert:
that:
- "mongo_output.changed == true"
- "'{{ mongodb_replicaset7 }}' in mongo_output.stdout"
- "'localhost:3001' in mongo_output.stdout"
- "'localhost:3002' in mongo_output.stdout"
- "'localhost:3003' in mongo_output.stdout"
- "'electionTimeoutMillis\" : 9999,' in mongo_output.stdout"
#############################################################
- include_tasks: mongod_teardown.yml
- set_fact:
current_replicaset: "{{ mongodb_replicaset8 }}"
- name: Launch mongod processes
include_tasks: mongod_replicaset.yml
# Test with protocolVersion plus heartbeatTimeoutSecs (3.x) / electionTimeoutMillis (4.x)
- name: Create replicaset with module protocolVersion 0 (Mongodb 3.0)
mongodb_replicaset:
login_user: admin
login_password: secret
login_host: "localhost"
login_port: 3001
login_database: "admin"
protocol_version: 0
heartbeat_timeout_secs: 9
replica_set: "{{ mongodb_replicaset8 }}"
election_timeout_millis: 1000
members:
- localhost:3001
- localhost:3002
- localhost:3003
  when: mongodb_version.startswith('3')
- name: Create replicaset with module protocolVersion 1 (MongoDB 4.0+)
mongodb_replicaset:
login_user: admin
login_password: secret
login_host: "localhost"
login_port: 3001
login_database: "admin"
protocol_version: 1
election_timeout_millis: 9000
replica_set: "{{ mongodb_replicaset8 }}"
members:
- localhost:3001
- localhost:3002
- localhost:3003
  when: mongodb_version.startswith('4')
- name: Get replicaset info
command: mongo admin --eval "rs.conf()" --port 3001
register: mongo_output
- name: Assert replicaset name is in mongo_output MongoDB 3.0+
assert:
that:
- "mongo_output.changed == true"
- "'{{ mongodb_replicaset8 }}' in mongo_output.stdout"
- "'localhost:3001' in mongo_output.stdout"
- "'localhost:3002' in mongo_output.stdout"
- "'localhost:3003' in mongo_output.stdout"
- "'heartbeatTimeoutSecs\" : 9,' in mongo_output.stdout"
  when: mongodb_version.startswith('3')
- name: Assert replicaset name is in mongo_output MongoDB 4.0+
assert:
that:
- "mongo_output.changed == true"
- "'{{ mongodb_replicaset8 }}' in mongo_output.stdout"
- "'localhost:3001' in mongo_output.stdout"
- "'localhost:3002' in mongo_output.stdout"
- "'localhost:3003' in mongo_output.stdout"
- "'electionTimeoutMillis\" : 9000,' in mongo_output.stdout"
  when: mongodb_version.startswith('4')
# TODO - Re-add this test once we support serverSelectionTimeoutMS / connectTimeoutMS
#- name: Run test with unknown host
# mongodb_replicaset:
# login_user: admin
# login_password: secret
# login_host: "idonotexist"
# login_port: 3001
# login_database: "admin"
# protocol_version: 0
# heartbeat_timeout_secs: 9
# replica_set: "{{ mongodb_replicaset8 }}"
# election_timeout_millis: 1000
# members:
# - idonotexist:3001
# - idonotexist:3002
# - idonotexist:3003
# ignore_errors: True
# register: host_does_not_exist
#- name: Assert that "Name or service not known" is in error
# assert:
# that:
# - "host_does_not_exist.rc == 1"
# - "'Name or service not known' in host_does_not_exist.module_stderr"
# Final clean up to prevent "directory not empty" error
- include_tasks: mongod_teardown.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,938 |
mongod_replicaset: test fails with Wait for mongod to start responding port={{ item }}
|
##### SUMMARY
From time to time, mongod fails to start and the functional test fails as a result.
e.g:
- https://app.shippable.com/github/ansible/ansible/runs/142156/73/tests
- https://app.shippable.com/github/ansible/ansible/runs/143055/73/tests
- https://app.shippable.com/github/ansible/ansible/runs/143055/67/tests
```
mongodb_replicaset-v2r_u7pa / /root/.ansible/test/tmp/mongodb_replicaset-nirs74ib-ÅÑŚÌβŁÈ/test/integration/targets/mongodb_replicaset/tasks/mongod_replicaset.yml:41 / [testhost] testhost: mongodb_replicaset : Wait for mongod to start responding port={{ item }}
failure: All items completed
```
```yaml
{
"changed": false,
"msg": "All items completed",
"results": [
{
"ansible_loop_var": "item",
"changed": false,
"elapsed": 300,
"failed": true,
"invocation": {
"module_args": {
"active_connection_states": [
"ESTABLISHED",
"FIN_WAIT1",
"FIN_WAIT2",
"SYN_RECV",
"SYN_SENT",
"TIME_WAIT"
],
"connect_timeout": 5,
"delay": 0,
"exclude_hosts": null,
"host": "127.0.0.1",
"msg": null,
"path": null,
"port": 3001,
"search_regex": null,
"sleep": 1,
"state": "started",
"timeout": 300
}
},
"item": 3001,
"msg": "Timeout when waiting for 127.0.0.1:3001"
},
{
"ansible_loop_var": "item",
"changed": false,
"elapsed": 0,
"failed": false,
"invocation": {
"module_args": {
"active_connection_states": [
"ESTABLISHED",
"FIN_WAIT1",
"FIN_WAIT2",
"SYN_RECV",
"SYN_SENT",
"TIME_WAIT"
],
"connect_timeout": 5,
"delay": 0,
"exclude_hosts": null,
"host": "127.0.0.1",
"msg": null,
"path": null,
"port": 3002,
"search_regex": null,
"sleep": 1,
"state": "started",
"timeout": 300
}
},
"item": 3002,
"match_groupdict": {},
"match_groups": [],
"path": null,
"port": 3002,
"search_regex": null,
"state": "started"
},
{
"ansible_loop_var": "item",
"changed": false,
"elapsed": 0,
"failed": false,
"invocation": {
"module_args": {
"active_connection_states": [
"ESTABLISHED",
"FIN_WAIT1",
"FIN_WAIT2",
"SYN_RECV",
"SYN_SENT",
"TIME_WAIT"
],
"connect_timeout": 5,
"delay": 0,
"exclude_hosts": null,
"host": "127.0.0.1",
"msg": null,
"path": null,
"port": 3003,
"search_regex": null,
"sleep": 1,
"state": "started",
"timeout": 300
}
},
"item": 3003,
"match_groupdict": {},
"match_groups": [],
"path": null,
"port": 3003,
"search_regex": null,
"state": "started"
}
]
}
```
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
mongod_replicaset
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
|
https://github.com/ansible/ansible/issues/61938
|
https://github.com/ansible/ansible/pull/62627
|
85eba9d86084f083b70a808e2c5a4fc28427ab17
|
cee55ab7187df23ffa3c577ab1cbe15334cb38fa
| 2019-09-06T15:40:59Z |
python
| 2019-09-19T21:19:48Z |
test/integration/targets/mongodb_replicaset/tasks/mongod_replicaset.yml
|
- name: Set mongodb_user user for redhat
set_fact:
mongodb_user: "mongod"
when: ansible_os_family == "RedHat"
- name: Create directories for mongod processes
file:
path: "{{ remote_tmp_dir }}/mongod{{ item }}"
state: directory
owner: "{{ mongodb_user }}"
group: "{{ mongodb_user }}"
mode: 0755
recurse: yes
with_items: "{{ mongodb_nodes }}"
- name: Create keyfile
copy:
dest: "{{ remote_tmp_dir }}/my.key"
content: |
fd2CUrbXBJpB4rt74A6F
owner: "{{ mongodb_user }}"
group: "{{ mongodb_user }}"
mode: 0600
when: mongod_auth == True
- name: Spawn mongod process without auth
command: mongod --shardsvr --smallfiles {{ mongod_storage_engine_opts }} --dbpath mongod{{ item }} --port {{ item }} --replSet {{ current_replicaset }} --logpath mongod{{ item }}/log.log --fork
args:
chdir: "{{ remote_tmp_dir }}"
with_items: "{{ mongodb_nodes | sort }}"
when: mongod_auth == False
- name: Spawn mongod process with auth
command: mongod --shardsvr --smallfiles {{ mongod_storage_engine_opts }} --dbpath mongod{{ item }} --port {{ item }} --replSet {{ current_replicaset }} --logpath mongod{{ item }}/log.log --fork --auth --keyFile my.key
args:
chdir: "{{ remote_tmp_dir }}"
with_items: "{{ mongodb_nodes | sort }}"
when: mongod_auth == True
ignore_errors: yes
- name: Wait for mongod to start responding
wait_for:
port: "{{ item }}"
with_items: "{{ mongodb_nodes }}"
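The flaky wait described in the linked issue is the task above. A hedged mitigation sketch — not the merged fix from the linked PR; the `delay` and `timeout` values here are assumptions — would give each forked mongod more room before the port probe fails:

```yaml
# Sketch only: the delay/timeout values below are assumptions, not the
# values from the actual fix.
- name: Wait for mongod to start responding (more tolerant sketch)
  wait_for:
    port: "{{ item }}"
    delay: 3        # pause before the first probe so the forked process can settle
    timeout: 600    # double the default 300s that timed out in CI
  with_items: "{{ mongodb_nodes }}"
```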
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,521 |
win_package path doesn't accept special characters
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
win_package doesn't accept special characters such as '=' and '[]' in the path. The Test-Path calls need to use -LiteralPath instead of -Path so that '[' and ']' are not interpreted as wildcard characters. The EXE cannot be renamed because the install process depends on the filename; it is an off-the-shelf product we can't modify.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_package
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
Windows Server 2016, Windows Server 2019
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Install an exe with characters such as '=' and '[]'.
- name: Install package
win_package:
path: 'C:\setupdownloader_[aHR0cHM6Ly9iaXQtZGVmBjL2luc3RhbGxlci54bWw-bGFuZz1lbi1VUw==].exe'
state: present
log_path: '{{ DownloadFolder }}\package_install.log'
##### EXPECTED RESULTS
Package should install.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
{
"_ansible_no_log": false,
"msg": "the file at the local path \"C:\\setupdownloader_[aHR0cHM6Ly9iaXQtZGVmBjL2luc3RhbGxlci54bWw-bGFuZz1lbi1VUw==].exe\" cannot be reached",
"changed": false,
"reboot_required": false
}
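PowerShell's `-Path` parameter treats `[` and `]` as wildcard metacharacters, which is why `Test-Path -Path` misses the file while `-LiteralPath` would find it. Until the module switches to `-LiteralPath`, one untested workaround sketch is to launch the installer directly and bypass the module's path check (the `/S` silent switch is an assumption about this particular installer):

```yaml
# Untested workaround sketch: win_command does no wildcard expansion on the
# path, so the [ ] characters are passed through literally. The /S switch is
# an assumed silent-install flag, not confirmed for this installer.
- name: Install package directly (workaround until win_package uses -LiteralPath)
  win_command: '"C:\setupdownloader_[aHR0cHM6Ly9iaXQtZGVmBjL2luc3RhbGxlci54bWw-bGFuZz1lbi1VUw==].exe" /S'
```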
|
https://github.com/ansible/ansible/issues/62521
|
https://github.com/ansible/ansible/pull/62626
|
2a206f0e4c8cfdb431bd2b5f98992cb839e4a975
|
153a322f54d2398973286bc6f86defeb101b57ae
| 2019-09-18T14:53:49Z |
python
| 2019-09-20T03:25:47Z |
lib/ansible/modules/windows/win_package.ps1
|
#!powershell
# Copyright: (c) 2014, Trond Hindenes <[email protected]>, and others
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#Requires -Module Ansible.ModuleUtils.Legacy
#Requires -Module Ansible.ModuleUtils.CommandUtil
#Requires -Module Ansible.ModuleUtils.ArgvParser
$ErrorActionPreference = 'Stop'
$params = Parse-Args -arguments $args -supports_check_mode $true
$check_mode = Get-AnsibleParam -obj $params -name "_ansible_check_mode" -type "bool" -default $false
$arguments = Get-AnsibleParam -obj $params -name "arguments"
$expected_return_code = Get-AnsibleParam -obj $params -name "expected_return_code" -type "list" -default @(0, 3010)
$path = Get-AnsibleParam -obj $params -name "path" -type "str"
$chdir = Get-AnsibleParam -obj $params -name "chdir" -type "path"
$product_id = Get-AnsibleParam -obj $params -name "product_id" -type "str" -aliases "productid"
$state = Get-AnsibleParam -obj $params -name "state" -type "str" -default "present" -validateset "absent","present" -aliases "ensure"
$username = Get-AnsibleParam -obj $params -name "username" -type "str" -aliases "user_name"
$password = Get-AnsibleParam -obj $params -name "password" -type "str" -failifempty ($null -ne $username) -aliases "user_password"
$validate_certs = Get-AnsibleParam -obj $params -name "validate_certs" -type "bool" -default $true
$creates_path = Get-AnsibleParam -obj $params -name "creates_path" -type "path"
$creates_version = Get-AnsibleParam -obj $params -name "creates_version" -type "str"
$creates_service = Get-AnsibleParam -obj $params -name "creates_service" -type "str"
$log_path = Get-AnsibleParam -obj $params -name "log_path" -type "path"
$result = @{
changed = $false
reboot_required = $false
}
if ($null -ne $arguments) {
# convert a list to a string and escape the values
if ($arguments -is [array]) {
$arguments = Argv-ToString -arguments $arguments
}
}
if (-not $validate_certs) {
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
}
# Enable TLS1.1/TLS1.2 if they're available but disabled (eg. .NET 4.5)
$security_protocols = [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::SystemDefault
if ([Net.SecurityProtocolType].GetMember("Tls11").Count -gt 0) {
    $security_protocols = $security_protocols -bor [Net.SecurityProtocolType]::Tls11
}
if ([Net.SecurityProtocolType].GetMember("Tls12").Count -gt 0) {
    $security_protocols = $security_protocols -bor [Net.SecurityProtocolType]::Tls12
}
[Net.ServicePointManager]::SecurityProtocol = $security_protocols
$credential = $null
if ($null -ne $username) {
$sec_user_password = ConvertTo-SecureString -String $password -AsPlainText -Force
$credential = New-Object -TypeName PSCredential -ArgumentList $username, $sec_user_password
}
$valid_return_codes = @()
foreach ($rc in ($expected_return_code)) {
try {
$int_rc = [Int32]::Parse($rc)
$valid_return_codes += $int_rc
} catch {
Fail-Json -obj $result -message "failed to parse expected return code $rc as an integer"
}
}
if ($null -eq $path) {
if (-not ($state -eq "absent" -and $null -ne $product_id)) {
Fail-Json -obj $result -message "path can only be null when state=absent and product_id is not null"
}
}
if ($null -ne $creates_version -and $null -eq $creates_path) {
Fail-Json -obj $result -Message "creates_path must be set when creates_version is set"
}
$msi_tools = @"
using System;
using System.Runtime.InteropServices;
using System.Text;
namespace Ansible {
public static class MsiTools {
[DllImport("msi.dll", CharSet = CharSet.Unicode, PreserveSig = true, SetLastError = true, ExactSpelling = true)]
private static extern UInt32 MsiOpenPackageW(string szPackagePath, out IntPtr hProduct);
[DllImport("msi.dll", CharSet = CharSet.Unicode, PreserveSig = true, SetLastError = true, ExactSpelling = true)]
private static extern uint MsiCloseHandle(IntPtr hAny);
[DllImport("msi.dll", CharSet = CharSet.Unicode, PreserveSig = true, SetLastError = true, ExactSpelling = true)]
private static extern uint MsiGetPropertyW(IntPtr hAny, string name, StringBuilder buffer, ref int bufferLength);
public static string GetPackageProperty(string msi, string property) {
IntPtr MsiHandle = IntPtr.Zero;
try {
uint res = MsiOpenPackageW(msi, out MsiHandle);
if (res != 0)
return null;
int length = 256;
var buffer = new StringBuilder(length);
res = MsiGetPropertyW(MsiHandle, property, buffer, ref length);
return buffer.ToString();
} finally {
if (MsiHandle != IntPtr.Zero)
MsiCloseHandle(MsiHandle);
}
}
}
}
"@
Add-Type -TypeDefinition @"
public enum LocationType {
Empty,
Local,
Unc,
Http
}
"@
Function Download-File($url, $path) {
$web_client = New-Object -TypeName System.Net.WebClient
try {
$web_client.DownloadFile($url, $path)
} catch {
Fail-Json -obj $result -message "failed to download $url to $($path): $($_.Exception.Message)"
}
}
Function Test-RegistryProperty($path, $name) {
# will validate if the registry key contains the property, returns true
# if the property exists and false if the property does not
try {
$value = (Get-Item -Path $path).GetValue($name)
# need to do it this way return ($null -eq $value) does not work
if ($null -eq $value) {
return $false
} else {
return $true
}
} catch [System.Management.Automation.ItemNotFoundException] {
# key didn't exist so the property mustn't
return $false
}
}
Function Get-ProgramMetadata($state, $path, $product_id, [PSCredential]$credential, $creates_path, $creates_version, $creates_service) {
# will get some metadata about the program we are trying to install or remove
$metadata = @{
installed = $false
product_id = $null
location_type = $null
msi = $false
uninstall_string = $null
path_error = $null
}
# set the location type and validate the path
if ($null -ne $path) {
if ($path.EndsWith(".msi", [System.StringComparison]::CurrentCultureIgnoreCase)) {
$metadata.msi = $true
} else {
$metadata.msi = $false
}
if ($path.StartsWith("http")) {
$metadata.location_type = [LocationType]::Http
try {
Invoke-WebRequest -Uri $path -DisableKeepAlive -UseBasicParsing -Method HEAD | Out-Null
} catch {
$metadata.path_error = "the file at the URL $path cannot be reached: $($_.Exception.Message)"
}
} elseif ($path.StartsWith("/") -or $path.StartsWith("\\")) {
$metadata.location_type = [LocationType]::Unc
if ($null -ne $credential) {
# Test-Path doesn't support supplying -Credentials, need to create PSDrive before testing
$file_path = Split-Path -Path $path
$file_name = Split-Path -Path $path -Leaf
try {
New-PSDrive -Name win_package -PSProvider FileSystem -Root $file_path -Credential $credential -Scope Script
} catch {
Fail-Json -obj $result -message "failed to connect network drive with credentials: $($_.Exception.Message)"
}
$test_path = "win_package:\$file_name"
} else {
            # Assume the auth transport supports credential delegation; the Test-Path below will fail otherwise
$test_path = $path
}
$valid_path = Test-Path -Path $test_path -PathType Leaf
if ($valid_path -ne $true) {
$metadata.path_error = "the file at the UNC path $path cannot be reached, ensure the user_name account has access to this path or use an auth transport with credential delegation"
}
} else {
$metadata.location_type = [LocationType]::Local
$valid_path = Test-Path -Path $path -PathType Leaf
if ($valid_path -ne $true) {
$metadata.path_error = "the file at the local path $path cannot be reached"
}
}
} else {
# should only occur when state=absent and product_id is not null, we can get the uninstall string from the reg value
$metadata.location_type = [LocationType]::Empty
}
# try and get the product id
if ($null -ne $product_id) {
$metadata.product_id = $product_id
} else {
# we can get the product_id if the path is an msi and is either a local file or unc file with credential delegation
if (($metadata.msi -eq $true) -and (($metadata.location_type -eq [LocationType]::Local) -or ($metadata.location_type -eq [LocationType]::Unc -and $null -eq $credential))) {
Add-Type -TypeDefinition $msi_tools
try {
$metadata.product_id = [Ansible.MsiTools]::GetPackageProperty($path, "ProductCode")
} catch {
Fail-Json -obj $result -message "failed to get product_id from MSI at $($path): $($_.Exception.Message)"
}
} elseif ($null -eq $creates_path -and $null -eq $creates_service) {
# we need to fail without the product id at this point
Fail-Json $result "product_id is required when the path is not an MSI or the path is an MSI but not local"
}
}
if ($null -ne $metadata.product_id) {
$uninstall_key = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\$($metadata.product_id)"
$uninstall_key_wow64 = "HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\$($metadata.product_id)"
if (Test-Path -Path $uninstall_key) {
$metadata.installed = $true
} elseif (Test-Path -Path $uninstall_key_wow64) {
$metadata.installed = $true
$uninstall_key = $uninstall_key_wow64
}
# if the reg key exists, try and get the uninstall string and check if it is an MSI
if ($metadata.installed -eq $true -and $metadata.location_type -eq [LocationType]::Empty) {
if (Test-RegistryProperty -path $uninstall_key -name "UninstallString") {
$metadata.uninstall_string = (Get-ItemProperty -Path $uninstall_key -Name "UninstallString").UninstallString
if ($metadata.uninstall_string.StartsWith("MsiExec")) {
$metadata.msi = $true
}
}
}
}
# use the creates_* to determine if the program is installed
if ($null -ne $creates_path) {
$path_exists = Test-Path -Path $creates_path
$metadata.installed = $path_exists
if ($null -ne $creates_version -and $path_exists -eq $true) {
if (Test-Path -Path $creates_path -PathType Leaf) {
$existing_version = [System.Diagnostics.FileVersionInfo]::GetVersionInfo($creates_path).FileVersion
$version_matched = $creates_version -eq $existing_version
$metadata.installed = $version_matched
} else {
Fail-Json -obj $result -message "creates_path must be a file not a directory when creates_version is set"
}
}
}
if ($null -ne $creates_service) {
$existing_service = Get-Service -Name $creates_service -ErrorAction SilentlyContinue
$service_exists = $null -ne $existing_service
$metadata.installed = $service_exists
}
# finally throw error if path is not valid unless we want to uninstall the package and it already is
if ($null -ne $metadata.path_error -and (-not ($state -eq "absent" -and $metadata.installed -eq $false))) {
Fail-Json -obj $result -message $metadata.path_error
}
return $metadata
}
Function Convert-Encoding($string) {
# this will attempt to detect UTF-16 encoding and convert to UTF-8 for
# processes like msiexec
$bytes = ([System.Text.Encoding]::Default).GetBytes($string)
$is_utf16 = $true
for ($i = 0; $i -lt $bytes.Count; $i = $i + 2) {
$char = $bytes[$i + 1]
if ($char -ne [byte]0) {
$is_utf16 = $false
break
}
}
if ($is_utf16 -eq $true) {
return ([System.Text.Encoding]::Unicode).GetString($bytes)
} else {
return $string
}
}
$program_metadata = Get-ProgramMetadata -state $state -path $path -product_id $product_id -credential $credential -creates_path $creates_path -creates_version $creates_version -creates_service $creates_service
if ($state -eq "absent") {
if ($program_metadata.installed -eq $true) {
# artifacts we create that must be cleaned up
$cleanup_artifacts = @()
try {
# If path is on a network and we specify credentials or path is a
# URL and not an MSI we need to get a temp local copy
if ($program_metadata.location_type -eq [LocationType]::Unc -and $null -ne $credential) {
$file_name = Split-Path -Path $path -Leaf
$local_path = [System.IO.Path]::GetRandomFileName()
Copy-Item -Path "win_package:\$file_name" -Destination $local_path -WhatIf:$check_mode
$cleanup_artifacts += $local_path
} elseif ($program_metadata.location_type -eq [LocationType]::Http -and $program_metadata.msi -ne $true) {
$local_path = [System.IO.Path]::GetRandomFileName()
if (-not $check_mode) {
Download-File -url $path -path $local_path
}
$cleanup_artifacts += $local_path
} elseif ($program_metadata.location_type -eq [LocationType]::Empty -and $program_metadata.msi -ne $true) {
# TODO validate the uninstall_string to see if there are extra args in there
$local_path = $program_metadata.uninstall_string
} else {
$local_path = $path
}
if ($program_metadata.msi -eq $true) {
# we are uninstalling an msi
if ( -Not $log_path ) {
$temp_path = [System.IO.Path]::GetTempPath()
$log_file = [System.IO.Path]::GetRandomFileName()
$log_path = Join-Path -Path $temp_path -ChildPath $log_file
$cleanup_artifacts += $log_path
}
if ($null -ne $program_metadata.product_id) {
$id = $program_metadata.product_id
} else {
$id = $local_path
}
$uninstall_arguments = @("$env:windir\system32\msiexec.exe", "/x", $id, "/L*V", $log_path, "/qn", "/norestart")
} else {
$log_path = $null
$uninstall_arguments = @($local_path)
}
if (-not $check_mode) {
$command_args = @{
command = Argv-ToString -arguments $uninstall_arguments
}
if ($null -ne $arguments) {
$command_args['command'] += " $arguments"
}
if ($chdir) {
$command_args['working_directory'] = $chdir
}
try {
$process_result = Run-Command @command_args
} catch {
Fail-Json -obj $result -message "failed to run uninstall process ($($command_args['command'])): $($_.Exception.Message)"
}
if (($null -ne $log_path) -and (Test-Path -Path $log_path)) {
$log_content = Get-Content -Path $log_path | Out-String
} else {
$log_content = $null
}
$result.rc = $process_result.rc
if ($valid_return_codes -notcontains $process_result.rc) {
$result.stdout = Convert-Encoding -string $process_result.stdout
$result.stderr = Convert-Encoding -string $process_result.stderr
if ($null -ne $log_content) {
$result.log = $log_content
}
Fail-Json -obj $result -message "unexpected rc from uninstall $uninstall_exe $($uninstall_arguments): see rc, stdout and stderr for more details"
} else {
$result.failed = $false
}
if ($process_result.rc -eq 3010) {
$result.reboot_required = $true
}
}
} finally {
# make sure we cleanup any remaining artifacts
foreach ($cleanup_artifact in $cleanup_artifacts) {
if (Test-Path -Path $cleanup_artifact) {
Remove-Item -Path $cleanup_artifact -Recurse -Force -WhatIf:$check_mode
}
}
}
$result.changed = $true
}
} else {
if ($program_metadata.installed -eq $false) {
# artifacts we create that must be cleaned up
$cleanup_artifacts = @()
try {
# If path is on a network and we specify credentials or path is a
# URL and not an MSI we need to get a temp local copy
if ($program_metadata.location_type -eq [LocationType]::Unc -and $null -ne $credential) {
$file_name = Split-Path -Path $path -Leaf
$local_path = [System.IO.Path]::GetRandomFileName()
Copy-Item -Path "win_package:\$file_name" -Destination $local_path -WhatIf:$check_mode
$cleanup_artifacts += $local_path
} elseif ($program_metadata.location_type -eq [LocationType]::Http -and $program_metadata.msi -ne $true) {
$local_path = [System.IO.Path]::GetRandomFileName()
if (-not $check_mode) {
Download-File -url $path -path $local_path
}
$cleanup_artifacts += $local_path
} else {
$local_path = $path
}
if ($program_metadata.msi -eq $true) {
# we are installing an msi
if ( -Not $log_path ) {
$temp_path = [System.IO.Path]::GetTempPath()
$log_file = [System.IO.Path]::GetRandomFileName()
$log_path = Join-Path -Path $temp_path -ChildPath $log_file
$cleanup_artifacts += $log_path
}
$install_arguments = @("$env:windir\system32\msiexec.exe", "/i", $local_path, "/L*V", $log_path, "/qn", "/norestart")
} else {
$log_path = $null
$install_arguments = @($local_path)
}
if (-not $check_mode) {
$command_args = @{
command = Argv-ToString -arguments $install_arguments
}
if ($null -ne $arguments) {
$command_args['command'] += " $arguments"
}
if ($chdir) {
$command_args['working_directory'] = $chdir
}
try {
$process_result = Run-Command @command_args
} catch {
Fail-Json -obj $result -message "failed to run install process ($($command_args['command'])): $($_.Exception.Message)"
}
if (($null -ne $log_path) -and (Test-Path -Path $log_path)) {
$log_content = Get-Content -Path $log_path | Out-String
} else {
$log_content = $null
}
$result.rc = $process_result.rc
if ($valid_return_codes -notcontains $process_result.rc) {
$result.stdout = Convert-Encoding -string $process_result.stdout
$result.stderr = Convert-Encoding -string $process_result.stderr
if ($null -ne $log_content) {
$result.log = $log_content
}
Fail-Json -obj $result -message "unexpected rc from install $install_exe $($install_arguments): see rc, stdout and stderr for more details"
} else {
$result.failed = $false
}
if ($process_result.rc -eq 3010) {
$result.reboot_required = $true
}
}
} finally {
# make sure we cleanup any remaining artifacts
foreach ($cleanup_artifact in $cleanup_artifacts) {
if (Test-Path -Path $cleanup_artifact) {
Remove-Item -Path $cleanup_artifact -Recurse -Force -WhatIf:$check_mode
}
}
}
$result.changed = $true
}
}
Exit-Json -obj $result
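The script above wraps both the install and uninstall paths in a try/finally block that tracks every temporary artifact it creates (downloaded packages, MSI logs) and deletes them even when the step fails. The same pattern can be sketched in Python; the helper and its names are illustrative only, not part of any module:

```python
import os
import tempfile

def run_with_cleanup(run_installer):
    """Create a temporary artifact, run the step, and always clean up
    afterwards (mirrors the $cleanup_artifacts pattern above)."""
    cleanup_artifacts = []
    try:
        # stand-in for the downloaded package / generated log file
        fd, local_path = tempfile.mkstemp()
        os.close(fd)
        cleanup_artifacts.append(local_path)
        return run_installer(local_path)
    finally:
        # make sure we remove any remaining artifacts, even on failure
        for artifact in cleanup_artifacts:
            if os.path.exists(artifact):
                os.remove(artifact)
```

The key design choice is that the cleanup list is built up as artifacts are created, so a failure halfway through still removes everything created so far.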
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,564 |
azure_rm_virtualmachinescalesetinstance_facts should return computer name
|
##### SUMMARY
Each VM instance in an Azure VM Scale Set (VMSS) has a computer name (hostname) which is distinct from the instance name. Currently azure_rm_virtualmachinescalesetinstance_facts returns the instance name but not the computer name. It would be extremely helpful if the module could return the computer name as well.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
azure_rm_virtualmachinescalesetinstance_facts
##### ADDITIONAL INFORMATION
We are using this module in [Muchos](https://github.com/apache/fluo-muchos). If azure_rm_virtualmachinescalesetinstance_facts returned the computer name as well, it would help us populate /etc/hosts and other related files much more seamlessly.
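One way the requested field could be surfaced is by reading each instance's OS profile. A minimal sketch, assuming the VM's `as_dict()` output exposes an `os_profile` dict with a `computer_name` key (the exact SDK field layout is an assumption here):

```python
def extract_computer_name(instance_dict):
    """Pull the OS-level computer name (hostname) out of a scale set VM
    instance dict, returning None when no OS profile is present."""
    os_profile = instance_dict.get('os_profile') or {}
    return os_profile.get('computer_name')
```

For example, `extract_computer_name({'name': 'myVMSS_2', 'os_profile': {'computer_name': 'myvmss000002'}})` yields the hostname rather than the instance name.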
|
https://github.com/ansible/ansible/issues/62564
|
https://github.com/ansible/ansible/pull/62566
|
153a322f54d2398973286bc6f86defeb101b57ae
|
41bfd2bf0e7235e757af9377572798151e90ae44
| 2019-09-19T01:45:08Z |
python
| 2019-09-20T04:16:58Z |
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetinstance_info.py
|
#!/usr/bin/python
#
# Copyright (c) 2019 Zim Kalinowski, <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: azure_rm_virtualmachinescalesetinstance_info
version_added: "2.9"
short_description: Get Azure Virtual Machine Scale Set Instance facts
description:
- Get facts of Azure Virtual Machine Scale Set VMs.
options:
resource_group:
description:
- The name of the resource group.
required: True
vmss_name:
description:
- The name of the VM scale set.
required: True
instance_id:
description:
- The instance ID of the virtual machine.
tags:
description:
- Limit results by providing a list of tags. Format tags as 'key' or 'key:value'.
extends_documentation_fragment:
- azure
author:
- Zim Kalinowski (@zikalino)
'''
EXAMPLES = '''
- name: List VM instances in Virtual Machine ScaleSet
azure_rm_virtualmachinescalesetinstance_info:
resource_group: myResourceGroup
vmss_name: myVMSS
'''
RETURN = '''
instances:
description:
- A list of dictionaries containing facts for Virtual Machine Scale Set VM.
returned: always
type: complex
contains:
id:
description:
- Resource ID.
returned: always
type: str
sample: "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachineScaleSets/my
VMSS/virtualMachines/2"
tags:
description:
- Resource tags.
returned: always
type: dict
sample: { 'tag1': 'abc' }
instance_id:
description:
- Virtual Machine instance ID.
returned: always
type: str
sample: 0
name:
description:
- Virtual Machine name.
returned: always
type: str
sample: myVMSS_2
latest_model:
description:
- Whether applied latest model.
returned: always
type: bool
sample: True
provisioning_state:
description:
- Provisioning state of the Virtual Machine.
returned: always
type: str
sample: Succeeded
power_state:
description:
- Power state of the Virtual Machine.
returned: always
type: str
sample: running
vm_id:
description:
- Virtual Machine ID.
returned: always
type: str
sample: 94a141a9-4530-46ac-b151-2c7ff09aa823
'''
from ansible.module_utils.azure_rm_common import AzureRMModuleBase
try:
from msrestazure.azure_exceptions import CloudError
from azure.mgmt.compute import ComputeManagementClient
from msrest.serialization import Model
except ImportError:
# This is handled in azure_rm_common
pass
class AzureRMVirtualMachineScaleSetVMInfo(AzureRMModuleBase):
def __init__(self):
# define user inputs into argument
self.module_arg_spec = dict(
resource_group=dict(
type='str',
required=True
),
vmss_name=dict(
type='str',
required=True
),
instance_id=dict(
type='str'
),
tags=dict(
type='list'
)
)
# store the results of the module operation
self.results = dict(
changed=False
)
self.mgmt_client = None
self.resource_group = None
self.vmss_name = None
self.instance_id = None
self.tags = None
super(AzureRMVirtualMachineScaleSetVMInfo, self).__init__(self.module_arg_spec, supports_tags=False)
def exec_module(self, **kwargs):
is_old_facts = self.module._name == 'azure_rm_virtualmachinescalesetinstance_facts'
if is_old_facts:
self.module.deprecate("The 'azure_rm_virtualmachinescalesetinstance_facts' module has been renamed to" +
" 'azure_rm_virtualmachinescalesetinstance_info'",
version='2.13')
for key in self.module_arg_spec:
setattr(self, key, kwargs[key])
self.mgmt_client = self.get_mgmt_svc_client(ComputeManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager)
if (self.instance_id is None):
self.results['instances'] = self.list()
else:
self.results['instances'] = self.get()
return self.results
def get(self):
response = None
results = []
try:
response = self.mgmt_client.virtual_machine_scale_set_vms.get(resource_group_name=self.resource_group,
vm_scale_set_name=self.vmss_name,
instance_id=self.instance_id)
self.log("Response : {0}".format(response))
except CloudError as e:
self.log('Could not get facts for Virtual Machine Scale Set VM.')
if response and self.has_tags(response.tags, self.tags):
results.append(self.format_response(response))
return results
def list(self):
items = None
try:
items = self.mgmt_client.virtual_machine_scale_set_vms.list(resource_group_name=self.resource_group,
virtual_machine_scale_set_name=self.vmss_name)
self.log("Response : {0}".format(items))
except CloudError as e:
self.log('Could not get facts for Virtual Machine ScaleSet VM.')
results = []
for item in items or []:
if self.has_tags(item.tags, self.tags):
results.append(self.format_response(item))
return results
def format_response(self, item):
d = item.as_dict()
iv = self.mgmt_client.virtual_machine_scale_set_vms.get_instance_view(resource_group_name=self.resource_group,
vm_scale_set_name=self.vmss_name,
instance_id=d.get('instance_id', None)).as_dict()
power_state = ""
for index in range(len(iv['statuses'])):
code = iv['statuses'][index]['code'].split('/')
if code[0] == 'PowerState':
power_state = code[1]
break
d = {
'resource_group': self.resource_group,
'id': d.get('id', None),
'tags': d.get('tags', None),
'instance_id': d.get('instance_id', None),
'latest_model': d.get('latest_model_applied', None),
'name': d.get('name', None),
'provisioning_state': d.get('provisioning_state', None),
'power_state': power_state,
'vm_id': d.get('vm_id', None)
}
return d
def main():
AzureRMVirtualMachineScaleSetVMInfo()
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,964 |
Ansible crashes when `--limit` expression includes superfluous comma
|
##### SUMMARY
Running ansible with `--limit a,` or `--limit a,,b` or `--limit ,` leads to (when `-vvv` is added)
```
ERROR! Unexpected Exception, this is probably a bug: string index out of range
the full traceback was:
Traceback (most recent call last):
File ".../ansible/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File ".../ansible/lib/ansible/cli/playbook.py", line 116, in run
CLI.get_host_list(inventory, context.CLIARGS['subset'])
File ".../ansible/lib/ansible/cli/__init__.py", line 487, in get_host_list
inventory.subset(subset)
File ".../ansible/lib/ansible/inventory/manager.py", line 614, in subset
if x[0] == "@":
IndexError: string index out of range
ERROR! Unexpected Exception, this is probably a bug: string index out of range
to see the full traceback, use -vvv
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/inventory/manager.py
##### ANSIBLE VERSION
```
2.9.0
```
Also happens with `devel`, and potentially older ansible versions.
##### STEPS TO REPRODUCE
Run `ansible-playbook playbook.yml --limit a,` for an existing playbook `playbook.yml` (doesn't matter what it actually does, and also does not matter whether you use `a` or a real host name). Same result with `--limit a,,b` or similar expressions where `.split(',')` will lead to an empty string in the resulting list.
##### EXPECTED RESULTS
Ansible complaining about syntax, or simply ignoring the trailing comma / empty parts of the `.split(',')`.
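The expected behaviour described above — simply ignoring the trailing comma and the empty parts of the `.split(',')` — can be sketched as a small sanitizing helper (an illustration of the idea, not the actual patch):

```python
def sanitize_limit(limit):
    """Split a --limit expression on commas and drop empty fragments, so
    'a,', 'a,,b' and ',' can no longer produce an empty pattern that
    later crashes on an x[0] index."""
    return [p.strip() for p in limit.split(',') if p.strip()]
```

With this, `sanitize_limit('a,')` returns `['a']` and `sanitize_limit(',')` returns an empty list instead of `['', '']`.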
##### ACTUAL RESULTS
```
ERROR! Unexpected Exception, this is probably a bug: string index out of range
to see the full traceback, use -vvv
```
|
https://github.com/ansible/ansible/issues/61964
|
https://github.com/ansible/ansible/pull/62442
|
8d0c193b251051e459513614e6f7212ecf9f0922
|
987265a6ef920466b44625097ff85f3e845dc8ea
| 2019-09-07T19:59:24Z |
python
| 2019-09-20T20:03:51Z |
changelogs/fragments/split-host-pattern-empty-strings.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,964 |
Ansible crashes when `--limit` expression includes superfluous comma
|
##### SUMMARY
Running ansible with `--limit a,` or `--limit a,,b` or `--limit ,` leads to (when `-vvv` is added)
```
ERROR! Unexpected Exception, this is probably a bug: string index out of range
the full traceback was:
Traceback (most recent call last):
File ".../ansible/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File ".../ansible/lib/ansible/cli/playbook.py", line 116, in run
CLI.get_host_list(inventory, context.CLIARGS['subset'])
File ".../ansible/lib/ansible/cli/__init__.py", line 487, in get_host_list
inventory.subset(subset)
File ".../ansible/lib/ansible/inventory/manager.py", line 614, in subset
if x[0] == "@":
IndexError: string index out of range
ERROR! Unexpected Exception, this is probably a bug: string index out of range
to see the full traceback, use -vvv
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/inventory/manager.py
##### ANSIBLE VERSION
```
2.9.0
```
Also happens with `devel`, and potentially older ansible versions.
##### STEPS TO REPRODUCE
Run `ansible-playbook playbook.yml --limit a,` for an existing playbook `playbook.yml` (doesn't matter what it actually does, and also does not matter whether you use `a` or a real host name). Same result with `--limit a,,b` or similar expressions where `.split(',')` will lead to an empty string in the resulting list.
##### EXPECTED RESULTS
Ansible complaining about syntax, or simply ignoring the trailing comma / empty parts of the `.split(',')`.
##### ACTUAL RESULTS
```
ERROR! Unexpected Exception, this is probably a bug: string index out of range
to see the full traceback, use -vvv
```
|
https://github.com/ansible/ansible/issues/61964
|
https://github.com/ansible/ansible/pull/62442
|
8d0c193b251051e459513614e6f7212ecf9f0922
|
987265a6ef920466b44625097ff85f3e845dc8ea
| 2019-09-07T19:59:24Z |
python
| 2019-09-20T20:03:51Z |
lib/ansible/inventory/manager.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#############################################
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import fnmatch
import os
import sys
import re
import itertools
import traceback
from operator import attrgetter
from random import shuffle
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleOptionsError, AnsibleParserError
from ansible.inventory.data import InventoryData
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_bytes, to_text
from ansible.parsing.utils.addresses import parse_address
from ansible.plugins.loader import inventory_loader
from ansible.utils.helpers import deduplicate_list
from ansible.utils.path import unfrackpath
from ansible.utils.display import Display
display = Display()
IGNORED_ALWAYS = [br"^\.", b"^host_vars$", b"^group_vars$", b"^vars_plugins$"]
IGNORED_PATTERNS = [to_bytes(x) for x in C.INVENTORY_IGNORE_PATTERNS]
IGNORED_EXTS = [b'%s$' % to_bytes(re.escape(x)) for x in C.INVENTORY_IGNORE_EXTS]
IGNORED = re.compile(b'|'.join(IGNORED_ALWAYS + IGNORED_PATTERNS + IGNORED_EXTS))
PATTERN_WITH_SUBSCRIPT = re.compile(
r'''^
(.+) # A pattern expression ending with...
\[(?: # A [subscript] expression comprising:
(-?[0-9]+)| # A single positive or negative number
([0-9]+)([:-]) # Or an x:y or x: range.
([0-9]*)
)\]
$
''', re.X
)
def order_patterns(patterns):
''' takes a list of patterns and reorders them by modifier to apply them consistently '''
# FIXME: this goes away if we apply patterns incrementally or by groups
pattern_regular = []
pattern_intersection = []
pattern_exclude = []
for p in patterns:
if not p:
continue
if p[0] == "!":
pattern_exclude.append(p)
elif p[0] == "&":
pattern_intersection.append(p)
else:
pattern_regular.append(p)
# if no regular pattern was given, hence only exclude and/or intersection
# make that magically work
if pattern_regular == []:
pattern_regular = ['all']
# when applying the host selectors, run those without the "&" or "!"
# first, then the &s, then the !s.
return pattern_regular + pattern_intersection + pattern_exclude
def split_host_pattern(pattern):
"""
Takes a string containing host patterns separated by commas (or a list
thereof) and returns a list of single patterns (which may not contain
commas). Whitespace is ignored.
Also accepts ':' as a separator for backwards compatibility, but it is
not recommended due to the conflict with IPv6 addresses and host ranges.
Example: 'a,b[1], c[2:3] , d' -> ['a', 'b[1]', 'c[2:3]', 'd']
"""
if isinstance(pattern, list):
return list(itertools.chain(*map(split_host_pattern, pattern)))
elif not isinstance(pattern, string_types):
pattern = to_text(pattern, errors='surrogate_or_strict')
# If it's got commas in it, we'll treat it as a straightforward
# comma-separated list of patterns.
if u',' in pattern:
patterns = pattern.split(u',')
# If it doesn't, it could still be a single pattern. This accounts for
# non-separator uses of colons: IPv6 addresses and [x:y] host ranges.
else:
try:
(base, port) = parse_address(pattern, allow_ranges=True)
patterns = [pattern]
except Exception:
# The only other case we accept is a ':'-separated list of patterns.
# This mishandles IPv6 addresses, and is retained only for backwards
# compatibility.
patterns = re.findall(
to_text(r'''(?: # We want to match something comprising:
[^\s:\[\]] # (anything other than whitespace or ':[]'
| # ...or...
\[[^\]]*\] # a single complete bracketed expression)
)+ # occurring once or more
'''), pattern, re.X
)
return [p.strip() for p in patterns if p.strip()]
class InventoryManager(object):
''' Creates and manages inventory '''
def __init__(self, loader, sources=None):
# base objects
self._loader = loader
self._inventory = InventoryData()
# a list of host(names) to contain current inquiries to
self._restriction = None
self._subset = None
# caches
self._hosts_patterns_cache = {} # resolved full patterns
self._pattern_cache = {} # resolved individual patterns
# the inventory dirs, files, script paths or lists of hosts
if sources is None:
self._sources = []
elif isinstance(sources, string_types):
self._sources = [sources]
else:
self._sources = sources
# get to work!
self.parse_sources(cache=True)
@property
def localhost(self):
return self._inventory.localhost
@property
def groups(self):
return self._inventory.groups
@property
def hosts(self):
return self._inventory.hosts
def add_host(self, host, group=None, port=None):
return self._inventory.add_host(host, group, port)
def add_group(self, group):
return self._inventory.add_group(group)
def get_groups_dict(self):
return self._inventory.get_groups_dict()
def reconcile_inventory(self):
self.clear_caches()
return self._inventory.reconcile_inventory()
def get_host(self, hostname):
return self._inventory.get_host(hostname)
def _fetch_inventory_plugins(self):
''' sets up loaded inventory plugins for usage '''
display.vvvv('setting up inventory plugins')
plugins = []
for name in C.INVENTORY_ENABLED:
plugin = inventory_loader.get(name)
if plugin:
plugins.append(plugin)
else:
display.warning('Failed to load inventory plugin, skipping %s' % name)
if not plugins:
raise AnsibleError("No inventory plugins available to generate inventory, make sure you have at least one whitelisted.")
return plugins
def parse_sources(self, cache=False):
''' iterate over inventory sources and parse each one to populate it'''
parsed = False
# allow for multiple inventory parsing
for source in self._sources:
if source:
if ',' not in source:
source = unfrackpath(source, follow=False)
parse = self.parse_source(source, cache=cache)
if parse and not parsed:
parsed = True
if parsed:
# do post processing
self._inventory.reconcile_inventory()
else:
if C.INVENTORY_UNPARSED_IS_FAILED:
raise AnsibleError("No inventory was parsed, please check your configuration and options.")
else:
display.warning("No inventory was parsed, only implicit localhost is available")
def parse_source(self, source, cache=False):
''' Generate or update inventory for the source provided '''
parsed = False
display.debug(u'Examining possible inventory source: %s' % source)
# use binary for path functions
b_source = to_bytes(source)
# process directories as a collection of inventories
if os.path.isdir(b_source):
display.debug(u'Searching for inventory files in directory: %s' % source)
for i in sorted(os.listdir(b_source)):
display.debug(u'Considering %s' % i)
# Skip hidden files and stuff we explicitly ignore
if IGNORED.search(i):
continue
# recursively deal with directory entries
fullpath = to_text(os.path.join(b_source, i), errors='surrogate_or_strict')
parsed_this_one = self.parse_source(fullpath, cache=cache)
display.debug(u'parsed %s as %s' % (fullpath, parsed_this_one))
if not parsed:
parsed = parsed_this_one
else:
# left with strings or files, let plugins figure it out
# set so new hosts can use for inventory_file/dir vars
self._inventory.current_source = source
# try source with each plugin
failures = []
for plugin in self._fetch_inventory_plugins():
plugin_name = to_text(getattr(plugin, '_load_name', getattr(plugin, '_original_path', '')))
display.debug(u'Attempting to use plugin %s (%s)' % (plugin_name, plugin._original_path))
# initialize and figure out if plugin wants to attempt parsing this file
try:
plugin_wants = bool(plugin.verify_file(source))
except Exception:
plugin_wants = False
if plugin_wants:
try:
# FIXME in case plugin fails 1/2 way we have partial inventory
plugin.parse(self._inventory, self._loader, source, cache=cache)
try:
plugin.update_cache_if_changed()
except AttributeError:
# some plugins might not implement caching
pass
parsed = True
display.vvv('Parsed %s inventory source with %s plugin' % (source, plugin_name))
break
except AnsibleParserError as e:
display.debug('%s was not parsable by %s' % (source, plugin_name))
tb = ''.join(traceback.format_tb(sys.exc_info()[2]))
failures.append({'src': source, 'plugin': plugin_name, 'exc': e, 'tb': tb})
except Exception as e:
display.debug('%s failed while attempting to parse %s' % (plugin_name, source))
tb = ''.join(traceback.format_tb(sys.exc_info()[2]))
failures.append({'src': source, 'plugin': plugin_name, 'exc': AnsibleError(e), 'tb': tb})
else:
display.vvv("%s declined parsing %s as it did not pass its verify_file() method" % (plugin_name, source))
else:
if not parsed and failures:
# only if no plugin processed files should we show errors.
for fail in failures:
display.warning(u'\n* Failed to parse %s with %s plugin: %s' % (to_text(fail['src']), fail['plugin'], to_text(fail['exc'])))
if 'tb' in fail:
display.vvv(to_text(fail['tb']))
if C.INVENTORY_ANY_UNPARSED_IS_FAILED:
raise AnsibleError(u'Completely failed to parse inventory source %s' % (source))
if not parsed:
if source != '/etc/ansible/hosts' or os.path.exists(source):
# only warn if NOT using the default and if using it, only if the file is present
display.warning("Unable to parse %s as an inventory source" % source)
# clear up, jic
self._inventory.current_source = None
return parsed
def clear_caches(self):
''' clear all caches '''
self._hosts_patterns_cache = {}
self._pattern_cache = {}
# FIXME: flush inventory cache
def refresh_inventory(self):
''' recalculate inventory '''
self.clear_caches()
self._inventory = InventoryData()
self.parse_sources(cache=False)
def _match_list(self, items, pattern_str):
# compile patterns
try:
if not pattern_str[0] == '~':
pattern = re.compile(fnmatch.translate(pattern_str))
else:
pattern = re.compile(pattern_str[1:])
except Exception:
raise AnsibleError('Invalid host list pattern: %s' % pattern_str)
# apply patterns
results = []
for item in items:
if pattern.match(item):
results.append(item)
return results
def get_hosts(self, pattern="all", ignore_limits=False, ignore_restrictions=False, order=None):
"""
Takes a pattern or list of patterns and returns a list of matching
inventory host names, taking into account any active restrictions
or applied subsets
"""
hosts = []
# Check if pattern already computed
if isinstance(pattern, list):
pattern_list = pattern[:]
else:
pattern_list = [pattern]
if pattern_list:
if not ignore_limits and self._subset:
pattern_list.extend(self._subset)
if not ignore_restrictions and self._restriction:
pattern_list.extend(self._restriction)
# This is only used as a hash key in the self._hosts_patterns_cache dict
# a tuple is faster than stringifying
pattern_hash = tuple(pattern_list)
if pattern_hash not in self._hosts_patterns_cache:
patterns = split_host_pattern(pattern)
hosts[:] = self._evaluate_patterns(patterns)
# mainly useful for hostvars[host] access
if not ignore_limits and self._subset:
# exclude hosts not in a subset, if defined
subset_uuids = set(s._uuid for s in self._evaluate_patterns(self._subset))
hosts[:] = [h for h in hosts if h._uuid in subset_uuids]
if not ignore_restrictions and self._restriction:
# exclude hosts mentioned in any restriction (ex: failed hosts)
hosts[:] = [h for h in hosts if h.name in self._restriction]
self._hosts_patterns_cache[pattern_hash] = deduplicate_list(hosts)
# sort hosts list if needed (should only happen when called from strategy)
if order in ['sorted', 'reverse_sorted']:
hosts[:] = sorted(self._hosts_patterns_cache[pattern_hash][:], key=attrgetter('name'), reverse=(order == 'reverse_sorted'))
elif order == 'reverse_inventory':
hosts[:] = self._hosts_patterns_cache[pattern_hash][::-1]
else:
hosts[:] = self._hosts_patterns_cache[pattern_hash][:]
if order == 'shuffle':
shuffle(hosts)
elif order not in [None, 'inventory']:
raise AnsibleOptionsError("Invalid 'order' specified for inventory hosts: %s" % order)
return hosts
def _evaluate_patterns(self, patterns):
"""
Takes a list of patterns and returns a list of matching host names,
taking into account any negative and intersection patterns.
"""
patterns = order_patterns(patterns)
hosts = []
for p in patterns:
# avoid resolving a pattern that is a plain host
if p in self._inventory.hosts:
hosts.append(self._inventory.get_host(p))
else:
that = self._match_one_pattern(p)
if p[0] == "!":
that = set(that)
hosts = [h for h in hosts if h not in that]
elif p[0] == "&":
that = set(that)
hosts = [h for h in hosts if h in that]
else:
existing_hosts = set(y.name for y in hosts)
hosts.extend([h for h in that if h.name not in existing_hosts])
return hosts
def _match_one_pattern(self, pattern):
"""
Takes a single pattern and returns a list of matching host names.
Ignores intersection (&) and exclusion (!) specifiers.
The pattern may be:
1. A regex starting with ~, e.g. '~[abc]*'
2. A shell glob pattern with ?/*/[chars]/[!chars], e.g. 'foo*'
3. An ordinary word that matches itself only, e.g. 'foo'
The pattern is matched using the following rules:
1. If it's 'all', it matches all hosts in all groups.
2. Otherwise, for each known group name:
(a) if it matches the group name, the results include all hosts
in the group or any of its children.
(b) otherwise, if it matches any hosts in the group, the results
include the matching hosts.
This means that 'foo*' may match one or more groups (thus including all
hosts therein) but also hosts in other groups.
The built-in groups 'all' and 'ungrouped' are special. No pattern can
match these group names (though 'all' behaves as though it matches, as
described above). The word 'ungrouped' can match a host of that name,
and patterns like 'ungr*' and 'al*' can match either hosts or groups
other than all and ungrouped.
If the pattern matches one or more group names according to these rules,
it may have an optional range suffix to select a subset of the results.
This is allowed only if the pattern is not a regex, i.e. '~foo[1]' does
not work (the [1] is interpreted as part of the regex), but 'foo*[1]'
would work if 'foo*' matched the name of one or more groups.
Duplicate matches are always eliminated from the results.
"""
if pattern[0] in ("&", "!"):
pattern = pattern[1:]
if pattern not in self._pattern_cache:
(expr, slice) = self._split_subscript(pattern)
hosts = self._enumerate_matches(expr)
try:
hosts = self._apply_subscript(hosts, slice)
except IndexError:
raise AnsibleError("No hosts matched the subscripted pattern '%s'" % pattern)
self._pattern_cache[pattern] = hosts
return self._pattern_cache[pattern]
def _split_subscript(self, pattern):
"""
Takes a pattern, checks if it has a subscript, and returns the pattern
without the subscript and a (start,end) tuple representing the given
subscript (or None if there is no subscript).
Validates that the subscript is in the right syntax, but doesn't make
sure the actual indices make sense in context.
"""
# Do not parse regexes for enumeration info
if pattern[0] == '~':
return (pattern, None)
# We want a pattern followed by an integer or range subscript.
# (We can't be more restrictive about the expression because the
# fnmatch semantics permit [\[:\]] to occur.)
subscript = None
m = PATTERN_WITH_SUBSCRIPT.match(pattern)
if m:
(pattern, idx, start, sep, end) = m.groups()
if idx:
subscript = (int(idx), None)
else:
if not end:
end = -1
subscript = (int(start), int(end))
if sep == '-':
display.warning("Use [x:y] inclusive subscripts instead of [x-y] which has been removed")
return (pattern, subscript)
def _apply_subscript(self, hosts, subscript):
"""
Takes a list of hosts and a (start,end) tuple and returns the subset of
hosts based on the subscript (which may be None to return all hosts).
"""
if not hosts or not subscript:
return hosts
(start, end) = subscript
if end:
if end == -1:
end = len(hosts) - 1
return hosts[start:end + 1]
else:
return [hosts[start]]
def _enumerate_matches(self, pattern):
"""
Returns a list of host names matching the given pattern according to the
rules explained above in _match_one_pattern.
"""
results = []
# check if pattern matches group
matching_groups = self._match_list(self._inventory.groups, pattern)
if matching_groups:
for groupname in matching_groups:
results.extend(self._inventory.groups[groupname].get_hosts())
# check hosts if no groups matched or it is a regex/glob pattern
if not matching_groups or pattern[0] == '~' or any(special in pattern for special in ('.', '?', '*', '[')):
# pattern might match host
matching_hosts = self._match_list(self._inventory.hosts, pattern)
if matching_hosts:
for hostname in matching_hosts:
results.append(self._inventory.hosts[hostname])
if not results and pattern in C.LOCALHOST:
# get_host autocreates implicit when needed
implicit = self._inventory.get_host(pattern)
if implicit:
results.append(implicit)
# Display warning if specified host pattern did not match any groups or hosts
if not results and not matching_groups and pattern != 'all':
msg = "Could not match supplied host pattern, ignoring: %s" % pattern
display.debug(msg)
if C.HOST_PATTERN_MISMATCH == 'warning':
display.warning(msg)
elif C.HOST_PATTERN_MISMATCH == 'error':
raise AnsibleError(msg)
# no need to write 'ignore' state
return results
def list_hosts(self, pattern="all"):
""" return a list of hostnames for a pattern """
# FIXME: cache?
result = [h for h in self.get_hosts(pattern)]
# allow implicit localhost if pattern matches and no other results
if len(result) == 0 and pattern in C.LOCALHOST:
result = [pattern]
return result
def list_groups(self):
# FIXME: cache?
return sorted(self._inventory.groups.keys(), key=lambda x: x)
def restrict_to_hosts(self, restriction):
"""
Restrict list operations to the hosts given in restriction. This is used
to batch serial operations in main playbook code; don't use this for other
reasons.
"""
if restriction is None:
return
elif not isinstance(restriction, list):
restriction = [restriction]
self._restriction = set(to_text(h.name) for h in restriction)
def subset(self, subset_pattern):
"""
Limits inventory results to a subset of inventory that matches a given
pattern, such as to select a given geographic or numeric slice amongst
a previous 'hosts' selection that only selected roles, or vice versa.
Corresponds to --limit parameter to ansible-playbook
"""
if subset_pattern is None:
self._subset = None
else:
subset_patterns = split_host_pattern(subset_pattern)
results = []
# allow Unix style @filename data
for x in subset_patterns:
if not x:
# skip empty patterns produced by stray commas in --limit
continue
if x[0] == "@":
fd = open(x[1:])
results.extend([to_text(line.strip()) for line in fd.read().split("\n")])
fd.close()
else:
results.append(to_text(x))
self._subset = results
def remove_restriction(self):
""" Do not restrict list operations """
self._restriction = None
def clear_pattern_cache(self):
self._pattern_cache = {}
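The inclusive-subscript slicing above can be illustrated with a minimal standalone sketch; `apply_subscript` is a hypothetical free function mirroring the manager's `_apply_subscript` logic, not Ansible's actual API:

```python
# Sketch of how an inclusive (start, end) subscript selects hosts:
# end == None means a single index, end == -1 means "to the last host".
def apply_subscript(hosts, subscript):
    """Return the subset of hosts selected by an inclusive (start, end) tuple."""
    if not hosts or not subscript:
        return hosts
    start, end = subscript
    if end:
        if end == -1:
            end = len(hosts) - 1
        return hosts[start:end + 1]  # inclusive upper bound
    return [hosts[start]]

hosts = ['web1', 'web2', 'web3', 'web4']
print(apply_subscript(hosts, (1, 2)))    # ['web2', 'web3'] -- inclusive range
print(apply_subscript(hosts, (0, None))) # ['web1'] -- single index
print(apply_subscript(hosts, (2, -1)))   # ['web3', 'web4'] -- open-ended
```

Note the `+ 1` in the slice: unlike plain Python slicing, the `[x:y]` host subscript includes both endpoints.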
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,964 |
Ansible crashes when `--limit` expression includes superfluous comma
|
##### SUMMARY
Running ansible with `--limit a,` or `--limit a,,b` or `--limit ,` leads to (when `-vvv` is added)
```
ERROR! Unexpected Exception, this is probably a bug: string index out of range
the full traceback was:
Traceback (most recent call last):
File ".../ansible/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File ".../ansible/lib/ansible/cli/playbook.py", line 116, in run
CLI.get_host_list(inventory, context.CLIARGS['subset'])
File ".../ansible/lib/ansible/cli/__init__.py", line 487, in get_host_list
inventory.subset(subset)
File ".../ansible/lib/ansible/inventory/manager.py", line 614, in subset
if x[0] == "@":
IndexError: string index out of range
ERROR! Unexpected Exception, this is probably a bug: string index out of range
to see the full traceback, use -vvv
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/inventory/manager.py
##### ANSIBLE VERSION
```
2.9.0
```
Also happens with `devel`, and potentially older ansible versions.
##### STEPS TO REPRODUCE
Run `ansible-playbook playbook.yml --limit a,` for an existing playbook `playbook.yml` (doesn't matter what it actually does, and also does not matter whether you use `a` or a real host name). Same result with `--limit a,,b` or similar expressions where `.split(',')` will lead to an empty string in the resulting list.
##### EXPECTED RESULTS
Ansible complaining about syntax, or simply ignoring the trailing comma / empty parts of the `.split(',')`.
##### ACTUAL RESULTS
```
ERROR! Unexpected Exception, this is probably a bug: string index out of range
to see the full traceback, use -vvv
```
|
https://github.com/ansible/ansible/issues/61964
|
https://github.com/ansible/ansible/pull/62442
|
8d0c193b251051e459513614e6f7212ecf9f0922
|
987265a6ef920466b44625097ff85f3e845dc8ea
| 2019-09-07T19:59:24Z |
python
| 2019-09-20T20:03:51Z |
test/integration/targets/inventory_limit/aliases
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,964 |
Ansible crashes when `--limit` expression includes superfluous comma
|
##### SUMMARY
Running ansible with `--limit a,` or `--limit a,,b` or `--limit ,` leads to (when `-vvv` is added)
```
ERROR! Unexpected Exception, this is probably a bug: string index out of range
the full traceback was:
Traceback (most recent call last):
File ".../ansible/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File ".../ansible/lib/ansible/cli/playbook.py", line 116, in run
CLI.get_host_list(inventory, context.CLIARGS['subset'])
File ".../ansible/lib/ansible/cli/__init__.py", line 487, in get_host_list
inventory.subset(subset)
File ".../ansible/lib/ansible/inventory/manager.py", line 614, in subset
if x[0] == "@":
IndexError: string index out of range
ERROR! Unexpected Exception, this is probably a bug: string index out of range
to see the full traceback, use -vvv
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/inventory/manager.py
##### ANSIBLE VERSION
```
2.9.0
```
Also happens with `devel`, and potentially older ansible versions.
##### STEPS TO REPRODUCE
Run `ansible-playbook playbook.yml --limit a,` for an existing playbook `playbook.yml` (doesn't matter what it actually does, and also does not matter whether you use `a` or a real host name). Same result with `--limit a,,b` or similar expressions where `.split(',')` will lead to an empty string in the resulting list.
##### EXPECTED RESULTS
Ansible complaining about syntax, or simply ignoring the trailing comma / empty parts of the `.split(',')`.
##### ACTUAL RESULTS
```
ERROR! Unexpected Exception, this is probably a bug: string index out of range
to see the full traceback, use -vvv
```
|
https://github.com/ansible/ansible/issues/61964
|
https://github.com/ansible/ansible/pull/62442
|
8d0c193b251051e459513614e6f7212ecf9f0922
|
987265a6ef920466b44625097ff85f3e845dc8ea
| 2019-09-07T19:59:24Z |
python
| 2019-09-20T20:03:51Z |
test/integration/targets/inventory_limit/hosts.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,964 |
Ansible crashes when `--limit` expression includes superfluous comma
|
##### SUMMARY
Running ansible with `--limit a,` or `--limit a,,b` or `--limit ,` leads to (when `-vvv` is added)
```
ERROR! Unexpected Exception, this is probably a bug: string index out of range
the full traceback was:
Traceback (most recent call last):
File ".../ansible/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File ".../ansible/lib/ansible/cli/playbook.py", line 116, in run
CLI.get_host_list(inventory, context.CLIARGS['subset'])
File ".../ansible/lib/ansible/cli/__init__.py", line 487, in get_host_list
inventory.subset(subset)
File ".../ansible/lib/ansible/inventory/manager.py", line 614, in subset
if x[0] == "@":
IndexError: string index out of range
ERROR! Unexpected Exception, this is probably a bug: string index out of range
to see the full traceback, use -vvv
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/inventory/manager.py
##### ANSIBLE VERSION
```
2.9.0
```
Also happens with `devel`, and potentially older ansible versions.
##### STEPS TO REPRODUCE
Run `ansible-playbook playbook.yml --limit a,` for an existing playbook `playbook.yml` (doesn't matter what it actually does, and also does not matter whether you use `a` or a real host name). Same result with `--limit a,,b` or similar expressions where `.split(',')` will lead to an empty string in the resulting list.
##### EXPECTED RESULTS
Ansible complaining about syntax, or simply ignoring the trailing comma / empty parts of the `.split(',')`.
##### ACTUAL RESULTS
```
ERROR! Unexpected Exception, this is probably a bug: string index out of range
to see the full traceback, use -vvv
```
|
https://github.com/ansible/ansible/issues/61964
|
https://github.com/ansible/ansible/pull/62442
|
8d0c193b251051e459513614e6f7212ecf9f0922
|
987265a6ef920466b44625097ff85f3e845dc8ea
| 2019-09-07T19:59:24Z |
python
| 2019-09-20T20:03:51Z |
test/integration/targets/inventory_limit/runme.sh
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,964 |
Ansible crashes when `--limit` expression includes superfluous comma
|
##### SUMMARY
Running ansible with `--limit a,` or `--limit a,,b` or `--limit ,` leads to (when `-vvv` is added)
```
ERROR! Unexpected Exception, this is probably a bug: string index out of range
the full traceback was:
Traceback (most recent call last):
File ".../ansible/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File ".../ansible/lib/ansible/cli/playbook.py", line 116, in run
CLI.get_host_list(inventory, context.CLIARGS['subset'])
File ".../ansible/lib/ansible/cli/__init__.py", line 487, in get_host_list
inventory.subset(subset)
File ".../ansible/lib/ansible/inventory/manager.py", line 614, in subset
if x[0] == "@":
IndexError: string index out of range
ERROR! Unexpected Exception, this is probably a bug: string index out of range
to see the full traceback, use -vvv
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/inventory/manager.py
##### ANSIBLE VERSION
```
2.9.0
```
Also happens with `devel`, and potentially older ansible versions.
##### STEPS TO REPRODUCE
Run `ansible-playbook playbook.yml --limit a,` for an existing playbook `playbook.yml` (doesn't matter what it actually does, and also does not matter whether you use `a` or a real host name). Same result with `--limit a,,b` or similar expressions where `.split(',')` will lead to an empty string in the resulting list.
##### EXPECTED RESULTS
Ansible complaining about syntax, or simply ignoring the trailing comma / empty parts of the `.split(',')`.
##### ACTUAL RESULTS
```
ERROR! Unexpected Exception, this is probably a bug: string index out of range
to see the full traceback, use -vvv
```
|
https://github.com/ansible/ansible/issues/61964
|
https://github.com/ansible/ansible/pull/62442
|
8d0c193b251051e459513614e6f7212ecf9f0922
|
987265a6ef920466b44625097ff85f3e845dc8ea
| 2019-09-07T19:59:24Z |
python
| 2019-09-20T20:03:51Z |
test/units/plugins/inventory/test_inventory.py
|
# Copyright 2015 Abhijit Menon-Sen <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import string
import textwrap
from ansible import constants as C
from units.compat import mock
from units.compat import unittest
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_text
from units.mock.path import mock_unfrackpath_noop
from ansible.inventory.manager import InventoryManager, split_host_pattern
from units.mock.loader import DictDataLoader
class TestInventory(unittest.TestCase):
patterns = {
'a': ['a'],
'a, b': ['a', 'b'],
'a , b': ['a', 'b'],
' a,b ,c[1:2] ': ['a', 'b', 'c[1:2]'],
'9a01:7f8:191:7701::9': ['9a01:7f8:191:7701::9'],
'9a01:7f8:191:7701::9,9a01:7f8:191:7701::9': ['9a01:7f8:191:7701::9', '9a01:7f8:191:7701::9'],
'9a01:7f8:191:7701::9,9a01:7f8:191:7701::9,foo': ['9a01:7f8:191:7701::9', '9a01:7f8:191:7701::9', 'foo'],
'foo[1:2]': ['foo[1:2]'],
'a::b': ['a::b'],
'a:b': ['a', 'b'],
' a : b ': ['a', 'b'],
'foo:bar:baz[1:2]': ['foo', 'bar', 'baz[1:2]'],
}
pattern_lists = [
[['a'], ['a']],
[['a', 'b'], ['a', 'b']],
[['a, b'], ['a', 'b']],
[['9a01:7f8:191:7701::9', '9a01:7f8:191:7701::9,foo'],
['9a01:7f8:191:7701::9', '9a01:7f8:191:7701::9', 'foo']]
]
# pattern_string: [ ('base_pattern', (a,b)), ['x','y','z'] ]
# a,b are the bounds of the subscript; x..z are the results of the subscript
# when applied to string.ascii_letters.
subscripts = {
'a': [('a', None), list(string.ascii_letters)],
'a[0]': [('a', (0, None)), ['a']],
'a[1]': [('a', (1, None)), ['b']],
'a[2:3]': [('a', (2, 3)), ['c', 'd']],
'a[-1]': [('a', (-1, None)), ['Z']],
'a[-2]': [('a', (-2, None)), ['Y']],
'a[48:]': [('a', (48, -1)), ['W', 'X', 'Y', 'Z']],
'a[49:]': [('a', (49, -1)), ['X', 'Y', 'Z']],
'a[1:]': [('a', (1, -1)), list(string.ascii_letters[1:])],
}
ranges_to_expand = {
'a[1:2]': ['a1', 'a2'],
'a[1:10:2]': ['a1', 'a3', 'a5', 'a7', 'a9'],
'a[a:b]': ['aa', 'ab'],
'a[a:i:3]': ['aa', 'ad', 'ag'],
'a[a:b][c:d]': ['aac', 'aad', 'abc', 'abd'],
'a[0:1][2:3]': ['a02', 'a03', 'a12', 'a13'],
'a[a:b][2:3]': ['aa2', 'aa3', 'ab2', 'ab3'],
}
def setUp(self):
fake_loader = DictDataLoader({})
self.i = InventoryManager(loader=fake_loader, sources=[None])
def test_split_patterns(self):
for p in self.patterns:
r = self.patterns[p]
self.assertEqual(r, split_host_pattern(p))
for p, r in self.pattern_lists:
self.assertEqual(r, split_host_pattern(p))
def test_ranges(self):
for s in self.subscripts:
r = self.subscripts[s]
self.assertEqual(r[0], self.i._split_subscript(s))
self.assertEqual(
r[1],
self.i._apply_subscript(
list(string.ascii_letters),
r[0][1]
)
)
class TestInventoryPlugins(unittest.TestCase):
def test_empty_inventory(self):
inventory = self._get_inventory('')
self.assertIn('all', inventory.groups)
self.assertIn('ungrouped', inventory.groups)
self.assertFalse(inventory.groups['all'].get_hosts())
self.assertFalse(inventory.groups['ungrouped'].get_hosts())
def test_ini(self):
self._test_default_groups("""
host1
host2
host3
[servers]
host3
host4
host5
""")
def test_ini_explicit_ungrouped(self):
self._test_default_groups("""
[ungrouped]
host1
host2
host3
[servers]
host3
host4
host5
""")
def test_ini_variables_stringify(self):
values = ['string', 'no', 'No', 'false', 'FALSE', [], False, 0]
inventory_content = "host1 "
inventory_content += ' '.join(['var%s=%s' % (i, to_text(x)) for i, x in enumerate(values)])
inventory = self._get_inventory(inventory_content)
variables = inventory.get_host('host1').vars
for i in range(len(values)):
if isinstance(values[i], string_types):
self.assertIsInstance(variables['var%s' % i], string_types)
else:
self.assertIsInstance(variables['var%s' % i], type(values[i]))
@mock.patch('ansible.inventory.manager.unfrackpath', mock_unfrackpath_noop)
@mock.patch('os.path.exists', lambda x: True)
@mock.patch('os.access', lambda x, y: True)
def test_yaml_inventory(self, filename="test.yaml"):
inventory_content = {filename: textwrap.dedent("""\
---
all:
hosts:
test1:
test2:
""")}
C.INVENTORY_ENABLED = ['yaml']
fake_loader = DictDataLoader(inventory_content)
im = InventoryManager(loader=fake_loader, sources=filename)
self.assertTrue(im._inventory.hosts)
self.assertIn('test1', im._inventory.hosts)
self.assertIn('test2', im._inventory.hosts)
self.assertIn(im._inventory.get_host('test1'), im._inventory.groups['all'].hosts)
self.assertIn(im._inventory.get_host('test2'), im._inventory.groups['all'].hosts)
self.assertEqual(len(im._inventory.groups['all'].hosts), 2)
self.assertIn(im._inventory.get_host('test1'), im._inventory.groups['ungrouped'].hosts)
self.assertIn(im._inventory.get_host('test2'), im._inventory.groups['ungrouped'].hosts)
self.assertEqual(len(im._inventory.groups['ungrouped'].hosts), 2)
def _get_inventory(self, inventory_content):
fake_loader = DictDataLoader({__file__: inventory_content})
return InventoryManager(loader=fake_loader, sources=[__file__])
def _test_default_groups(self, inventory_content):
inventory = self._get_inventory(inventory_content)
self.assertIn('all', inventory.groups)
self.assertIn('ungrouped', inventory.groups)
all_hosts = set(host.name for host in inventory.groups['all'].get_hosts())
self.assertEqual(set(['host1', 'host2', 'host3', 'host4', 'host5']), all_hosts)
ungrouped_hosts = set(host.name for host in inventory.groups['ungrouped'].get_hosts())
self.assertEqual(set(['host1', 'host2']), ungrouped_hosts)
servers_hosts = set(host.name for host in inventory.groups['servers'].get_hosts())
self.assertEqual(set(['host3', 'host4', 'host5']), servers_hosts)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,484 |
Error "TypeError: 'in <string>' requires string as left operand, not bytes" in docker_login
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When logging out from any Repository the play aborts with Message "TypeError: 'in <string>' requires string as left operand, not bytes". Our Ansible is using Python 3!
I found that docker_login searches for bytes in the output of the 'docker logout' command (e.g. _elif b'Removing login credentials for ' in out_). I changed this to strings in my local Ansible code to fix it. I cannot verify this with Python 2.x.
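The failure boils down to Python 3 refusing a `bytes`-in-`str` membership test; the sketch below (stdlib only, not the module's actual code) reproduces the `TypeError` and shows that comparing text against text behaves as intended:

```python
# On Python 3, run_command's output here is text, so testing for a
# bytes literal inside it raises TypeError -- the failure in the report.
out = 'Removing login credentials for https://index.docker.io/v1/\n'
try:
    found = b'Removing login credentials for ' in out
except TypeError:
    found = None  # Python 3 refuses to mix bytes and str in membership tests

print(found)  # None -- the membership test never completed

# Comparing text against text works as intended:
print('Removing login credentials for ' in out)  # True
```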
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
docker_login
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = /home/app/nondocker/ansible/ansible.cfg
configured module search path = ['/home/app/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/app/containervol/python/python3/lib/python3.7/site-packages/ansible
executable location = /home/app/containervol/python/python3/bin/ansible
python version = 3.7.4 (default, Jul 30 2019, 19:56:38) [GCC 7.3.1 20180712 (Red Hat 7.3.1-6)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/home/app/nondocker/ansible/ansible.cfg) = True
DEFAULT_HOST_LIST(env: ANSIBLE_INVENTORY) = ['/home/app/nondocker/SVN/trunk/Deploy_Artefakte/ansible/inventory.yml']
DEFAULT_LOG_PATH(/home/app/nondocker/ansible/ansible.cfg) = /home/app/nondocker/ansible/logs/ansible.log
DEFAULT_MANAGED_STR(/home/app/nondocker/ansible/ansible.cfg) = Ansible managed file - do not make changes here - they will be overwritten during next deployment!
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "log out of docker-hub-remote.bahnhub.tech.rz.db.de:443"
docker_login:
registry: 'docker-hub-remote.bahnhub.tech.rz.db.de:443'
state: absent
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Successful logout of the Docker Repository.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
2019-09-17 17:21:07,741 p=27466 u=app | <elokfiku-tst-app.dbv2-test.comp.db.de> EXEC /bin/sh -c '/home/app/containervol/python/python3/bin/python /home/app/.ansible/tmp/ansible-tmp-1568733667.5645123-182000990462949/AnsiballZ_docker_login.py && sleep 0'
2019-09-17 17:21:08,101 p=27466 u=app | fatal: [test_app]: FAILED! => {
"changed": false,
"module_stderr": "/home/app/.ansible/tmp/ansible-tmp-1568733667.5645123-182000990462949/AnsiballZ_docker_login.py:68: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\n import imp\nTraceback (most recent call last):\n File \"/home/app/.ansible/tmp/ansible-tmp-1568733667.5645123-182000990462949/AnsiballZ_docker_login.py\", line 262, in <module>\n _ansiballz_main()\n File \"/home/app/.ansible/tmp/ansible-tmp-1568733667.5645123-182000990462949/AnsiballZ_docker_login.py\", line 252, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/app/.ansible/tmp/ansible-tmp-1568733667.5645123-182000990462949/AnsiballZ_docker_login.py\", line 120, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/home/app/containervol/python/python3/lib64/python3.7/imp.py\", line 234, in load_module\n return load_source(name, filename, file)\n File \"/home/app/containervol/python/python3/lib64/python3.7/imp.py\", line 169, in load_source\n module = _exec(spec, sys.modules[name])\n File \"<frozen importlib._bootstrap>\", line 630, in _exec\n File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\n File \"/tmp/ansible_docker_login_payload_enwvm0yw/__main__.py\", line 354, in <module>\n File \"/tmp/ansible_docker_login_payload_enwvm0yw/__main__.py\", line 343, in main\n File \"/tmp/ansible_docker_login_payload_enwvm0yw/__main__.py\", line 171, in __init__\n File \"/tmp/ansible_docker_login_payload_enwvm0yw/__main__.py\", line 227, in logout\nTypeError: 'in <string>' requires string as left operand, not bytes\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/62484
|
https://github.com/ansible/ansible/pull/62621
|
a7b239708e961469b2c96451ec3c7df6e7b1a225
|
2e5137078d33db1b83f343edc0bf81fd4258f140
| 2019-09-18T08:43:23Z |
python
| 2019-09-21T13:13:31Z |
changelogs/fragments/62621-docker_login-fix-60381.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,484 |
Error "TypeError: 'in <string>' requires string as left operand, not bytes" in docker_login
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When logging out from any Repository the play aborts with Message "TypeError: 'in <string>' requires string as left operand, not bytes". Our Ansible is using Python 3!
I found that docker_login searches for bytes in the output of the 'docker logout' command (e.g. _elif b'Removing login credentials for ' in out_). I changed this to strings in my local Ansible code to fix it. I cannot verify this with Python 2.x.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
docker_login
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = /home/app/nondocker/ansible/ansible.cfg
configured module search path = ['/home/app/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/app/containervol/python/python3/lib/python3.7/site-packages/ansible
executable location = /home/app/containervol/python/python3/bin/ansible
python version = 3.7.4 (default, Jul 30 2019, 19:56:38) [GCC 7.3.1 20180712 (Red Hat 7.3.1-6)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/home/app/nondocker/ansible/ansible.cfg) = True
DEFAULT_HOST_LIST(env: ANSIBLE_INVENTORY) = ['/home/app/nondocker/SVN/trunk/Deploy_Artefakte/ansible/inventory.yml']
DEFAULT_LOG_PATH(/home/app/nondocker/ansible/ansible.cfg) = /home/app/nondocker/ansible/logs/ansible.log
DEFAULT_MANAGED_STR(/home/app/nondocker/ansible/ansible.cfg) = Ansible managed file - do not make changes here - they will be overwritten during next deployment!
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "log out of docker-hub-remote.bahnhub.tech.rz.db.de:443"
docker_login:
registry: 'docker-hub-remote.bahnhub.tech.rz.db.de:443'
state: absent
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Successful logout of the Docker Repository.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
2019-09-17 17:21:07,741 p=27466 u=app | <elokfiku-tst-app.dbv2-test.comp.db.de> EXEC /bin/sh -c '/home/app/containervol/python/python3/bin/python /home/app/.ansible/tmp/ansible-tmp-1568733667.5645123-182000990462949/AnsiballZ_docker_login.py && sleep 0'
2019-09-17 17:21:08,101 p=27466 u=app | fatal: [test_app]: FAILED! => {
"changed": false,
"module_stderr": "/home/app/.ansible/tmp/ansible-tmp-1568733667.5645123-182000990462949/AnsiballZ_docker_login.py:68: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\n import imp\nTraceback (most recent call last):\n File \"/home/app/.ansible/tmp/ansible-tmp-1568733667.5645123-182000990462949/AnsiballZ_docker_login.py\", line 262, in <module>\n _ansiballz_main()\n File \"/home/app/.ansible/tmp/ansible-tmp-1568733667.5645123-182000990462949/AnsiballZ_docker_login.py\", line 252, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/app/.ansible/tmp/ansible-tmp-1568733667.5645123-182000990462949/AnsiballZ_docker_login.py\", line 120, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/home/app/containervol/python/python3/lib64/python3.7/imp.py\", line 234, in load_module\n return load_source(name, filename, file)\n File \"/home/app/containervol/python/python3/lib64/python3.7/imp.py\", line 169, in load_source\n module = _exec(spec, sys.modules[name])\n File \"<frozen importlib._bootstrap>\", line 630, in _exec\n File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\n File \"/tmp/ansible_docker_login_payload_enwvm0yw/__main__.py\", line 354, in <module>\n File \"/tmp/ansible_docker_login_payload_enwvm0yw/__main__.py\", line 343, in main\n File \"/tmp/ansible_docker_login_payload_enwvm0yw/__main__.py\", line 171, in __init__\n File \"/tmp/ansible_docker_login_payload_enwvm0yw/__main__.py\", line 227, in logout\nTypeError: 'in <string>' requires string as left operand, not bytes\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/62484
|
https://github.com/ansible/ansible/pull/62621
|
a7b239708e961469b2c96451ec3c7df6e7b1a225
|
2e5137078d33db1b83f343edc0bf81fd4258f140
| 2019-09-18T08:43:23Z |
python
| 2019-09-21T13:13:31Z |
lib/ansible/modules/cloud/docker/docker_login.py
|
#!/usr/bin/python
#
# (c) 2016 Olaf Kilian <[email protected]>
# Chris Houseknecht, <[email protected]>
# James Tanner, <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: docker_login
short_description: Log into a Docker registry.
version_added: "2.0"
description:
- Provides functionality similar to the "docker login" command.
- Authenticate with a docker registry and add the credentials to your local Docker config file. Adding the
credentials to the config files allows future connections to the registry using tools such as Ansible's Docker
modules, the Docker CLI and Docker SDK for Python without needing to provide credentials.
- Running in check mode will perform the authentication without updating the config file.
options:
registry_url:
required: False
description:
- The registry URL.
type: str
default: "https://index.docker.io/v1/"
aliases:
- registry
- url
username:
description:
- The username for the registry account
type: str
required: yes
password:
description:
- The plaintext password for the registry account
type: str
required: yes
email:
required: False
description:
- "The email address for the registry account."
type: str
reauthorize:
description:
- Refresh existing authentication found in the configuration file.
type: bool
default: no
aliases:
- reauth
config_path:
description:
- Custom path to the Docker CLI configuration file.
type: path
default: ~/.docker/config.json
aliases:
- dockercfg_path
state:
version_added: '2.3'
description:
- This controls the current state of the user. C(present) will log a user in, C(absent) will log them out.
- To logout you only need the registry server, which defaults to DockerHub.
- Before 2.1 you could ONLY log in.
- Docker does not support 'logout' with a custom config file.
type: str
default: 'present'
choices: ['present', 'absent']
extends_documentation_fragment:
- docker
- docker.docker_py_1_documentation
requirements:
- "L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) >= 1.8.0 (use L(docker-py,https://pypi.org/project/docker-py/) for Python 2.6)"
- "Docker API >= 1.20"
- "Only needed to be able to log out (that is, for I(state) = C(absent)): the C(docker) command line utility"
author:
- Olaf Kilian (@olsaki) <[email protected]>
- Chris Houseknecht (@chouseknecht)
'''
EXAMPLES = '''
- name: Log into DockerHub
docker_login:
username: docker
password: rekcod
- name: Log into private registry and force re-authorization
docker_login:
registry: your.private.registry.io
username: yourself
password: secrets3
reauthorize: yes
- name: Log into DockerHub using a custom config file
docker_login:
username: docker
password: rekcod
config_path: /tmp/.mydockercfg
- name: Log out of DockerHub
docker_login:
state: absent
'''
RETURN = '''
login_results:
description: Results from the login.
returned: when state='present'
type: dict
sample: {
"email": "[email protected]",
"serveraddress": "localhost:5000",
"username": "testuser"
}
'''
import base64
import json
import os
import re
import traceback
try:
from docker.errors import DockerException
except ImportError:
# missing Docker SDK for Python handled in ansible.module_utils.docker.common
pass
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.docker.common import (
AnsibleDockerClient,
DEFAULT_DOCKER_REGISTRY,
DockerBaseClass,
EMAIL_REGEX,
RequestException,
)
class LoginManager(DockerBaseClass):
def __init__(self, client, results):
super(LoginManager, self).__init__()
self.client = client
self.results = results
parameters = self.client.module.params
self.check_mode = self.client.check_mode
self.registry_url = parameters.get('registry_url')
self.username = parameters.get('username')
self.password = parameters.get('password')
self.email = parameters.get('email')
self.reauthorize = parameters.get('reauthorize')
self.config_path = parameters.get('config_path')
if parameters['state'] == 'present':
self.login()
else:
self.logout()
def fail(self, msg):
self.client.fail(msg)
def login(self):
'''
Log into the registry with provided username/password. On success update the config
file with the new authorization.
:return: None
'''
if self.email and not re.match(EMAIL_REGEX, self.email):
self.fail("Parameter error: the email address appears to be incorrect. Expecting it to match "
"/%s/" % (EMAIL_REGEX))
self.results['actions'].append("Logged into %s" % (self.registry_url))
self.log("Log into %s with username %s" % (self.registry_url, self.username))
try:
response = self.client.login(
self.username,
password=self.password,
email=self.email,
registry=self.registry_url,
reauth=self.reauthorize,
dockercfg_path=self.config_path
)
except Exception as exc:
self.fail("Logging into %s for user %s failed - %s" % (self.registry_url, self.username, str(exc)))
# If user is already logged in, then response contains password for user
# This returns correct password if user is logged in and wrong password is given.
if 'password' in response:
del response['password']
self.results['login_result'] = response
if not self.check_mode:
self.update_config_file()
def logout(self):
'''
Log out of the registry. On success update the config file.
TODO: port to API once docker.py supports this.
:return: None
'''
cmd = [self.client.module.get_bin_path('docker', True), "logout", self.registry_url]
# TODO: docker does not support config file in logout, restore this when they do
# if self.config_path and self.config_file_exists(self.config_path):
# cmd.extend(["--config", self.config_path])
(rc, out, err) = self.client.module.run_command(cmd)
if rc != 0:
self.fail("Could not log out: %s" % err)
if b'Not logged in to ' in out:
self.results['changed'] = False
elif b'Removing login credentials for ' in out:
self.results['changed'] = True
else:
self.client.module.warn('Unable to determine whether logout was successful.')
# Adding output to actions, so that user can inspect what was actually returned
self.results['actions'].append(to_text(out))
def config_file_exists(self, path):
if os.path.exists(path):
self.log("Configuration file %s exists" % (path))
return True
self.log("Configuration file %s not found." % (path))
return False
def create_config_file(self, path):
'''
Create a config file with a JSON blob containing an auths key.
:return: None
'''
self.log("Creating docker config file %s" % (path))
config_path_dir = os.path.dirname(path)
if not os.path.exists(config_path_dir):
try:
os.makedirs(config_path_dir)
except Exception as exc:
self.fail("Error: failed to create %s - %s" % (config_path_dir, str(exc)))
self.write_config(path, dict(auths=dict()))
def write_config(self, path, config):
try:
# use a context manager so the file handle is closed even on error
with open(path, "w") as f:
json.dump(config, f, indent=5, sort_keys=True)
except Exception as exc:
self.fail("Error: failed to write config to %s - %s" % (path, str(exc)))
def update_config_file(self):
'''
If the authorization is not stored in the config file or reauthorize is True,
update the config file with the new authorization.
:return: None
'''
path = self.config_path
if not self.config_file_exists(path):
self.create_config_file(path)
try:
# read the existing config
with open(path, "r") as f:
config = json.load(f)
except ValueError:
self.log("Error reading config from %s" % (path))
config = dict()
if not config.get('auths'):
self.log("Adding auths dict to config.")
config['auths'] = dict()
if not config['auths'].get(self.registry_url):
self.log("Adding registry_url %s to auths." % (self.registry_url))
config['auths'][self.registry_url] = dict()
b64auth = base64.b64encode(
to_bytes(self.username) + b':' + to_bytes(self.password)
)
auth = to_text(b64auth)
encoded_credentials = dict(
auth=auth,
email=self.email
)
if config['auths'][self.registry_url] != encoded_credentials or self.reauthorize:
# Update the config file with the new authorization
config['auths'][self.registry_url] = encoded_credentials
self.log("Updating config file %s with new authorization for %s" % (path, self.registry_url))
self.results['actions'].append("Updated config file %s with new authorization for %s" % (
path, self.registry_url))
self.results['changed'] = True
self.write_config(path, config)
def main():
argument_spec = dict(
registry_url=dict(type='str', default=DEFAULT_DOCKER_REGISTRY, aliases=['registry', 'url']),
username=dict(type='str'),
password=dict(type='str', no_log=True),
email=dict(type='str'),
reauthorize=dict(type='bool', default=False, aliases=['reauth']),
state=dict(type='str', default='present', choices=['present', 'absent']),
config_path=dict(type='path', default='~/.docker/config.json', aliases=['dockercfg_path']),
)
required_if = [
('state', 'present', ['username', 'password']),
]
client = AnsibleDockerClient(
argument_spec=argument_spec,
supports_check_mode=True,
required_if=required_if,
min_docker_api_version='1.20',
)
try:
results = dict(
changed=False,
actions=[],
login_result={}
)
LoginManager(client, results)
if 'actions' in results:
del results['actions']
client.module.exit_json(**results)
except DockerException as e:
client.fail('An unexpected docker error occurred: {0}'.format(e), exception=traceback.format_exc())
except RequestException as e:
client.fail('An unexpected requests error occurred when docker-py tried to talk to the docker daemon: {0}'.format(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
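The `update_config_file` method in the module above persists credentials in the same `auths` layout the Docker CLI itself uses: base64 of `username:password`, keyed by registry URL. A minimal standalone sketch of that encoding (the helper names here are illustrative, not part of the module):

```python
import base64
import json

def encode_docker_auth(username, password, email=None):
    # config.json stores base64("username:password") under auths[registry]
    b64auth = base64.b64encode(
        username.encode("utf-8") + b":" + password.encode("utf-8")
    )
    return {"auth": b64auth.decode("utf-8"), "email": email}

def build_config(registry_url, username, password):
    # mirrors the dict shape update_config_file() writes for one registry
    return {"auths": {registry_url: encode_docker_auth(username, password)}}

print(json.dumps(build_config("localhost:5000", "testuser", "secret"),
                 indent=5, sort_keys=True))
```

Decoding the `auth` value with `base64.b64decode` recovers `username:password`, which is how the CLI and SDK read the file back.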
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,232 |
Docker logout reports no change even though it did something
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
`docker_login` reports no change even though it logged out.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`docker_login`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/raphael/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Client: Ubuntu 18.04 LTS 4.18.0-24-generic, ansible 2.8.1
Server: RHE7 3.10.0-957.21.3.el7.x86_64
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Pull some Docker images
block:
- name: Log into Harbor (Docker registry)
docker_login:
registry: "{{docker_registry}}"
username: "{{harbor_user}}"
password: "{{harbor_password}}"
reauthorize: yes
- name: Pull Docker images
loop:
- "some_image"
- "other_image"
docker_image:
name: "{{ item }}"
state: present
source: pull
always:
- name: Docker logout
docker_login:
registry: "{{docker_registry}}"
state: absent
```
##### EXPECTED RESULTS
Task `Docker logout` should report changes.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [gears : Log into Harbor (Docker registry)]
<snip>
changed: [<snip>] => {
"changed": true,
"invocation": {
"module_args": {
"api_version": "auto",
"ca_cert": null,
"client_cert": null,
"client_key": null,
"config_path": "/home/<snip>/.docker/config.json",
"debug": false,
"docker_host": "unix://var/run/docker.sock",
"email": null,
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"reauthorize": true,
"registry": "<snip>",
"registry_url": "<snip>,
"ssl_version": null,
"state": "present",
"timeout": 60,
"tls": false,
"tls_hostname": "localhost",
"username": "<snip>",
"validate_certs": false
}
},
"login_result": {
"IdentityToken": "",
"Status": "Login Succeeded"
}
}
<snip>
TASK [gears : Pull Docker images]
ok:
<snip>
TASK [gears : Docker logout]
<snip>
ok: [<snip>] => {
"changed": false,
"invocation": {
"module_args": {
"api_version": "auto",
"ca_cert": null,
"client_cert": null,
"client_key": null,
"config_path": "/home/<snip>/.docker/config.json",
"debug": false,
"docker_host": "unix://var/run/docker.sock",
"email": null,
"password": null,
"reauthorize": false,
"registry": "<snip>",
"registry_url": "<snip>",
"ssl_version": null,
"state": "absent",
"timeout": 60,
"tls": false,
"tls_hostname": "localhost",
"username": null,
"validate_certs": false
}
},
"login_result": {}
}
```
But, on that host:
```
$ cat .docker/config.json | grep auth
"auths": {}
```
When I remove the logout task, the login appears in that output. So logout _is_ effective.
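The eventual fix infers `changed` by inspecting the output strings of the `docker logout` CLI call, as the `logout()` method in the module source in this record does. A standalone sketch of that check (the function name is illustrative):

```python
def logout_changed(cli_output):
    """Infer whether `docker logout` removed credentials, from its output.

    Mirrors the byte-string checks in LoginManager.logout(): returns False
    when no credentials existed, True when they were removed, and None when
    the output is unrecognized (caller should warn).
    """
    if b"Not logged in to " in cli_output:
        return False
    if b"Removing login credentials for " in cli_output:
        return True
    return None

print(logout_changed(b"Removing login credentials for https://index.docker.io/v1/\n"))
```

This is why the reported bug manifested as `changed: false`: before the fix, the module never looked at the CLI output at all.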
|
https://github.com/ansible/ansible/issues/59232
|
https://github.com/ansible/ansible/pull/62621
|
a7b239708e961469b2c96451ec3c7df6e7b1a225
|
2e5137078d33db1b83f343edc0bf81fd4258f140
| 2019-07-18T09:00:50Z |
python
| 2019-09-21T13:13:31Z |
changelogs/fragments/62621-docker_login-fix-60381.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,232 |
Docker logout reports no change even though it did something
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
`docker_login` reports no change even though it logged out.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`docker_login`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/raphael/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Client: Ubuntu 18.04 LTS 4.18.0-24-generic, ansible 2.8.1
Server: RHE7 3.10.0-957.21.3.el7.x86_64
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Pull some Docker images
block:
- name: Log into Harbor (Docker registry)
docker_login:
registry: "{{docker_registry}}"
username: "{{harbor_user}}"
password: "{{harbor_password}}"
reauthorize: yes
- name: Pull Docker images
loop:
- "some_image"
- "other_image"
docker_image:
name: "{{ item }}"
state: present
source: pull
always:
- name: Docker logout
docker_login:
registry: "{{docker_registry}}"
state: absent
```
##### EXPECTED RESULTS
Task `Docker logout` should report changes.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [gears : Log into Harbor (Docker registry)]
<snip>
changed: [<snip>] => {
"changed": true,
"invocation": {
"module_args": {
"api_version": "auto",
"ca_cert": null,
"client_cert": null,
"client_key": null,
"config_path": "/home/<snip>/.docker/config.json",
"debug": false,
"docker_host": "unix://var/run/docker.sock",
"email": null,
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"reauthorize": true,
"registry": "<snip>",
"registry_url": "<snip>,
"ssl_version": null,
"state": "present",
"timeout": 60,
"tls": false,
"tls_hostname": "localhost",
"username": "<snip>",
"validate_certs": false
}
},
"login_result": {
"IdentityToken": "",
"Status": "Login Succeeded"
}
}
<snip>
TASK [gears : Pull Docker images]
ok:
<snip>
TASK [gears : Docker logout]
<snip>
ok: [<snip>] => {
"changed": false,
"invocation": {
"module_args": {
"api_version": "auto",
"ca_cert": null,
"client_cert": null,
"client_key": null,
"config_path": "/home/<snip>/.docker/config.json",
"debug": false,
"docker_host": "unix://var/run/docker.sock",
"email": null,
"password": null,
"reauthorize": false,
"registry": "<snip>",
"registry_url": "<snip>",
"ssl_version": null,
"state": "absent",
"timeout": 60,
"tls": false,
"tls_hostname": "localhost",
"username": null,
"validate_certs": false
}
},
"login_result": {}
}
```
But, on that host:
```
$ cat .docker/config.json | grep auth
"auths": {}
```
When I remove the logout task, the login appears in that output. So logout _is_ effective.
|
https://github.com/ansible/ansible/issues/59232
|
https://github.com/ansible/ansible/pull/62621
|
a7b239708e961469b2c96451ec3c7df6e7b1a225
|
2e5137078d33db1b83f343edc0bf81fd4258f140
| 2019-07-18T09:00:50Z |
python
| 2019-09-21T13:13:31Z |
lib/ansible/modules/cloud/docker/docker_login.py
|
#!/usr/bin/python
#
# (c) 2016 Olaf Kilian <[email protected]>
# Chris Houseknecht, <[email protected]>
# James Tanner, <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: docker_login
short_description: Log into a Docker registry.
version_added: "2.0"
description:
- Provides functionality similar to the "docker login" command.
- Authenticate with a docker registry and add the credentials to your local Docker config file. Adding the
credentials to the config file allows future connections to the registry using tools such as Ansible's Docker
modules, the Docker CLI and Docker SDK for Python without needing to provide credentials.
- Running in check mode will perform the authentication without updating the config file.
options:
registry_url:
required: False
description:
- The registry URL.
type: str
default: "https://index.docker.io/v1/"
aliases:
- registry
- url
username:
description:
- The username for the registry account
type: str
required: yes
password:
description:
- The plaintext password for the registry account
type: str
required: yes
email:
required: False
description:
- "The email address for the registry account."
type: str
reauthorize:
description:
- Refresh existing authentication found in the configuration file.
type: bool
default: no
aliases:
- reauth
config_path:
description:
- Custom path to the Docker CLI configuration file.
type: path
default: ~/.docker/config.json
aliases:
- dockercfg_path
state:
version_added: '2.3'
description:
- This controls the current state of the user. C(present) will log in a user, C(absent) will log them out.
- To log out you only need the registry server, which defaults to DockerHub.
- Before 2.1 you could ONLY log in.
- Docker does not support 'logout' with a custom config file.
type: str
default: 'present'
choices: ['present', 'absent']
extends_documentation_fragment:
- docker
- docker.docker_py_1_documentation
requirements:
- "L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) >= 1.8.0 (use L(docker-py,https://pypi.org/project/docker-py/) for Python 2.6)"
- "Docker API >= 1.20"
- "Only to be able to log out, that is for I(state) = C(absent): the C(docker) command line utility"
author:
- Olaf Kilian (@olsaki) <[email protected]>
- Chris Houseknecht (@chouseknecht)
'''
EXAMPLES = '''
- name: Log into DockerHub
docker_login:
username: docker
password: rekcod
- name: Log into private registry and force re-authorization
docker_login:
registry: your.private.registry.io
username: yourself
password: secrets3
reauthorize: yes
- name: Log into DockerHub using a custom config file
docker_login:
username: docker
password: rekcod
config_path: /tmp/.mydockercfg
- name: Log out of DockerHub
docker_login:
state: absent
'''
RETURN = '''
login_result:
description: Results from the login.
returned: when state='present'
type: dict
sample: {
"email": "[email protected]",
"serveraddress": "localhost:5000",
"username": "testuser"
}
'''
import base64
import json
import os
import re
import traceback
try:
from docker.errors import DockerException
except ImportError:
# missing Docker SDK for Python handled in ansible.module_utils.docker.common
pass
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.docker.common import (
AnsibleDockerClient,
DEFAULT_DOCKER_REGISTRY,
DockerBaseClass,
EMAIL_REGEX,
RequestException,
)
class LoginManager(DockerBaseClass):
def __init__(self, client, results):
super(LoginManager, self).__init__()
self.client = client
self.results = results
parameters = self.client.module.params
self.check_mode = self.client.check_mode
self.registry_url = parameters.get('registry_url')
self.username = parameters.get('username')
self.password = parameters.get('password')
self.email = parameters.get('email')
self.reauthorize = parameters.get('reauthorize')
self.config_path = parameters.get('config_path')
if parameters['state'] == 'present':
self.login()
else:
self.logout()
def fail(self, msg):
self.client.fail(msg)
def login(self):
'''
Log into the registry with provided username/password. On success update the config
file with the new authorization.
:return: None
'''
if self.email and not re.match(EMAIL_REGEX, self.email):
self.fail("Parameter error: the email address appears to be incorrect. Expecting it to match "
"/%s/" % (EMAIL_REGEX))
self.results['actions'].append("Logged into %s" % (self.registry_url))
self.log("Log into %s with username %s" % (self.registry_url, self.username))
try:
response = self.client.login(
self.username,
password=self.password,
email=self.email,
registry=self.registry_url,
reauth=self.reauthorize,
dockercfg_path=self.config_path
)
except Exception as exc:
self.fail("Logging into %s for user %s failed - %s" % (self.registry_url, self.username, str(exc)))
# If user is already logged in, then response contains password for user
# This returns correct password if user is logged in and wrong password is given.
if 'password' in response:
del response['password']
self.results['login_result'] = response
if not self.check_mode:
self.update_config_file()
def logout(self):
'''
Log out of the registry. On success update the config file.
TODO: port to API once docker.py supports this.
:return: None
'''
cmd = [self.client.module.get_bin_path('docker', True), "logout", self.registry_url]
# TODO: docker does not support config file in logout, restore this when they do
# if self.config_path and self.config_file_exists(self.config_path):
# cmd.extend(["--config", self.config_path])
(rc, out, err) = self.client.module.run_command(cmd)
if rc != 0:
self.fail("Could not log out: %s" % err)
if b'Not logged in to ' in out:
self.results['changed'] = False
elif b'Removing login credentials for ' in out:
self.results['changed'] = True
else:
self.client.module.warn('Unable to determine whether logout was successful.')
# Adding output to actions, so that user can inspect what was actually returned
self.results['actions'].append(to_text(out))
def config_file_exists(self, path):
if os.path.exists(path):
self.log("Configuration file %s exists" % (path))
return True
self.log("Configuration file %s not found." % (path))
return False
def create_config_file(self, path):
'''
Create a config file with a JSON blob containing an auths key.
:return: None
'''
self.log("Creating docker config file %s" % (path))
config_path_dir = os.path.dirname(path)
if not os.path.exists(config_path_dir):
try:
os.makedirs(config_path_dir)
except Exception as exc:
self.fail("Error: failed to create %s - %s" % (config_path_dir, str(exc)))
self.write_config(path, dict(auths=dict()))
def write_config(self, path, config):
try:
# use a context manager so the file handle is closed even on error
with open(path, "w") as f:
json.dump(config, f, indent=5, sort_keys=True)
except Exception as exc:
self.fail("Error: failed to write config to %s - %s" % (path, str(exc)))
def update_config_file(self):
'''
If the authorization is not stored in the config file or reauthorize is True,
update the config file with the new authorization.
:return: None
'''
path = self.config_path
if not self.config_file_exists(path):
self.create_config_file(path)
try:
# read the existing config
with open(path, "r") as f:
config = json.load(f)
except ValueError:
self.log("Error reading config from %s" % (path))
config = dict()
if not config.get('auths'):
self.log("Adding auths dict to config.")
config['auths'] = dict()
if not config['auths'].get(self.registry_url):
self.log("Adding registry_url %s to auths." % (self.registry_url))
config['auths'][self.registry_url] = dict()
b64auth = base64.b64encode(
to_bytes(self.username) + b':' + to_bytes(self.password)
)
auth = to_text(b64auth)
encoded_credentials = dict(
auth=auth,
email=self.email
)
if config['auths'][self.registry_url] != encoded_credentials or self.reauthorize:
# Update the config file with the new authorization
config['auths'][self.registry_url] = encoded_credentials
self.log("Updating config file %s with new authorization for %s" % (path, self.registry_url))
self.results['actions'].append("Updated config file %s with new authorization for %s" % (
path, self.registry_url))
self.results['changed'] = True
self.write_config(path, config)
def main():
argument_spec = dict(
registry_url=dict(type='str', default=DEFAULT_DOCKER_REGISTRY, aliases=['registry', 'url']),
username=dict(type='str'),
password=dict(type='str', no_log=True),
email=dict(type='str'),
reauthorize=dict(type='bool', default=False, aliases=['reauth']),
state=dict(type='str', default='present', choices=['present', 'absent']),
config_path=dict(type='path', default='~/.docker/config.json', aliases=['dockercfg_path']),
)
required_if = [
('state', 'present', ['username', 'password']),
]
client = AnsibleDockerClient(
argument_spec=argument_spec,
supports_check_mode=True,
required_if=required_if,
min_docker_api_version='1.20',
)
try:
results = dict(
changed=False,
actions=[],
login_result={}
)
LoginManager(client, results)
if 'actions' in results:
del results['actions']
client.module.exit_json(**results)
except DockerException as e:
client.fail('An unexpected docker error occurred: {0}'.format(e), exception=traceback.format_exc())
except RequestException as e:
client.fail('An unexpected requests error occurred when docker-py tried to talk to the docker daemon: {0}'.format(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,780 |
Add documentation for the network resource model builder
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
Add how to use the resource model builder to the network developer guide.
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/61780
|
https://github.com/ansible/ansible/pull/62222
|
fbf182c3690d54de98e722c90e5f6ce273ffccba
|
b17581a3075f571ed5b48126282e086a6efa30cc
| 2019-09-04T15:15:25Z |
python
| 2019-09-23T15:10:05Z |
docs/docsite/rst/network/dev_guide/developing_resource_modules_network.rst
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,780 |
Add documentation for the network resource model builder
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
Add how to use the resource model builder to the network developer guide.
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/61780
|
https://github.com/ansible/ansible/pull/62222
|
fbf182c3690d54de98e722c90e5f6ce273ffccba
|
b17581a3075f571ed5b48126282e086a6efa30cc
| 2019-09-04T15:15:25Z |
python
| 2019-09-23T15:10:05Z |
docs/docsite/rst/network/dev_guide/index.rst
|
.. _network_developer_guide:
**************************************
Developer Guide for Network Automation
**************************************
Welcome to the Developer Guide for Ansible Network Automation!
**Who should use this guide?**
If you want to extend Ansible for Network Automation by creating a module or plugin, this guide is for you. This guide is specific to networking. You should already be familiar with how to create, test, and document modules and plugins, as well as the prerequisites for getting your module or plugin accepted into the main Ansible repository. See the :ref:`developer_guide` for details. Before you proceed, please read:
* How to :ref:`add a custom plugin or module locally <developing_locally>`.
* How to figure out if :ref:`developing a module is the right approach <module_dev_should_you>` for my use case.
* How to :ref:`set up my Python development environment <environment_setup>`.
* How to :ref:`get started writing a module <developing_modules_general>`.
Find the network developer task that best describes what you want to do:
* I want to :ref:`develop a network connection plugin <developing_plugins_network>`.
* I want to :ref:`document my set of modules for a network platform <documenting_modules_network>`.
If you prefer to read the entire guide, here's a list of the pages in order.
.. toctree::
:maxdepth: 1
developing_plugins_network
documenting_modules_network
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,411 |
Allow to handle 'Not supported powershell version'
|
If I am blind, please close.
##### SUMMARY
I am working in an environment where I may still encounter some old W2k3 hosts which do not support PowerShell > 2. In those cases I would like to avoid processing and to be able to handle the error. Currently, if an old version of PoSH is reached, the playbook will just fail, not allowing me to handle it even with block + rescue.
In init part:
```
if($PSVersionTable.PSVersion.Major -lt 3)
{
write-host "Unsupported PoSH version"
exit
}
```
This can later be easily adjusted when requiring higher versions.
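On the controller side, a caller could turn that sentinel output into an error a playbook `block`/`rescue` can catch, instead of an opaque failure. The sketch below is purely illustrative (the class and function names are hypothetical, not the actual winrm plugin API):

```python
class UnsupportedPowerShellError(Exception):
    """Remote host reported an unsupported PowerShell version."""

# the marker string the reporter's init snippet writes before exiting
SENTINEL = "Unsupported PoSH version"

def check_bootstrap_output(stdout):
    # If the bootstrap script printed the sentinel and exited early,
    # raise a distinct, catchable error type so the caller can decide
    # how to react (skip the host, warn, or fail).
    if SENTINEL in stdout:
        raise UnsupportedPowerShellError(
            "remote host runs PowerShell < 3; module execution skipped"
        )
    return stdout

print(check_bootstrap_output("normal module output"))
```

The key design choice is raising a dedicated exception type rather than a generic failure, which is what makes selective handling possible.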
##### ISSUE TYPE
- Bug report
##### COMPONENT NAME
`lib/ansible/plugins/connection/winrm.py`
|
https://github.com/ansible/ansible/issues/62411
|
https://github.com/ansible/ansible/pull/62634
|
77115247180e0dcb1178bed8654a7a6264bfa476
|
d4ec9422a3159230b3b56dce5059ee3250bf800c
| 2019-09-17T12:56:25Z |
python
| 2019-09-24T12:43:14Z |
changelogs/fragments/pwsh-minimum.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,411 |
Allow to handle 'Not supported powershell version'
|
If I am blind, please close.
##### SUMMARY
I am working in an environment where I may still encounter some old W2k3 hosts which do not support PowerShell > 2. In those cases I would like to avoid processing and to be able to handle the error. Currently, if an old version of PoSH is reached, the playbook will just fail, not allowing me to handle it even with block + rescue.
In init part:
```
if($PSVersionTable.PSVersion.Major -lt 3)
{
write-host "Unsupported PoSH version"
exit
}
```
This can later be easily adjusted when requiring higher versions.
##### ISSUE TYPE
- Bug report
##### COMPONENT NAME
`lib/ansible/plugins/connection/winrm.py`
|
https://github.com/ansible/ansible/issues/62411
|
https://github.com/ansible/ansible/pull/62634
|
77115247180e0dcb1178bed8654a7a6264bfa476
|
d4ec9422a3159230b3b56dce5059ee3250bf800c
| 2019-09-17T12:56:25Z |
python
| 2019-09-24T12:43:14Z |
lib/ansible/executor/powershell/bootstrap_wrapper.ps1
|
&chcp.com 65001 > $null
$exec_wrapper_str = $input | Out-String
$split_parts = $exec_wrapper_str.Split(@("`0`0`0`0"), 2, [StringSplitOptions]::RemoveEmptyEntries)
If ($split_parts.Length -ne 2) { throw "invalid payload" }
Set-Variable -Name json_raw -Value $split_parts[1]
$exec_wrapper = [ScriptBlock]::Create($split_parts[0])
&$exec_wrapper
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,613 |
win_get_url fails with empty string proxy parameters
|
##### SUMMARY
We have to support public and private environments where we may or may not have a proxy, so we configure tasks with proxy conditional on a `use_proxy` variable in our roles. After upgrading to 2.8 the `win_get_url` module fails if the proxy username and password parameters have an empty string value. The same code worked in 2.7.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_get_url
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.4
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Apr 8 2019, 18:17:52) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
DEFAULT_LOCAL_TMP(/etc/ansible/ansible.cfg) = /tmp/ansible-$USER/ansible-local-1013gsipsbln
TRANSFORM_INVALID_GROUP_CHARS(/etc/ansible/ansible.cfg) = false
```
##### OS / ENVIRONMENT
Docker container based on `runatlantis/atlantis:v0.8.2`.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- debug:
msg:
- "proxy: {{ proxy }}"
- "proxy_password: {{ proxy_password is defined | ternary('<masked>', '<undefined>') }}"
- "proxy_username: {{ proxy_username is defined | ternary(proxy_username, '<undefined>') }}"
- name: Push a file (Windows)
win_get_url:
url: https://github.com/ansible/ansible/archive/v2.8.5.tar.gz
dest: C:\\Windows\\Temp\\
use_proxy: "{{ use_proxy }}"
proxy_url: "{{ proxy | default('') }}"
proxy_password: "{{ proxy_password | default('') }}"
proxy_username: "{{ proxy_username | default('') }}"
when: ansible_os_family == "Windows"
```
The `use_proxy` variable is defaulted to `no` in our `defaults/main.yml` and the rest have to be supplied if needed. After upgrading to 2.8 we instead have to supply a valid URL (i.e. the module parses out scheme, host and port, and fails if they are not valid) and use `default('undefined')` for the credential parameters.
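One workaround that avoids passing empty strings altogether is to default the optional parameters to `omit`, so Ansible drops them from the module invocation entirely (a sketch, assuming the proxy variables may simply be undefined when `use_proxy` is false):

```yaml
- name: Push a file (Windows)
  win_get_url:
    url: https://github.com/ansible/ansible/archive/v2.8.5.tar.gz
    dest: C:\\Windows\\Temp\\
    use_proxy: "{{ use_proxy }}"
    proxy_url: "{{ proxy | default(omit) }}"
    proxy_username: "{{ proxy_username | default(omit) }}"
    proxy_password: "{{ proxy_password | default(omit) }}"
  when: ansible_os_family == "Windows"
```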
##### EXPECTED RESULTS
```
TASK [build : Push a file (Windows)] *******************************************
skipping: [ci-agent-co7]
changed: [ci-agent-w16]
```
##### ACTUAL RESULTS
```
TASK [build : Push a file (Windows)] *******************************************
skipping: [ci-agent-co7]
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: at <ScriptBlock>, <No file>: line 39
fatal: [ci-agent-w16]: FAILED! => {"changed": false, "msg": "Unhandled exception while executing module: Exception calling \"Create\" with \"2\" argument(s): \"String cannot be of zero length.\r\nParameter name: oldValue\""}
```
|
https://github.com/ansible/ansible/issues/62613
|
https://github.com/ansible/ansible/pull/62804
|
1b3bf33bdf2530adb3a7cbf8055007acb09b3bb2
|
322e22583018a8f775c79baaa15021d799eb564e
| 2019-09-19T18:15:29Z |
python
| 2019-09-25T01:45:53Z |
changelogs/fragments/ansible_basic_no_log_empty_string.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,613 |
win_get_url fails with empty string proxy parameters
|
##### SUMMARY
We have to support public and private environments where we may or may not have a proxy, so we configure tasks with the proxy conditional on a `use_proxy` variable in our roles. After upgrading to 2.8 the `win_get_url` module fails if the proxy username and password parameters have an empty string value. The same code worked in 2.7.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_get_url
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.4
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Apr 8 2019, 18:17:52) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
DEFAULT_LOCAL_TMP(/etc/ansible/ansible.cfg) = /tmp/ansible-$USER/ansible-local-1013gsipsbln
TRANSFORM_INVALID_GROUP_CHARS(/etc/ansible/ansible.cfg) = false
```
##### OS / ENVIRONMENT
Docker container based on `runatlantis/atlantis:v0.8.2`.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- debug:
msg:
- "proxy: {{ proxy }}"
- "proxy_password: {{ proxy_password is defined | ternary('<masked>', '<undefined>') }}"
- "proxy_username: {{ proxy_username is defined | ternary(proxy_username, '<undefined>') }}"
- name: Push a file (Windows)
win_get_url:
url: https://github.com/ansible/ansible/archive/v2.8.5.tar.gz
dest: C:\\Windows\\Temp\\
use_proxy: "{{ use_proxy }}"
proxy_url: "{{ proxy | default('') }}"
proxy_password: "{{ proxy_password | default('') }}"
proxy_username: "{{ proxy_username | default('') }}"
when: ansible_os_family == "Windows"
```
The `use_proxy` variable is defaulted to `no` in our `defaults/main.yml` and the rest have to be supplied if needed. After upgrading to 2.8 we instead have to supply a valid URL (i.e. the module parses out scheme, host and port, and fails if they are not valid) and use `default('undefined')` for the credential parameters.
##### EXPECTED RESULTS
```
TASK [build : Push a file (Windows)] *******************************************
skipping: [ci-agent-co7]
changed: [ci-agent-w16]
```
##### ACTUAL RESULTS
```
TASK [build : Push a file (Windows)] *******************************************
skipping: [ci-agent-co7]
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: at <ScriptBlock>, <No file>: line 39
fatal: [ci-agent-w16]: FAILED! => {"changed": false, "msg": "Unhandled exception while executing module: Exception calling \"Create\" with \"2\" argument(s): \"String cannot be of zero length.\r\nParameter name: oldValue\""}
```
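The traceback comes from the module's output sanitisation: every `no_log` parameter value is collected and substituted out of the result, and .NET's `String.Replace` throws "String cannot be of zero length" when one of the collected values is an empty string. A minimal Python sketch of the same pitfall and the guard that avoids it (the function name is illustrative; the placeholder string is the one Ansible uses):

```python
def remove_no_log_values(text, no_log_values):
    """Replace every collected no_log secret in text with a placeholder."""
    for secret in no_log_values:
        # An empty "secret" must be skipped: .NET's String.Replace raises
        # "String cannot be of zero length" for it, and substituting an
        # empty string is meaningless anyway, so guard before replacing.
        if not secret:
            continue
        text = text.replace(secret, "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER")
    return text

print(remove_no_log_values("user=admin pass=hunter2", {"hunter2", ""}))
```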
|
https://github.com/ansible/ansible/issues/62613
|
https://github.com/ansible/ansible/pull/62804
|
1b3bf33bdf2530adb3a7cbf8055007acb09b3bb2
|
322e22583018a8f775c79baaa15021d799eb564e
| 2019-09-19T18:15:29Z |
python
| 2019-09-25T01:45:53Z |
lib/ansible/module_utils/csharp/Ansible.Basic.cs
|
using System;
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Management.Automation;
using System.Management.Automation.Runspaces;
using System.Reflection;
using System.Runtime.InteropServices;
using System.Security.AccessControl;
using System.Security.Principal;
#if CORECLR
using Newtonsoft.Json;
#else
using System.Web.Script.Serialization;
#endif
// System.Diagnostics.EventLog.dll references differently versioned dlls that are
// loaded in PSCore; suppress CS1702 so the code ignores this warning
//NoWarn -Name CS1702 -CLR Core
//AssemblyReference -Name Newtonsoft.Json.dll -CLR Core
//AssemblyReference -Name System.ComponentModel.Primitives.dll -CLR Core
//AssemblyReference -Name System.Diagnostics.EventLog.dll -CLR Core
//AssemblyReference -Name System.IO.FileSystem.AccessControl.dll -CLR Core
//AssemblyReference -Name System.Security.Principal.Windows.dll -CLR Core
//AssemblyReference -Name System.Security.AccessControl.dll -CLR Core
//AssemblyReference -Name System.Web.Extensions.dll -CLR Framework
namespace Ansible.Basic
{
public class AnsibleModule
{
public delegate void ExitHandler(int rc);
public static ExitHandler Exit = new ExitHandler(ExitModule);
public delegate void WriteLineHandler(string line);
public static WriteLineHandler WriteLine = new WriteLineHandler(WriteLineModule);
private static List<string> BOOLEANS_TRUE = new List<string>() { "y", "yes", "on", "1", "true", "t", "1.0" };
private static List<string> BOOLEANS_FALSE = new List<string>() { "n", "no", "off", "0", "false", "f", "0.0" };
private string remoteTmp = Path.GetTempPath();
private string tmpdir = null;
private HashSet<string> noLogValues = new HashSet<string>();
private List<string> optionsContext = new List<string>();
private List<string> warnings = new List<string>();
private List<Dictionary<string, string>> deprecations = new List<Dictionary<string, string>>();
private List<string> cleanupFiles = new List<string>();
private Dictionary<string, string> passVars = new Dictionary<string, string>()
{
        // a null value means no mapping, not used in Ansible.Basic.AnsibleModule
{ "check_mode", "CheckMode" },
{ "debug", "DebugMode" },
{ "diff", "DiffMode" },
{ "keep_remote_files", "KeepRemoteFiles" },
{ "module_name", "ModuleName" },
{ "no_log", "NoLog" },
{ "remote_tmp", "remoteTmp" },
{ "selinux_special_fs", null },
{ "shell_executable", null },
{ "socket", null },
{ "string_conversion_action", null },
{ "syslog_facility", null },
{ "tmpdir", "tmpdir" },
{ "verbosity", "Verbosity" },
{ "version", "AnsibleVersion" },
};
private List<string> passBools = new List<string>() { "check_mode", "debug", "diff", "keep_remote_files", "no_log" };
private List<string> passInts = new List<string>() { "verbosity" };
private Dictionary<string, List<object>> specDefaults = new Dictionary<string, List<object>>()
{
// key - (default, type) - null is freeform
{ "apply_defaults", new List<object>() { false, typeof(bool) } },
{ "aliases", new List<object>() { typeof(List<string>), typeof(List<string>) } },
{ "choices", new List<object>() { typeof(List<object>), typeof(List<object>) } },
{ "default", new List<object>() { null, null } },
{ "elements", new List<object>() { null, null } },
{ "mutually_exclusive", new List<object>() { typeof(List<List<string>>), null } },
{ "no_log", new List<object>() { false, typeof(bool) } },
{ "options", new List<object>() { typeof(Hashtable), typeof(Hashtable) } },
{ "removed_in_version", new List<object>() { null, typeof(string) } },
{ "required", new List<object>() { false, typeof(bool) } },
{ "required_by", new List<object>() { typeof(Hashtable), typeof(Hashtable) } },
{ "required_if", new List<object>() { typeof(List<List<object>>), null } },
{ "required_one_of", new List<object>() { typeof(List<List<string>>), null } },
{ "required_together", new List<object>() { typeof(List<List<string>>), null } },
{ "supports_check_mode", new List<object>() { false, typeof(bool) } },
{ "type", new List<object>() { "str", null } },
};
private Dictionary<string, Delegate> optionTypes = new Dictionary<string, Delegate>()
{
{ "bool", new Func<object, bool>(ParseBool) },
{ "dict", new Func<object, Dictionary<string, object>>(ParseDict) },
{ "float", new Func<object, float>(ParseFloat) },
{ "int", new Func<object, int>(ParseInt) },
{ "json", new Func<object, string>(ParseJson) },
{ "list", new Func<object, List<object>>(ParseList) },
{ "path", new Func<object, string>(ParsePath) },
{ "raw", new Func<object, object>(ParseRaw) },
{ "sid", new Func<object, SecurityIdentifier>(ParseSid) },
{ "str", new Func<object, string>(ParseStr) },
};
public Dictionary<string, object> Diff = new Dictionary<string, object>();
public IDictionary Params = null;
public Dictionary<string, object> Result = new Dictionary<string, object>() { { "changed", false } };
public bool CheckMode { get; private set; }
public bool DebugMode { get; private set; }
public bool DiffMode { get; private set; }
public bool KeepRemoteFiles { get; private set; }
public string ModuleName { get; private set; }
public bool NoLog { get; private set; }
public int Verbosity { get; private set; }
public string AnsibleVersion { get; private set; }
public string Tmpdir
{
get
{
if (tmpdir == null)
{
SecurityIdentifier user = WindowsIdentity.GetCurrent().User;
DirectorySecurity dirSecurity = new DirectorySecurity();
dirSecurity.SetOwner(user);
dirSecurity.SetAccessRuleProtection(true, false); // disable inheritance rules
FileSystemAccessRule ace = new FileSystemAccessRule(user, FileSystemRights.FullControl,
InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
PropagationFlags.None, AccessControlType.Allow);
dirSecurity.AddAccessRule(ace);
string baseDir = Path.GetFullPath(Environment.ExpandEnvironmentVariables(remoteTmp));
if (!Directory.Exists(baseDir))
{
string failedMsg = null;
try
{
#if CORECLR
DirectoryInfo createdDir = Directory.CreateDirectory(baseDir);
FileSystemAclExtensions.SetAccessControl(createdDir, dirSecurity);
#else
Directory.CreateDirectory(baseDir, dirSecurity);
#endif
}
catch (Exception e)
{
failedMsg = String.Format("Failed to create base tmpdir '{0}': {1}", baseDir, e.Message);
}
if (failedMsg != null)
{
string envTmp = Path.GetTempPath();
Warn(String.Format("Unable to use '{0}' as temporary directory, falling back to system tmp '{1}': {2}", baseDir, envTmp, failedMsg));
baseDir = envTmp;
}
else
{
NTAccount currentUser = (NTAccount)user.Translate(typeof(NTAccount));
string warnMsg = String.Format("Module remote_tmp {0} did not exist and was created with FullControl to {1}, ", baseDir, currentUser.ToString());
warnMsg += "this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually";
Warn(warnMsg);
}
}
string dateTime = DateTime.Now.ToFileTime().ToString();
string dirName = String.Format("ansible-moduletmp-{0}-{1}", dateTime, new Random().Next(0, int.MaxValue));
string newTmpdir = Path.Combine(baseDir, dirName);
#if CORECLR
DirectoryInfo tmpdirInfo = Directory.CreateDirectory(newTmpdir);
FileSystemAclExtensions.SetAccessControl(tmpdirInfo, dirSecurity);
#else
Directory.CreateDirectory(newTmpdir, dirSecurity);
#endif
tmpdir = newTmpdir;
if (!KeepRemoteFiles)
cleanupFiles.Add(tmpdir);
}
return tmpdir;
}
}
public AnsibleModule(string[] args, IDictionary argumentSpec)
{
// NoLog is not set yet, we cannot rely on FailJson to sanitize the output
// Do the minimum amount to get this running before we actually parse the params
Dictionary<string, string> aliases = new Dictionary<string, string>();
try
{
ValidateArgumentSpec(argumentSpec);
Params = GetParams(args);
aliases = GetAliases(argumentSpec, Params);
SetNoLogValues(argumentSpec, Params);
}
catch (Exception e)
{
Dictionary<string, object> result = new Dictionary<string, object>
{
{ "failed", true },
{ "msg", String.Format("internal error: {0}", e.Message) },
{ "exception", e.ToString() }
};
WriteLine(ToJson(result));
Exit(1);
}
// Initialise public properties to the defaults before we parse the actual inputs
CheckMode = false;
DebugMode = false;
DiffMode = false;
KeepRemoteFiles = false;
ModuleName = "undefined win module";
NoLog = (bool)argumentSpec["no_log"];
Verbosity = 0;
AppDomain.CurrentDomain.ProcessExit += CleanupFiles;
List<string> legalInputs = passVars.Keys.Select(v => "_ansible_" + v).ToList();
legalInputs.AddRange(((IDictionary)argumentSpec["options"]).Keys.Cast<string>().ToList());
legalInputs.AddRange(aliases.Keys.Cast<string>().ToList());
CheckArguments(argumentSpec, Params, legalInputs);
        // Set an Ansible-friendly invocation value in the result object
Dictionary<string, object> invocation = new Dictionary<string, object>() { { "module_args", Params } };
Result["invocation"] = RemoveNoLogValues(invocation, noLogValues);
if (!NoLog)
LogEvent(String.Format("Invoked with:\r\n {0}", FormatLogData(Params, 2)), sanitise: false);
}
public static AnsibleModule Create(string[] args, IDictionary argumentSpec)
{
return new AnsibleModule(args, argumentSpec);
}
public void Debug(string message)
{
if (DebugMode)
LogEvent(String.Format("[DEBUG] {0}", message));
}
public void Deprecate(string message, string version)
{
deprecations.Add(new Dictionary<string, string>() { { "msg", message }, { "version", version } });
LogEvent(String.Format("[DEPRECATION WARNING] {0} {1}", message, version));
}
public void ExitJson()
{
WriteLine(GetFormattedResults(Result));
CleanupFiles(null, null);
Exit(0);
}
public void FailJson(string message) { FailJson(message, null, null); }
public void FailJson(string message, ErrorRecord psErrorRecord) { FailJson(message, psErrorRecord, null); }
public void FailJson(string message, Exception exception) { FailJson(message, null, exception); }
private void FailJson(string message, ErrorRecord psErrorRecord, Exception exception)
{
Result["failed"] = true;
Result["msg"] = RemoveNoLogValues(message, noLogValues);
if (!Result.ContainsKey("exception") && (Verbosity > 2 || DebugMode))
{
if (psErrorRecord != null)
{
string traceback = String.Format("{0}\r\n{1}", psErrorRecord.ToString(), psErrorRecord.InvocationInfo.PositionMessage);
traceback += String.Format("\r\n + CategoryInfo : {0}", psErrorRecord.CategoryInfo.ToString());
traceback += String.Format("\r\n + FullyQualifiedErrorId : {0}", psErrorRecord.FullyQualifiedErrorId.ToString());
traceback += String.Format("\r\n\r\nScriptStackTrace:\r\n{0}", psErrorRecord.ScriptStackTrace);
Result["exception"] = traceback;
}
else if (exception != null)
Result["exception"] = exception.ToString();
}
WriteLine(GetFormattedResults(Result));
CleanupFiles(null, null);
Exit(1);
}
public void LogEvent(string message, EventLogEntryType logEntryType = EventLogEntryType.Information, bool sanitise = true)
{
if (NoLog)
return;
string logSource = "Ansible";
bool logSourceExists = false;
try
{
logSourceExists = EventLog.SourceExists(logSource);
}
catch (System.Security.SecurityException) { } // non admin users may not have permission
if (!logSourceExists)
{
try
{
EventLog.CreateEventSource(logSource, "Application");
}
catch (System.Security.SecurityException)
{
// Cannot call Warn as that calls LogEvent and we get stuck in a loop
warnings.Add(String.Format("Access error when creating EventLog source {0}, logging to the Application source instead", logSource));
logSource = "Application";
}
}
if (sanitise)
message = (string)RemoveNoLogValues(message, noLogValues);
message = String.Format("{0} - {1}", ModuleName, message);
using (EventLog eventLog = new EventLog("Application"))
{
eventLog.Source = logSource;
try
{
eventLog.WriteEntry(message, logEntryType, 0);
}
catch (System.InvalidOperationException) { } // Ignore permission errors on the Application event log
catch (System.Exception e)
{
// Cannot call Warn as that calls LogEvent and we get stuck in a loop
warnings.Add(String.Format("Unknown error when creating event log entry: {0}", e.Message));
}
}
}
public void Warn(string message)
{
warnings.Add(message);
LogEvent(String.Format("[WARNING] {0}", message), EventLogEntryType.Warning);
}
public static object FromJson(string json) { return FromJson<object>(json); }
public static T FromJson<T>(string json)
{
#if CORECLR
return JsonConvert.DeserializeObject<T>(json);
#else
JavaScriptSerializer jss = new JavaScriptSerializer();
jss.MaxJsonLength = int.MaxValue;
jss.RecursionLimit = int.MaxValue;
return jss.Deserialize<T>(json);
#endif
}
public static string ToJson(object obj)
{
// Using PowerShell to serialize the JSON is preferable over the native .NET libraries as it handles
// PS Objects a lot better than the alternatives. In case we are debugging in Visual Studio we have a
// fallback to the other libraries as we won't be dealing with PowerShell objects there.
if (Runspace.DefaultRunspace != null)
{
PSObject rawOut = ScriptBlock.Create("ConvertTo-Json -InputObject $args[0] -Depth 99 -Compress").Invoke(obj)[0];
return rawOut.BaseObject as string;
}
else
{
#if CORECLR
return JsonConvert.SerializeObject(obj);
#else
JavaScriptSerializer jss = new JavaScriptSerializer();
jss.MaxJsonLength = int.MaxValue;
jss.RecursionLimit = int.MaxValue;
return jss.Serialize(obj);
#endif
}
}
public static IDictionary GetParams(string[] args)
{
if (args.Length > 0)
{
string inputJson = File.ReadAllText(args[0]);
Dictionary<string, object> rawParams = FromJson<Dictionary<string, object>>(inputJson);
if (!rawParams.ContainsKey("ANSIBLE_MODULE_ARGS"))
throw new ArgumentException("Module was unable to get ANSIBLE_MODULE_ARGS value from the argument path json");
return (IDictionary)rawParams["ANSIBLE_MODULE_ARGS"];
}
else
{
// $complex_args is already a Hashtable, no need to waste time converting to a dictionary
PSObject rawArgs = ScriptBlock.Create("$complex_args").Invoke()[0];
return rawArgs.BaseObject as Hashtable;
}
}
public static bool ParseBool(object value)
{
if (value.GetType() == typeof(bool))
return (bool)value;
List<string> booleans = new List<string>();
booleans.AddRange(BOOLEANS_TRUE);
booleans.AddRange(BOOLEANS_FALSE);
string stringValue = ParseStr(value).ToLowerInvariant().Trim();
if (BOOLEANS_TRUE.Contains(stringValue))
return true;
else if (BOOLEANS_FALSE.Contains(stringValue))
return false;
string msg = String.Format("The value '{0}' is not a valid boolean. Valid booleans include: {1}",
stringValue, String.Join(", ", booleans));
throw new ArgumentException(msg);
}
public static Dictionary<string, object> ParseDict(object value)
{
Type valueType = value.GetType();
if (valueType == typeof(Dictionary<string, object>))
return (Dictionary<string, object>)value;
else if (value is IDictionary)
return ((IDictionary)value).Cast<DictionaryEntry>().ToDictionary(kvp => (string)kvp.Key, kvp => kvp.Value);
else if (valueType == typeof(string))
{
string stringValue = (string)value;
if (stringValue.StartsWith("{") && stringValue.EndsWith("}"))
return FromJson<Dictionary<string, object>>((string)value);
else if (stringValue.IndexOfAny(new char[1] { '=' }) != -1)
{
List<string> fields = new List<string>();
List<char> fieldBuffer = new List<char>();
char? inQuote = null;
bool inEscape = false;
string field;
foreach (char c in stringValue.ToCharArray())
{
if (inEscape)
{
fieldBuffer.Add(c);
inEscape = false;
}
else if (c == '\\')
inEscape = true;
else if (inQuote == null && (c == '\'' || c == '"'))
inQuote = c;
else if (inQuote != null && c == inQuote)
inQuote = null;
else if (inQuote == null && (c == ',' || c == ' '))
{
field = String.Join("", fieldBuffer);
if (field != "")
fields.Add(field);
fieldBuffer = new List<char>();
}
else
fieldBuffer.Add(c);
}
field = String.Join("", fieldBuffer);
if (field != "")
fields.Add(field);
return fields.Distinct().Select(i => i.Split(new[] { '=' }, 2)).ToDictionary(i => i[0], i => i.Length > 1 ? (object)i[1] : null);
}
else
throw new ArgumentException("string cannot be converted to a dict, must either be a JSON string or in the key=value form");
}
throw new ArgumentException(String.Format("{0} cannot be converted to a dict", valueType.FullName));
}
public static float ParseFloat(object value)
{
if (value.GetType() == typeof(float))
return (float)value;
string valueStr = ParseStr(value);
return float.Parse(valueStr);
}
public static int ParseInt(object value)
{
Type valueType = value.GetType();
if (valueType == typeof(int))
return (int)value;
else
return Int32.Parse(ParseStr(value));
}
public static string ParseJson(object value)
{
// mostly used to ensure a dict is a json string as it may
// have been converted on the controller side
Type valueType = value.GetType();
if (value is IDictionary)
return ToJson(value);
else if (valueType == typeof(string))
return (string)value;
else
throw new ArgumentException(String.Format("{0} cannot be converted to json", valueType.FullName));
}
public static List<object> ParseList(object value)
{
if (value == null)
return null;
Type valueType = value.GetType();
if (valueType.IsGenericType && valueType.GetGenericTypeDefinition() == typeof(List<>))
return (List<object>)value;
else if (valueType == typeof(ArrayList))
return ((ArrayList)value).Cast<object>().ToList();
else if (valueType.IsArray)
return ((object[])value).ToList();
else if (valueType == typeof(string))
return ((string)value).Split(',').Select(s => s.Trim()).ToList<object>();
else if (valueType == typeof(int))
return new List<object>() { value };
else
throw new ArgumentException(String.Format("{0} cannot be converted to a list", valueType.FullName));
}
public static string ParsePath(object value)
{
string stringValue = ParseStr(value);
            // do not validate or expand the env vars if it starts with \\?\ as
            // it is a special path designed for the NT kernel to interpret
if (stringValue.StartsWith(@"\\?\"))
return stringValue;
stringValue = Environment.ExpandEnvironmentVariables(stringValue);
if (stringValue.IndexOfAny(Path.GetInvalidPathChars()) != -1)
throw new ArgumentException("string value contains invalid path characters, cannot convert to path");
// will fire an exception if it contains any invalid chars
Path.GetFullPath(stringValue);
return stringValue;
}
public static object ParseRaw(object value) { return value; }
public static SecurityIdentifier ParseSid(object value)
{
string stringValue = ParseStr(value);
try
{
return new SecurityIdentifier(stringValue);
}
            catch (ArgumentException) { }  // ignore failures, the string may not have been a SID
NTAccount account = new NTAccount(stringValue);
return (SecurityIdentifier)account.Translate(typeof(SecurityIdentifier));
}
public static string ParseStr(object value) { return value.ToString(); }
private void ValidateArgumentSpec(IDictionary argumentSpec)
{
Dictionary<string, object> changedValues = new Dictionary<string, object>();
foreach (DictionaryEntry entry in argumentSpec)
{
string key = (string)entry.Key;
// validate the key is a valid argument spec key
if (!specDefaults.ContainsKey(key))
{
string msg = String.Format("argument spec entry contains an invalid key '{0}', valid keys: {1}",
key, String.Join(", ", specDefaults.Keys));
throw new ArgumentException(FormatOptionsContext(msg, " - "));
}
// ensure the value is casted to the type we expect
Type optionType = null;
if (entry.Value != null)
optionType = (Type)specDefaults[key][1];
if (optionType != null)
{
Type actualType = entry.Value.GetType();
bool invalid = false;
if (optionType.IsGenericType && optionType.GetGenericTypeDefinition() == typeof(List<>))
{
// verify the actual type is not just a single value of the list type
Type entryType = optionType.GetGenericArguments()[0];
bool isArray = actualType.IsArray && (actualType.GetElementType() == entryType || actualType.GetElementType() == typeof(object));
if (actualType == entryType || isArray)
{
object[] rawArray;
if (isArray)
rawArray = (object[])entry.Value;
else
rawArray = new object[1] { entry.Value };
MethodInfo castMethod = typeof(Enumerable).GetMethod("Cast").MakeGenericMethod(entryType);
MethodInfo toListMethod = typeof(Enumerable).GetMethod("ToList").MakeGenericMethod(entryType);
var enumerable = castMethod.Invoke(null, new object[1] { rawArray });
var newList = toListMethod.Invoke(null, new object[1] { enumerable });
changedValues.Add(key, newList);
}
else if (actualType != optionType && !(actualType == typeof(List<object>)))
invalid = true;
}
else
invalid = actualType != optionType;
if (invalid)
{
string msg = String.Format("argument spec for '{0}' did not match expected type {1}: actual type {2}",
key, optionType.FullName, actualType.FullName);
throw new ArgumentException(FormatOptionsContext(msg, " - "));
}
}
// recursively validate the spec
if (key == "options" && entry.Value != null)
{
IDictionary optionsSpec = (IDictionary)entry.Value;
foreach (DictionaryEntry optionEntry in optionsSpec)
{
optionsContext.Add((string)optionEntry.Key);
IDictionary optionMeta = (IDictionary)optionEntry.Value;
ValidateArgumentSpec(optionMeta);
optionsContext.RemoveAt(optionsContext.Count - 1);
}
}
// validate the type and elements key type values are known types
            if ((key == "type" || key == "elements") && entry.Value != null)
{
Type valueType = entry.Value.GetType();
if (valueType == typeof(string))
{
string typeValue = (string)entry.Value;
if (!optionTypes.ContainsKey(typeValue))
{
string msg = String.Format("{0} '{1}' is unsupported", key, typeValue);
msg = String.Format("{0}. Valid types are: {1}", FormatOptionsContext(msg, " - "), String.Join(", ", optionTypes.Keys));
throw new ArgumentException(msg);
}
}
else if (!(entry.Value is Delegate))
{
string msg = String.Format("{0} must either be a string or delegate, was: {1}", key, valueType.FullName);
throw new ArgumentException(FormatOptionsContext(msg, " - "));
}
}
}
// Outside of the spec iterator, change the values that were casted above
foreach (KeyValuePair<string, object> changedValue in changedValues)
argumentSpec[changedValue.Key] = changedValue.Value;
// Now make sure all the metadata keys are set to their defaults
foreach (KeyValuePair<string, List<object>> metadataEntry in specDefaults)
{
List<object> defaults = metadataEntry.Value;
object defaultValue = defaults[0];
if (defaultValue != null && defaultValue.GetType() == typeof(Type).GetType())
defaultValue = Activator.CreateInstance((Type)defaultValue);
if (!argumentSpec.Contains(metadataEntry.Key))
argumentSpec[metadataEntry.Key] = defaultValue;
}
}
private Dictionary<string, string> GetAliases(IDictionary argumentSpec, IDictionary parameters)
{
Dictionary<string, string> aliasResults = new Dictionary<string, string>();
foreach (DictionaryEntry entry in (IDictionary)argumentSpec["options"])
{
string k = (string)entry.Key;
Hashtable v = (Hashtable)entry.Value;
List<string> aliases = (List<string>)v["aliases"];
object defaultValue = v["default"];
bool required = (bool)v["required"];
if (defaultValue != null && required)
throw new ArgumentException(String.Format("required and default are mutually exclusive for {0}", k));
foreach (string alias in aliases)
{
aliasResults.Add(alias, k);
if (parameters.Contains(alias))
parameters[k] = parameters[alias];
}
}
return aliasResults;
}
private void SetNoLogValues(IDictionary argumentSpec, IDictionary parameters)
{
foreach (DictionaryEntry entry in (IDictionary)argumentSpec["options"])
{
string k = (string)entry.Key;
Hashtable v = (Hashtable)entry.Value;
                if ((bool)v["no_log"])
                {
                    object noLogObject = parameters.Contains(k) ? parameters[k] : null;
                    string noLogString = noLogObject == null ? "" : noLogObject.ToString();
                    // skip empty strings, otherwise the String.Replace used to sanitise
                    // the module output throws "String cannot be of zero length"
                    if (noLogString != "")
                        noLogValues.Add(noLogString);
                }
object removedInVersion = v["removed_in_version"];
if (removedInVersion != null && parameters.Contains(k))
Deprecate(String.Format("Param '{0}' is deprecated. See the module docs for more information", k), removedInVersion.ToString());
}
}
private void CheckArguments(IDictionary spec, IDictionary param, List<string> legalInputs)
{
// initially parse the params and check for unsupported ones and set internal vars
CheckUnsupportedArguments(param, legalInputs);
// Only run this check if we are at the root argument (optionsContext.Count == 0)
if (CheckMode && !(bool)spec["supports_check_mode"] && optionsContext.Count == 0)
{
Result["skipped"] = true;
Result["msg"] = String.Format("remote module ({0}) does not support check mode", ModuleName);
ExitJson();
}
IDictionary optionSpec = (IDictionary)spec["options"];
CheckMutuallyExclusive(param, (IList)spec["mutually_exclusive"]);
CheckRequiredArguments(optionSpec, param);
// set the parameter types based on the type spec value
foreach (DictionaryEntry entry in optionSpec)
{
string k = (string)entry.Key;
Hashtable v = (Hashtable)entry.Value;
object value = param.Contains(k) ? param[k] : null;
if (value != null)
{
// convert the current value to the wanted type
Delegate typeConverter;
string type;
if (v["type"].GetType() == typeof(string))
{
type = (string)v["type"];
typeConverter = optionTypes[type];
}
else
{
type = "delegate";
typeConverter = (Delegate)v["type"];
}
try
{
value = typeConverter.DynamicInvoke(value);
param[k] = value;
}
catch (Exception e)
{
string msg = String.Format("argument for {0} is of type {1} and we were unable to convert to {2}: {3}",
k, value.GetType(), type, e.InnerException.Message);
FailJson(FormatOptionsContext(msg));
}
// ensure it matches the choices if there are choices set
List<string> choices = ((List<object>)v["choices"]).Select(x => x.ToString()).Cast<string>().ToList();
if (choices.Count > 0)
{
List<string> values;
string choiceMsg;
if (type == "list")
{
values = ((List<object>)value).Select(x => x.ToString()).Cast<string>().ToList();
choiceMsg = "one or more of";
}
else
{
values = new List<string>() { value.ToString() };
choiceMsg = "one of";
}
List<string> diffList = values.Except(choices, StringComparer.OrdinalIgnoreCase).ToList();
List<string> caseDiffList = values.Except(choices).ToList();
if (diffList.Count > 0)
{
string msg = String.Format("value of {0} must be {1}: {2}. Got no match for: {3}",
k, choiceMsg, String.Join(", ", choices), String.Join(", ", diffList));
FailJson(FormatOptionsContext(msg));
}
/*
For now we will just silently accept case insensitive choices, uncomment this if we want to add it back in
else if (caseDiffList.Count > 0)
{
// For backwards compatibility with Legacy.psm1 we need to be matching choices that are not case sensitive.
// We will warn the user it was case insensitive and tell them this will become case sensitive in the future.
string msg = String.Format(
"value of {0} was a case insensitive match of {1}: {2}. Checking of choices will be case sensitive in a future Ansible release. Case insensitive matches were: {3}",
k, choiceMsg, String.Join(", ", choices), String.Join(", ", caseDiffList.Select(x => RemoveNoLogValues(x, noLogValues)))
);
Warn(FormatOptionsContext(msg));
}*/
}
}
}
CheckRequiredTogether(param, (IList)spec["required_together"]);
CheckRequiredOneOf(param, (IList)spec["required_one_of"]);
CheckRequiredIf(param, (IList)spec["required_if"]);
CheckRequiredBy(param, (IDictionary)spec["required_by"]);
// finally ensure all missing parameters are set to null and handle sub options
foreach (DictionaryEntry entry in optionSpec)
{
string k = (string)entry.Key;
IDictionary v = (IDictionary)entry.Value;
if (!param.Contains(k))
param[k] = null;
CheckSubOption(param, k, v);
}
}
private void CheckUnsupportedArguments(IDictionary param, List<string> legalInputs)
{
HashSet<string> unsupportedParameters = new HashSet<string>();
HashSet<string> caseUnsupportedParameters = new HashSet<string>();
List<string> removedParameters = new List<string>();
foreach (DictionaryEntry entry in param)
{
string paramKey = (string)entry.Key;
if (!legalInputs.Contains(paramKey, StringComparer.OrdinalIgnoreCase))
unsupportedParameters.Add(paramKey);
else if (!legalInputs.Contains(paramKey))
// For backwards compatibility we do not care about the case but we need to warn the users as this will
// change in a future Ansible release.
caseUnsupportedParameters.Add(paramKey);
else if (paramKey.StartsWith("_ansible_"))
{
removedParameters.Add(paramKey);
string key = paramKey.Replace("_ansible_", "");
// skip setting NoLog if NoLog is already set to true (set by the module)
// or there's no mapping for this key
if ((key == "no_log" && NoLog == true) || (passVars[key] == null))
continue;
object value = entry.Value;
if (passBools.Contains(key))
value = ParseBool(value);
else if (passInts.Contains(key))
value = ParseInt(value);
string propertyName = passVars[key];
PropertyInfo property = typeof(AnsibleModule).GetProperty(propertyName);
FieldInfo field = typeof(AnsibleModule).GetField(propertyName, BindingFlags.NonPublic | BindingFlags.Instance);
if (property != null)
property.SetValue(this, value, null);
else if (field != null)
field.SetValue(this, value);
else
FailJson(String.Format("implementation error: unknown AnsibleModule property {0}", propertyName));
}
}
foreach (string parameter in removedParameters)
param.Remove(parameter);
if (unsupportedParameters.Count > 0)
{
legalInputs.RemoveAll(x => passVars.Keys.Contains(x.Replace("_ansible_", "")));
string msg = String.Format("Unsupported parameters for ({0}) module: {1}", ModuleName, String.Join(", ", unsupportedParameters));
msg = String.Format("{0}. Supported parameters include: {1}", FormatOptionsContext(msg), String.Join(", ", legalInputs));
FailJson(msg);
}
/*
// Uncomment when we want to start warning users around options that are not a case sensitive match to the spec
if (caseUnsupportedParameters.Count > 0)
{
legalInputs.RemoveAll(x => passVars.Keys.Contains(x.Replace("_ansible_", "")));
string msg = String.Format("Parameters for ({0}) was a case insensitive match: {1}", ModuleName, String.Join(", ", caseUnsupportedParameters));
msg = String.Format("{0}. Module options will become case sensitive in a future Ansible release. Supported parameters include: {1}",
FormatOptionsContext(msg), String.Join(", ", legalInputs));
Warn(msg);
}*/
// Make sure we convert all the incorrect case params to the ones set by the module spec
foreach (string key in caseUnsupportedParameters)
{
string correctKey = legalInputs[legalInputs.FindIndex(s => s.Equals(key, StringComparison.OrdinalIgnoreCase))];
object value = param[key];
param.Remove(key);
param.Add(correctKey, value);
}
}
private void CheckMutuallyExclusive(IDictionary param, IList mutuallyExclusive)
{
if (mutuallyExclusive == null)
return;
foreach (object check in mutuallyExclusive)
{
List<string> mutualCheck = ((IList)check).Cast<string>().ToList();
int count = 0;
foreach (string entry in mutualCheck)
if (param.Contains(entry))
count++;
if (count > 1)
{
string msg = String.Format("parameters are mutually exclusive: {0}", String.Join(", ", mutualCheck));
FailJson(FormatOptionsContext(msg));
}
}
}
private void CheckRequiredArguments(IDictionary spec, IDictionary param)
{
List<string> missing = new List<string>();
foreach (DictionaryEntry entry in spec)
{
string k = (string)entry.Key;
Hashtable v = (Hashtable)entry.Value;
// set defaults for values not already set
object defaultValue = v["default"];
if (defaultValue != null && !param.Contains(k))
param[k] = defaultValue;
// check required arguments
bool required = (bool)v["required"];
if (required && !param.Contains(k))
missing.Add(k);
}
if (missing.Count > 0)
{
string msg = String.Format("missing required arguments: {0}", String.Join(", ", missing));
FailJson(FormatOptionsContext(msg));
}
}
private void CheckRequiredTogether(IDictionary param, IList requiredTogether)
{
if (requiredTogether == null)
return;
foreach (object check in requiredTogether)
{
List<string> requiredCheck = ((IList)check).Cast<string>().ToList();
List<bool> found = new List<bool>();
foreach (string field in requiredCheck)
if (param.Contains(field))
found.Add(true);
else
found.Add(false);
if (found.Contains(true) && found.Contains(false))
{
string msg = String.Format("parameters are required together: {0}", String.Join(", ", requiredCheck));
FailJson(FormatOptionsContext(msg));
}
}
}
private void CheckRequiredOneOf(IDictionary param, IList requiredOneOf)
{
if (requiredOneOf == null)
return;
foreach (object check in requiredOneOf)
{
List<string> requiredCheck = ((IList)check).Cast<string>().ToList();
int count = 0;
foreach (string field in requiredCheck)
if (param.Contains(field))
count++;
if (count == 0)
{
string msg = String.Format("one of the following is required: {0}", String.Join(", ", requiredCheck));
FailJson(FormatOptionsContext(msg));
}
}
}
private void CheckRequiredIf(IDictionary param, IList requiredIf)
{
if (requiredIf == null)
return;
foreach (object check in requiredIf)
{
IList requiredCheck = (IList)check;
List<string> missing = new List<string>();
List<string> missingFields = new List<string>();
int maxMissingCount = 1;
bool oneRequired = false;
                if (requiredCheck.Count < 3 || requiredCheck.Count > 4)
FailJson(String.Format("internal error: invalid required_if value count of {0}, expecting 3 or 4 entries", requiredCheck.Count));
else if (requiredCheck.Count == 4)
oneRequired = (bool)requiredCheck[3];
string key = (string)requiredCheck[0];
object val = requiredCheck[1];
IList requirements = (IList)requiredCheck[2];
if (ParseStr(param[key]) != ParseStr(val))
continue;
string term = "all";
if (oneRequired)
{
maxMissingCount = requirements.Count;
term = "any";
}
foreach (string required in requirements.Cast<string>())
if (!param.Contains(required))
missing.Add(required);
if (missing.Count >= maxMissingCount)
{
string msg = String.Format("{0} is {1} but {2} of the following are missing: {3}",
key, val.ToString(), term, String.Join(", ", missing));
FailJson(FormatOptionsContext(msg));
}
}
}
private void CheckRequiredBy(IDictionary param, IDictionary requiredBy)
{
foreach (DictionaryEntry entry in requiredBy)
{
string key = (string)entry.Key;
if (!param.Contains(key))
continue;
List<string> missing = new List<string>();
List<string> requires = ParseList(entry.Value).Cast<string>().ToList();
foreach (string required in requires)
if (!param.Contains(required))
missing.Add(required);
if (missing.Count > 0)
{
string msg = String.Format("missing parameter(s) required by '{0}': {1}", key, String.Join(", ", missing));
FailJson(FormatOptionsContext(msg));
}
}
}
private void CheckSubOption(IDictionary param, string key, IDictionary spec)
{
object value = param[key];
string type;
if (spec["type"].GetType() == typeof(string))
type = (string)spec["type"];
else
type = "delegate";
string elements = null;
Delegate typeConverter = null;
if (spec["elements"] != null && spec["elements"].GetType() == typeof(string))
{
elements = (string)spec["elements"];
typeConverter = optionTypes[elements];
}
else if (spec["elements"] != null)
{
elements = "delegate";
typeConverter = (Delegate)spec["elements"];
}
if (!(type == "dict" || (type == "list" && elements != null)))
// either not a dict, or list with the elements set, so continue
return;
else if (type == "list")
{
// cast each list element to the type specified
if (value == null)
return;
List<object> newValue = new List<object>();
foreach (object element in (List<object>)value)
{
if (elements == "dict")
newValue.Add(ParseSubSpec(spec, element, key));
else
{
try
{
object newElement = typeConverter.DynamicInvoke(element);
newValue.Add(newElement);
}
catch (Exception e)
{
string msg = String.Format("argument for list entry {0} is of type {1} and we were unable to convert to {2}: {3}",
key, element.GetType(), elements, e.Message);
FailJson(FormatOptionsContext(msg));
}
}
}
param[key] = newValue;
}
else
param[key] = ParseSubSpec(spec, value, key);
}
private object ParseSubSpec(IDictionary spec, object value, string context)
{
bool applyDefaults = (bool)spec["apply_defaults"];
// set entry to an empty dict if apply_defaults is set
IDictionary optionsSpec = (IDictionary)spec["options"];
if (applyDefaults && optionsSpec.Keys.Count > 0 && value == null)
value = new Dictionary<string, object>();
else if (optionsSpec.Keys.Count == 0 || value == null)
return value;
optionsContext.Add(context);
Dictionary<string, object> newValue = (Dictionary<string, object>)ParseDict(value);
Dictionary<string, string> aliases = GetAliases(spec, newValue);
SetNoLogValues(spec, newValue);
List<string> subLegalInputs = optionsSpec.Keys.Cast<string>().ToList();
subLegalInputs.AddRange(aliases.Keys.Cast<string>().ToList());
CheckArguments(spec, newValue, subLegalInputs);
optionsContext.RemoveAt(optionsContext.Count - 1);
return newValue;
}
private string GetFormattedResults(Dictionary<string, object> result)
{
if (!result.ContainsKey("invocation"))
result["invocation"] = new Dictionary<string, object>() { { "module_args", RemoveNoLogValues(Params, noLogValues) } };
if (warnings.Count > 0)
result["warnings"] = warnings;
if (deprecations.Count > 0)
result["deprecations"] = deprecations;
if (Diff.Count > 0 && DiffMode)
result["diff"] = Diff;
return ToJson(result);
}
private string FormatLogData(object data, int indentLevel)
{
if (data == null)
return "$null";
string msg = "";
if (data is IList)
{
string newMsg = "";
foreach (object value in (IList)data)
{
string entryValue = FormatLogData(value, indentLevel + 2);
newMsg += String.Format("\r\n{0}- {1}", new String(' ', indentLevel), entryValue);
}
msg += newMsg;
}
else if (data is IDictionary)
{
bool start = true;
foreach (DictionaryEntry entry in (IDictionary)data)
{
string newMsg = FormatLogData(entry.Value, indentLevel + 2);
if (!start)
msg += String.Format("\r\n{0}", new String(' ', indentLevel));
msg += String.Format("{0}: {1}", (string)entry.Key, newMsg);
start = false;
}
}
else
msg = (string)RemoveNoLogValues(ParseStr(data), noLogValues);
return msg;
}
private object RemoveNoLogValues(object value, HashSet<string> noLogStrings)
{
Queue<Tuple<object, object>> deferredRemovals = new Queue<Tuple<object, object>>();
object newValue = RemoveValueConditions(value, noLogStrings, deferredRemovals);
while (deferredRemovals.Count > 0)
{
Tuple<object, object> data = deferredRemovals.Dequeue();
object oldData = data.Item1;
object newData = data.Item2;
if (oldData is IDictionary)
{
foreach (DictionaryEntry entry in (IDictionary)oldData)
{
object newElement = RemoveValueConditions(entry.Value, noLogStrings, deferredRemovals);
((IDictionary)newData).Add((string)entry.Key, newElement);
}
}
else
{
foreach (object element in (IList)oldData)
{
object newElement = RemoveValueConditions(element, noLogStrings, deferredRemovals);
((IList)newData).Add(newElement);
}
}
}
return newValue;
}
private object RemoveValueConditions(object value, HashSet<string> noLogStrings, Queue<Tuple<object, object>> deferredRemovals)
{
if (value == null)
return value;
Type valueType = value.GetType();
HashSet<Type> numericTypes = new HashSet<Type>
{
typeof(byte), typeof(sbyte), typeof(short), typeof(ushort), typeof(int), typeof(uint),
typeof(long), typeof(ulong), typeof(decimal), typeof(double), typeof(float)
};
if (numericTypes.Contains(valueType) || valueType == typeof(bool))
{
string valueString = ParseStr(value);
if (noLogStrings.Contains(valueString))
return "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER";
foreach (string omitMe in noLogStrings)
if (valueString.Contains(omitMe))
return "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER";
}
else if (valueType == typeof(DateTime))
value = ((DateTime)value).ToString("o");
else if (value is IList)
{
List<object> newValue = new List<object>();
deferredRemovals.Enqueue(new Tuple<object, object>((IList)value, newValue));
value = newValue;
}
else if (value is IDictionary)
{
Hashtable newValue = new Hashtable();
deferredRemovals.Enqueue(new Tuple<object, object>((IDictionary)value, newValue));
value = newValue;
}
else
{
string stringValue = value.ToString();
if (noLogStrings.Contains(stringValue))
return "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER";
foreach (string omitMe in noLogStrings)
if (stringValue.Contains(omitMe))
return (stringValue).Replace(omitMe, "********");
value = stringValue;
}
return value;
}
private void CleanupFiles(object s, EventArgs ev)
{
foreach (string path in cleanupFiles)
{
if (File.Exists(path))
File.Delete(path);
else if (Directory.Exists(path))
Directory.Delete(path, true);
}
cleanupFiles = new List<string>();
}
private string FormatOptionsContext(string msg, string prefix = " ")
{
if (optionsContext.Count > 0)
msg += String.Format("{0}found in {1}", prefix, String.Join(" -> ", optionsContext));
return msg;
}
[DllImport("kernel32.dll")]
private static extern IntPtr GetConsoleWindow();
private static void ExitModule(int rc)
{
            // When running in a Runspace, Environment.Exit will kill the
            // entire process, which is not what we want. Detect if we are in
            // a Runspace and call a ScriptBlock with exit instead.
if (Runspace.DefaultRunspace != null)
ScriptBlock.Create("Set-Variable -Name LASTEXITCODE -Value $args[0] -Scope Global; exit $args[0]").Invoke(rc);
else
{
// Used for local debugging in Visual Studio
if (System.Diagnostics.Debugger.IsAttached)
{
Console.WriteLine("Press enter to continue...");
Console.ReadLine();
}
Environment.Exit(rc);
}
}
private static void WriteLineModule(string line)
{
Console.WriteLine(line);
}
}
}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,613 |
win_get_url fails with empty string proxy parameters
|
##### SUMMARY
We have to support public and private environments where we may or may not have a proxy, so in our roles we make the proxy configuration of tasks conditional on a `use_proxy` variable. After upgrading to 2.8, the `win_get_url` module fails if the proxy username and password parameters have an empty string value. The same code worked in 2.7.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_get_url
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.4
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Apr 8 2019, 18:17:52) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
DEFAULT_LOCAL_TMP(/etc/ansible/ansible.cfg) = /tmp/ansible-$USER/ansible-local-1013gsipsbln
TRANSFORM_INVALID_GROUP_CHARS(/etc/ansible/ansible.cfg) = false
```
##### OS / ENVIRONMENT
Docker container based on `runatlantis/atlantis:v0.8.2`.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- debug:
msg:
- "proxy: {{ proxy }}"
- "proxy_password: {{ proxy_password is defined | ternary('<masked>', '<undefined>') }}"
- "proxy_username: {{ proxy_username is defined | ternary(proxy_username, '<undefined>') }}"
- name: Push a file (Windows)
win_get_url:
url: https://github.com/ansible/ansible/archive/v2.8.5.tar.gz
dest: C:\\Windows\\Temp\\
use_proxy: "{{ use_proxy }}"
proxy_url: "{{ proxy | default('') }}"
proxy_password: "{{ proxy_password | default('') }}"
proxy_username: "{{ proxy_username | default('') }}"
when: ansible_os_family == "Windows"
```
The `use_proxy` variable is defaulted to `no` in our `defaults/main.yml` and the rest have to be supplied if needed. After upgrading to 2.8 we instead have to supply a valid URL (i.e. the module parses out the scheme, host, and port, and fails if the URL is not valid) and use `default('undefined')` for the credential parameters.
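
The workaround described above can be sketched as follows — a hedged illustration based only on this report, not verified module behaviour; the placeholder proxy URL and the `undefined` fallback values are assumptions:

```yaml
# Hypothetical workaround sketch for 2.8+: supply a syntactically valid proxy
# URL and non-empty placeholders for the credential parameters so the module
# is not given empty strings. The fallback values here are illustrative only.
- name: Push a file (Windows)
  win_get_url:
    url: https://github.com/ansible/ansible/archive/v2.8.5.tar.gz
    dest: C:\\Windows\\Temp\\
    use_proxy: "{{ use_proxy }}"
    proxy_url: "{{ proxy | default('http://localhost:3128') }}"
    proxy_password: "{{ proxy_password | default('undefined') }}"
    proxy_username: "{{ proxy_username | default('undefined') }}"
  when: ansible_os_family == "Windows"
```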
##### EXPECTED RESULTS
```
TASK [build : Push a file (Windows)] *******************************************
skipping: [ci-agent-co7]
changed: [ci-agent-w16]
```
##### ACTUAL RESULTS
```
TASK [build : Push a file (Windows)] *******************************************
skipping: [ci-agent-co7]
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: at <ScriptBlock>, <No file>: line 39
fatal: [ci-agent-w16]: FAILED! => {"changed": false, "msg": "Unhandled exception while executing module: Exception calling \"Create\" with \"2\" argument(s): \"String cannot be of zero length.\r\nParameter name: oldValue\""}
```
|
https://github.com/ansible/ansible/issues/62613
|
https://github.com/ansible/ansible/pull/62804
|
1b3bf33bdf2530adb3a7cbf8055007acb09b3bb2
|
322e22583018a8f775c79baaa15021d799eb564e
| 2019-09-19T18:15:29Z |
python
| 2019-09-25T01:45:53Z |
test/integration/targets/win_csharp_utils/library/ansible_basic_tests.ps1
|
#!powershell
#AnsibleRequires -CSharpUtil Ansible.Basic
$module = [Ansible.Basic.AnsibleModule]::Create($args, @{})
Function Assert-Equals {
param(
[Parameter(Mandatory=$true, ValueFromPipeline=$true)][AllowNull()]$Actual,
[Parameter(Mandatory=$true, Position=0)][AllowNull()]$Expected
)
$matched = $false
if ($Actual -is [System.Collections.ArrayList] -or $Actual -is [Array]) {
$Actual.Count | Assert-Equals -Expected $Expected.Count
for ($i = 0; $i -lt $Actual.Count; $i++) {
$actual_value = $Actual[$i]
$expected_value = $Expected[$i]
Assert-Equals -Actual $actual_value -Expected $expected_value
}
$matched = $true
} else {
$matched = $Actual -ceq $Expected
}
if (-not $matched) {
if ($Actual -is [PSObject]) {
$Actual = $Actual.ToString()
}
$call_stack = (Get-PSCallStack)[1]
$module.Result.failed = $true
$module.Result.test = $test
$module.Result.actual = $Actual
$module.Result.expected = $Expected
$module.Result.line = $call_stack.ScriptLineNumber
$module.Result.method = $call_stack.Position.Text
$module.Result.msg = "AssertionError: actual != expected"
Exit-Module
}
}
Function Assert-DictionaryEquals {
param(
[Parameter(Mandatory=$true, ValueFromPipeline=$true)][AllowNull()]$Actual,
[Parameter(Mandatory=$true, Position=0)][AllowNull()]$Expected
)
$actual_keys = $Actual.Keys
$expected_keys = $Expected.Keys
$actual_keys.Count | Assert-Equals -Expected $expected_keys.Count
foreach ($actual_entry in $Actual.GetEnumerator()) {
$actual_key = $actual_entry.Key
($actual_key -cin $expected_keys) | Assert-Equals -Expected $true
$actual_value = $actual_entry.Value
$expected_value = $Expected.$actual_key
if ($actual_value -is [System.Collections.IDictionary]) {
$actual_value | Assert-DictionaryEquals -Expected $expected_value
} elseif ($actual_value -is [System.Collections.ArrayList] -or $actual_value -is [Array]) {
for ($i = 0; $i -lt $actual_value.Count; $i++) {
$actual_entry = $actual_value[$i]
$expected_entry = $expected_value[$i]
if ($actual_entry -is [System.Collections.IDictionary]) {
$actual_entry | Assert-DictionaryEquals -Expected $expected_entry
} else {
Assert-Equals -Actual $actual_entry -Expected $expected_entry
}
}
} else {
Assert-Equals -Actual $actual_value -Expected $expected_value
}
}
foreach ($expected_key in $expected_keys) {
($expected_key -cin $actual_keys) | Assert-Equals -Expected $true
}
}
Function Exit-Module {
    # Make sure Exit actually calls exit and not our overridden test behaviour
[Ansible.Basic.AnsibleModule]::Exit = { param([Int32]$rc) exit $rc }
Write-Output -InputObject (ConvertTo-Json -InputObject $module.Result -Compress -Depth 99)
$module.ExitJson()
}
$tmpdir = $module.Tmpdir
# Override the Exit and WriteLine behaviour to throw an exception instead of exiting the module
[Ansible.Basic.AnsibleModule]::Exit = {
param([Int32]$rc)
$exp = New-Object -TypeName System.Exception -ArgumentList "exit: $rc"
$exp | Add-Member -Type NoteProperty -Name Output -Value $_test_out
throw $exp
}
[Ansible.Basic.AnsibleModule]::WriteLine = {
param([String]$line)
Set-Variable -Name _test_out -Scope Global -Value $line
}
$tests = @{
"Empty spec and no options - args file" = {
$args_file = Join-Path -Path $tmpdir -ChildPath "args-$(Get-Random).json"
[System.IO.File]::WriteAllText($args_file, '{ "ANSIBLE_MODULE_ARGS": {} }')
$m = [Ansible.Basic.AnsibleModule]::Create(@($args_file), @{})
$m.CheckMode | Assert-Equals -Expected $false
$m.DebugMode | Assert-Equals -Expected $false
$m.DiffMode | Assert-Equals -Expected $false
$m.KeepRemoteFiles | Assert-Equals -Expected $false
$m.ModuleName | Assert-Equals -Expected "undefined win module"
$m.NoLog | Assert-Equals -Expected $false
$m.Verbosity | Assert-Equals -Expected 0
$m.AnsibleVersion | Assert-Equals -Expected $null
}
"Empty spec and no options - complex_args" = {
$complex_args = @{}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), @{})
$m.CheckMode | Assert-Equals -Expected $false
$m.DebugMode | Assert-Equals -Expected $false
$m.DiffMode | Assert-Equals -Expected $false
$m.KeepRemoteFiles | Assert-Equals -Expected $false
$m.ModuleName | Assert-Equals -Expected "undefined win module"
$m.NoLog | Assert-Equals -Expected $false
$m.Verbosity | Assert-Equals -Expected 0
$m.AnsibleVersion | Assert-Equals -Expected $null
}
"Internal param changes - args file" = {
$m_tmpdir = Join-Path -Path $tmpdir -ChildPath "moduletmpdir-$(Get-Random)"
New-Item -Path $m_tmpdir -ItemType Directory > $null
$args_file = Join-Path -Path $tmpdir -ChildPath "args-$(Get-Random).json"
[System.IO.File]::WriteAllText($args_file, @"
{
"ANSIBLE_MODULE_ARGS": {
"_ansible_check_mode": true,
"_ansible_debug": true,
"_ansible_diff": true,
"_ansible_keep_remote_files": true,
"_ansible_module_name": "ansible_basic_tests",
"_ansible_no_log": true,
"_ansible_remote_tmp": "%TEMP%",
"_ansible_selinux_special_fs": "ignored",
"_ansible_shell_executable": "ignored",
"_ansible_socket": "ignored",
"_ansible_syslog_facility": "ignored",
"_ansible_tmpdir": "$($m_tmpdir -replace "\\", "\\")",
"_ansible_verbosity": 3,
"_ansible_version": "2.8.0"
}
}
"@)
$m = [Ansible.Basic.AnsibleModule]::Create(@($args_file), @{supports_check_mode=$true})
$m.CheckMode | Assert-Equals -Expected $true
$m.DebugMode | Assert-Equals -Expected $true
$m.DiffMode | Assert-Equals -Expected $true
$m.KeepRemoteFiles | Assert-Equals -Expected $true
$m.ModuleName | Assert-Equals -Expected "ansible_basic_tests"
$m.NoLog | Assert-Equals -Expected $true
$m.Verbosity | Assert-Equals -Expected 3
$m.AnsibleVersion | Assert-Equals -Expected "2.8.0"
$m.Tmpdir | Assert-Equals -Expected $m_tmpdir
}
"Internal param changes - complex_args" = {
$m_tmpdir = Join-Path -Path $tmpdir -ChildPath "moduletmpdir-$(Get-Random)"
New-Item -Path $m_tmpdir -ItemType Directory > $null
$complex_args = @{
_ansible_check_mode = $true
_ansible_debug = $true
_ansible_diff = $true
_ansible_keep_remote_files = $true
_ansible_module_name = "ansible_basic_tests"
_ansible_no_log = $true
_ansible_remote_tmp = "%TEMP%"
_ansible_selinux_special_fs = "ignored"
_ansible_shell_executable = "ignored"
_ansible_socket = "ignored"
_ansible_syslog_facility = "ignored"
_ansible_tmpdir = $m_tmpdir.ToString()
_ansible_verbosity = 3
_ansible_version = "2.8.0"
}
$spec = @{
supports_check_mode = $true
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
$m.CheckMode | Assert-Equals -Expected $true
$m.DebugMode | Assert-Equals -Expected $true
$m.DiffMode | Assert-Equals -Expected $true
$m.KeepRemoteFiles | Assert-Equals -Expected $true
$m.ModuleName | Assert-Equals -Expected "ansible_basic_tests"
$m.NoLog | Assert-Equals -Expected $true
$m.Verbosity | Assert-Equals -Expected 3
$m.AnsibleVersion | Assert-Equals -Expected "2.8.0"
$m.Tmpdir | Assert-Equals -Expected $m_tmpdir
}
"Parse complex module options" = {
$spec = @{
options = @{
option_default = @{}
missing_option_default = @{}
string_option = @{type = "str"}
required_option = @{required = $true}
missing_choices = @{choices = "a", "b"}
choices = @{choices = "a", "b"}
one_choice = @{choices = ,"b"}
choice_with_default = @{choices = "a", "b"; default = "b"}
alias_direct = @{aliases = ,"alias_direct1"}
alias_as_alias = @{aliases = "alias_as_alias1", "alias_as_alias2"}
bool_type = @{type = "bool"}
bool_from_str = @{type = "bool"}
dict_type = @{
type = "dict"
options = @{
int_type = @{type = "int"}
str_type = @{type = "str"; default = "str_sub_type"}
}
}
dict_type_missing = @{
type = "dict"
options = @{
int_type = @{type = "int"}
str_type = @{type = "str"; default = "str_sub_type"}
}
}
dict_type_defaults = @{
type = "dict"
apply_defaults = $true
options = @{
int_type = @{type = "int"}
str_type = @{type = "str"; default = "str_sub_type"}
}
}
dict_type_json = @{type = "dict"}
dict_type_str = @{type = "dict"}
float_type = @{type = "float"}
int_type = @{type = "int"}
json_type = @{type = "json"}
json_type_dict = @{type = "json"}
list_type = @{type = "list"}
list_type_str = @{type = "list"}
list_with_int = @{type = "list"; elements = "int"}
list_type_single = @{type = "list"}
list_with_dict = @{
type = "list"
elements = "dict"
options = @{
int_type = @{type = "int"}
str_type = @{type = "str"; default = "str_sub_type"}
}
}
path_type = @{type = "path"}
path_type_nt = @{type = "path"}
path_type_missing = @{type = "path"}
raw_type_str = @{type = "raw"}
raw_type_int = @{type = "raw"}
sid_type = @{type = "sid"}
sid_from_name = @{type = "sid"}
str_type = @{type = "str"}
delegate_type = @{type = [Func[[Object], [UInt64]]]{ [System.UInt64]::Parse($args[0]) }}
}
}
$complex_args = @{
option_default = 1
string_option = 1
required_option = "required"
choices = "a"
one_choice = "b"
alias_direct = "a"
alias_as_alias2 = "a"
bool_type = $true
bool_from_str = "false"
dict_type = @{
int_type = "10"
}
dict_type_json = '{"a":"a","b":1,"c":["a","b"]}'
dict_type_str = 'a=a b="b 2" c=c'
float_type = "3.14159"
int_type = 0
json_type = '{"a":"a","b":1,"c":["a","b"]}'
json_type_dict = @{
a = "a"
b = 1
c = @("a", "b")
}
list_type = @("a", "b", 1, 2)
list_type_str = "a, b,1,2 "
list_with_int = @("1", 2)
list_type_single = "single"
list_with_dict = @(
@{
int_type = 2
str_type = "dict entry"
},
@{ int_type = 1 },
@{}
)
path_type = "%SystemRoot%\System32"
path_type_nt = "\\?\%SystemRoot%\System32"
path_type_missing = "T:\missing\path"
raw_type_str = "str"
raw_type_int = 1
sid_type = "S-1-5-18"
sid_from_name = "SYSTEM"
str_type = "str"
delegate_type = "1234"
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
$m.Params.option_default | Assert-Equals -Expected "1"
$m.Params.option_default.GetType().ToString() | Assert-Equals -Expected "System.String"
$m.Params.missing_option_default | Assert-Equals -Expected $null
$m.Params.string_option | Assert-Equals -Expected "1"
$m.Params.string_option.GetType().ToString() | Assert-Equals -Expected "System.String"
$m.Params.required_option | Assert-Equals -Expected "required"
$m.Params.required_option.GetType().ToString() | Assert-Equals -Expected "System.String"
$m.Params.missing_choices | Assert-Equals -Expected $null
$m.Params.choices | Assert-Equals -Expected "a"
$m.Params.choices.GetType().ToString() | Assert-Equals -Expected "System.String"
$m.Params.one_choice | Assert-Equals -Expected "b"
$m.Params.one_choice.GetType().ToString() | Assert-Equals -Expected "System.String"
$m.Params.choice_with_default | Assert-Equals -Expected "b"
$m.Params.choice_with_default.GetType().ToString() | Assert-Equals -Expected "System.String"
$m.Params.alias_direct | Assert-Equals -Expected "a"
$m.Params.alias_direct.GetType().ToString() | Assert-Equals -Expected "System.String"
$m.Params.alias_as_alias | Assert-Equals -Expected "a"
$m.Params.alias_as_alias.GetType().ToString() | Assert-Equals -Expected "System.String"
$m.Params.bool_type | Assert-Equals -Expected $true
$m.Params.bool_type.GetType().ToString() | Assert-Equals -Expected "System.Boolean"
$m.Params.bool_from_str | Assert-Equals -Expected $false
$m.Params.bool_from_str.GetType().ToString() | Assert-Equals -Expected "System.Boolean"
$m.Params.dict_type | Assert-DictionaryEquals -Expected @{int_type = 10; str_type = "str_sub_type"}
$m.Params.dict_type.GetType().ToString() | Assert-Equals -Expected "System.Collections.Generic.Dictionary``2[System.String,System.Object]"
$m.Params.dict_type.int_type.GetType().ToString() | Assert-Equals -Expected "System.Int32"
$m.Params.dict_type.str_type.GetType().ToString() | Assert-Equals -Expected "System.String"
$m.Params.dict_type_missing | Assert-Equals -Expected $null
$m.Params.dict_type_defaults | Assert-DictionaryEquals -Expected @{int_type = $null; str_type = "str_sub_type"}
$m.Params.dict_type_defaults.GetType().ToString() | Assert-Equals -Expected "System.Collections.Generic.Dictionary``2[System.String,System.Object]"
$m.Params.dict_type_defaults.str_type.GetType().ToString() | Assert-Equals -Expected "System.String"
$m.Params.dict_type_json | Assert-DictionaryEquals -Expected @{
a = "a"
b = 1
c = @("a", "b")
}
$m.Params.dict_type_json.GetType().ToString() | Assert-Equals -Expected "System.Collections.Generic.Dictionary``2[System.String,System.Object]"
$m.Params.dict_type_json.a.GetType().ToString() | Assert-Equals -Expected "System.String"
$m.Params.dict_type_json.b.GetType().ToString() | Assert-Equals -Expected "System.Int32"
$m.Params.dict_type_json.c.GetType().ToString() | Assert-Equals -Expected "System.Collections.ArrayList"
$m.Params.dict_type_str | Assert-DictionaryEquals -Expected @{a = "a"; b = "b 2"; c = "c"}
$m.Params.dict_type_str.GetType().ToString() | Assert-Equals -Expected "System.Collections.Generic.Dictionary``2[System.String,System.Object]"
$m.Params.dict_type_str.a.GetType().ToString() | Assert-Equals -Expected "System.String"
$m.Params.dict_type_str.b.GetType().ToString() | Assert-Equals -Expected "System.String"
$m.Params.dict_type_str.c.GetType().ToString() | Assert-Equals -Expected "System.String"
$m.Params.float_type | Assert-Equals -Expected ([System.Single]3.14159)
$m.Params.float_type.GetType().ToString() | Assert-Equals -Expected "System.Single"
$m.Params.int_type | Assert-Equals -Expected 0
$m.Params.int_type.GetType().ToString() | Assert-Equals -Expected "System.Int32"
$m.Params.json_type | Assert-Equals -Expected '{"a":"a","b":1,"c":["a","b"]}'
$m.Params.json_type.GetType().ToString() | Assert-Equals -Expected "System.String"
[Ansible.Basic.AnsibleModule]::FromJson($m.Params.json_type_dict) | Assert-DictionaryEquals -Expected ([Ansible.Basic.AnsibleModule]::FromJson('{"a":"a","b":1,"c":["a","b"]}'))
$m.Params.json_type_dict.GetType().ToString() | Assert-Equals -Expected "System.String"
$m.Params.list_type.GetType().ToString() | Assert-Equals -Expected "System.Collections.Generic.List``1[System.Object]"
$m.Params.list_type.Count | Assert-Equals -Expected 4
$m.Params.list_type[0] | Assert-Equals -Expected "a"
$m.Params.list_type[0].GetType().FullName | Assert-Equals -Expected "System.String"
$m.Params.list_type[1] | Assert-Equals -Expected "b"
$m.Params.list_type[1].GetType().FullName | Assert-Equals -Expected "System.String"
$m.Params.list_type[2] | Assert-Equals -Expected 1
$m.Params.list_type[2].GetType().FullName | Assert-Equals -Expected "System.Int32"
$m.Params.list_type[3] | Assert-Equals -Expected 2
$m.Params.list_type[3].GetType().FullName | Assert-Equals -Expected "System.Int32"
$m.Params.list_type_str.GetType().ToString() | Assert-Equals -Expected "System.Collections.Generic.List``1[System.Object]"
$m.Params.list_type_str.Count | Assert-Equals -Expected 4
$m.Params.list_type_str[0] | Assert-Equals -Expected "a"
$m.Params.list_type_str[0].GetType().FullName | Assert-Equals -Expected "System.String"
$m.Params.list_type_str[1] | Assert-Equals -Expected "b"
$m.Params.list_type_str[1].GetType().FullName | Assert-Equals -Expected "System.String"
$m.Params.list_type_str[2] | Assert-Equals -Expected "1"
$m.Params.list_type_str[2].GetType().FullName | Assert-Equals -Expected "System.String"
$m.Params.list_type_str[3] | Assert-Equals -Expected "2"
$m.Params.list_type_str[3].GetType().FullName | Assert-Equals -Expected "System.String"
$m.Params.list_with_int.GetType().ToString() | Assert-Equals -Expected "System.Collections.Generic.List``1[System.Object]"
$m.Params.list_with_int.Count | Assert-Equals -Expected 2
$m.Params.list_with_int[0] | Assert-Equals -Expected 1
$m.Params.list_with_int[0].GetType().FullName | Assert-Equals -Expected "System.Int32"
$m.Params.list_with_int[1] | Assert-Equals -Expected 2
$m.Params.list_with_int[1].GetType().FullName | Assert-Equals -Expected "System.Int32"
$m.Params.list_type_single.GetType().ToString() | Assert-Equals -Expected "System.Collections.Generic.List``1[System.Object]"
$m.Params.list_type_single.Count | Assert-Equals -Expected 1
$m.Params.list_type_single[0] | Assert-Equals -Expected "single"
$m.Params.list_type_single[0].GetType().FullName | Assert-Equals -Expected "System.String"
$m.Params.list_with_dict.GetType().FullName.StartsWith("System.Collections.Generic.List``1[[System.Object") | Assert-Equals -Expected $true
$m.Params.list_with_dict.Count | Assert-Equals -Expected 3
$m.Params.list_with_dict[0].GetType().FullName.StartsWith("System.Collections.Generic.Dictionary``2[[System.String") | Assert-Equals -Expected $true
$m.Params.list_with_dict[0] | Assert-DictionaryEquals -Expected @{int_type = 2; str_type = "dict entry"}
$m.Params.list_with_dict[0].int_type.GetType().FullName.ToString() | Assert-Equals -Expected "System.Int32"
$m.Params.list_with_dict[0].str_type.GetType().FullName.ToString() | Assert-Equals -Expected "System.String"
$m.Params.list_with_dict[1].GetType().FullName.StartsWith("System.Collections.Generic.Dictionary``2[[System.String") | Assert-Equals -Expected $true
$m.Params.list_with_dict[1] | Assert-DictionaryEquals -Expected @{int_type = 1; str_type = "str_sub_type"}
$m.Params.list_with_dict[1].int_type.GetType().FullName.ToString() | Assert-Equals -Expected "System.Int32"
$m.Params.list_with_dict[1].str_type.GetType().FullName.ToString() | Assert-Equals -Expected "System.String"
$m.Params.list_with_dict[2].GetType().FullName.StartsWith("System.Collections.Generic.Dictionary``2[[System.String") | Assert-Equals -Expected $true
$m.Params.list_with_dict[2] | Assert-DictionaryEquals -Expected @{int_type = $null; str_type = "str_sub_type"}
$m.Params.list_with_dict[2].str_type.GetType().FullName.ToString() | Assert-Equals -Expected "System.String"
$m.Params.path_type | Assert-Equals -Expected "$($env:SystemRoot)\System32"
$m.Params.path_type.GetType().ToString() | Assert-Equals -Expected "System.String"
$m.Params.path_type_nt | Assert-Equals -Expected "\\?\%SystemRoot%\System32"
$m.Params.path_type_nt.GetType().ToString() | Assert-Equals -Expected "System.String"
$m.Params.path_type_missing | Assert-Equals -Expected "T:\missing\path"
$m.Params.path_type_missing.GetType().ToString() | Assert-Equals -Expected "System.String"
$m.Params.raw_type_str | Assert-Equals -Expected "str"
$m.Params.raw_type_str.GetType().FullName | Assert-Equals -Expected "System.String"
$m.Params.raw_type_int | Assert-Equals -Expected 1
$m.Params.raw_type_int.GetType().FullName | Assert-Equals -Expected "System.Int32"
$m.Params.sid_type | Assert-Equals -Expected (New-Object -TypeName System.Security.Principal.SecurityIdentifier -ArgumentList "S-1-5-18")
$m.Params.sid_type.GetType().ToString() | Assert-Equals -Expected "System.Security.Principal.SecurityIdentifier"
$m.Params.sid_from_name | Assert-Equals -Expected (New-Object -TypeName System.Security.Principal.SecurityIdentifier -ArgumentList "S-1-5-18")
$m.Params.sid_from_name.GetType().ToString() | Assert-Equals -Expected "System.Security.Principal.SecurityIdentifier"
$m.Params.str_type | Assert-Equals -Expected "str"
$m.Params.str_type.GetType().ToString() | Assert-Equals -Expected "System.String"
$m.Params.delegate_type | Assert-Equals -Expected 1234
$m.Params.delegate_type.GetType().ToString() | Assert-Equals -Expected "System.UInt64"
$failed = $false
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 0"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_module_args = @{
option_default = "1"
missing_option_default = $null
string_option = "1"
required_option = "required"
missing_choices = $null
choices = "a"
one_choice = "b"
choice_with_default = "b"
alias_direct = "a"
alias_as_alias = "a"
alias_as_alias2 = "a"
bool_type = $true
bool_from_str = $false
dict_type = @{
int_type = 10
str_type = "str_sub_type"
}
dict_type_missing = $null
dict_type_defaults = @{
int_type = $null
str_type = "str_sub_type"
}
dict_type_json = @{
a = "a"
b = 1
c = @("a", "b")
}
dict_type_str = @{
a = "a"
b = "b 2"
c = "c"
}
float_type = 3.14159
int_type = 0
json_type = $m.Params.json_type.ToString()
json_type_dict = $m.Params.json_type_dict.ToString()
list_type = @("a", "b", 1, 2)
list_type_str = @("a", "b", "1", "2")
list_with_int = @(1, 2)
list_type_single = @("single")
list_with_dict = @(
@{
int_type = 2
str_type = "dict entry"
},
@{
int_type = 1
str_type = "str_sub_type"
},
@{
int_type = $null
str_type = "str_sub_type"
}
)
path_type = "$($env:SystemRoot)\System32"
path_type_nt = "\\?\%SystemRoot%\System32"
path_type_missing = "T:\missing\path"
raw_type_str = "str"
raw_type_int = 1
sid_type = "S-1-5-18"
sid_from_name = "S-1-5-18"
str_type = "str"
delegate_type = 1234
}
$actual.Keys.Count | Assert-Equals -Expected 2
$actual.changed | Assert-Equals -Expected $false
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = $expected_module_args}
}
"Parse module args with list elements and delegate type" = {
$spec = @{
options = @{
list_delegate_type = @{
type = "list"
elements = [Func[[Object], [UInt16]]]{ [System.UInt16]::Parse($args[0]) }
}
}
}
$complex_args = @{
list_delegate_type = @(
"1234",
4321
)
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
$m.Params.list_delegate_type.GetType().Name | Assert-Equals -Expected 'List`1'
$m.Params.list_delegate_type[0].GetType().FullName | Assert-Equals -Expected "System.UInt16"
    $m.Params.list_delegate_type[1].GetType().FullName | Assert-Equals -Expected "System.UInt16"
$failed = $false
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 0"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_module_args = @{
list_delegate_type = @(
1234,
4321
)
}
$actual.Keys.Count | Assert-Equals -Expected 2
$actual.changed | Assert-Equals -Expected $false
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = $expected_module_args}
}
"Parse module args with case insensitive input" = {
$spec = @{
options = @{
option1 = @{ type = "int"; required = $true }
}
}
$complex_args = @{
_ansible_module_name = "win_test"
Option1 = "1"
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
# Verifies the case of the params key is set to the module spec not actual input
$m.Params.Keys | Assert-Equals -Expected @("option1")
$m.Params.option1 | Assert-Equals -Expected 1
# Verifies the type conversion happens even on a case insensitive match
$m.Params.option1.GetType().FullName | Assert-Equals -Expected "System.Int32"
$failed = $false
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 0"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_warnings = "Parameters for (win_test) was a case insensitive match: Option1. "
$expected_warnings += "Module options will become case sensitive in a future Ansible release. "
$expected_warnings += "Supported parameters include: option1"
$expected = @{
changed = $false
invocation = @{
module_args = @{
option1 = 1
}
}
# We have disabled the warning for now
#warnings = @($expected_warnings)
}
$actual | Assert-DictionaryEquals -Expected $expected
}
"No log values" = {
$spec = @{
options = @{
username = @{type = "str"}
password = @{type = "str"; no_log = $true}
password2 = @{type = "int"; no_log = $true}
dict = @{type = "dict"}
}
}
$complex_args = @{
_ansible_module_name = "test_no_log"
username = "user - pass - name"
password = "pass"
password2 = 1234
dict = @{
data = "Oops this is secret: pass"
dict = @{
pass = "plain"
hide = "pass"
sub_hide = "password"
int_hide = 123456
}
list = @(
"pass",
"password",
1234567,
"pa ss",
@{
pass = "plain"
hide = "pass"
sub_hide = "password"
int_hide = 123456
}
)
custom = "pass"
}
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
$m.Result.data = $complex_args.dict
# verify params internally aren't masked
$m.Params.username | Assert-Equals -Expected "user - pass - name"
$m.Params.password | Assert-Equals -Expected "pass"
$m.Params.password2 | Assert-Equals -Expected 1234
$m.Params.dict.custom | Assert-Equals -Expected "pass"
$failed = $false
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 0"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
# verify no_log params are masked in invocation
$expected = @{
invocation = @{
module_args = @{
password2 = "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
dict = @{
dict = @{
pass = "plain"
hide = "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
sub_hide = "********word"
int_hide = "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
}
custom = "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
list = @(
"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"********word",
"VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"pa ss",
@{
pass = "plain"
hide = "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
sub_hide = "********word"
int_hide = "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
}
)
data = "Oops this is secret: ********"
}
username = "user - ******** - name"
password = "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
}
}
changed = $false
data = $complex_args.dict
}
$actual | Assert-DictionaryEquals -Expected $expected
$expected_event = @'
test_no_log - Invoked with:
username: user - ******** - name
dict: dict: sub_hide: ****word
pass: plain
int_hide: ********56
hide: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
data: Oops this is secret: ********
custom: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
list:
- VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
- ********word
- ********567
- pa ss
- sub_hide: ********word
pass: plain
int_hide: ********56
hide: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
password2: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
password: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
'@
$actual_event = (Get-EventLog -LogName Application -Source Ansible -Newest 1).Message
        $actual_event | Assert-Equals -Expected $expected_event
}
"Removed in version" = {
$spec = @{
options = @{
removed1 = @{removed_in_version = "2.1"}
removed2 = @{removed_in_version = "2.2"}
}
}
$complex_args = @{
removed1 = "value"
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
$failed = $false
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 0"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected = @{
changed = $false
invocation = @{
module_args = @{
removed1 = "value"
removed2 = $null
}
}
deprecations = @(
@{
msg = "Param 'removed1' is deprecated. See the module docs for more information"
version = "2.1"
}
)
}
$actual | Assert-DictionaryEquals -Expected $expected
}
"Required by - single value" = {
$spec = @{
options = @{
option1 = @{type = "str"}
option2 = @{type = "str"}
option3 = @{type = "str"}
}
required_by = @{
option1 = "option2"
}
}
$complex_args = @{
option1 = "option1"
option2 = "option2"
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
$failed = $false
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 0"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected = @{
changed = $false
invocation = @{
module_args = @{
option1 = "option1"
option2 = "option2"
option3 = $null
}
}
}
$actual | Assert-DictionaryEquals -Expected $expected
}
"Required by - multiple values" = {
$spec = @{
options = @{
option1 = @{type = "str"}
option2 = @{type = "str"}
option3 = @{type = "str"}
}
required_by = @{
option1 = "option2", "option3"
}
}
$complex_args = @{
option1 = "option1"
option2 = "option2"
option3 = "option3"
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
$failed = $false
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 0"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected = @{
changed = $false
invocation = @{
module_args = @{
option1 = "option1"
option2 = "option2"
option3 = "option3"
}
}
}
$actual | Assert-DictionaryEquals -Expected $expected
}
"Required by explicit null" = {
$spec = @{
options = @{
option1 = @{type = "str"}
option2 = @{type = "str"}
option3 = @{type = "str"}
}
required_by = @{
option1 = "option2"
}
}
$complex_args = @{
option1 = "option1"
option2 = $null
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
$failed = $false
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 0"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected = @{
changed = $false
invocation = @{
module_args = @{
option1 = "option1"
option2 = $null
option3 = $null
}
}
}
$actual | Assert-DictionaryEquals -Expected $expected
}
"Required by failed - single value" = {
$spec = @{
options = @{
option1 = @{type = "str"}
option2 = @{type = "str"}
option3 = @{type = "str"}
}
required_by = @{
option1 = "option2"
}
}
$complex_args = @{
option1 = "option1"
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected = @{
changed = $false
failed = $true
invocation = @{
module_args = @{
option1 = "option1"
}
}
msg = "missing parameter(s) required by 'option1': option2"
}
$actual | Assert-DictionaryEquals -Expected $expected
}
"Required by failed - multiple values" = {
$spec = @{
options = @{
option1 = @{type = "str"}
option2 = @{type = "str"}
option3 = @{type = "str"}
}
required_by = @{
option1 = "option2", "option3"
}
}
$complex_args = @{
option1 = "option1"
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected = @{
changed = $false
failed = $true
invocation = @{
module_args = @{
option1 = "option1"
}
}
msg = "missing parameter(s) required by 'option1': option2, option3"
}
$actual | Assert-DictionaryEquals -Expected $expected
}
"Debug without debug set" = {
$complex_args = @{
_ansible_debug = $false
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), @{})
$m.Debug("debug message")
$actual_event = (Get-EventLog -LogName Application -Source Ansible -Newest 1).Message
$actual_event | Assert-Equals -Expected "undefined win module - Invoked with:`r`n "
}
"Debug with debug set" = {
$complex_args = @{
_ansible_debug = $true
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), @{})
$m.Debug("debug message")
$actual_event = (Get-EventLog -LogName Application -Source Ansible -Newest 1).Message
$actual_event | Assert-Equals -Expected "undefined win module - [DEBUG] debug message"
}
"Deprecate and warn" = {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), @{})
$m.Deprecate("message", "2.8")
$actual_deprecate_event = Get-EventLog -LogName Application -Source Ansible -Newest 1
$m.Warn("warning")
$actual_warn_event = Get-EventLog -LogName Application -Source Ansible -Newest 1
$actual_deprecate_event.Message | Assert-Equals -Expected "undefined win module - [DEPRECATION WARNING] message 2.8"
$actual_warn_event.EntryType | Assert-Equals -Expected "Warning"
$actual_warn_event.Message | Assert-Equals -Expected "undefined win module - [WARNING] warning"
$failed = $false
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 0"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected = @{
changed = $false
invocation = @{
module_args = @{}
}
warnings = @("warning")
deprecations = @(@{msg = "message"; version = "2.8"})
}
$actual | Assert-DictionaryEquals -Expected $expected
}
"FailJson with message" = {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), @{})
$failed = $false
try {
$m.FailJson("fail message")
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
        $failed | Assert-Equals -Expected $true
$expected = @{
changed = $false
invocation = @{
module_args = @{}
}
failed = $true
msg = "fail message"
}
$actual | Assert-DictionaryEquals -Expected $expected
}
"FailJson with Exception" = {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), @{})
try {
[System.IO.Path]::GetFullPath($null)
} catch {
$excp = $_.Exception
}
$failed = $false
try {
$m.FailJson("fail message", $excp)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
        $failed | Assert-Equals -Expected $true
$expected = @{
changed = $false
invocation = @{
module_args = @{}
}
failed = $true
msg = "fail message"
}
$actual | Assert-DictionaryEquals -Expected $expected
}
"FailJson with ErrorRecord" = {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), @{})
try {
Get-Item -Path $null
} catch {
$error_record = $_
}
$failed = $false
try {
$m.FailJson("fail message", $error_record)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
        $failed | Assert-Equals -Expected $true
$expected = @{
changed = $false
invocation = @{
module_args = @{}
}
failed = $true
msg = "fail message"
}
$actual | Assert-DictionaryEquals -Expected $expected
}
"FailJson with Exception and verbosity 3" = {
$complex_args = @{
_ansible_verbosity = 3
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), @{})
try {
[System.IO.Path]::GetFullPath($null)
} catch {
$excp = $_.Exception
}
$failed = $false
try {
$m.FailJson("fail message", $excp)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
        $failed | Assert-Equals -Expected $true
$actual.changed | Assert-Equals -Expected $false
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = @{}}
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected "fail message"
$actual.exception.Contains('System.Management.Automation.MethodInvocationException: Exception calling "GetFullPath" with "1" argument(s)') | Assert-Equals -Expected $true
}
"FailJson with ErrorRecord and verbosity 3" = {
$complex_args = @{
_ansible_verbosity = 3
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), @{})
try {
Get-Item -Path $null
} catch {
$error_record = $_
}
$failed = $false
try {
$m.FailJson("fail message", $error_record)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
        $failed | Assert-Equals -Expected $true
$actual.changed | Assert-Equals -Expected $false
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = @{}}
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected "fail message"
$actual.exception.Contains("Cannot bind argument to parameter 'Path' because it is null") | Assert-Equals -Expected $true
$actual.exception.Contains("+ Get-Item -Path `$null") | Assert-Equals -Expected $true
$actual.exception.Contains("ScriptStackTrace:") | Assert-Equals -Expected $true
}
"Diff entry without diff set" = {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), @{})
$m.Diff.before = @{a = "a"}
$m.Diff.after = @{b = "b"}
$failed = $false
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 0"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
        $failed | Assert-Equals -Expected $true
$expected = @{
changed = $false
invocation = @{
module_args = @{}
}
}
$actual | Assert-DictionaryEquals -Expected $expected
}
"Diff entry with diff set" = {
$complex_args = @{
_ansible_diff = $true
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), @{})
$m.Diff.before = @{a = "a"}
$m.Diff.after = @{b = "b"}
$failed = $false
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 0"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
        $failed | Assert-Equals -Expected $true
$expected = @{
changed = $false
invocation = @{
module_args = @{}
}
diff = @{
before = @{a = "a"}
after = @{b = "b"}
}
}
$actual | Assert-DictionaryEquals -Expected $expected
}
"ParseBool tests" = {
$mapping = New-Object -TypeName 'System.Collections.Generic.Dictionary`2[[Object], [Bool]]'
$mapping.Add("y", $true)
$mapping.Add("Y", $true)
$mapping.Add("yes", $true)
$mapping.Add("Yes", $true)
$mapping.Add("on", $true)
$mapping.Add("On", $true)
$mapping.Add("1", $true)
$mapping.Add(1, $true)
$mapping.Add("true", $true)
$mapping.Add("True", $true)
$mapping.Add("t", $true)
$mapping.Add("T", $true)
$mapping.Add("1.0", $true)
$mapping.Add(1.0, $true)
$mapping.Add($true, $true)
$mapping.Add("n", $false)
$mapping.Add("N", $false)
$mapping.Add("no", $false)
$mapping.Add("No", $false)
$mapping.Add("off", $false)
$mapping.Add("Off", $false)
$mapping.Add("0", $false)
$mapping.Add(0, $false)
$mapping.Add("false", $false)
$mapping.Add("False", $false)
$mapping.Add("f", $false)
$mapping.Add("F", $false)
$mapping.Add("0.0", $false)
$mapping.Add(0.0, $false)
$mapping.Add($false, $false)
foreach ($map in $mapping.GetEnumerator()) {
$expected = $map.Value
$actual = [Ansible.Basic.AnsibleModule]::ParseBool($map.Key)
$actual | Assert-Equals -Expected $expected
$actual.GetType().FullName | Assert-Equals -Expected "System.Boolean"
}
$fail_bools = @(
"falsey",
"abc",
2,
"2",
-1
)
foreach ($fail_bool in $fail_bools) {
$failed = $false
try {
[Ansible.Basic.AnsibleModule]::ParseBool($fail_bool)
} catch {
$failed = $true
$_.Exception.Message.Contains("The value '$fail_bool' is not a valid boolean") | Assert-Equals -Expected $true
}
$failed | Assert-Equals -Expected $true
}
}
"Unknown internal key" = {
$complex_args = @{
_ansible_invalid = "invalid"
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), @{})
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$expected = @{
invocation = @{
module_args = @{
_ansible_invalid = "invalid"
}
}
changed = $false
failed = $true
msg = "Unsupported parameters for (undefined win module) module: _ansible_invalid. Supported parameters include: "
}
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
$actual | Assert-DictionaryEquals -Expected $expected
}
$failed | Assert-Equals -Expected $true
}
"Module tmpdir with present remote tmp" = {
$current_user = [System.Security.Principal.WindowsIdentity]::GetCurrent().User
$dir_security = New-Object -TypeName System.Security.AccessControl.DirectorySecurity
$dir_security.SetOwner($current_user)
$dir_security.SetAccessRuleProtection($true, $false)
$ace = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule -ArgumentList @(
$current_user, [System.Security.AccessControl.FileSystemRights]::FullControl,
[System.Security.AccessControl.InheritanceFlags]"ContainerInherit, ObjectInherit",
[System.Security.AccessControl.PropagationFlags]::None, [System.Security.AccessControl.AccessControlType]::Allow
)
$dir_security.AddAccessRule($ace)
$expected_sd = $dir_security.GetSecurityDescriptorSddlForm("Access, Owner")
$remote_tmp = Join-Path -Path $tmpdir -ChildPath "moduletmpdir-$(Get-Random)"
New-Item -Path $remote_tmp -ItemType Directory > $null
$complex_args = @{
_ansible_remote_tmp = $remote_tmp.ToString()
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), @{})
(Test-Path -Path $remote_tmp -PathType Container) | Assert-Equals -Expected $true
$actual_tmpdir = $m.Tmpdir
$parent_tmpdir = Split-Path -Path $actual_tmpdir -Parent
$tmpdir_name = Split-Path -Path $actual_tmpdir -Leaf
$parent_tmpdir | Assert-Equals -Expected $remote_tmp
        $tmpdir_name.StartsWith("ansible-moduletmp-") | Assert-Equals -Expected $true
(Test-Path -Path $actual_tmpdir -PathType Container) | Assert-Equals -Expected $true
(Test-Path -Path $remote_tmp -PathType Container) | Assert-Equals -Expected $true
$children = [System.IO.Directory]::EnumerateDirectories($remote_tmp)
$children.Count | Assert-Equals -Expected 1
$actual_tmpdir_sd = (Get-Acl -Path $actual_tmpdir).GetSecurityDescriptorSddlForm("Access, Owner")
$actual_tmpdir_sd | Assert-Equals -Expected $expected_sd
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$output = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
(Test-Path -Path $actual_tmpdir -PathType Container) | Assert-Equals -Expected $false
(Test-Path -Path $remote_tmp -PathType Container) | Assert-Equals -Expected $true
$output.warnings.Count | Assert-Equals -Expected 0
}
"Module tmpdir with missing remote_tmp" = {
$current_user = [System.Security.Principal.WindowsIdentity]::GetCurrent().User
$dir_security = New-Object -TypeName System.Security.AccessControl.DirectorySecurity
$dir_security.SetOwner($current_user)
$dir_security.SetAccessRuleProtection($true, $false)
$ace = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule -ArgumentList @(
$current_user, [System.Security.AccessControl.FileSystemRights]::FullControl,
[System.Security.AccessControl.InheritanceFlags]"ContainerInherit, ObjectInherit",
[System.Security.AccessControl.PropagationFlags]::None, [System.Security.AccessControl.AccessControlType]::Allow
)
$dir_security.AddAccessRule($ace)
$expected_sd = $dir_security.GetSecurityDescriptorSddlForm("Access, Owner")
$remote_tmp = Join-Path -Path $tmpdir -ChildPath "moduletmpdir-$(Get-Random)"
$complex_args = @{
_ansible_remote_tmp = $remote_tmp.ToString()
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), @{})
(Test-Path -Path $remote_tmp -PathType Container) | Assert-Equals -Expected $false
$actual_tmpdir = $m.Tmpdir
$parent_tmpdir = Split-Path -Path $actual_tmpdir -Parent
$tmpdir_name = Split-Path -Path $actual_tmpdir -Leaf
$parent_tmpdir | Assert-Equals -Expected $remote_tmp
        $tmpdir_name.StartsWith("ansible-moduletmp-") | Assert-Equals -Expected $true
(Test-Path -Path $actual_tmpdir -PathType Container) | Assert-Equals -Expected $true
(Test-Path -Path $remote_tmp -PathType Container) | Assert-Equals -Expected $true
$children = [System.IO.Directory]::EnumerateDirectories($remote_tmp)
$children.Count | Assert-Equals -Expected 1
$actual_remote_sd = (Get-Acl -Path $remote_tmp).GetSecurityDescriptorSddlForm("Access, Owner")
$actual_tmpdir_sd = (Get-Acl -Path $actual_tmpdir).GetSecurityDescriptorSddlForm("Access, Owner")
$actual_remote_sd | Assert-Equals -Expected $expected_sd
$actual_tmpdir_sd | Assert-Equals -Expected $expected_sd
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$output = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
(Test-Path -Path $actual_tmpdir -PathType Container) | Assert-Equals -Expected $false
(Test-Path -Path $remote_tmp -PathType Container) | Assert-Equals -Expected $true
$output.warnings.Count | Assert-Equals -Expected 1
$nt_account = $current_user.Translate([System.Security.Principal.NTAccount])
$actual_warning = "Module remote_tmp $remote_tmp did not exist and was created with FullControl to $nt_account, "
$actual_warning += "this may cause issues when running as another user. To avoid this, "
$actual_warning += "create the remote_tmp dir with the correct permissions manually"
$actual_warning | Assert-Equals -Expected $output.warnings[0]
}
"Module tmp, keep remote files" = {
$remote_tmp = Join-Path -Path $tmpdir -ChildPath "moduletmpdir-$(Get-Random)"
New-Item -Path $remote_tmp -ItemType Directory > $null
$complex_args = @{
_ansible_remote_tmp = $remote_tmp.ToString()
_ansible_keep_remote_files = $true
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), @{})
$actual_tmpdir = $m.Tmpdir
$parent_tmpdir = Split-Path -Path $actual_tmpdir -Parent
$tmpdir_name = Split-Path -Path $actual_tmpdir -Leaf
$parent_tmpdir | Assert-Equals -Expected $remote_tmp
        $tmpdir_name.StartsWith("ansible-moduletmp-") | Assert-Equals -Expected $true
(Test-Path -Path $actual_tmpdir -PathType Container) | Assert-Equals -Expected $true
(Test-Path -Path $remote_tmp -PathType Container) | Assert-Equals -Expected $true
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$output = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
(Test-Path -Path $actual_tmpdir -PathType Container) | Assert-Equals -Expected $true
(Test-Path -Path $remote_tmp -PathType Container) | Assert-Equals -Expected $true
$output.warnings.Count | Assert-Equals -Expected 0
Remove-Item -Path $actual_tmpdir -Force -Recurse
}
"Invalid argument spec key" = {
$spec = @{
invalid = $true
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "internal error: argument spec entry contains an invalid key 'invalid', valid keys: apply_defaults, "
$expected_msg += "aliases, choices, default, elements, mutually_exclusive, no_log, options, removed_in_version, "
$expected_msg += "required, required_by, required_if, required_one_of, required_together, supports_check_mode, type"
$actual.Keys.Count | Assert-Equals -Expected 3
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
("exception" -cin $actual.Keys) | Assert-Equals -Expected $true
}
"Invalid argument spec key - nested" = {
$spec = @{
options = @{
option_key = @{
options = @{
sub_option_key = @{
invalid = $true
}
}
}
}
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "internal error: argument spec entry contains an invalid key 'invalid', valid keys: apply_defaults, "
$expected_msg += "aliases, choices, default, elements, mutually_exclusive, no_log, options, removed_in_version, "
$expected_msg += "required, required_by, required_if, required_one_of, required_together, supports_check_mode, type - "
$expected_msg += "found in option_key -> sub_option_key"
$actual.Keys.Count | Assert-Equals -Expected 3
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
("exception" -cin $actual.Keys) | Assert-Equals -Expected $true
}
"Invalid argument spec value type" = {
$spec = @{
apply_defaults = "abc"
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "internal error: argument spec for 'apply_defaults' did not match expected "
$expected_msg += "type System.Boolean: actual type System.String"
$actual.Keys.Count | Assert-Equals -Expected 3
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
("exception" -cin $actual.Keys) | Assert-Equals -Expected $true
}
"Invalid argument spec option type" = {
$spec = @{
options = @{
option_key = @{
type = "invalid type"
}
}
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "internal error: type 'invalid type' is unsupported - found in option_key. "
$expected_msg += "Valid types are: bool, dict, float, int, json, list, path, raw, sid, str"
$actual.Keys.Count | Assert-Equals -Expected 3
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
("exception" -cin $actual.Keys) | Assert-Equals -Expected $true
}
"Invalid argument spec option element type" = {
$spec = @{
options = @{
option_key = @{
type = "list"
elements = "invalid type"
}
}
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "internal error: elements 'invalid type' is unsupported - found in option_key. "
$expected_msg += "Valid types are: bool, dict, float, int, json, list, path, raw, sid, str"
$actual.Keys.Count | Assert-Equals -Expected 3
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
("exception" -cin $actual.Keys) | Assert-Equals -Expected $true
}
"Spec required and default set at the same time" = {
$spec = @{
options = @{
option_key = @{
required = $true
default = "default value"
}
}
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "internal error: required and default are mutually exclusive for option_key"
$actual.Keys.Count | Assert-Equals -Expected 3
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
("exception" -cin $actual.Keys) | Assert-Equals -Expected $true
}
"Unsupported options" = {
$spec = @{
options = @{
option_key = @{
type = "str"
}
}
}
$complex_args = @{
option_key = "abc"
invalid_key = "def"
another_key = "ghi"
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "Unsupported parameters for (undefined win module) module: another_key, invalid_key. "
$expected_msg += "Supported parameters include: option_key"
$actual.Keys.Count | Assert-Equals -Expected 4
$actual.changed | Assert-Equals -Expected $false
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = $complex_args}
}
"Check mode and module doesn't support check mode" = {
$spec = @{
options = @{
option_key = @{
type = "str"
}
}
}
$complex_args = @{
_ansible_check_mode = $true
option_key = "abc"
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 0"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "remote module (undefined win module) does not support check mode"
$actual.Keys.Count | Assert-Equals -Expected 4
$actual.changed | Assert-Equals -Expected $false
$actual.skipped | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = @{option_key = "abc"}}
}
"Check mode with suboption without supports_check_mode" = {
$spec = @{
options = @{
sub_options = @{
# This tests the situation where a sub key doesn't set supports_check_mode, the logic in
# Ansible.Basic automatically sets that to $false and we want it to ignore it for a nested check
type = "dict"
options = @{
sub_option = @{ type = "str"; default = "value" }
}
}
}
supports_check_mode = $true
}
$complex_args = @{
_ansible_check_mode = $true
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
$m.CheckMode | Assert-Equals -Expected $true
}
"Type conversion error" = {
$spec = @{
options = @{
option_key = @{
type = "int"
}
}
}
$complex_args = @{
option_key = "a"
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "argument for option_key is of type System.String and we were unable to convert to int: "
$expected_msg += "Input string was not in a correct format."
$actual.Keys.Count | Assert-Equals -Expected 4
$actual.changed | Assert-Equals -Expected $false
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = $complex_args}
}
"Type conversion error - delegate" = {
$spec = @{
options = @{
option_key = @{
type = "dict"
options = @{
sub_option_key = @{
type = [Func[[Object], [UInt64]]]{ [System.UInt64]::Parse($args[0]) }
}
}
}
}
}
$complex_args = @{
option_key = @{
sub_option_key = "a"
}
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "argument for sub_option_key is of type System.String and we were unable to convert to delegate: "
$expected_msg += "Exception calling `"Parse`" with `"1`" argument(s): `"Input string was not in a correct format.`" "
$expected_msg += "found in option_key"
$actual.Keys.Count | Assert-Equals -Expected 4
$actual.changed | Assert-Equals -Expected $false
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = $complex_args}
}
"Numeric choices" = {
$spec = @{
options = @{
option_key = @{
choices = 1, 2, 3
type = "int"
}
}
}
$complex_args = @{
option_key = "2"
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$output = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$output.Keys.Count | Assert-Equals -Expected 2
$output.changed | Assert-Equals -Expected $false
$output.invocation | Assert-DictionaryEquals -Expected @{module_args = @{option_key = 2}}
}
"Case insensitive choice" = {
$spec = @{
options = @{
option_key = @{
choices = "abc", "def"
}
}
}
$complex_args = @{
option_key = "ABC"
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$output = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$expected_warning = "value of option_key was a case insensitive match of one of: abc, def. "
$expected_warning += "Checking of choices will be case sensitive in a future Ansible release. "
$expected_warning += "Case insensitive matches were: ABC"
$output.invocation | Assert-DictionaryEquals -Expected @{module_args = @{option_key = "ABC"}}
# We have disabled the warnings for now
#$output.warnings.Count | Assert-Equals -Expected 1
#$output.warnings[0] | Assert-Equals -Expected $expected_warning
}
"Case insensitive choice no_log" = {
$spec = @{
options = @{
option_key = @{
choices = "abc", "def"
no_log = $true
}
}
}
$complex_args = @{
option_key = "ABC"
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$output = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$expected_warning = "value of option_key was a case insensitive match of one of: abc, def. "
$expected_warning += "Checking of choices will be case sensitive in a future Ansible release. "
$expected_warning += "Case insensitive matches were: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
$output.invocation | Assert-DictionaryEquals -Expected @{module_args = @{option_key = "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"}}
# We have disabled the warnings for now
#$output.warnings.Count | Assert-Equals -Expected 1
#$output.warnings[0] | Assert-Equals -Expected $expected_warning
}
"Case insensitive choice as list" = {
$spec = @{
options = @{
option_key = @{
choices = "abc", "def", "ghi", "JKL"
type = "list"
elements = "str"
}
}
}
$complex_args = @{
option_key = "AbC", "ghi", "jkl"
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$output = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$expected_warning = "value of option_key was a case insensitive match of one or more of: abc, def, ghi, JKL. "
$expected_warning += "Checking of choices will be case sensitive in a future Ansible release. "
$expected_warning += "Case insensitive matches were: AbC, jkl"
$output.invocation | Assert-DictionaryEquals -Expected @{module_args = $complex_args}
# We have disabled the warnings for now
#$output.warnings.Count | Assert-Equals -Expected 1
#$output.warnings[0] | Assert-Equals -Expected $expected_warning
}
"Invalid choice" = {
$spec = @{
options = @{
option_key = @{
choices = "a", "b"
}
}
}
$complex_args = @{
option_key = "c"
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "value of option_key must be one of: a, b. Got no match for: c"
$actual.Keys.Count | Assert-Equals -Expected 4
$actual.changed | Assert-Equals -Expected $false
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = $complex_args}
}
"Invalid choice with no_log" = {
$spec = @{
options = @{
option_key = @{
choices = "a", "b"
no_log = $true
}
}
}
$complex_args = @{
option_key = "abc"
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "value of option_key must be one of: a, b. Got no match for: ********"
$actual.Keys.Count | Assert-Equals -Expected 4
$actual.changed | Assert-Equals -Expected $false
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = @{option_key = "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"}}
}
"Invalid choice in list" = {
$spec = @{
options = @{
option_key = @{
choices = "a", "b"
type = "list"
}
}
}
$complex_args = @{
option_key = "a", "c"
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "value of option_key must be one or more of: a, b. Got no match for: c"
$actual.Keys.Count | Assert-Equals -Expected 4
$actual.changed | Assert-Equals -Expected $false
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = $complex_args}
}
"Mutually exclusive options" = {
$spec = @{
options = @{
option1 = @{}
option2 = @{}
}
mutually_exclusive = @(,@("option1", "option2"))
}
$complex_args = @{
option1 = "a"
option2 = "b"
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "parameters are mutually exclusive: option1, option2"
$actual.Keys.Count | Assert-Equals -Expected 4
$actual.changed | Assert-Equals -Expected $false
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = $complex_args}
}
"Missing required argument" = {
$spec = @{
options = @{
option1 = @{}
option2 = @{required = $true}
}
}
$complex_args = @{
option1 = "a"
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "missing required arguments: option2"
$actual.Keys.Count | Assert-Equals -Expected 4
$actual.changed | Assert-Equals -Expected $false
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = $complex_args}
}
"Missing required argument subspec - no value defined" = {
$spec = @{
options = @{
option_key = @{
type = "dict"
options = @{
sub_option_key = @{
required = $true
}
}
}
}
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
$failed = $false
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 0"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$actual.Keys.Count | Assert-Equals -Expected 2
$actual.changed | Assert-Equals -Expected $false
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = $complex_args}
}
"Missing required argument subspec" = {
$spec = @{
options = @{
option_key = @{
type = "dict"
options = @{
sub_option_key = @{
required = $true
}
another_key = @{}
}
}
}
}
$complex_args = @{
option_key = @{
another_key = "abc"
}
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "missing required arguments: sub_option_key found in option_key"
$actual.Keys.Count | Assert-Equals -Expected 4
$actual.changed | Assert-Equals -Expected $false
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = $complex_args}
}
"Required together not set" = {
$spec = @{
options = @{
option1 = @{}
option2 = @{}
}
required_together = @(,@("option1", "option2"))
}
$complex_args = @{
option1 = "abc"
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "parameters are required together: option1, option2"
$actual.Keys.Count | Assert-Equals -Expected 4
$actual.changed | Assert-Equals -Expected $false
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = $complex_args}
}
"Required together not set - subspec" = {
$spec = @{
options = @{
option_key = @{
type = "dict"
options = @{
option1 = @{}
option2 = @{}
}
required_together = @(,@("option1", "option2"))
}
another_option = @{}
}
required_together = @(,@("option_key", "another_option"))
}
$complex_args = @{
option_key = @{
option1 = "abc"
}
another_option = "def"
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "parameters are required together: option1, option2 found in option_key"
$actual.Keys.Count | Assert-Equals -Expected 4
$actual.changed | Assert-Equals -Expected $false
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = $complex_args}
}
"Required one of not set" = {
$spec = @{
options = @{
option1 = @{}
option2 = @{}
option3 = @{}
}
required_one_of = @(@("option1", "option2"), @("option2", "option3"))
}
$complex_args = @{
option1 = "abc"
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "one of the following is required: option2, option3"
$actual.Keys.Count | Assert-Equals -Expected 4
$actual.changed | Assert-Equals -Expected $false
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = $complex_args}
}
"Required if invalid entries" = {
$spec = @{
options = @{
state = @{choices = "absent", "present"; default = "present"}
path = @{type = "path"}
}
required_if = @(,@("state", "absent"))
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "internal error: invalid required_if value count of 2, expecting 3 or 4 entries"
$actual.Keys.Count | Assert-Equals -Expected 4
$actual.changed | Assert-Equals -Expected $false
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = $complex_args}
}
"Required if no missing option" = {
$spec = @{
options = @{
state = @{choices = "absent", "present"; default = "present"}
name = @{}
path = @{type = "path"}
}
required_if = @(,@("state", "absent", @("name", "path")))
}
$complex_args = @{
name = "abc"
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
$failed = $false
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 0"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$actual.Keys.Count | Assert-Equals -Expected 2
$actual.changed | Assert-Equals -Expected $false
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = $complex_args}
}
"Required if missing option" = {
$spec = @{
options = @{
state = @{choices = "absent", "present"; default = "present"}
name = @{}
path = @{type = "path"}
}
required_if = @(,@("state", "absent", @("name", "path")))
}
$complex_args = @{
state = "absent"
name = "abc"
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "state is absent but all of the following are missing: path"
$actual.Keys.Count | Assert-Equals -Expected 4
$actual.changed | Assert-Equals -Expected $false
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = $complex_args}
}
"Required if missing option and required one is set" = {
$spec = @{
options = @{
state = @{choices = "absent", "present"; default = "present"}
name = @{}
path = @{type = "path"}
}
required_if = @(,@("state", "absent", @("name", "path"), $true))
}
$complex_args = @{
state = "absent"
}
$failed = $false
try {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 1"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$expected_msg = "state is absent but any of the following are missing: name, path"
$actual.Keys.Count | Assert-Equals -Expected 4
$actual.changed | Assert-Equals -Expected $false
$actual.failed | Assert-Equals -Expected $true
$actual.msg | Assert-Equals -Expected $expected_msg
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = $complex_args}
}
"Required if missing option but one required set" = {
$spec = @{
options = @{
state = @{choices = "absent", "present"; default = "present"}
name = @{}
path = @{type = "path"}
}
required_if = @(,@("state", "absent", @("name", "path"), $true))
}
$complex_args = @{
state = "absent"
name = "abc"
}
$m = [Ansible.Basic.AnsibleModule]::Create(@(), $spec)
$failed = $false
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 0"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$actual.Keys.Count | Assert-Equals -Expected 2
$actual.changed | Assert-Equals -Expected $false
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = $complex_args}
}
"PS Object in return result" = {
$m = [Ansible.Basic.AnsibleModule]::Create(@(), @{})
# JavaScriptSerializer struggles with PS Object like PSCustomObject due to circular references, this test makes
# sure we can handle these types of objects without bombing
$m.Result.output = [PSCustomObject]@{a = "a"; b = "b"}
$failed = $false
try {
$m.ExitJson()
} catch [System.Management.Automation.RuntimeException] {
$failed = $true
$_.Exception.Message | Assert-Equals -Expected "exit: 0"
$actual = [Ansible.Basic.AnsibleModule]::FromJson($_.Exception.InnerException.Output)
}
$failed | Assert-Equals -Expected $true
$actual.Keys.Count | Assert-Equals -Expected 3
$actual.changed | Assert-Equals -Expected $false
$actual.invocation | Assert-DictionaryEquals -Expected @{module_args = @{}}
$actual.output | Assert-DictionaryEquals -Expected @{a = "a"; b = "b"}
}
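The serializer concern in the test above has a direct analogue in Python: `json.dumps` also rejects arbitrary objects unless given a conversion hook. This is an illustrative sketch only (the `PSLike` class is invented here, not part of Ansible), showing the same "handle custom objects without bombing" pattern:

```python
import json

class PSLike:
    """Stand-in for an object graph a serializer can't handle natively."""
    def __init__(self):
        self.a = "a"
        self.b = "b"

raised = False
try:
    json.dumps(PSLike())  # arbitrary objects are rejected outright
except TypeError:
    raised = True

# A conversion hook (here: fall back to __dict__) lets the object be
# emitted as a plain dictionary instead:
serialized = json.dumps(PSLike(), default=lambda o: o.__dict__)
```

Ansible.Basic takes the analogous approach of flattening PS objects into plain dictionaries before serialization.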
"String json array to object" = {
$input_json = '["abc", "def"]'
$actual = [Ansible.Basic.AnsibleModule]::FromJson($input_json)
$actual -is [Array] | Assert-Equals -Expected $true
$actual.Length | Assert-Equals -Expected 2
$actual[0] | Assert-Equals -Expected "abc"
$actual[1] | Assert-Equals -Expected "def"
}
"String json array of dictionaries to object" = {
$input_json = '[{"abc":"def"}]'
$actual = [Ansible.Basic.AnsibleModule]::FromJson($input_json)
$actual -is [Array] | Assert-Equals -Expected $true
$actual.Length | Assert-Equals -Expected 1
$actual[0] | Assert-DictionaryEquals -Expected @{"abc" = "def"}
}
}
try {
foreach ($test_impl in $tests.GetEnumerator()) {
# Reset the variables before each test
$complex_args = @{}
$test = $test_impl.Key
&$test_impl.Value
}
$module.Result.data = "success"
} catch [System.Management.Automation.RuntimeException] {
$module.Result.failed = $true
$module.Result.test = $test
$module.Result.line = $_.InvocationInfo.ScriptLineNumber
$module.Result.method = $_.InvocationInfo.Line.Trim()
if ($_.Exception.Message.StartsWith("exit: ")) {
# The exception was caused by an unexpected Exit call, log that on the output
$module.Result.output = (ConvertFrom-Json -InputObject $_.Exception.InnerException.Output)
$module.Result.msg = "Uncaught AnsibleModule exit in tests, see output"
} else {
# Unrelated exception
$module.Result.exception = $_.Exception.ToString()
$module.Result.msg = "Uncaught exception: $(($_ | Out-String).ToString())"
}
}
Exit-Module
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,983 |
find_module: contains attribute ambiguously described
|
##### SUMMARY
The contains attribute of the find_module has type "string" but is described as "one or more regex patterns". Given that a string can only be one regex pattern (albeit with alternations in it to match against multiple patterns) it is deceptively easy to misread this and try to specify a list of strings as the value of the contains attribute.
Granted, Ansible warns about this when running, but it would be even better to prevent the mistake in the first place.
I would edit the documentation myself, but I don't actually know what the intended semantics for this are. Is the command meant to take a list of regexes? If so, the bug is that it doesn't. If it's only meant to take a single regex, the bug is in the description in the documentation!
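To illustrate the distinction the report is drawing, here is a small Python sketch (illustrative only; the patterns and data are invented, and this is not the module's implementation). A single string can cover several alternatives via alternation, whereas a genuine list of patterns requires explicit iteration:

```python
import re

# One string, two alternatives -- what a "type: str" contains option
# can express today:
single_pattern = r"ERROR|FATAL"

lines = ["INFO ok", "ERROR disk full", "FATAL crash"]
matched = [line for line in lines if re.search(single_pattern, line)]

# What the "one or more regex patterns" wording tempts users to write
# instead: a list of separate patterns, which needs its own loop.
pattern_list = [r"ERROR", r"FATAL"]
matched_from_list = [
    line for line in lines
    if any(re.search(p, line) for p in pattern_list)
]

# Both formulations select the same lines here, but only the first is
# expressible as a single string value.
assert matched == matched_from_list == ["ERROR disk full", "FATAL crash"]
```

The ambiguity is exactly that the documentation's wording suggests the second form while the declared type only admits the first.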
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
find_module
##### ANSIBLE VERSION
```paste below
ansible 2.8.4
```
##### CONFIGURATION
```paste below
DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /home/kqr/.vaultpass
```
##### OS / ENVIRONMENT
All (documentation published on the web)
##### ADDITIONAL INFORMATION
|
https://github.com/ansible/ansible/issues/61983
|
https://github.com/ansible/ansible/pull/62445
|
d4064a965b0fcba974c85bb49991f28ddf00cba4
|
2375fd099035fadb781c7bfdb8bc8ce18b5e0601
| 2019-09-09T09:17:21Z |
python
| 2019-09-26T15:33:14Z |
changelogs/fragments/find-contains-docs.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,983 |
find_module: contains attribute ambiguously described
|
##### SUMMARY
The contains attribute of the find_module has type "string" but is described as "one or more regex patterns". Given that a string can only be one regex pattern (albeit with alternations in it to match against multiple patterns) it is deceptively easy to misread this and try to specify a list of strings as the value of the contains attribute.
Granted, Ansible warns about this when running, but it would be even better to prevent the mistake in the first place.
I would edit the documentation myself, but I don't actually know what the intended semantics for this are. Is the command meant to take a list of regexes? If so, the bug is that it doesn't. If it's only meant to take a single regex, the bug is in the description in the documentation!
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
find_module
##### ANSIBLE VERSION
```paste below
ansible 2.8.4
```
##### CONFIGURATION
```paste below
DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /home/kqr/.vaultpass
```
##### OS / ENVIRONMENT
All (documentation published on the web)
##### ADDITIONAL INFORMATION
|
https://github.com/ansible/ansible/issues/61983
|
https://github.com/ansible/ansible/pull/62445
|
d4064a965b0fcba974c85bb49991f28ddf00cba4
|
2375fd099035fadb781c7bfdb8bc8ce18b5e0601
| 2019-09-09T09:17:21Z |
python
| 2019-09-26T15:33:14Z |
lib/ansible/modules/files/find.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2014, Ruggero Marchei <[email protected]>
# Copyright: (c) 2015, Brian Coca <[email protected]>
# Copyright: (c) 2016-2017, Konstantin Shalygin <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: find
author: Brian Coca (@bcoca)
version_added: "2.0"
short_description: Return a list of files based on specific criteria
description:
- Return a list of files based on specific criteria. Multiple criteria are AND'd together.
- For Windows targets, use the M(win_find) module instead.
options:
age:
description:
- Select files whose age is equal to or greater than the specified time.
- Use a negative age to find files equal to or less than the specified time.
- You can choose seconds, minutes, hours, days, or weeks by specifying the
first letter of any of those words (e.g., "1w").
type: str
patterns:
default: '*'
description:
- One or more (shell or regex) patterns, which type is controlled by C(use_regex) option.
- The patterns restrict the list of files to be returned to those whose basenames match at
least one of the patterns specified. Multiple patterns can be specified using a list.
- The pattern is matched against the file base name, excluding the directory.
- When using regexen, the pattern MUST match the ENTIRE file name, not just parts of it. So
if you are looking to match all files ending in .default, you'd need to use '.*\.default'
as a regexp and not just '\.default'.
- This parameter expects a list, which can be either comma separated or YAML. If any of the
patterns contain a comma, make sure to put them in a list to avoid splitting the patterns
in undesirable ways.
type: list
aliases: [ pattern ]
excludes:
description:
- One or more (shell or regex) patterns, which type is controlled by C(use_regex) option.
- Items whose basenames match an C(excludes) pattern are culled from C(patterns) matches.
Multiple patterns can be specified using a list.
type: list
aliases: [ exclude ]
version_added: "2.5"
contains:
description:
- One or more regex patterns which should be matched against the file content.
type: str
paths:
description:
- List of paths of directories to search. All paths must be fully qualified.
type: list
required: true
aliases: [ name, path ]
file_type:
description:
- Type of file to select.
- The 'link' and 'any' choices were added in Ansible 2.3.
type: str
choices: [ any, directory, file, link ]
default: file
recurse:
description:
- If target is a directory, recursively descend into the directory looking for files.
type: bool
default: no
size:
description:
- Select files whose size is equal to or greater than the specified size.
- Use a negative size to find files equal to or less than the specified size.
- Unqualified values are in bytes but b, k, m, g, and t can be appended to specify
bytes, kilobytes, megabytes, gigabytes, and terabytes, respectively.
- Size is not evaluated for directories.
age_stamp:
description:
- Choose the file property against which we compare age.
type: str
choices: [ atime, ctime, mtime ]
default: mtime
hidden:
description:
- Set this to C(yes) to include hidden files, otherwise they will be ignored.
type: bool
default: no
follow:
description:
- Set this to C(yes) to follow symlinks in path for systems with python 2.6+.
type: bool
default: no
get_checksum:
description:
- Set this to C(yes) to retrieve a file's SHA1 checksum.
type: bool
default: no
use_regex:
description:
- If C(no), the patterns are file globs (shell).
- If C(yes), they are python regexes.
type: bool
default: no
depth:
description:
- Set the maximum number of levels to descend into.
- Setting recurse to C(no) will override this value, which is effectively depth 1.
- Default is unlimited depth.
type: int
version_added: "2.6"
seealso:
- module: win_find
'''
EXAMPLES = r'''
- name: Recursively find /tmp files older than 2 days
find:
paths: /tmp
age: 2d
recurse: yes
- name: Recursively find /tmp files older than 4 weeks and equal or greater than 1 megabyte
find:
paths: /tmp
age: 4w
size: 1m
recurse: yes
- name: Recursively find /var/tmp files with last access time greater than 3600 seconds
find:
paths: /var/tmp
age: 3600
age_stamp: atime
recurse: yes
- name: Find /var/log files equal or greater than 10 megabytes ending with .old or .log.gz
find:
paths: /var/log
patterns: '*.old,*.log.gz'
size: 10m
# Note that YAML double quotes require escaping backslashes but yaml single quotes do not.
- name: Find /var/log files equal or greater than 10 megabytes ending with .old or .log.gz via regex
find:
paths: /var/log
patterns: "^.*?\\.(?:old|log\\.gz)$"
size: 10m
use_regex: yes
- name: Find /var/log all directories, exclude nginx and mysql
find:
paths: /var/log
recurse: no
file_type: directory
excludes: 'nginx,mysql'
# When using patterns that contain a comma, make sure they are formatted as lists to avoid splitting the pattern
- name: Use a single pattern that contains a comma formatted as a list
find:
paths: /var/log
file_type: file
use_regex: yes
patterns: ['^_[0-9]{2,4}_.*.log$']
- name: Use multiple patterns that contain a comma formatted as a YAML list
find:
paths: /var/log
file_type: file
use_regex: yes
patterns:
- '^_[0-9]{2,4}_.*.log$'
- '^[a-z]{1,5}_.*log$'
'''
RETURN = r'''
files:
description: All matches found with the specified criteria (see stat module for full output of each dictionary)
returned: success
type: list
sample: [
{ path: "/var/tmp/test1",
mode: "0644",
"...": "...",
checksum: 16fac7be61a6e4591a33ef4b729c5c3302307523
},
{ path: "/var/tmp/test2",
"...": "..."
},
]
matched:
description: Number of matches
returned: success
type: int
sample: 14
examined:
description: Number of filesystem objects looked at
returned: success
type: int
sample: 34
'''
import fnmatch
import grp
import os
import pwd
import re
import stat
import sys
import time
from ansible.module_utils.basic import AnsibleModule
def pfilter(f, patterns=None, excludes=None, use_regex=False):
'''filter using glob patterns'''
if patterns is None and excludes is None:
return True
if use_regex:
if patterns and excludes is None:
for p in patterns:
r = re.compile(p)
if r.match(f):
return True
elif patterns and excludes:
for p in patterns:
r = re.compile(p)
if r.match(f):
for e in excludes:
r = re.compile(e)
if r.match(f):
return False
return True
else:
if patterns and excludes is None:
for p in patterns:
if fnmatch.fnmatch(f, p):
return True
elif patterns and excludes:
for p in patterns:
if fnmatch.fnmatch(f, p):
for e in excludes:
if fnmatch.fnmatch(f, e):
return False
return True
return False
def agefilter(st, now, age, timestamp):
'''filter files older than age'''
if age is None:
return True
elif age >= 0 and now - st.__getattribute__("st_%s" % timestamp) >= abs(age):
return True
elif age < 0 and now - st.__getattribute__("st_%s" % timestamp) <= abs(age):
return True
return False
def sizefilter(st, size):
'''filter files greater than size'''
if size is None:
return True
elif size >= 0 and st.st_size >= abs(size):
return True
elif size < 0 and st.st_size <= abs(size):
return True
return False
def contentfilter(fsname, pattern):
"""
Filter files which contain the given expression
:arg fsname: Filename to scan for lines matching a pattern
:arg pattern: Pattern to look for inside of line
:rtype: bool
:returns: True if one of the lines in fsname matches the pattern. Otherwise False
"""
if pattern is None:
return True
prog = re.compile(pattern)
try:
with open(fsname) as f:
for line in f:
if prog.match(line):
return True
except Exception:
pass
return False
def statinfo(st):
pw_name = ""
gr_name = ""
try: # user data
pw_name = pwd.getpwuid(st.st_uid).pw_name
except Exception:
pass
try: # group data
gr_name = grp.getgrgid(st.st_gid).gr_name
except Exception:
pass
return {
'mode': "%04o" % stat.S_IMODE(st.st_mode),
'isdir': stat.S_ISDIR(st.st_mode),
'ischr': stat.S_ISCHR(st.st_mode),
'isblk': stat.S_ISBLK(st.st_mode),
'isreg': stat.S_ISREG(st.st_mode),
'isfifo': stat.S_ISFIFO(st.st_mode),
'islnk': stat.S_ISLNK(st.st_mode),
'issock': stat.S_ISSOCK(st.st_mode),
'uid': st.st_uid,
'gid': st.st_gid,
'size': st.st_size,
'inode': st.st_ino,
'dev': st.st_dev,
'nlink': st.st_nlink,
'atime': st.st_atime,
'mtime': st.st_mtime,
'ctime': st.st_ctime,
'gr_name': gr_name,
'pw_name': pw_name,
'wusr': bool(st.st_mode & stat.S_IWUSR),
'rusr': bool(st.st_mode & stat.S_IRUSR),
'xusr': bool(st.st_mode & stat.S_IXUSR),
'wgrp': bool(st.st_mode & stat.S_IWGRP),
'rgrp': bool(st.st_mode & stat.S_IRGRP),
'xgrp': bool(st.st_mode & stat.S_IXGRP),
'woth': bool(st.st_mode & stat.S_IWOTH),
'roth': bool(st.st_mode & stat.S_IROTH),
'xoth': bool(st.st_mode & stat.S_IXOTH),
'isuid': bool(st.st_mode & stat.S_ISUID),
'isgid': bool(st.st_mode & stat.S_ISGID),
}
def main():
module = AnsibleModule(
argument_spec=dict(
paths=dict(type='list', required=True, aliases=['name', 'path']),
patterns=dict(type='list', default=['*'], aliases=['pattern']),
excludes=dict(type='list', aliases=['exclude']),
contains=dict(type='str'),
file_type=dict(type='str', default="file", choices=['any', 'directory', 'file', 'link']),
age=dict(type='str'),
age_stamp=dict(type='str', default="mtime", choices=['atime', 'ctime', 'mtime']),
size=dict(type='str'),
recurse=dict(type='bool', default=False),
hidden=dict(type='bool', default=False),
follow=dict(type='bool', default=False),
get_checksum=dict(type='bool', default=False),
use_regex=dict(type='bool', default=False),
depth=dict(type='int'),
),
supports_check_mode=True,
)
params = module.params
filelist = []
if params['age'] is None:
age = None
else:
# convert age to seconds:
m = re.match(r"^(-?\d+)(s|m|h|d|w)?$", params['age'].lower())
seconds_per_unit = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}
if m:
age = int(m.group(1)) * seconds_per_unit.get(m.group(2), 1)
else:
module.fail_json(age=params['age'], msg="failed to process age")
if params['size'] is None:
size = None
else:
# convert size to bytes:
m = re.match(r"^(-?\d+)(b|k|m|g|t)?$", params['size'].lower())
bytes_per_unit = {"b": 1, "k": 1024, "m": 1024**2, "g": 1024**3, "t": 1024**4}
if m:
size = int(m.group(1)) * bytes_per_unit.get(m.group(2), 1)
else:
module.fail_json(size=params['size'], msg="failed to process size")
now = time.time()
msg = ''
looked = 0
for npath in params['paths']:
npath = os.path.expanduser(os.path.expandvars(npath))
if os.path.isdir(npath):
''' ignore followlinks for python version < 2.6 '''
for root, dirs, files in (sys.version_info < (2, 6, 0) and os.walk(npath)) or os.walk(npath, followlinks=params['follow']):
if params['depth']:
depth = root.replace(npath.rstrip(os.path.sep), '').count(os.path.sep)
if files or dirs:
depth += 1
if depth > params['depth']:
del(dirs[:])
continue
looked = looked + len(files) + len(dirs)
for fsobj in (files + dirs):
fsname = os.path.normpath(os.path.join(root, fsobj))
if os.path.basename(fsname).startswith('.') and not params['hidden']:
continue
try:
st = os.lstat(fsname)
except Exception:
msg += "%s was skipped as it does not seem to be a valid file or it cannot be accessed\n" % fsname
continue
r = {'path': fsname}
if params['file_type'] == 'any':
if pfilter(fsobj, params['patterns'], params['excludes'], params['use_regex']) and agefilter(st, now, age, params['age_stamp']):
r.update(statinfo(st))
if stat.S_ISREG(st.st_mode) and params['get_checksum']:
r['checksum'] = module.sha1(fsname)
filelist.append(r)
elif stat.S_ISDIR(st.st_mode) and params['file_type'] == 'directory':
if pfilter(fsobj, params['patterns'], params['excludes'], params['use_regex']) and agefilter(st, now, age, params['age_stamp']):
r.update(statinfo(st))
filelist.append(r)
elif stat.S_ISREG(st.st_mode) and params['file_type'] == 'file':
if pfilter(fsobj, params['patterns'], params['excludes'], params['use_regex']) and \
agefilter(st, now, age, params['age_stamp']) and \
sizefilter(st, size) and contentfilter(fsname, params['contains']):
r.update(statinfo(st))
if params['get_checksum']:
r['checksum'] = module.sha1(fsname)
filelist.append(r)
elif stat.S_ISLNK(st.st_mode) and params['file_type'] == 'link':
if pfilter(fsobj, params['patterns'], params['excludes'], params['use_regex']) and agefilter(st, now, age, params['age_stamp']):
r.update(statinfo(st))
filelist.append(r)
if not params['recurse']:
break
else:
msg += "%s was skipped as it does not seem to be a valid directory or it cannot be accessed\n" % npath
matched = len(filelist)
module.exit_json(files=filelist, changed=False, msg=msg, matched=matched, examined=looked)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,231 |
Unclear how to use inventory plugins in collections
|
##### SUMMARY
I've been unable to get inventory plugins working inside of collections. The inventory plugin in ansible/ansible always takes precedence.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
collections
##### ANSIBLE VERSION
```paste below
devel
```
##### CONFIGURATION
```paste below
```
##### OS / ENVIRONMENT
##### STEPS TO REPRODUCE
I've got an inventory.gcp.yml file that I run with `ansible-inventory --list -i inventory.gcp.yml`. I've got a collection called google.cloud installed in my collections path. Here are the various things I've tried.
```yaml
plugin: gcp_compute
...
```
Result: Runs the ansible/ansible inventory plugin.
```yaml
collections:
- google.cloud
plugin: gcp_compute
...
```
Result: Runs the ansible/ansible inventory plugin
```yaml
plugin: google.cloud.gcp_compute
...
```
Result: Plugin not found, errors out.
(I've also tried with `google.cloud.plugins.inventory.gcp_compute` and `google.cloud.inventory.gcp_compute` to no avail)
##### EXPECTED RESULTS
##### ACTUAL RESULTS
|
https://github.com/ansible/ansible/issues/62231
|
https://github.com/ansible/ansible/pull/62465
|
de515522b69e09f3c274068f48e69187702139c8
|
d41050b28b2d3e8d4a97ce13344b04fc87f703a2
| 2019-09-12T22:44:12Z |
python
| 2019-09-26T16:00:26Z |
docs/docsite/rst/dev_guide/developing_collections.rst
|
.. _developing_collections:
**********************
Developing collections
**********************
Collections are a distribution format for Ansible content. You can use collections to package and distribute playbooks, roles, modules, and plugins.
You can publish and use collections through `Ansible Galaxy <https://galaxy.ansible.com>`_.
.. contents::
:local:
:depth: 2
.. _collection_structure:
Collection structure
====================
Collections follow a simple data structure. None of the directories are required unless you have specific content that belongs in one of them. A collection does require a ``galaxy.yml`` file at the root level of the collection. This file contains all of the metadata that Galaxy
and other tools need in order to package, build and publish the collection::
collection/
├── docs/
├── galaxy.yml
├── plugins/
│ ├── modules/
│ │ └── module1.py
│ ├── inventory/
│ └── .../
├── README.md
├── roles/
│ ├── role1/
│ ├── role2/
│ └── .../
├── playbooks/
│ ├── files/
│ ├── vars/
│ ├── templates/
│ └── tasks/
└── tests/
.. note::
* Ansible only accepts ``.yml`` extensions for galaxy.yml.
* See the `draft collection <https://github.com/bcoca/collection>`_ for an example of a full collection structure.
* Not all directories are currently in use. Those are placeholders for future features.
.. _galaxy_yml:
galaxy.yml
----------
A collection must have a ``galaxy.yml`` file that contains the necessary information to build a collection artifact.
See :ref:`collections_galaxy_meta` for details.
.. _collections_doc_dir:
docs directory
---------------
Keep general documentation for the collection here. Plugins and modules still keep their specific documentation embedded as Python docstrings. Use the ``docs`` folder to describe how to use the roles and plugins the collection provides, role requirements, and so on. Currently we are looking at Markdown as the standard format for documentation files, but this is subject to change.
Use ``ansible-doc`` to view documentation for plugins inside a collection:
.. code-block:: bash
ansible-doc -t lookup my_namespace.my_collection.lookup1
The ``ansible-doc`` command requires the fully qualified collection name (FQCN) to display specific plugin documentation. In this example, ``my_namespace`` is the namespace and ``my_collection`` is the collection name within that namespace.
.. note:: The Ansible collection namespace is defined in the ``galaxy.yml`` file and is not equivalent to the GitHub repository name.
.. _collections_plugin_dir:
plugins directory
------------------
Add a subdirectory here for each plugin type you provide, including ``module_utils``, which is usable not only by modules but by any other plugin via its FQCN. This is a way to distribute modules, lookups, filters, and so on, without having to import a role in every play.
module_utils
^^^^^^^^^^^^
When coding with ``module_utils`` in a collection, the Python ``import`` statement needs to take into account the FQCN along with the ``ansible_collections`` convention. The resulting Python import will look like ``from ansible_collections.{namespace}.{collection}.plugins.module_utils.{util} import {something}``
The following example snippets show a Python and PowerShell module using both default Ansible ``module_utils`` and
those provided by a collection. In this example the namespace is ``ansible_example``, the collection is ``community``.
In the Python example the ``module_util`` in question is called ``qradar`` such that the FQCN is
``ansible_example.community.plugins.module_utils.qradar``:
.. code-block:: python
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_text
from ansible.module_utils.six.moves.urllib.parse import urlencode, quote_plus
from ansible.module_utils.six.moves.urllib.error import HTTPError
from ansible_collections.ansible_example.community.plugins.module_utils.qradar import QRadarRequest
argspec = dict(
name=dict(required=True, type='str'),
state=dict(choices=['present', 'absent'], required=True),
)
module = AnsibleModule(
argument_spec=argspec,
supports_check_mode=True
)
qradar_request = QRadarRequest(
module,
headers={"Content-Type": "application/json"},
not_rest_data_keys=['state']
)
In the PowerShell example the ``module_util`` in question is called ``hyperv`` such that the FQCN is
``ansible_example.community.plugins.module_utils.hyperv``:
.. code-block:: powershell
#!powershell
#AnsibleRequires -CSharpUtil Ansible.Basic
#AnsibleRequires -PowerShell ansible_collections.ansible_example.community.plugins.module_utils.hyperv
$spec = @{
name = @{ required = $true; type = "str" }
state = @{ required = $true; choices = @("present", "absent") }
}
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)
Invoke-HyperVFunction -Name $module.Params.name
$module.ExitJson()
.. _collections_roles_dir:
roles directory
----------------
Collection roles are mostly the same as existing roles, but with a couple of limitations:
- Role names are now limited to lowercase alphanumeric characters plus ``_``, and must start with an alphabetic character.
- Roles in a collection cannot contain plugins any more. Plugins must live in the collection ``plugins`` directory tree. Each plugin is accessible to all roles in the collection.
The directory name of the role is used as the role name. Therefore, the directory name must comply with the
above role name rules.
The collection import into Galaxy will fail if a role name does not comply with these rules.
You can migrate 'traditional roles' into a collection, but they must follow the rules above. You may need to rename roles if they don't conform. You will have to move or link any role-based plugins to the collection-specific directories.
.. note::
For roles imported into Galaxy directly from a GitHub repository, setting the ``role_name`` value in the role's
metadata overrides the role name used by Galaxy. For collections, that value is ignored. When importing a
collection, Galaxy uses the role directory as the name of the role and ignores the ``role_name`` metadata value.
playbooks directory
--------------------
TBD.
tests directory
----------------
TBD. Expect tests for the collection itself to reside here.
.. _creating_collections:
Creating collections
======================
To create a collection:
#. Initialize a collection with :ref:`ansible-galaxy collection init<creating_collections_skeleton>` to create the skeleton directory structure.
#. Add your content to the collection.
#. Build the collection into a collection artifact with :ref:`ansible-galaxy collection build<building_collections>`.
#. Publish the collection artifact to Galaxy with :ref:`ansible-galaxy collection publish<publishing_collections>`.
A user can then install your collection on their systems.
Currently the ``ansible-galaxy collection`` command implements the following subcommands:
* ``init``: Create a basic collection skeleton based on the default template included with Ansible or your own template.
* ``build``: Create a collection artifact that can be uploaded to Galaxy or your own repository.
* ``publish``: Publish a built collection artifact to Galaxy.
* ``install``: Install one or more collections.
To learn more about the ``ansible-galaxy`` cli tool, see the :ref:`ansible-galaxy` man page.
.. _creating_collections_skeleton:
Creating a collection skeleton
------------------------------
To start a new collection:
.. code-block:: bash
collection_dir#> ansible-galaxy collection init my_namespace.my_collection
Then you can populate the directories with the content you want inside the collection. See
https://github.com/bcoca/collection to get a better idea of what you can place inside a collection.
.. _building_collections:
Building collections
--------------------
To build a collection, run ``ansible-galaxy collection build`` from inside the root directory of the collection:
.. code-block:: bash
collection_dir#> ansible-galaxy collection build
This creates a tarball of the built collection in the current directory, which can be uploaded to Galaxy::
my_collection/
├── galaxy.yml
├── ...
├── my_namespace-my_collection-1.0.0.tar.gz
└── ...
.. note::
Certain files and folders are excluded when building the collection artifact. This is not currently configurable
and is a work in progress so the collection artifact may contain files you would not wish to distribute.
This tarball is mainly intended for uploading to Galaxy as a distribution method, but you can use it directly to install the collection on target systems.
.. _trying_collection_locally:
Trying collection locally
-------------------------
You can try your collection locally by installing it from the tarball.
.. code-block:: bash
ansible-galaxy collection install my_namespace-my_collection-1.0.0.tar.gz -p ./collections/ansible_collections
You should use one of the values configured in :ref:`COLLECTIONS_PATHS` for your path. This is also where Ansible itself will expect to find collections when attempting to use them.
Then try to use the local collection inside a playbook. For more details, see :ref:`Using collections <using_collections>`.
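For example, a minimal play exercising the collection (a sketch: ``module1`` is the placeholder module from the structure above; substitute a module your collection actually provides):

.. code-block:: yaml

    - hosts: localhost
      gather_facts: no
      tasks:
        - name: Call a module from the locally installed collection by its FQCN
          my_namespace.my_collection.module1: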
.. _publishing_collections:
Publishing collections
----------------------
You can publish collections to Galaxy using the ``ansible-galaxy collection publish`` command or the Galaxy UI itself.
.. note:: Once you upload a version of a collection, you cannot delete or modify that version. Ensure that everything looks okay before you upload it.
.. _upload_collection_ansible_galaxy:
Upload using ansible-galaxy
^^^^^^^^^^^^^^^^^^^^^^^^^^^
To upload the collection artifact with the ``ansible-galaxy`` command:
.. code-block:: bash
ansible-galaxy collection publish path/to/my_namespace-my_collection-1.0.0.tar.gz --api-key=SECRET
The above command triggers an import process, just as if you uploaded the collection through the Galaxy website.
The command waits until the import process completes before reporting the status back. If you wish to continue
without waiting for the import result, use the ``--no-wait`` argument and manually look at the import progress in your
`My Imports <https://galaxy.ansible.com/my-imports/>`_ page.
The API key is a secret token used by Ansible Galaxy to protect your content. You can find your API key at your
`Galaxy profile preferences <https://galaxy.ansible.com/me/preferences>`_ page.
.. _upload_collection_galaxy:
Upload a collection from the Galaxy website
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To upload your collection artifact directly on Galaxy:
#. Go to the `My Content <https://galaxy.ansible.com/my-content/namespaces>`_ page, and click the **Add Content** button on one of your namespaces.
#. From the **Add Content** dialogue, click **Upload New Collection**, and select the collection archive file from your local filesystem.
When uploading collections it doesn't matter which namespace you select. The collection will be uploaded to the
namespace specified in the collection metadata in the ``galaxy.yml`` file. If you're not an owner of the
namespace, the upload request will fail.
Once Galaxy uploads and accepts a collection, you will be redirected to the **My Imports** page, which displays output from the
import process, including any errors or warnings about the metadata and content contained in the collection.
.. _collection_versions:
Collection versions
-------------------
Once you upload a version of a collection, you cannot delete or modify that version. Ensure that everything looks okay before
uploading. The only way to change a collection is to release a new version. The latest version of a collection (by highest version number)
will be the version displayed everywhere in Galaxy; however, users will still be able to download older versions.
Collection versions use `Semantic Versioning <https://semver.org/>`_ for version numbers. Please read the official documentation for details and examples. In summary:
* Increment major (for example: x in `x.y.z`) version number for an incompatible API change.
* Increment minor (for example: y in `x.y.z`) version number for new functionality in a backwards compatible manner.
* Increment patch (for example: z in `x.y.z`) version number for backwards compatible bug fixes.
.. _migrate_to_collection:
Migrating Ansible content to a collection
=========================================
You can experiment with migrating existing modules into a collection using the `content_collector tool <https://github.com/ansible/content_collector>`_. The ``content_collector`` is a playbook that helps you migrate content from an Ansible distribution into a collection.
.. warning::
This tool is in active development and is provided only for experimentation and feedback at this point.
See the `content_collector README <https://github.com/ansible/content_collector>`_ for full details and usage guidelines.
.. seealso::
:ref:`collections`
Learn how to install and use collections.
:ref:`collections_galaxy_meta`
Understand the collections metadata structure.
:ref:`developing_modules_general`
Learn about how to write Ansible modules
`Mailing List <https://groups.google.com/group/ansible-devel>`_
The development mailing list
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,231 |
Unclear how to use inventory plugins in collections
|
##### SUMMARY
I've been unable to get inventory plugins working inside of collections. The inventory plugin in ansible/ansible always takes precedence.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
collections
##### ANSIBLE VERSION
```paste below
devel
```
##### CONFIGURATION
```paste below
```
##### OS / ENVIRONMENT
##### STEPS TO REPRODUCE
I've got an inventory.gcp.yml file that I run with `ansible-inventory --list -i inventory.gcp.yml`. I've got a collection called google.cloud installed in my collections path. Here are the various things I've tried.
```yaml
plugin: gcp_compute
...
```
Result: Runs the ansible/ansible inventory plugin.
```yaml
collections:
- google.cloud
plugin: gcp_compute
...
```
Result: Runs the ansible/ansible inventory plugin
```yaml
plugin: google.cloud.gcp_compute
...
```
Result: Plugin not found, errors out.
(I've also tried with `google.cloud.plugins.inventory.gcp_compute` and `google.cloud.inventory.gcp_compute` to no avail)
##### EXPECTED RESULTS
##### ACTUAL RESULTS
|
https://github.com/ansible/ansible/issues/62231
|
https://github.com/ansible/ansible/pull/62465
|
de515522b69e09f3c274068f48e69187702139c8
|
d41050b28b2d3e8d4a97ce13344b04fc87f703a2
| 2019-09-12T22:44:12Z |
python
| 2019-09-26T16:00:26Z |
docs/docsite/rst/dev_guide/developing_inventory.rst
|
.. _developing_inventory:
****************************
Developing dynamic inventory
****************************
.. contents:: Topics
:local:
As described in :ref:`dynamic_inventory`, Ansible can pull inventory information from dynamic sources,
including cloud sources, using the supplied :ref:`inventory plugins <inventory_plugins>`.
If the source you want is not currently covered by existing plugins, you can create your own as with any other plugin type.
In previous versions you had to create a script or program that can output JSON in the correct format when invoked with the proper arguments.
You can still use and write inventory scripts, as we ensured backwards compatibility via the :ref:`script inventory plugin <script_inventory>`
and there is no restriction on the programming language used.
If you choose to write a script, however, you will need to implement some features yourself,
such as caching, configuration management, and dynamic variable and group composition.
With :ref:`inventory plugins <inventory_plugins>`, you can leverage the Ansible codebase to add these common features.
.. _inventory_sources:
Inventory sources
=================
Inventory sources are strings (that is, what you pass to ``-i`` on the command line);
they can represent a path to a file or script, or be the raw data for the plugin to use.
Here are some plugins and the type of source they use:
+--------------------------------------------+---------------------------------------+
| Plugin | Source |
+--------------------------------------------+---------------------------------------+
| :ref:`host list <host_list_inventory>` | A comma separated list of hosts |
+--------------------------------------------+---------------------------------------+
| :ref:`yaml <yaml_inventory>` | Path to a YAML format data file |
+--------------------------------------------+---------------------------------------+
| :ref:`constructed <constructed_inventory>` | Path to a YAML configuration file |
+--------------------------------------------+---------------------------------------+
| :ref:`ini <ini_inventory>` | Path to an INI formatted data file |
+--------------------------------------------+---------------------------------------+
| :ref:`virtualbox <virtualbox_inventory>` | Path to a YAML configuration file |
+--------------------------------------------+---------------------------------------+
| :ref:`script plugin <script_inventory>` | Path to an executable outputting JSON |
+--------------------------------------------+---------------------------------------+
.. _developing_inventory_inventory_plugins:
Inventory plugins
=================
Like most plugin types (except modules), inventory plugins must be developed in Python. Since they execute on the controller, they must meet the same requirements described in :ref:`control_node_requirements`.
Most of the documentation in :ref:`developing_plugins` also applies here, so you should read that document first; the sections below cover the inventory plugin specifics.
Inventory plugins normally only execute at the start of a run, before playbooks/plays and roles are loaded,
but they can be 're-executed' via the ``meta: refresh_inventory`` task, which will clear out the existing inventory and rebuild it.
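For example, the following illustrative play forces all inventory plugins to run again mid-playbook:

```yaml
# illustrative play: clear out and rebuild the inventory mid-run
- hosts: localhost
  gather_facts: false
  tasks:
    - name: re-run all inventory plugins
      meta: refresh_inventory
```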
When using the 'persistent' cache, inventory plugins can also use the configured cache plugin to store and retrieve data to avoid costly external calls.
.. _developing_an_inventory_plugin:
Developing an inventory plugin
------------------------------
The first thing you want to do is use the base class:
.. code-block:: python
from ansible.plugins.inventory import BaseInventoryPlugin
class InventoryModule(BaseInventoryPlugin):
NAME = 'myplugin'  # used internally by Ansible; it should match the file name, but this is not required
This class has a couple of methods each plugin should implement and a few helpers for parsing the inventory source and updating the inventory.
After you have the basic plugin working, you might want to incorporate other features by adding more base classes:
.. code-block:: python
from ansible.plugins.inventory import BaseInventoryPlugin, Constructable, Cacheable
class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
NAME = 'myplugin'
For the bulk of the work in the plugin, we mostly deal with two methods: ``verify_file`` and ``parse``.
.. _inventory_plugin_verify_file:
verify_file
^^^^^^^^^^^
This method is used by Ansible to make a quick determination if the inventory source is usable by the plugin. It does not need to be 100% accurate as there might be overlap in what plugins can handle and Ansible will try the enabled plugins (in order) by default.
.. code-block:: python
def verify_file(self, path):
''' return true/false if this is possibly a valid file for this plugin to consume '''
valid = False
if super(InventoryModule, self).verify_file(path):
# base class verifies that file exists and is readable by current user
if path.endswith(('virtualbox.yaml', 'virtualbox.yml', 'vbox.yaml', 'vbox.yml')):
valid = True
return valid
In this case, from the :ref:`virtualbox inventory plugin <virtualbox_inventory>`, we screen for specific file name patterns to avoid attempting to consume just any valid YAML file. You can add any type of condition here, but the most common one is 'extension matching'. If you implement extension matching for YAML configuration files, the path suffix ``<plugin_name>.<yml|yaml>`` should be accepted. All valid extensions should be documented in the plugin description.
Here is another example that does not use a 'file', but the inventory source string itself,
from the :ref:`host list <host_list_inventory>` plugin:
.. code-block:: python
def verify_file(self, path):
''' don't call base class as we don't expect a path, but a host list '''
host_list = path
valid = False
b_path = to_bytes(host_list, errors='surrogate_or_strict')
if not os.path.exists(b_path) and ',' in host_list:
# the path does NOT exist and there is a comma to indicate this is a 'host list'
valid = True
return valid
This method is just to expedite the inventory process and avoid unnecessary parsing of sources that are easy to filter out before causing a parse error.
.. _inventory_plugin_parse:
parse
^^^^^
This method does the bulk of the work in the plugin.
It takes the following parameters:
* inventory: inventory object with existing data and the methods to add hosts/groups/variables to inventory
* loader: Ansible's DataLoader. The DataLoader can read files, auto load JSON/YAML and decrypt vaulted data, and cache read files.
* path: string with inventory source (this is usually a path, but is not required)
* cache: indicates whether the plugin should use or avoid caches (cache plugin and/or loader)
The base class does some minimal assignment for reuse in other methods.
.. code-block:: python
def parse(self, inventory, loader, path, cache=True):
self.loader = loader
self.inventory = inventory
self.templar = Templar(loader=loader)
It is up to the plugin now to deal with the inventory source provided and translate that into the Ansible inventory.
To facilitate this, the example below uses a few helper functions:
.. code-block:: python
NAME = 'myplugin'
def parse(self, inventory, loader, path, cache=True):
# call base method to ensure properties are available for use with other helper methods
super(InventoryModule, self).parse(inventory, loader, path, cache)
# this method will parse 'common format' inventory sources and
# update any options declared in DOCUMENTATION as needed
config = self._read_config_data(path)
# if NOT using _read_config_data you should call set_options directly,
# to process any defined configuration for this plugin,
# if you don't define any options you can skip
#self.set_options()
# example consuming options from inventory source
mysession = apilib.session(user=self.get_option('api_user'),
password=self.get_option('api_pass'),
server=self.get_option('api_server')
)
# make requests to get data to feed into inventory
mydata = mysession.getitall()
#parse data and create inventory objects:
for colo in mydata:
for server in mydata[colo]['servers']:
self.inventory.add_host(server['name'])
self.inventory.set_variable(server['name'], 'ansible_host', server['external_ip'])
The specifics will vary depending on the API and the structure returned. One thing to keep in mind: if the inventory source is invalid or any other issue crops up, you should ``raise AnsibleParserError`` to let Ansible know that the source was invalid or the process failed.
For examples on how to implement an inventory plugin, see the source code here:
`lib/ansible/plugins/inventory <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/inventory>`_.
.. _inventory_plugin_caching:
inventory cache
^^^^^^^^^^^^^^^
Extend the inventory plugin documentation with the inventory_cache documentation fragment and use the Cacheable base class to have the caching system at your disposal.
.. code-block:: yaml
extends_documentation_fragment:
- inventory_cache
.. code-block:: python
class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
NAME = 'myplugin'
Next, load the cache plugin specified by the user to read from and update the cache. If your inventory plugin uses YAML based configuration files and the ``_read_config_data`` method, the cache plugin is loaded within that method. If your inventory plugin does not use ``_read_config_data``, you must load the cache explicitly with ``load_cache_plugin``.
.. code-block:: python
NAME = 'myplugin'
def parse(self, inventory, loader, path, cache=True):
super(InventoryModule, self).parse(inventory, loader, path)
self.load_cache_plugin()
Before using the cache, retrieve a unique cache key using the ``get_cache_key`` method. This needs to be done by all inventory modules using the cache, so you don't use/overwrite other parts of the cache.
.. code-block:: python
def parse(self, inventory, loader, path, cache=True):
super(InventoryModule, self).parse(inventory, loader, path)
self.load_cache_plugin()
cache_key = self.get_cache_key(path)
Now that you've enabled caching, loaded the correct plugin, and retrieved a unique cache key, you can set up the flow of data between the cache and your inventory using the ``cache`` parameter of the ``parse`` method. This value comes from the inventory manager and indicates whether the inventory is being refreshed (such as via ``--flush-cache`` or the meta task ``refresh_inventory``). Although the cache shouldn't be used to populate the inventory when being refreshed, the cache should be updated with the new inventory if the user has enabled caching. You can use ``self._cache`` like a dictionary. The following pattern allows refreshing the inventory to work in conjunction with caching.
.. code-block:: python
def parse(self, inventory, loader, path, cache=True):
super(InventoryModule, self).parse(inventory, loader, path)
self.load_cache_plugin()
cache_key = self.get_cache_key(path)
# cache may be True or False at this point to indicate if the inventory is being refreshed
# get the user's cache option too to see if we should save the cache if it is changing
user_cache_setting = self.get_option('cache')
# read if the user has caching enabled and the cache isn't being refreshed
attempt_to_read_cache = user_cache_setting and cache
# update if the user has caching enabled and the cache is being refreshed; update this value to True if the cache has expired below
cache_needs_update = user_cache_setting and not cache
# attempt to read the cache if inventory isn't being refreshed and the user has caching enabled
if attempt_to_read_cache:
try:
results = self._cache[cache_key]
except KeyError:
# This occurs if the cache_key is not in the cache or if the cache_key expired, so the cache needs to be updated
cache_needs_update = True
if cache_needs_update:
results = self.get_inventory()
# set the cache
self._cache[cache_key] = results
self.populate(results)
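Note that ``populate`` above is not provided by the base class; it is a helper you write yourself. A minimal sketch, shown as a standalone function for clarity (in the plugin it would be a method of ``InventoryModule``), assuming ``results`` has the same shape as ``mydata`` in the earlier example — group names mapping to dictionaries with a ``servers`` list:

```python
# Hypothetical helper, not part of any Ansible base class: translate
# fetched results into inventory groups, hosts, and variables.
def populate(self, results):
    for colo in results:
        self.inventory.add_group(colo)
        for server in results[colo]['servers']:
            self.inventory.add_host(server['name'], group=colo)
            self.inventory.set_variable(server['name'], 'ansible_host', server['external_ip'])
```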
After the ``parse`` method is complete, the contents of ``self._cache`` are used to set the cache plugin if the contents of the cache have changed.
You have three other cache methods available:
- ``set_cache_plugin`` forces the cache plugin to be set with the contents of ``self._cache`` before the ``parse`` method completes
- ``update_cache_if_changed`` sets the cache plugin only if ``self._cache`` has been modified before the ``parse`` method completes
- ``clear_cache`` deletes the keys in ``self._cache`` from your cache plugin
.. _inventory_source_common_format:
Inventory source common format
------------------------------
To simplify development, most plugins use a standard YAML-based configuration file as the inventory source. The file has just one required field, ``plugin``, which should contain the name of the plugin that is expected to consume it.
Depending on other common features used, other fields might be needed, but each plugin can also add its own custom options as needed.
For example, if you use the integrated caching, ``cache_plugin``, ``cache_timeout`` and other cache related fields could be present.
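For illustration, a configuration file for the hypothetical ``myplugin`` plugin above, using the common format plus the integrated cache options, might look like this (the ``api_*`` option names are made up for the example):

```yaml
# myplugin.yml - hypothetical inventory source in the common YAML format
plugin: myplugin
api_user: admin
api_server: https://api.example.com
cache: true
cache_plugin: jsonfile
cache_timeout: 7200
```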
.. _inventory_development_auto:
The 'auto' plugin
-----------------
Since Ansible 2.5, we include the :ref:`auto inventory plugin <auto_inventory>`, enabled by default, which loads other plugins if they use the common YAML configuration format with a ``plugin`` field that matches an inventory plugin name. This makes it easier to use your plugin without having to update configurations.
.. _inventory_scripts:
.. _developing_inventory_scripts:
Inventory scripts
=================
Even though we now have inventory plugins, we still support inventory scripts, not only for backwards compatibility but also to allow users to leverage other programming languages.
.. _inventory_script_conventions:
Inventory script conventions
----------------------------
Inventory scripts must accept the ``--list`` and ``--host <hostname>`` arguments. Other arguments are allowed,
but Ansible will not use them; they might still be useful when executing the scripts directly.
When the script is called with the single argument ``--list``, the script must output to stdout a JSON-encoded hash or
dictionary containing all of the groups to be managed.
Each group's value should be either a hash or dictionary containing a list of each host, any child groups,
and potential group variables, or simply a list of hosts::
{
"group001": {
"hosts": ["host001", "host002"],
"vars": {
"var1": true
},
"children": ["group002"]
},
"group002": {
"hosts": ["host003","host004"],
"vars": {
"var2": 500
},
"children":[]
}
}
If any of the elements of a group are empty they may be omitted from the output.
When called with the argument ``--host <hostname>`` (where <hostname> is a host from above), the script must print either an empty JSON hash/dictionary, or a hash/dictionary of variables to make available to templates and playbooks. For example::
{
"VAR001": "VALUE",
"VAR002": "VALUE"
}
Printing variables is optional. If the script does not do this, it should print an empty hash or dictionary.
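Putting these conventions together, a minimal illustrative script (with static data standing in for a real external source) could look like this:

```python
#!/usr/bin/env python
# Minimal illustrative inventory script implementing the --list/--host
# conventions described above; static data stands in for a real source.
import json
import sys

INVENTORY = {
    "group001": {
        "hosts": ["host001", "host002"],
        "vars": {"var1": True},
        "children": ["group002"],
    },
    "group002": {"hosts": ["host003", "host004"]},
}

HOSTVARS = {"host001": {"var001": "value"}}


def main(args):
    if "--list" in args:
        print(json.dumps(INVENTORY))
    elif "--host" in args:
        hostname = args[args.index("--host") + 1]
        # print an empty hash for hosts with no specific variables
        print(json.dumps(HOSTVARS.get(hostname, {})))
    else:
        sys.exit("usage: --list | --host <hostname>")


if __name__ == "__main__":
    main(sys.argv[1:])
```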
.. _inventory_script_tuning:
Tuning the external inventory script
------------------------------------
.. versionadded:: 1.3
The stock inventory script system detailed above works for all versions of Ansible,
but calling ``--host`` for every host can be rather inefficient,
especially if it involves API calls to a remote subsystem.
To avoid this inefficiency, if the inventory script returns a top level element called "_meta",
it is possible to return all of the host variables in one script execution.
When this meta element contains a value for "hostvars",
the inventory script will not be invoked with ``--host`` for each host.
This results in a significant performance increase for large numbers of hosts.
The data to be added to the top level JSON dictionary looks like this::
{
# results of inventory script as above go here
# ...
"_meta": {
"hostvars": {
"host001": {
"var001" : "value"
},
"host002": {
"var002": "value"
}
}
}
}
To satisfy the requirements of using ``_meta`` and prevent Ansible from calling your inventory script with ``--host`` for each host, you must at least populate ``_meta`` with an empty ``hostvars`` dictionary.
For example::
{
# results of inventory script as above go here
# ...
"_meta": {
"hostvars": {}
}
}
.. _replacing_inventory_ini_with_dynamic_provider:
If you intend to replace an existing static inventory file with an inventory script,
it must return a JSON object which contains an 'all' group that includes every
host in the inventory as a member and every group in the inventory as a child.
It should also include an 'ungrouped' group which contains all hosts which are not members of any other group.
A skeleton example of this JSON object is:
.. code-block:: json
{
"_meta": {
"hostvars": {}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {
"children": [
]
}
}
An easy way to see how this should look is using :ref:`ansible-inventory`, which also supports ``--list`` and ``--host`` parameters like an inventory script would.
.. seealso::
:ref:`developing_api`
Python API to Playbooks and Ad Hoc Task Execution
:ref:`developing_modules_general`
Get started with developing a module
:ref:`developing_plugins`
How to develop plugins
`Ansible Tower <https://www.ansible.com/products/tower>`_
REST API endpoint and GUI for Ansible, syncs with dynamic inventory
`Development Mailing List <https://groups.google.com/group/ansible-devel>`_
Mailing list for development topics
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,231 |
Unclear how to use inventory plugins in collections
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I've been unable to get inventory plugins working inside of collections. The inventory plugin in ansible/ansible always takes precedence.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
collections
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
I've got a inventory.gcp.yml file that I run with `ansible-inventory --list -i inventory.gcp.yml`. I've got a collection called google.cloud installed into my collections path. Here are the various things I've tried.
```yaml
plugin: gcp_compute
...
```
Result: Runs the ansible/ansible inventory plugin.
```yaml
collections:
- google.cloud
plugin: gcp_compute
...
```
Result: Runs the ansible/ansible inventory plugin
```yaml
plugin: google.cloud.gcp_compute
...
```
Result: Plugin not found, errors out.
(I've also tried with `google.cloud.plugins.inventory.gcp_compute` and `google.cloud.inventory.gcp_compute` to no avail)
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/62231
|
https://github.com/ansible/ansible/pull/62465
|
de515522b69e09f3c274068f48e69187702139c8
|
d41050b28b2d3e8d4a97ce13344b04fc87f703a2
| 2019-09-12T22:44:12Z |
python
| 2019-09-26T16:00:26Z |
docs/docsite/rst/dev_guide/developing_plugins.rst
|
.. _developing_plugins:
.. _plugin_guidelines:
******************
Developing plugins
******************
.. contents::
:local:
Plugins augment Ansible's core functionality with logic and features that are accessible to all modules. Ansible ships with a number of handy plugins, and you can easily write your own. All plugins must:
* be written in Python
* raise errors
* return strings in unicode
* conform to Ansible's configuration and documentation standards
Once you've reviewed these general guidelines, you can skip to the particular type of plugin you want to develop.
Writing plugins in Python
=========================
You must write your plugin in Python so it can be loaded by the ``PluginLoader`` and returned as a Python object that any module can use. Since your plugin will execute on the controller, you must write it in a :ref:`compatible version of Python <control_node_requirements>`.
Raising errors
==============
You should return errors encountered during plugin execution by raising ``AnsibleError()`` or a similar class with a message describing the error. When wrapping other exceptions into error messages, you should always use the ``to_native`` Ansible function to ensure proper string compatibility across Python versions:
.. code-block:: python
from ansible.module_utils._text import to_native
try:
cause_an_exception()
except Exception as e:
raise AnsibleError('Something happened, this was original exception: %s' % to_native(e))
Check the different `AnsibleError objects <https://github.com/ansible/ansible/blob/devel/lib/ansible/errors/__init__.py>`_ and see which one applies best to your situation.
String encoding
===============
You must convert any strings returned by your plugin into Python's unicode type. Converting to unicode ensures that these strings can run through Jinja2. To convert strings:
.. code-block:: python
from ansible.module_utils._text import to_text
result_string = to_text(result_string)
Plugin configuration & documentation standards
==============================================
To define configurable options for your plugin, describe them in the ``DOCUMENTATION`` section of the Python file. Callback and connection plugins have declared configuration requirements this way since Ansible version 2.4; most plugin types now do the same. This approach ensures that the documentation of your plugin's options will always be correct and up-to-date. To add a configurable option to your plugin, define it in this format:
.. code-block:: yaml
options:
option_name:
description: describe this config option
default: default value for this config option
env:
- name: NAME_OF_ENV_VAR
ini:
- section: section_of_ansible.cfg_where_this_config_option_is_defined
key: key_used_in_ansible.cfg
required: True/False
type: boolean/float/integer/list/none/path/pathlist/pathspec/string/tmppath
version_added: X.x
To access the configuration settings in your plugin, use ``self.get_option(<option_name>)``. For most plugin types, the controller pre-populates the settings. If you need to populate settings explicitly, use a ``self.set_options()`` call.
Plugins that support embedded documentation (see :ref:`ansible-doc` for the list) must include well-formed doc strings to be considered for merge into the Ansible repo. If you inherit from a plugin, you must document the options it takes, either via a documentation fragment or as a copy. See :ref:`module_documenting` for more information on correct documentation. Thorough documentation is a good idea even if you're developing a plugin for local use.
Developing particular plugin types
==================================
.. _developing_actions:
Action plugins
--------------
Action plugins let you integrate local processing and local data with module functionality.
To create an action plugin, create a new class with the Base(ActionBase) class as the parent:
.. code-block:: python
from ansible.plugins.action import ActionBase
class ActionModule(ActionBase):
pass
From there, execute the module using the ``_execute_module`` method to call the original module.
After successful execution of the module, you can modify the module return data.
.. code-block:: python
module_return = self._execute_module(module_name='<NAME_OF_MODULE>',
module_args=module_args,
task_vars=task_vars, tmp=tmp)
For example, if you wanted to check the time difference between your Ansible controller and your target machine(s), you could write an action plugin to check the local time and compare it to the return data from Ansible's ``setup`` module:
.. code-block:: python
#!/usr/bin/python
# Make coding more python3-ish, this is required for contributions to Ansible
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.plugins.action import ActionBase
from datetime import datetime
class ActionModule(ActionBase):
def run(self, tmp=None, task_vars=None):
super(ActionModule, self).run(tmp, task_vars)
module_args = self._task.args.copy()
module_return = self._execute_module(module_name='setup',
module_args=module_args,
task_vars=task_vars, tmp=tmp)
ret = dict()
remote_date = None
if not module_return.get('failed'):
for key, value in module_return['ansible_facts'].items():
if key == 'ansible_date_time':
remote_date = value['iso8601']
if remote_date:
remote_date_obj = datetime.strptime(remote_date, '%Y-%m-%dT%H:%M:%SZ')
time_delta = datetime.now() - remote_date_obj
ret['delta_seconds'] = time_delta.seconds
ret['delta_days'] = time_delta.days
ret['delta_microseconds'] = time_delta.microseconds
return dict(ansible_facts=dict(ret))
This code checks the time on the controller, captures the date and time for the remote machine using the ``setup`` module, and calculates the difference between the captured time and
the local time, returning the time delta in days, seconds and microseconds.
For practical examples of action plugins,
see the source code for the `action plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/action>`_
.. _developing_cache_plugins:
Cache plugins
-------------
Cache plugins store gathered facts and data retrieved by inventory plugins.
Import cache plugins using the cache_loader so you can use ``self.set_options()`` and ``self.get_option(<option_name>)``. If you import a cache plugin directly in the code base, you can only access options via ``ansible.constants``, and you break the cache plugin's ability to be used by an inventory plugin.
.. code-block:: python
from ansible.plugins.loader import cache_loader
[...]
plugin = cache_loader.get('custom_cache', **cache_kwargs)
There are two base classes for cache plugins, ``BaseCacheModule`` for database-backed caches, and ``BaseCacheFileModule`` for file-backed caches.
To create a cache plugin, start by creating a new ``CacheModule`` class with the appropriate base class. If you're creating a plugin with an ``__init__`` method, you should initialize the base class with any provided args and kwargs to be compatible with inventory plugin cache options. The base class calls ``self.set_options(direct=kwargs)``. After the base class ``__init__`` method is called, use ``self.get_option(<option_name>)`` to access cache options.
New cache plugins should take the options ``_uri``, ``_prefix``, and ``_timeout`` to be consistent with existing cache plugins.
.. code-block:: python
from ansible.plugins.cache import BaseCacheModule
class CacheModule(BaseCacheModule):
def __init__(self, *args, **kwargs):
super(CacheModule, self).__init__(*args, **kwargs)
self._connection = self.get_option('_uri')
self._prefix = self.get_option('_prefix')
self._timeout = self.get_option('_timeout')
If you use the ``BaseCacheModule``, you must implement the methods ``get``, ``contains``, ``keys``, ``set``, ``delete``, ``flush``, and ``copy``. The ``contains`` method should return a boolean that indicates if the key exists and has not expired. Unlike file-based caches, the ``get`` method does not raise a KeyError if the cache has expired.
If you use the ``BaseFileCacheModule``, you must implement ``_load`` and ``_dump`` methods that will be called from the base class methods ``get`` and ``set``.
If your cache plugin stores JSON, use ``AnsibleJSONEncoder`` in the ``_dump`` or ``set`` method and ``AnsibleJSONDecoder`` in the ``_load`` or ``get`` method.
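The split between the base class methods and ``_load``/``_dump`` can be sketched in plain Python. This deliberately omits Ansible's base classes, option handling, and expiry logic, and uses the standard ``json`` module instead of ``AnsibleJSONEncoder``:

```python
import json


class SketchFileCache:
    """Plain-Python sketch of a file-backed cache: get/set decide *when*
    to touch disk, while _load/_dump decide *how* data is serialized."""

    def get(self, filepath):
        return self._load(filepath)

    def set(self, filepath, data):
        self._dump(data, filepath)

    def _load(self, filepath):
        # subclass responsibility in the real base class: deserialize
        with open(filepath) as f:
            return json.load(f)

    def _dump(self, data, filepath):
        # subclass responsibility in the real base class: serialize
        with open(filepath, 'w') as f:
            json.dump(data, f)
```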
For example cache plugins, see the source code for the `cache plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/cache>`_.
.. _developing_callbacks:
Callback plugins
----------------
Callback plugins add new behaviors to Ansible when responding to events. By default, callback plugins control most of the output you see when running the command line programs.
To create a callback plugin, create a new class with the Base(Callbacks) class as the parent:
.. code-block:: python
from ansible.plugins.callback import CallbackBase
class CallbackModule(CallbackBase):
pass
From there, override the specific methods from the CallbackBase that you want to provide a callback for.
For plugins intended for use with Ansible version 2.0 and later, you should only override methods that start with ``v2``.
For a complete list of methods that you can override, please see ``__init__.py`` in the
`lib/ansible/plugins/callback <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/callback>`_ directory.
The following is a modified example of how Ansible's timer plugin is implemented,
but with an extra option so you can see how configuration works in Ansible version 2.4 and later:
.. code-block:: python
# Make coding more python3-ish, this is required for contributions to Ansible
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# not only visible to ansible-doc, it also 'declares' the options the plugin requires and how to configure them.
DOCUMENTATION = '''
callback: timer
callback_type: aggregate
requirements:
- whitelist in configuration
short_description: Adds time to play stats
version_added: "2.0"
description:
- This callback just adds total play duration to the play stats.
options:
format_string:
description: format of the string shown to user at play end
ini:
- section: callback_timer
key: format_string
env:
- name: ANSIBLE_CALLBACK_TIMER_FORMAT
default: "Playbook run took %s days, %s hours, %s minutes, %s seconds"
'''
from datetime import datetime
from ansible.plugins.callback import CallbackBase
class CallbackModule(CallbackBase):
"""
This callback module tells you how long your plays ran for.
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'aggregate'
CALLBACK_NAME = 'timer'
# only needed if you ship it and don't want to enable by default
CALLBACK_NEEDS_WHITELIST = True
def __init__(self):
# make sure the expected objects are present, calling the base's __init__
super(CallbackModule, self).__init__()
# start the timer when the plugin is loaded, the first play should start a few milliseconds after.
self.start_time = datetime.now()
def _days_hours_minutes_seconds(self, runtime):
''' internal helper method for this callback '''
minutes = (runtime.seconds // 60) % 60
r_seconds = runtime.seconds - (minutes * 60)
return runtime.days, runtime.seconds // 3600, minutes, r_seconds
# this is the only event we care about for display, when the play shows its summary stats; the rest are ignored by the base class
def v2_playbook_on_stats(self, stats):
end_time = datetime.now()
runtime = end_time - self.start_time
# Shows the usage of a config option declared in the DOCUMENTATION variable. Ansible will have set it when it loads the plugin.
# Also note the use of the display object to print to screen. This is available to all callbacks, and you should use this over printing yourself
self._display.display(self._plugin_options['format_string'] % (self._days_hours_minutes_seconds(runtime)))
Note that the ``CALLBACK_VERSION`` and ``CALLBACK_NAME`` definitions are required for properly functioning plugins for Ansible version 2.0 and later. ``CALLBACK_TYPE`` is mostly needed to distinguish 'stdout' plugins from the rest, since you can only load one plugin that writes to stdout.
For example callback plugins, see the source code for the `callback plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/callback>`_
.. _developing_connection_plugins:
Connection plugins
------------------
Connection plugins allow Ansible to connect to the target hosts so it can execute tasks on them. Ansible ships with many connection plugins, but only one can be used per host at a time. The most commonly used connection plugins are the ``paramiko`` SSH, native ssh (just called ``ssh``), and ``local`` connection types. All of these can be used in playbooks and with ``/usr/bin/ansible`` to connect to remote machines.
Ansible version 2.1 introduced the ``smart`` connection plugin. The ``smart`` connection type allows Ansible to automatically select either the ``paramiko`` or ``openssh`` connection plugin based on system capabilities, or the ``ssh`` connection plugin if OpenSSH supports ControlPersist.
To create a new connection plugin (for example, to support SNMP, Message bus, or other transports), copy the format of one of the existing connection plugins and drop it into ``connection`` directory on your :ref:`local plugin path <local_plugins>`.
For example connection plugins, see the source code for the `connection plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/connection>`_.
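At its core, a connection plugin implements three operations against the target: run a command, push a file, and fetch a file. The local-only sketch below shows those three operations using ``subprocess`` and plain file copies; the class and transport names are invented for illustration, and a real plugin subclasses ``ConnectionBase`` from ``ansible.plugins.connection`` rather than standing alone like this.

```python
# Local-only sketch of the three operations a connection plugin provides;
# a real plugin subclasses ConnectionBase from ansible.plugins.connection.
import shutil
import subprocess


class LocalSketchConnection(object):
    transport = 'local_sketch'  # invented transport name for this sketch

    def exec_command(self, cmd):
        """Run a command on the 'target' and return (rc, stdout, stderr)."""
        proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return proc.returncode, proc.stdout, proc.stderr

    def put_file(self, in_path, out_path):
        """Copy a file from the controller to the target."""
        shutil.copyfile(in_path, out_path)

    def fetch_file(self, in_path, out_path):
        """Copy a file from the target back to the controller."""
        shutil.copyfile(in_path, out_path)
```

A real transport (SNMP, a message bus) would replace the bodies of these methods with its own wire protocol while keeping the same method signatures.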
.. _developing_filter_plugins:
Filter plugins
--------------
Filter plugins manipulate data. They are a feature of Jinja2 and are also available in Jinja2 templates used by the ``template`` module. As with all plugins, they can be easily extended, but instead of having a file for each one you can have several per file. Most of the filter plugins shipped with Ansible reside in ``core.py``.
Filter plugins do not use the standard configuration and documentation system described above.
For example filter plugins, see the source code for the `filter plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/filter>`_.
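A minimal filter plugin is just a file exposing a ``FilterModule`` class whose ``filters`` method maps filter names to callables. The sketch below uses a hypothetical ``to_snake_case`` filter (not one shipped with Ansible) to show the shape:

```python
# filter_plugins/snake_case.py -- hypothetical example, not part of Ansible
import re


def to_snake_case(value):
    """Convert CamelCase or mixedCase strings to snake_case."""
    s = re.sub(r'(.)([A-Z][a-z]+)', r'\1_\2', value)
    return re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', s).lower()


class FilterModule(object):
    """Ansible discovers filters through this class's filters() mapping."""

    def filters(self):
        return {'to_snake_case': to_snake_case}
```

Dropped into a ``filter_plugins`` directory next to a play, it could then be used in a template as ``{{ 'MyHostName' | to_snake_case }}``.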
.. _developing_inventory_plugins:
Inventory plugins
-----------------
Inventory plugins parse inventory sources and form an in-memory representation of the inventory. Inventory plugins were added in Ansible version 2.4.
You can see the details for inventory plugins in the :ref:`developing_inventory` page.
.. _developing_lookup_plugins:
Lookup plugins
--------------
Lookup plugins pull in data from external data stores. Lookup plugins can be used within playbooks both for looping --- playbook language constructs like ``with_fileglob`` and ``with_items`` are implemented via lookup plugins --- and to return values into a variable or parameter.
Lookup plugins are very flexible, allowing you to retrieve and return any type of data. When writing lookup plugins, always return data of a consistent type that can be easily consumed in a playbook. Avoid parameters that change the returned data type. If there is a need to return a single value sometimes and a complex dictionary other times, write two different lookup plugins.
Ansible includes many :ref:`filters <playbooks_filters>` which can be used to manipulate the data returned by a lookup plugin. Sometimes it makes sense to do the filtering inside the lookup plugin, other times it is better to return results that can be filtered in the playbook. Keep in mind how the data will be referenced when determining the appropriate level of filtering to be done inside the lookup plugin.
Here's a simple lookup plugin implementation --- this lookup returns the contents of a text file as a variable:
.. code-block:: python
# python 3 headers, required if submitting to Ansible
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
    lookup: file
    author: Daniel Hokka Zakrisson <[email protected]>
    version_added: "0.9"
    short_description: read file contents
    description:
      - This lookup returns the contents from a file on the Ansible controller's file system.
    options:
      _terms:
        description: path(s) of files to read
        required: True
    notes:
      - if read in variable context, the file can be interpreted as YAML if the content is valid to the parser.
      - this lookup does not understand globbing --- use the fileglob lookup instead.
"""
from ansible.errors import AnsibleError, AnsibleParserError
from ansible.plugins.lookup import LookupBase
from ansible.utils.display import Display
display = Display()
class LookupModule(LookupBase):

    def run(self, terms, variables=None, **kwargs):

        # lookups in general are expected to both take a list as input and output a list
        # this is done so they work with the looping construct 'with_'.
        ret = []
        for term in terms:
            display.debug("File lookup term: %s" % term)

            # Find the file in the expected search path, using a class method
            # that implements the 'expected' search path for Ansible plugins.
            lookupfile = self.find_file_in_search_path(variables, 'files', term)

            # Don't use print or your own logging, the display class
            # takes care of it in a unified way.
            display.vvvv(u"File lookup using %s as file" % lookupfile)
            try:
                if lookupfile:
                    contents, show_data = self._loader._get_file_contents(lookupfile)
                    ret.append(contents.rstrip())
                else:
                    # Always use ansible error classes to throw 'final' exceptions,
                    # so the Ansible engine will know how to deal with them.
                    # The Parser error indicates invalid options passed
                    raise AnsibleParserError()
            except AnsibleParserError:
                raise AnsibleError("could not locate file in lookup: %s" % term)

        return ret
The following is an example of how this lookup is called::
    ---
    - hosts: all
      vars:
        contents: "{{ lookup('file', '/etc/foo.txt') }}"
      tasks:
        - debug:
            msg: the value of foo.txt is {{ contents }} as seen today {{ lookup('pipe', 'date +"%Y-%m-%d"') }}
For example lookup plugins, see the source code for the `lookup plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/lookup>`_.
For more usage examples of lookup plugins, see :ref:`Using Lookups<playbooks_lookups>`.
.. _developing_test_plugins:
Test plugins
------------
Test plugins verify data. They are a feature of Jinja2 and are also available in Jinja2 templates used by the ``template`` module. As with all plugins, they can be easily extended, but instead of having a file for each one you can have several per file. Most of the test plugins shipped with Ansible reside in ``core.py``. These are especially useful in conjunction with some filter plugins like ``map`` and ``select``; they are also available for conditional directives like ``when:``.
Test plugins do not use the standard configuration and documentation system described above.
For example test plugins, see the source code for the `test plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/test>`_.
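A test plugin mirrors the filter plugin layout, except the class is called ``TestModule`` and its ``tests`` method returns the mapping. The sketch below defines a hypothetical ``ipv4`` test (names invented for illustration) using only the standard library:

```python
# test_plugins/ipv4.py -- hypothetical example, not part of Ansible
import ipaddress


def is_ipv4(value):
    """Return True if value parses as an IPv4 address."""
    try:
        ipaddress.IPv4Address(value)
        return True
    except (ValueError, TypeError):
        return False


class TestModule(object):
    """Ansible discovers tests through this class's tests() mapping."""

    def tests(self):
        return {'ipv4': is_ipv4}
```

In a playbook this could then drive a conditional such as ``when: ansible_host is ipv4``.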
.. _developing_vars_plugins:
Vars plugins
------------
Vars plugins inject additional variable data into Ansible runs that did not come from an inventory source, playbook, or command line. Playbook constructs like 'host_vars' and 'group_vars' work using vars plugins.
Vars plugins were partially implemented in Ansible 2.0 and rewritten to be fully implemented starting with Ansible 2.4.
Older plugins used a ``run`` method as their main body/work:
.. code-block:: python
def run(self, name, vault_password=None):
    pass  # your code goes here
Ansible 2.0 did not pass passwords to older plugins, so vaults were unavailable.
Most of the work now happens in the ``get_vars`` method which is called from the VariableManager when needed.
.. code-block:: python
def get_vars(self, loader, path, entities):
    pass  # your code goes here
The parameters are:
* loader: Ansible's DataLoader. The DataLoader can read files, auto-load JSON/YAML and decrypt vaulted data, and cache read files.
* path: this is 'directory data' for every inventory source and the current play's playbook directory, so they can search for data in reference to them. ``get_vars`` will be called at least once per available path.
* entities: these are host or group names that are pertinent to the variables needed. The plugin will get called once for hosts and again for groups.
The ``get_vars`` method just needs to return a dictionary structure with the variables.
Since Ansible version 2.4, vars plugins only execute as needed when preparing to execute a task. This avoids the costly 'always execute' behavior that occurred during inventory construction in older versions of Ansible.
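The core of a ``get_vars`` implementation is usually just: for each entity, look for a matching data file under the given path and merge whatever it contains. The standalone sketch below mirrors that logic with JSON files; the ``entity_vars`` helper name is invented for illustration, and a real plugin subclasses ``BaseVarsPlugin`` from ``ansible.plugins.vars`` and would typically use the loader to parse (possibly vaulted) YAML instead.

```python
# Illustrative sketch of the get_vars lookup logic; not a drop-in plugin.
import json
import os


def entity_vars(path, entities):
    """Collect variables for each entity from <path>/<entity>.json, if present."""
    data = {}
    for entity in entities:
        vars_file = os.path.join(path, '%s.json' % entity)
        if os.path.exists(vars_file):
            with open(vars_file) as f:
                data.update(json.load(f))
    return data
```

Because ``get_vars`` is called once per path and per batch of entities, keeping this lookup cheap (and caching file reads) matters for large inventories.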
For example vars plugins, see the source code for the `vars plugins included with Ansible Core
<https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/vars>`_.
.. seealso::
:ref:`all_modules`
List of all modules
:ref:`developing_api`
Learn about the Python API for task execution
:ref:`developing_inventory`
Learn about how to develop dynamic inventory sources
:ref:`developing_modules_general`
Learn about how to write Ansible modules
`Mailing List <https://groups.google.com/group/ansible-devel>`_
The development mailing list
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,231 |
Unclear how to use inventory plugins in collections
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
I've been unable to get inventory plugins working inside of collections. The inventory plugin in ansible/ansible always takes precedence.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
collections
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
I've got a inventory.gcp.yml file that I run with `ansible-inventory --list -i inventory.gcp.yml`. I've got a collection called google.cloud installed into my collections path. Here are the various things I've tried.
```yaml
plugin: gcp_compute
...
```
Result: Runs the ansible/ansible inventory plugin.
```yaml
collections:
- google.cloud
plugin: gcp_compute
...
```
Result: Runs the ansible/ansible inventory plugin
```yaml
plugin: google.cloud.gcp_compute
...
```
Result: Plugin not found, errors out.
(I've also tried with `google.cloud.plugins.inventory.gcp_compute` and `google.cloud.inventory.gcp_compute` to no avail)
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/62231
|
https://github.com/ansible/ansible/pull/62465
|
de515522b69e09f3c274068f48e69187702139c8
|
d41050b28b2d3e8d4a97ce13344b04fc87f703a2
| 2019-09-12T22:44:12Z |
python
| 2019-09-26T16:00:26Z |
docs/docsite/rst/plugins/cache.rst
|
.. _cache_plugins:
Cache Plugins
=============
.. contents::
:local:
:depth: 2
Cache plugins implement a backend caching mechanism that allows Ansible to store gathered facts or inventory source data
without the performance hit of retrieving them from source.
The default cache plugin is the :ref:`memory <memory_cache>` plugin, which only caches the data for the current execution of Ansible. Other plugins with persistent storage are available to allow caching the data across runs.
You can use a separate cache plugin for inventory and facts. If an inventory-specific cache plugin is not provided and inventory caching is enabled, the fact cache plugin is used for inventory.
.. _enabling_cache:
Enabling Fact Cache Plugins
---------------------------
Only one fact cache plugin can be active at a time.
You can enable a cache plugin in the Ansible configuration, either via environment variable:
.. code-block:: shell
export ANSIBLE_CACHE_PLUGIN=jsonfile
or in the ``ansible.cfg`` file:
.. code-block:: ini
[defaults]
fact_caching=redis
You will also need to configure other settings specific to each plugin. Consult the individual plugin documentation
or the Ansible :ref:`configuration <ansible_configuration_settings>` for more details.
A custom cache plugin is enabled by dropping it into a ``cache_plugins`` directory adjacent to your play, inside a role, or by putting it in one of the directory sources configured in :ref:`ansible.cfg <ansible_configuration_settings>`.
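A cache plugin is essentially a key/value store exposing a small, fixed interface (``get``, ``set``, ``keys``, ``contains``, ``delete``, ``flush``, ``copy``). The dictionary-backed sketch below shows only that interface shape; the class name is invented for illustration, and a real plugin subclasses ``BaseCacheModule`` from ``ansible.plugins.cache`` and adds a ``DOCUMENTATION`` variable so its options (connection path, timeout, and so on) are configurable.

```python
# Sketch of the interface a cache plugin implements; a real plugin
# subclasses BaseCacheModule from ansible.plugins.cache.
class MemorySketchCache(object):
    def __init__(self):
        self._cache = {}

    def get(self, key):
        return self._cache[key]

    def set(self, key, value):
        self._cache[key] = value

    def keys(self):
        return list(self._cache.keys())

    def contains(self, key):
        return key in self._cache

    def delete(self, key):
        del self._cache[key]

    def flush(self):
        self._cache = {}

    def copy(self):
        return self._cache.copy()
```

A persistent backend (Redis, a JSON file) would implement the same methods against its own storage instead of an in-memory dict.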
Enabling Inventory Cache Plugins
--------------------------------
Inventory may be cached using a file-based cache plugin (like jsonfile). Check the specific inventory plugin to see if it supports caching.
If an inventory-specific cache plugin is not specified Ansible will fall back to caching inventory with the fact cache plugin options.
The inventory cache is disabled by default. You may enable it via environment variable:
.. code-block:: shell
export ANSIBLE_INVENTORY_CACHE=True
or in the ``ansible.cfg`` file:
.. code-block:: ini
[inventory]
cache=True
or if the inventory plugin accepts a YAML configuration source, in the configuration file:
.. code-block:: yaml
# dev.aws_ec2.yaml
plugin: aws_ec2
cache: True
Similarly with fact cache plugins, only one inventory cache plugin can be active at a time and may be set via environment variable:
.. code-block:: shell
export ANSIBLE_INVENTORY_CACHE_PLUGIN=jsonfile
or in the ansible.cfg file:
.. code-block:: ini
[inventory]
cache_plugin=jsonfile
or if the inventory plugin accepts a YAML configuration source, in the configuration file:
.. code-block:: yaml
# dev.aws_ec2.yaml
plugin: aws_ec2
cache_plugin: jsonfile
Consult the individual inventory plugin documentation or the Ansible :ref:`configuration <ansible_configuration_settings>` for more details.
.. _using_cache:
Using Cache Plugins
-------------------
Cache plugins are used automatically once they are enabled.
.. _cache_plugin_list:
Plugin List
-----------
You can use ``ansible-doc -t cache -l`` to see the list of available plugins.
Use ``ansible-doc -t cache <plugin name>`` to see specific documentation and examples.
.. toctree:: :maxdepth: 1
:glob:
cache/*
.. seealso::
:ref:`action_plugins`
Ansible Action plugins
:ref:`callback_plugins`
Ansible callback plugins
:ref:`connection_plugins`
Ansible connection plugins
:ref:`inventory_plugins`
Ansible inventory plugins
:ref:`shell_plugins`
Ansible Shell plugins
:ref:`strategy_plugins`
Ansible Strategy plugins
:ref:`vars_plugins`
Ansible Vars plugins
`User Mailing List <https://groups.google.com/forum/#!forum/ansible-devel>`_
Have a question? Stop by the google group!
`webchat.freenode.net <https://webchat.freenode.net>`_
#ansible IRC chat channel
docs/docsite/rst/plugins/callback.rst
|
.. _callback_plugins:
Callback Plugins
================
.. contents::
:local:
:depth: 2
Callback plugins enable adding new behaviors to Ansible when responding to events.
By default, callback plugins control most of the output you see when running the command line programs,
but can also be used to add additional output, integrate with other tools and marshall the events to a storage backend.
.. _callback_examples:
Example callback plugins
------------------------
The :ref:`log_plays <log_plays_callback>` callback is an example of how to record playbook events to a log file,
and the :ref:`mail <mail_callback>` callback sends email on playbook failures.
The :ref:`say <say_callback>` callback responds with computer synthesized speech in relation to playbook events.
.. _enabling_callbacks:
Enabling callback plugins
-------------------------
You can activate a custom callback by either dropping it into a ``callback_plugins`` directory adjacent to your play, inside a role, or by putting it in one of the callback directory sources configured in :ref:`ansible.cfg <ansible_configuration_settings>`.
Plugins are loaded in alphanumeric order. For example, a plugin implemented in a file named ``1_first.py`` would run before a plugin file named ``2_second.py``.
Most callbacks shipped with Ansible are disabled by default and need to be whitelisted in your :ref:`ansible.cfg <ansible_configuration_settings>` file in order to function. For example:
.. code-block:: ini
#callback_whitelist = timer, mail, profile_roles
Setting a callback plugin for ``ansible-playbook``
--------------------------------------------------
You can only have one plugin be the main manager of your console output. If you want to replace the default, you should define CALLBACK_TYPE = stdout in the subclass and then configure the stdout plugin in :ref:`ansible.cfg <ansible_configuration_settings>`. For example:
.. code-block:: ini
stdout_callback = dense
or for my custom callback:
.. code-block:: ini
stdout_callback = mycallback
This only affects :ref:`ansible-playbook` by default.
Setting a callback plugin for ad-hoc commands
---------------------------------------------
The :ref:`ansible` ad hoc command specifically uses a different callback plugin for stdout,
so there is an extra setting in :ref:`ansible_configuration_settings` you need to add to use the stdout callback defined above:
.. code-block:: ini
[defaults]
bin_ansible_callbacks=True
You can also set this as an environment variable:
.. code-block:: shell
export ANSIBLE_LOAD_CALLBACK_PLUGINS=1
.. _callback_plugin_list:
Plugin list
-----------
You can use ``ansible-doc -t callback -l`` to see the list of available plugins.
Use ``ansible-doc -t callback <plugin name>`` to see specific documents and examples.
.. toctree:: :maxdepth: 1
:glob:
callback/*
.. seealso::
:ref:`action_plugins`
Ansible Action plugins
:ref:`cache_plugins`
Ansible cache plugins
:ref:`connection_plugins`
Ansible connection plugins
:ref:`inventory_plugins`
Ansible inventory plugins
:ref:`shell_plugins`
Ansible Shell plugins
:ref:`strategy_plugins`
Ansible Strategy plugins
:ref:`vars_plugins`
Ansible Vars plugins
`User Mailing List <https://groups.google.com/forum/#!forum/ansible-devel>`_
Have a question? Stop by the google group!
`webchat.freenode.net <https://webchat.freenode.net>`_
#ansible IRC chat channel
docs/docsite/rst/plugins/inventory.rst
|
.. _inventory_plugins:
Inventory Plugins
=================
.. contents::
:local:
:depth: 2
Inventory plugins allow users to point at data sources to compile the inventory of hosts that Ansible uses to target tasks, either via the ``-i /path/to/file`` and/or ``-i 'host1, host2'`` command line parameters or from other configuration sources.
.. _enabling_inventory:
Enabling inventory plugins
--------------------------
Most inventory plugins shipped with Ansible are disabled by default and need to be whitelisted in your
:ref:`ansible.cfg <ansible_configuration_settings>` file in order to function. This is how the default whitelist looks in the
config file that ships with Ansible:
.. code-block:: ini
[inventory]
enable_plugins = host_list, script, auto, yaml, ini, toml
This list also establishes the order in which each plugin tries to parse an inventory source. Any plugins left out of the list will not be considered, so you can 'optimize' your inventory loading by minimizing it to what you actually use. For example:
.. code-block:: ini
[inventory]
enable_plugins = advanced_host_list, constructed, yaml
.. _using_inventory:
Using inventory plugins
-----------------------
The only requirement for using an inventory plugin after it is enabled is to provide an inventory source to parse.
Ansible will try to use the list of enabled inventory plugins, in order, against each inventory source provided.
Once an inventory plugin succeeds at parsing a source, any remaining inventory plugins will be skipped for that source.
To start using an inventory plugin with a YAML configuration source, create a file with the accepted filename schema for the plugin in question, then add ``plugin: plugin_name``. Each plugin documents any naming restrictions. For example, the aws_ec2 inventory plugin has to end with ``aws_ec2.(yml|yaml)``
.. code-block:: yaml
# demo.aws_ec2.yml
plugin: aws_ec2
Or for the openstack plugin the file has to be called ``clouds.yml`` or ``openstack.(yml|yaml)``:
.. code-block:: yaml
# clouds.yml or openstack.(yml|yaml)
plugin: openstack
The ``auto`` inventory plugin is enabled by default and works by using the ``plugin`` field to indicate the plugin that should attempt to parse it. You can configure the whitelist and precedence of inventory plugins used to parse a source with the ``enable_plugins`` list in the ``[inventory]`` section of ``ansible.cfg``. After enabling the plugin and providing any required options, you can view the populated inventory with ``ansible-inventory -i demo.aws_ec2.yml --graph``:
.. code-block:: text
@all:
|--@aws_ec2:
| |--ec2-12-345-678-901.compute-1.amazonaws.com
| |--ec2-98-765-432-10.compute-1.amazonaws.com
|--@ungrouped:
You can set the default inventory path (via ``inventory`` in the ``[defaults]`` section of ``ansible.cfg`` or the :envvar:`ANSIBLE_INVENTORY` environment variable) to your inventory source(s). Now running ``ansible-inventory --graph`` should yield the same output as when you passed your YAML configuration source(s) directly. You can add custom inventory plugins to your plugin path to use in the same way.
Your inventory source might be a directory of inventory configuration files. The constructed inventory plugin only operates on those hosts already in inventory, so you may want the constructed inventory configuration parsed at a particular point (such as last). Ansible parses the directory recursively, alphabetically. You cannot configure the parsing approach, so name your files to make it work predictably. Inventory plugins that extend constructed features directly can work around that restriction by adding constructed options in addition to the inventory plugin options. Otherwise, you can use ``-i`` with multiple sources to impose a specific order, for example ``-i demo.aws_ec2.yml -i clouds.yml -i constructed.yml``.
You can create dynamic groups using host variables with the constructed ``keyed_groups`` option. The option ``groups`` can also be used to create groups and ``compose`` creates and modifies host variables. Here is an aws_ec2 example utilizing constructed features:
.. code-block:: yaml
    # demo.aws_ec2.yml
    plugin: aws_ec2
    regions:
      - us-east-1
      - us-east-2
    keyed_groups:
      # add hosts to tag_Name_value groups for each aws_ec2 host's tags.Name variable
      - key: tags.Name
        prefix: tag_Name_
        separator: ""
    groups:
      # add hosts to the group development if any of the dictionary's keys or values is the word 'devel'
      development: "'devel' in (tags|list)"
    compose:
      # set the ansible_host variable to connect with the private IP address without changing the hostname
      ansible_host: private_ip_address
Now the output of ``ansible-inventory -i demo.aws_ec2.yml --graph``:
.. code-block:: text
@all:
|--@aws_ec2:
| |--ec2-12-345-678-901.compute-1.amazonaws.com
| |--ec2-98-765-432-10.compute-1.amazonaws.com
| |--...
|--@development:
| |--ec2-12-345-678-901.compute-1.amazonaws.com
| |--ec2-98-765-432-10.compute-1.amazonaws.com
|--@tag_Name_ECS_Instance:
| |--ec2-98-765-432-10.compute-1.amazonaws.com
|--@tag_Name_Test_Server:
| |--ec2-12-345-678-901.compute-1.amazonaws.com
|--@ungrouped
If a host does not have the variables in the configuration above (i.e. ``tags.Name``, ``tags``, ``private_ip_address``), the host will not be added to groups other than those that the inventory plugin creates and the ``ansible_host`` host variable will not be modified.
If an inventory plugin supports caching, you can enable and set caching options for an individual YAML configuration source or for multiple inventory sources using environment variables or Ansible configuration files. If you enable caching for an inventory plugin without providing inventory-specific caching options, the inventory plugin will use fact-caching options. Here is an example of enabling caching for an individual YAML configuration file:
.. code-block:: yaml
# demo.aws_ec2.yml
plugin: aws_ec2
cache: yes
cache_plugin: jsonfile
cache_timeout: 7200
cache_connection: /tmp/aws_inventory
cache_prefix: aws_ec2
Here is an example of setting inventory caching with some fact caching defaults for the cache plugin used and the timeout in an ``ansible.cfg`` file:
.. code-block:: ini
[defaults]
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_facts
cache_timeout = 3600
[inventory]
cache = yes
cache_connection = /tmp/ansible_inventory
.. _inventory_plugin_list:
Plugin List
-----------
You can use ``ansible-doc -t inventory -l`` to see the list of available plugins.
Use ``ansible-doc -t inventory <plugin name>`` to see plugin-specific documentation and examples.
.. toctree:: :maxdepth: 1
:glob:
inventory/*
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`callback_plugins`
Ansible callback plugins
:ref:`connection_plugins`
Ansible connection plugins
:ref:`playbooks_filters`
Jinja2 filter plugins
:ref:`playbooks_tests`
Jinja2 test plugins
:ref:`playbooks_lookups`
Jinja2 lookup plugins
:ref:`vars_plugins`
Ansible vars plugins
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,224 |
Remove Latin phrases from the docs
|
##### SUMMARY
Latin words and phrases like `e.g.` or `etc.` are easily understood by native English speakers and speakers of European languages who speak some English. They are harder to understand for speakers of Asian languages who speak some English. They are also tricky for automated translation.
Add a row to the Style Guide cheat-sheet about avoiding Latin words and phrases.
Remove existing Latin words and phrases from the documentation.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
##### ANSIBLE VERSION
all
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/62224
|
https://github.com/ansible/ansible/pull/62419
|
8b9c2533b5d6a64de1f79015ac590c119d51a5d1
|
e7436e278f8945dd73b066c47280c1a17bc18ebe
| 2019-09-12T20:03:46Z |
python
| 2019-09-26T19:12:24Z |
docs/docsite/rst/community/committer_guidelines.rst
|
.. _community_committer_guidelines:
*********************
Committers Guidelines
*********************
These are the guidelines for people with commit privileges on the Ansible GitHub repository. Committers are essentially acting as members of the Ansible Core team, although not necessarily as employees of Ansible and Red Hat. Please read the guidelines before you commit.
These guidelines apply to everyone. At the same time, this ISN'T a process document. So just use good judgement. You've been given commit access because we trust your judgement.
That said, use the trust wisely.
If you abuse the trust and break components and builds, the trust level falls and you may be asked not to commit or you may lose your commit privileges.
Features, high-level design, and roadmap
========================================
As a core team member, you are an integral part of the team that develops the :ref:`roadmap <roadmaps>`. Please be engaged, and push for the features and fixes that you want to see. Also keep in mind that Red Hat, as a company, will commit to certain features, fixes, and APIs for various releases. Red Hat, the company, and the Ansible team must get these committed changes completed and released as scheduled. Obligations to users, the community, and customers must come first. Because of these commitments, a feature you want to develop yourself may not get into a release if it impacts a lot of other parts within Ansible.
Any other new features and changes to high level design should go through the proposal process (TBD), to ensure the community and core team have had a chance to review the idea and approve it. The core team has sole responsibility for merging new features based on proposals.
Our workflow on GitHub
======================
As a committer, you may already know this, but our workflow forms a lot of our team policies. Please ensure you're aware of the following workflow steps:
* Fork the repository upon which you want to do some work to your own personal repository
* Work on the specific branch upon which you need to commit
* Create a Pull Request back to the Ansible repository and tag the people you would like to review; assign someone as the primary "owner" of your request
* Adjust code as necessary based on the Comments provided
* Ask someone on the Core Team to do a final review and merge
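The workflow steps above can be sketched end to end. The following is a runnable simulation, not the real thing: two throwaway local repositories stand in for ``ansible/ansible`` (``upstream``) and your personal fork (``origin``), and the branch name is a placeholder.

```shell
set -e
workdir=$(mktemp -d)

# Stand-ins for ansible/ansible ("upstream") and your personal fork ("origin")
git init -q --bare "$workdir/upstream.git"
git init -q --bare "$workdir/origin.git"

# Seed upstream with a devel branch
git clone -q "$workdir/upstream.git" "$workdir/seed" 2>/dev/null
git -C "$workdir/seed" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git -C "$workdir/seed" push -q origin HEAD:devel

# Step 1: "fork", then clone your fork and wire up the upstream remote
git clone -q "$workdir/origin.git" "$workdir/work" 2>/dev/null
cd "$workdir/work"
git remote add upstream "$workdir/upstream.git"
git fetch -q upstream

# Step 2: work on a specific branch based on upstream's devel
git checkout -q -b fix/doc-typo upstream/devel
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "docs: fix a typo"

# Step 3: push the branch to your fork; the PR itself is opened on GitHub
git push -q origin fix/doc-typo
```

Reviews, tagging reviewers, and the final merge happen on GitHub itself, so they have no local equivalent in this sketch.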
Addendum to workflow for committers:
------------------------------------
The Core Team is aware that this can be a difficult process at times. Sometimes, the team breaks the rules by making direct commits or merging their own PRs. This section is a set of guidelines. If you're changing a comma in a doc, or making a very minor change, you can use your best judgement. This is another trust thing. The process is critical for any major change, but for little things or getting something done quickly, use your best judgement and make sure people on the team are aware of your work.
Roles on Core
=============
* Core committers: Fine to do PRs for most things, but we should have a timebox. Hanging PRs may merge on the judgement of these devs.
* :ref:`Module maintainers <maintainers>`: Module maintainers own specific modules and have indirect commit access via the current module PR mechanisms.
General rules
=============
Individuals with direct commit access to ansible/ansible are entrusted with powers that allow them to do a broad variety of things--probably more than we can write down. Rather than rules, treat these as general *guidelines*; individuals with this power are expected to use their best judgement.
* Don't
- Commit directly.
- Merge your own PRs. Someone else should have a chance to review and approve the PR merge. If you are a Core Committer, you have a small amount of leeway here for very minor changes.
- Forget about alternate environments. Consider the alternatives--yes, people have bad environments, but they are the ones who need us the most.
  - Drag your community team members down. Always discuss the technical merits, but you should never address the person's limitations (you can later go for beers and call them idiots, but not in IRC, GitHub, or other public channels).
- Forget about the maintenance burden. Some things are really cool to have, but they might not be worth shoehorning in if the maintenance burden is too great.
- Break playbooks. Always keep backwards compatibility in mind.
- Forget to keep it simple. Complexity breeds all kinds of problems.
* Do
- Squash, avoid merges whenever possible, use GitHub's squash commits or cherry pick if needed (bisect thanks you).
  - Be active. Committers who have no activity on the project (through merges, triage, or commits) will have their permissions suspended.
- Consider backwards compatibility (goes back to "don't break existing playbooks").
- Write tests. PRs with tests are looked at with more priority than PRs without tests that should have them included. While not all changes require tests, be sure to add them for bug fixes or functionality changes.
  - Discuss with other committers, especially when you are unsure of something.
  - Document! If your PR is a new feature or a change to behavior, make sure you've updated all associated documentation or have notified the right people to do so. It also helps to add the version of Core against which this documentation is compatible (to avoid confusion between stable and devel docs, and for backwards compatibility).
  - Consider scope; sometimes a fix can be generalized.
  - Keep it simple; simple things are maintainable, debuggable, and intelligible.
Committers are expected to continue to follow the same community and contribution guidelines followed by the rest of the Ansible community.
People
======
Individuals who've been asked to become a part of this group have generally been contributing in significant ways to the Ansible community for some time. Should they agree, they are requested to add their names and GitHub IDs to this file, in the section below, via a pull request. Doing so indicates that these individuals agree to act in the ways that their fellow committers trust that they will act.
+---------------------+----------------------+--------------------+----------------------+
| Name | GitHub ID | IRC Nick | Other |
+=====================+======================+====================+======================+
| James Cammarata | jimi-c | jimi | |
+---------------------+----------------------+--------------------+----------------------+
| Brian Coca | bcoca | bcoca | |
+---------------------+----------------------+--------------------+----------------------+
| Matt Davis | nitzmahone | nitzmahone | |
+---------------------+----------------------+--------------------+----------------------+
| Toshio Kuratomi | abadger | abadger1999 | |
+---------------------+----------------------+--------------------+----------------------+
| Jason McKerr | mckerrj | newtMcKerr | |
+---------------------+----------------------+--------------------+----------------------+
| Robyn Bergeron | robynbergeron | rbergeron | |
+---------------------+----------------------+--------------------+----------------------+
| Greg DeKoenigsberg | gregdek | gregdek | |
+---------------------+----------------------+--------------------+----------------------+
| Monty Taylor | emonty | mordred | |
+---------------------+----------------------+--------------------+----------------------+
| Matt Martz | sivel | sivel | |
+---------------------+----------------------+--------------------+----------------------+
| Nate Case | qalthos | Qalthos | |
+---------------------+----------------------+--------------------+----------------------+
| James Tanner | jctanner | jtanner | |
+---------------------+----------------------+--------------------+----------------------+
| Peter Sprygada | privateip | privateip | |
+---------------------+----------------------+--------------------+----------------------+
| Abhijit Menon-Sen | amenonsen | crab | |
+---------------------+----------------------+--------------------+----------------------+
| Michael Scherer | mscherer | misc | |
+---------------------+----------------------+--------------------+----------------------+
| René Moser | resmo | resmo | |
+---------------------+----------------------+--------------------+----------------------+
| David Shrewsbury | Shrews | Shrews | |
+---------------------+----------------------+--------------------+----------------------+
| Sandra Wills | docschick | docschick | |
+---------------------+----------------------+--------------------+----------------------+
| Graham Mainwaring | ghjm | | |
+---------------------+----------------------+--------------------+----------------------+
| Chris Houseknecht | chouseknecht | | |
+---------------------+----------------------+--------------------+----------------------+
| Trond Hindenes | trondhindenes | | |
+---------------------+----------------------+--------------------+----------------------+
| Jon Hawkesworth | jhawkesworth | jhawkesworth | |
+---------------------+----------------------+--------------------+----------------------+
| Will Thames | willthames | willthames | |
+---------------------+----------------------+--------------------+----------------------+
| Adrian Likins | alikins | alikins | |
+---------------------+----------------------+--------------------+----------------------+
| Dag Wieers | dagwieers | dagwieers | [email protected] |
+---------------------+----------------------+--------------------+----------------------+
| Tim Rupp | caphrim007 | caphrim007 | |
+---------------------+----------------------+--------------------+----------------------+
| Sloane Hertel | s-hertel | shertel | |
+---------------------+----------------------+--------------------+----------------------+
| Sam Doran | samdoran | samdoran | |
+---------------------+----------------------+--------------------+----------------------+
| Matt Clay | mattclay | mattclay | |
+---------------------+----------------------+--------------------+----------------------+
| Martin Krizek | mkrizek | mkrizek | |
+---------------------+----------------------+--------------------+----------------------+
| Ganesh Nalawade | ganeshrn | ganeshrn | |
+---------------------+----------------------+--------------------+----------------------+
| Trishna Guha | trishnaguha | trishnag | |
+---------------------+----------------------+--------------------+----------------------+
| Andrew Gaffney | agaffney | agaffney | |
+---------------------+----------------------+--------------------+----------------------+
| Jordan Borean | jborean93 | jborean93 | |
+---------------------+----------------------+--------------------+----------------------+
| Abhijeet Kasurde | Akasurde | akasurde | |
+---------------------+----------------------+--------------------+----------------------+
| Adam Miller | maxamillion | maxamillion | |
+---------------------+----------------------+--------------------+----------------------+
| Sviatoslav Sydorenko| webknjaz | webknjaz | |
+---------------------+----------------------+--------------------+----------------------+
| Alicia Cozine | acozine | acozine | |
+---------------------+----------------------+--------------------+----------------------+
| Sandra McCann | samccann | samccann | |
+---------------------+----------------------+--------------------+----------------------+
| Felix Fontein | felixfontein | felixfontein | [email protected] |
+---------------------+----------------------+--------------------+----------------------+
|
docs/docsite/rst/community/development_process.rst
|
.. _community_development_process:
*****************************
The Ansible Development Cycle
*****************************
The Ansible development cycle happens on two levels. At a macro level, the team plans releases and tracks progress with roadmaps and projects. At a micro level, each PR has its own lifecycle.
.. contents::
:local:
Macro development: roadmaps, releases, and projects
===================================================
If you want to follow the conversation about what features will be added to Ansible for upcoming releases and what bugs are being fixed, you can watch these resources:
* the :ref:`roadmaps`
* the :ref:`Ansible Release Schedule <release_and_maintenance>`
* various GitHub `projects <https://github.com/ansible/ansible/projects>`_ - for example:
* the `2.8 release project <https://github.com/ansible/ansible/projects/30>`_
* the `network bugs project <https://github.com/ansible/ansible/projects/20>`_
* the `core documentation project <https://github.com/ansible/ansible/projects/27>`_
.. _community_pull_requests:
Micro development: the lifecycle of a PR
========================================
Ansible accepts code via **pull requests** ("PRs" for short). GitHub provides a great overview of `how the pull request process works <https://help.github.com/articles/about-pull-requests/>`_ in general. The ultimate goal of any pull request is to get merged and become part of Ansible Core.
Here's an overview of the PR lifecycle:
* Contributor opens a PR
* Ansibot reviews the PR
* Ansibot assigns labels
* Ansibot pings maintainers
* Shippable runs the test suite
* Developers, maintainers, community review the PR
* Contributor addresses any feedback from reviewers
* Developers, maintainers, community re-review
* PR merged or closed
Automated PR review: ansibullbot
--------------------------------
Because Ansible receives many pull requests, and because we love automating things, we've automated several steps of the process of reviewing and merging pull requests with a tool called Ansibullbot, or Ansibot for short.
`Ansibullbot <https://github.com/ansible/ansibullbot/blob/master/ISSUE_HELP.md>`_ serves many functions:
- Responds quickly to PR submitters to thank them for submitting their PR
- Identifies the community maintainer responsible for reviewing PRs for any files affected
- Tracks the current status of PRs
- Pings responsible parties to remind them of any PR actions for which they may be responsible
- Provides maintainers with the ability to move PRs through the workflow
- Identifies PRs abandoned by their submitters so that we can close them
- Identifies modules abandoned by their maintainers so that we can find new maintainers
Ansibot workflow
^^^^^^^^^^^^^^^^
Ansibullbot runs continuously. You can generally expect to see changes to your issue or pull request within thirty minutes. Ansibullbot examines every open pull request in the repositories, and enforces state roughly according to the following workflow:
- If a pull request has no workflow labels, it's considered **new**. Files in the pull request are identified, and the maintainers of those files are pinged by the bot, along with instructions on how to review the pull request. (Note: sometimes we strip labels from a pull request to "reboot" this process.)
- If the module maintainer is not ``$team_ansible``, the pull request then goes into the **community_review** state.
- If the module maintainer is ``$team_ansible``, the pull request then goes into the **core_review** state (and probably sits for a while).
- If the pull request is in **community_review** and has received comments from the maintainer:
- If the maintainer says ``shipit``, the pull request is labeled **shipit**, whereupon the Core team assesses it for final merge.
- If the maintainer says ``needs_info``, the pull request is labeled **needs_info** and the submitter is asked for more info.
- If the maintainer says **needs_revision**, the pull request is labeled **needs_revision** and the submitter is asked to fix some things.
- If the submitter says ``ready_for_review``, the pull request is put back into **community_review** or **core_review** and the maintainer is notified that the pull request is ready to be reviewed again.
- If the pull request is labeled **needs_revision** or **needs_info** and the submitter has not responded lately:
- The submitter is first politely pinged after two weeks, pinged again after two more weeks and labeled **pending_action**, and the issue or pull request will be closed two weeks after that.
- If the submitter responds at all, the clock is reset.
- If the pull request is labeled **community_review** and the reviewer has not responded lately:
- The reviewer is first politely pinged after two weeks, pinged again after two more weeks and labeled **pending_action**, and then may be reassigned to ``$team_ansible`` or labeled **core_review**, or often the submitter of the pull request is asked to step up as a maintainer.
- If Shippable tests fail, or if the code is not able to be merged, the pull request is automatically put into **needs_revision** along with a message to the submitter explaining why.
There are corner cases and frequent refinements, but this is the workflow in general.
PR labels
^^^^^^^^^
There are two types of PR Labels generally: **workflow** labels and **information** labels.
Workflow labels
"""""""""""""""
- **community_review**: Pull requests for modules that are currently awaiting review by their maintainers in the Ansible community.
- **core_review**: Pull requests for modules that are currently awaiting review by their maintainers on the Ansible Core team.
- **needs_info**: Waiting on info from the submitter.
- **needs_rebase**: Waiting on the submitter to rebase.
- **needs_revision**: Waiting on the submitter to make changes.
- **shipit**: Waiting for final review by the core team for potential merge.
Information labels
""""""""""""""""""
- **backport**: this is applied automatically if the PR is requested against any branch that is not devel. The bot immediately assigns the labels ``backport`` and ``core_review``.
- **bugfix_pull_request**: applied by the bot based on the templatized description of the PR.
- **cloud**: applied by the bot based on the paths of the modified files.
- **docs_pull_request**: applied by the bot based on the templatized description of the PR.
- **easyfix**: applied manually, inconsistently used but sometimes useful.
- **feature_pull_request**: applied by the bot based on the templatized description of the PR.
- **networking**: applied by the bot based on the paths of the modified files.
- **owner_pr**: largely deprecated. Formerly workflow, now informational. Originally, PRs submitted by the maintainer would automatically go to **shipit** based on this label. If the submitter is also a maintainer, we notify the other maintainers and still require one of the maintainers (including the submitter) to give a **shipit**.
- **pending_action**: applied by the bot to PRs that are not moving. Reviewed every couple of weeks by the community team, who tries to figure out the appropriate action (closing the PR, asking for new maintainers, and so on).
Special Labels
""""""""""""""
- **new_plugin**: this is for new modules or plugins that are not yet in Ansible.
**Note:** ``new_plugin`` kicks off a completely separate process, and frankly it doesn't work very well at present. We're doing our best to improve this process.
Human PR review
---------------
After Ansibot reviews the PR and applies labels, the PR is ready for human review. The most likely reviewers for any PR are the maintainers for the module that PR modifies.
Each module has at least one assigned :ref:`maintainer <maintainers>`, listed in the `BOTMETA.yml <https://github.com/ansible/ansible/blob/devel/.github/BOTMETA.yml>`_ file.
The maintainer's job is to review PRs that affect that module and decide whether they should be merged (``shipit``) or revised (``needs_revision``). We'd like to have at least one community maintainer for every module. If a module has no community maintainers assigned, the maintainer is listed as ``$team_ansible``.
Once a human applies the ``shipit`` label, the :ref:`committers <community_committer_guidelines>` decide whether the PR is ready to be merged. Not every PR that gets the ``shipit`` label is actually ready to be merged, but the better our reviewers are, and the better our guidelines are, the more likely it will be that a PR that reaches **shipit** will be mergeable.
Making your PR merge-worthy
===========================
We don't merge every PR. Here are some tips for making your PR useful, attractive, and merge-worthy.
.. _community_changelogs:
Changelogs
----------
Changelogs help users and developers keep up with changes to Ansible.
Ansible builds a changelog for each release from fragments. You **must** add a changelog fragment to any PR that changes functionality or fixes a bug.
You don't have to add a changelog fragment for PRs that add new
modules and plugins, because our tooling does that for you automatically.
We build short summary changelogs for minor releases as well as for major releases. If you backport a bugfix, include a changelog fragment with the backport PR.
.. _changelogs_how_to:
Creating a changelog fragment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A basic changelog fragment is a ``.yaml`` file placed in the ``changelogs/fragments/`` directory. Each file contains a YAML dict with keys like ``bugfixes`` or ``major_changes``, followed by a list of changelog entries describing the bugfixes or features. Each changelog entry is rST embedded inside the YAML file, which means certain constructs need to be escaped so they are interpreted by rST and not by YAML (or escaped for both YAML and rST, if you prefer). Each PR **must** use a new fragment file rather than adding to an existing one, so we can trace the change back to the PR that introduced it.
To create a changelog entry, create a new file with a unique name in the ``changelogs/fragments/`` directory. The file name should include the PR number and a description of the change. It must end with the file extension ``.yaml``. For example: ``40696-user-backup-shadow-file.yaml``
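Concretely, creating that fragment from the command line might look like this (the ``mktemp`` scratch directory just keeps the sketch self-contained; in practice you run it from the root of your checkout, and the entry text here is illustrative):

```shell
set -e
cd "$(mktemp -d)"    # stand-in for the repository root
mkdir -p changelogs/fragments
cat > changelogs/fragments/40696-user-backup-shadow-file.yaml <<'EOF'
bugfixes:
  - user - create a backup of the shadow file before modifying it
EOF
```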
A single changelog fragment may contain multiple sections but most will only contain one section.
The toplevel keys (such as ``bugfixes`` or ``major_changes``) are defined in the
`config file <https://github.com/ansible/ansible/blob/devel/changelogs/config.yaml>`_ for our release note tool. Here are the valid sections and a description of each:
**major_changes**
Major changes to Ansible itself. Generally does not include module or plugin changes.
**minor_changes**
Minor changes to Ansible, modules, or plugins. This includes new features, new parameters added to modules, or behavior changes to existing parameters.
**deprecated_features**
Features that have been deprecated and are scheduled for removal in a future release.
**removed_features**
Features that were previously deprecated and are now removed.
**bugfixes**
Fixes that resolve issues. If there is a specific issue related to this bugfix, add a link in the changelog entry.
**known_issues**
Known issues that are currently not fixed or will not be fixed.
Most changelog entries will be ``bugfixes`` or ``minor_changes``. When writing a changelog entry that pertains to a particular module, start the entry with ``- [module name] -`` and include a link to the related issue if one exists.
Here are some examples:
.. code-block:: yaml
bugfixes:
- win_updates - fixed issue where running win_updates on async fails without any error
.. code-block:: yaml
minor_changes:
- lineinfile - add warning when using an empty regexp (https://github.com/ansible/ansible/issues/29443)
.. code-block:: yaml
bugfixes:
- copy module - The copy module was attempting to change the mode of files for
remote_src=True even if mode was not set as a parameter. This failed on
filesystems which do not have permission bits.
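And because a single fragment may contain multiple sections, one fragment can combine them. The module name and entries below are made up for illustration:

```yaml
minor_changes:
  - example_module - add a ``timeout`` parameter
bugfixes:
  - example_module - fail with a clear error when ``timeout`` is negative
```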
You can find more example changelog fragments in the `changelog directory <https://github.com/ansible/ansible/tree/stable-2.6/changelogs/fragments>`_ for the 2.6 release. You can also find documentation of the format, including hints on embedding rst in the yaml, in the `reno documentation <https://docs.openstack.org/reno/latest/user/usage.html#editing-a-release-note>`_.
Once you've written the changelog fragment for your PR, commit the file and include it with the pull request.
.. _backport_process:
Backporting merged PRs
======================
All Ansible PRs must be merged to the ``devel`` branch first.
After a pull request has been accepted and merged to the ``devel`` branch, the following instructions will help you create a
pull request to backport the change to a previous stable branch.
We do **not** backport features.
.. note::
These instructions assume that:
* ``stable-2.8`` is the targeted release branch for the backport
* ``https://github.com/ansible/ansible.git`` is configured as a
``git remote`` named ``upstream``. If you do not use
a ``git remote`` named ``upstream``, adjust the instructions accordingly.
* ``https://github.com/<yourgithubaccount>/ansible.git``
is configured as a ``git remote`` named ``origin``. If you do not use
a ``git remote`` named ``origin``, adjust the instructions accordingly.
#. Prepare your devel, stable, and feature branches:
::
git fetch upstream
git checkout -b backport/2.8/[PR_NUMBER_FROM_DEVEL] upstream/stable-2.8
#. Cherry pick the relevant commit SHA from the devel branch into your feature
branch, handling merge conflicts as necessary:
::
git cherry-pick -x [SHA_FROM_DEVEL]
#. Add a :ref:`changelog fragment <changelogs_how_to>` for the change, and commit it.
#. Push your feature branch to your fork on GitHub:
::
git push origin backport/2.8/[PR_NUMBER_FROM_DEVEL]
#. Submit the pull request for ``backport/2.8/[PR_NUMBER_FROM_DEVEL]``
against the ``stable-2.8`` branch
#. The Release Manager will decide whether to merge the backport PR before
the next minor release. There isn't any need to follow up. Just ensure that the automated
tests (CI) are green.
.. note::
The choice to use ``backport/2.8/[PR_NUMBER_FROM_DEVEL]`` as the
name for the feature branch is somewhat arbitrary, but conveys meaning
about the purpose of that branch. It is not required to use this format,
but it can be helpful, especially when making multiple backport PRs for
multiple stable branches.
.. note::
If you prefer, you can use CPython's cherry-picker tool
(``pip install --user 'cherry-picker >= 1.3.2'``) to backport commits
from devel to stable branches in Ansible. Take a look at the `cherry-picker
documentation <https://pypi.org/p/cherry-picker#cherry-picking>`_ for
details on installing, configuring, and using it.
|
docs/docsite/rst/community/documentation_contributions.rst
|
.. _community_documentation_contributions:
*****************************************
Contributing to the Ansible Documentation
*****************************************
Ansible has a lot of documentation and a small team of writers. Community support helps us keep up with new features, fixes, and changes.
Improving the documentation is an easy way to make your first contribution to the Ansible project. You don't have to be a programmer, since our documentation is written in YAML (module documentation) or `reStructuredText <http://docutils.sourceforge.net/rst.html>`_ (rST). If you're using Ansible, you already use YAML in your playbooks. And rST is mostly just text. You don't even need git experience, if you use the ``Edit on GitHub`` option.
If you find a typo, a broken example, a missing topic, or any other error or omission on this documentation website, let us know. Here are some ways to support Ansible documentation:
.. contents::
:local:
Editing docs directly on GitHub
===============================
For typos and other quick fixes, you can edit the documentation right from the site. Look at the top right corner of this page. That ``Edit on GitHub`` link is available on every page in the documentation. If you have a GitHub account, you can submit a quick and easy pull request this way.
To submit a documentation PR from docs.ansible.com with ``Edit on GitHub``:
#. Click on ``Edit on GitHub``.
#. If you don't already have a fork of the ansible repo on your GitHub account, you'll be prompted to create one.
#. Fix the typo, update the example, or make whatever other change you have in mind.
#. Enter a commit message in the first rectangle under the heading ``Propose file change`` at the bottom of the GitHub page. The more specific, the better. For example, "fixes typo in my_module description". You can put more detail in the second rectangle if you like. Leave the ``+label: docsite_pr`` there.
#. Submit the suggested change by clicking on the green "Propose file change" button. GitHub will handle branching and committing for you, and open a page with the heading "Comparing Changes".
#. Click on ``Create pull request`` to open the PR template.
#. Fill out the PR template, including as much detail as appropriate for your change. You can change the title of your PR if you like (by default it's the same as your commit message). In the ``Issue Type`` section, delete all lines except the ``Docs Pull Request`` line.
#. Submit your change by clicking on ``Create pull request`` button.
#. Be patient while Ansibot, our automated script, adds labels, pings the docs maintainers, and kicks off a CI testing run.
#. Keep an eye on your PR - the docs team may ask you for changes.
Reviewing open PRs and issues
=============================
You can also contribute by reviewing open documentation `issues <https://github.com/ansible/ansible/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+label%3Adocs>`_ and `PRs <https://github.com/ansible/ansible/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+label%3Adocs>`_. To add a helpful review, please:
- Include a comment - "looks good to me" only helps if we know why.
- For issues, reproduce the problem.
- For PRs, test the change.
Opening a new issue and/or PR
=============================
If the problem you've noticed is too complex to fix with the ``Edit on GitHub`` option, and no open issue or PR already documents the problem, please open an issue and/or a PR on the ``ansible/ansible`` repo.
A great documentation GitHub issue or PR includes:
- a specific title
- a detailed description of the problem (even for a PR - it's hard to evaluate a suggested change unless we know what problem it's meant to solve)
- links to other information (related issues or PRs, external documentation, pages on docs.ansible.com, and so on)
Before you open a complex documentation PR
==========================================
If you make multiple changes to the documentation, or add more than a line to it, before you open a pull request, please:
#. Check that your text follows our :ref:`style_guide`.
#. Test your changes for rST errors.
#. Build the page, and preferably the entire documentation site, locally.
To work with documentation on your local machine, you need to have python-3.5 or greater and the
following packages installed:
- gcc
- jinja2
- libyaml
- Pygments >= 2.4.0
- pyparsing
- PyYAML
- rstcheck
- six
- sphinx
- sphinx-notfound-page
- straight.plugin
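The Python packages in that list could be captured in a pip requirements file, along the lines of the sketch below (``gcc`` and ``libyaml`` are system packages, so they come from your OS package manager instead; the file name is illustrative):

```text
# doc-requirements.txt (illustrative)
jinja2
Pygments>=2.4.0
pyparsing
PyYAML
rstcheck
six
sphinx
sphinx-notfound-page
straight.plugin
```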
.. note::
On macOS with Xcode, you may need to install ``six`` and ``pyparsing`` with ``--ignore-installed`` to get versions that work with ``sphinx``.
.. _testing_documentation_locally:
Testing the documentation locally
---------------------------------
To test an individual file for rST errors:
.. code-block:: bash
rstcheck changed_file.rst
Building the documentation locally
----------------------------------
Building the documentation is the best way to check for errors and review your changes. Once ``rstcheck`` runs with no errors, navigate to ``ansible/docs/docsite`` and then build the page(s) you want to review.
Building a single rST page
^^^^^^^^^^^^^^^^^^^^^^^^^^
To build a single rST file with the make utility:
.. code-block:: bash
make htmlsingle rst=path/to/your_file.rst
For example:
.. code-block:: bash
make htmlsingle rst=community/documentation_contributions.rst
This process compiles all the links but provides minimal log output. If you're writing a new page or want more detailed log output, refer to the instructions on :ref:`build_with_sphinx-build`
.. note::
``make htmlsingle`` adds ``rst/`` to the beginning of the path you provide in ``rst=``, so you can't type the filename with autocomplete. Here are the error messages you will see if you get this wrong:
- If you run ``make htmlsingle`` from the ``docs/docsite/rst/`` directory: ``make: *** No rule to make target `htmlsingle'. Stop.``
- If you run ``make htmlsingle`` from the ``docs/docsite/`` directory with the full path to your rST document: ``sphinx-build: error: cannot find files ['rst/rst/community/documentation_contributions.rst']``.
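The prefixing behavior described in the note above can be sketched as a tiny helper (a hypothetical function mirroring what the Makefile does, not part of the docs tooling):

```python
def htmlsingle_path(rst_arg: str) -> str:
    # ``make htmlsingle`` prepends ``rst/`` to the value passed in
    # ``rst=``, so paths must be relative to docs/docsite/rst/.
    return "rst/" + rst_arg

print(htmlsingle_path("community/documentation_contributions.rst"))
```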
Building all the rST pages
^^^^^^^^^^^^^^^^^^^^^^^^^^
To build all the rST files without any module documentation:
.. code-block:: bash
MODULES=none make webdocs
Building module docs and rST pages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To build documentation for a few modules plus all the rST files, use a comma-separated list:
.. code-block:: bash
MODULES=one_module,another_module make webdocs
To build all the module documentation plus all the rST files:
.. code-block:: bash
make webdocs
.. _build_with_sphinx-build:
Building rST files with ``sphinx-build``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Advanced users can build one or more rST files with the sphinx utility directly. ``sphinx-build`` returns misleading ``undefined label`` warnings if you only build a single page, because it does not create internal links. However, ``sphinx-build`` returns more extensive syntax feedback, including warnings about indentation errors and ``x-string without end-string`` warnings. This can be useful, especially if you're creating a new page from scratch. To build a page or pages with ``sphinx-build``:
.. code-block:: bash
sphinx-build [options] sourcedir outdir [filenames...]
You can specify filenames, use ``-a`` for all files, or omit both to compile only new/changed files.
For example:
.. code-block:: bash
sphinx-build -b html -c rst/ rst/dev_guide/ _build/html/dev_guide/ rst/dev_guide/developing_modules_documenting.rst
Running the final tests
^^^^^^^^^^^^^^^^^^^^^^^
When you submit a documentation pull request, automated tests are run. Those same tests can be run locally. To do so, navigate to the repository's top directory and run:
.. code-block:: bash
make clean &&
bin/ansible-test sanity --test docs-build &&
bin/ansible-test sanity --test rstcheck
Unfortunately, leftover rST files from previous documentation builds can occasionally confuse these tests. It is therefore safest to run them on a clean copy of the repository, which is the purpose of ``make clean``. If you type these three lines one at a time and manually check the success of each, you do not need the ``&&``.
Joining the documentation working group
=======================================
The Documentation Working Group is just getting started, please visit the `community repo <https://github.com/ansible/community>`_ for more information.
.. seealso::
:ref:`More about testing module documentation <testing_module_documentation>`
:ref:`More about documenting modules <module_documenting>`
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,224 |
Remove Latin phrases from the docs
|
##### SUMMARY
Latin words and phrases like `e.g.` or `etc.` are easily understood by native English speakers and speakers of European languages who speak some English. They are harder to understand for speakers of Asian languages who speak some English. They are also tricky for automated translation.
Add a row to the Style Guide cheat-sheet about avoiding Latin words and phrases.
Remove existing Latin words and phrases from the documentation.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
##### ANSIBLE VERSION
all
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/62224
|
https://github.com/ansible/ansible/pull/62419
|
8b9c2533b5d6a64de1f79015ac590c119d51a5d1
|
e7436e278f8945dd73b066c47280c1a17bc18ebe
| 2019-09-12T20:03:46Z |
python
| 2019-09-26T19:12:24Z |
docs/docsite/rst/community/index.rst
|
.. _ansible_community_guide:
***********************
Ansible Community Guide
***********************
Welcome to the Ansible Community Guide!
The purpose of this guide is to teach you everything you need to know about being a contributing member of the Ansible community. All types of contributions are welcome and necessary to Ansible's continued success.
This page outlines the most common situations and questions that bring readers to this section. If you prefer a traditional table of contents, there's one at the bottom of the page.
Getting started
===============
* I'm new to the community. Where can I find the Ansible :ref:`code_of_conduct`?
* I'd like to know what I'm agreeing to when I contribute to Ansible. Does Ansible have a :ref:`contributor_license_agreement`?
* I'd like to contribute but I'm not sure how. Are there :ref:`easy ways to contribute <how_can_i_help>`?
* I want to talk to other Ansible users. How do I find an `Ansible Meetup near me <https://www.meetup.com/topics/ansible/>`_?
* I have a question. Which :ref:`Ansible email lists and IRC channels <communication>` will help me find answers?
* I want to learn more about Ansible. What can I do?
* `Read books <https://www.ansible.com/resources/ebooks>`_.
* `Get certified <https://www.ansible.com/products/training-certification>`_.
* `Attend events <https://www.ansible.com/community/events>`_.
* `Review getting started guides <https://www.ansible.com/resources/get-started>`_.
* `Watch videos <https://www.ansible.com/resources/videos>`_ - includes Ansible Automates, AnsibleFest & webinar recordings.
* I'd like updates about new Ansible versions. How are `new releases announced <https://groups.google.com/forum/#!forum/ansible-announce>`_?
* I want to use the current release. How do I know which :ref:`releases are current <release_schedule>`?
Going deeper
============
* I think Ansible is broken. How do I :ref:`report a bug <reporting_bugs>`?
* I need functionality that Ansible doesn't offer. How do I :ref:`request a feature <request_features>`?
* I'm waiting for a particular feature. How do I see what's :ref:`planned for future Ansible Releases <roadmaps>`?
* I have a specific Ansible interest or expertise (for example, VMware or Linode). How do I get involved in a :ref:`working group <working_group_list>`?
* I'd like to participate in conversations about features and fixes. How do I review GitHub issues and pull requests?
* I found a typo or another problem on docs.ansible.com. How can I :ref:`improve the documentation <community_documentation_contributions>`?
Working with the Ansible repo
=============================
* I want to code my first changes to Ansible. How do I :ref:`set up my Python development environment <environment_setup>`?
* I'd like to get more efficient as a developer. How can I find :ref:`editors, linters, and other tools <other_tools_and_programs>` that will support my Ansible development efforts?
* I want my PR to meet Ansible's guidelines. Where can I find guidance on :ref:`coding in Ansible <developer_guide>`?
* I want to learn more about Ansible roadmaps, releases, and projects. How do I find information on :ref:`the development cycle <community_development_process>`?
* I'd like to connect Ansible to a new API or other resource. How do I :ref:`contribute a group of related modules <developing_modules_in_groups>`?
* My pull request is marked ``needs_rebase``. How do I :ref:`rebase my PR <rebase_guide>`?
* I'm using an older version of Ansible and want a bug fixed in my version that's already been fixed on the ``devel`` branch. How do I :ref:`backport a bugfix PR <backport_process>`?
* I have an open pull request with a failing test. How do I learn about Ansible's :ref:`testing (CI) process <developing_testing>`?
* I'm ready to step up as a module maintainer. What are the :ref:`guidelines for maintainers <maintainers>`?
* A module I maintain is obsolete. How do I :ref:`deprecate a module <deprecating_modules>`?
Traditional Table of Contents
=============================
If you prefer to read the entire Community Guide, here's a list of the pages in order:
.. toctree::
:maxdepth: 2
code_of_conduct
how_can_I_help
reporting_bugs_and_features
documentation_contributions
communication
development_process
contributor_license_agreement
triage_process
other_tools_and_programs
../dev_guide/style_guide/index
.. toctree::
:caption: Guidelines for specific types of contributors
:maxdepth: 1
committer_guidelines
maintainers
release_managers
github_admins
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,224 |
Remove Latin phrases from the docs
|
##### SUMMARY
Latin words and phrases like `e.g.` or `etc.` are easily understood by native English speakers and speakers of European languages who speak some English. They are harder to understand for speakers of Asian languages who speak some English. They are also tricky for automated translation.
Add a row to the Style Guide cheat-sheet about avoiding Latin words and phrases.
Remove existing Latin words and phrases from the documentation.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
##### ANSIBLE VERSION
all
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/62224
|
https://github.com/ansible/ansible/pull/62419
|
8b9c2533b5d6a64de1f79015ac590c119d51a5d1
|
e7436e278f8945dd73b066c47280c1a17bc18ebe
| 2019-09-12T20:03:46Z |
python
| 2019-09-26T19:12:24Z |
docs/docsite/rst/community/maintainers.rst
|
.. _maintainers:
****************************
Module Maintainer Guidelines
****************************
.. contents:: Topics
Thank you for being a maintainer of part of Ansible's codebase. This guide provides module maintainers an overview of their responsibilities, resources for additional information, and links to helpful tools.
In addition to the information below, module maintainers should be familiar with:
* :ref:`General Ansible community development practices <ansible_community_guide>`
* Documentation on :ref:`module development <developing_modules>`
Maintainer responsibilities
===========================
When you contribute a new module to the `ansible/ansible <https://github.com/ansible/ansible>`_ repository, you become the maintainer for that module once it has been merged. Maintainership empowers you with the authority to accept, reject, or request revisions to pull requests on your module -- but as they say, "with great power comes great responsibility."
Maintainers of Ansible modules are expected to provide feedback, responses, or actions on pull requests or issues to the module(s) they maintain in a reasonably timely manner.
It is also recommended that you occasionally revisit the `contribution guidelines <https://github.com/ansible/ansible/blob/devel/.github/CONTRIBUTING.md>`_, as they are continually refined. From time to time, you may be asked to update your module to bring it closer to the generally accepted standard requirements. We hope for this to be infrequent, and any such request will come with a fair amount of lead time (that is, not by tomorrow!).
Finally, following the `ansible-devel <https://groups.google.com/forum/#!forum/ansible-devel>`_ mailing list can be a great way to participate in the broader Ansible community, and a place where you can influence the overall direction, quality, and goals of Ansible and its modules. If you're not on this relatively low-volume list, please join us here: https://groups.google.com/forum/#!forum/ansible-devel
The Ansible community hopes that you will find that maintaining your module is as rewarding for you as having the module is for the wider community.
Pull requests, issues, and workflow
===================================
Pull requests
-------------
Module pull requests are located in the `main Ansible repository <https://github.com/ansible/ansible/pulls>`_.
Because of the high volume of pull requests, notifications of PRs to specific modules are routed by an automated bot to the appropriate maintainer for handling. We recommend that you set up an appropriate notification process to receive notifications that mention your GitHub ID.
Issues
------
Issues for modules, including bug reports, documentation bug reports, and feature requests, are tracked in the `ansible repository <https://github.com/ansible/ansible/issues>`_.
Issues for modules are routed to their maintainers through an automated process. This process is still being refined, and it currently depends on the issue creator providing adequate details (specifically, the proper module name) to route the issue correctly. If you maintain a specific module, we recommend that you periodically search module issues for mentions of your module's name (or variations on that name), and set up an appropriate notification process to be alerted when your GitHub ID is mentioned.
PR workflow
-----------
Automated routing of pull requests is handled by a tool called `Ansibot <https://github.com/ansible/ansibullbot>`_.
Being moderately familiar with how the workflow behind the bot operates can be helpful to you, and -- should things go awry -- your feedback can be helpful to the folks that continually help Ansibullbot to evolve.
A detailed explanation of the PR workflow can be seen in the :ref:`community_development_process`.
Maintainers (BOTMETA.yml)
-------------------------
The full list of maintainers is located in `BOTMETA.yml <https://github.com/ansible/ansible/blob/devel/.github/BOTMETA.yml>`_.
Adding and removing maintainers
===============================
Communities change over time, and no one maintains a module forever. If you'd like to propose an additional maintainer for your module, please submit a PR to ``BOTMETA.yml`` with the GitHub username of the new maintainer.
If you'd like to step down as a maintainer, please submit a PR to the ``BOTMETA.yml`` removing your GitHub ID from the module in question. If that would leave the module with no maintainers, put "ansible" as the maintainer. This will indicate that the module is temporarily without a maintainer, and the Ansible community team will search for a new maintainer.
Tools and other Resources
=========================
* `PRs in flight, organized by directory <https://ansible.sivel.net/pr/byfile.html>`_
* `Ansibullbot <https://github.com/ansible/ansibullbot>`_
* :ref:`community_development_process`
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,224 |
Remove Latin phrases from the docs
|
##### SUMMARY
Latin words and phrases like `e.g.` or `etc.` are easily understood by native English speakers and speakers of European languages who speak some English. They are harder to understand for speakers of Asian languages who speak some English. They are also tricky for automated translation.
Add a row to the Style Guide cheat-sheet about avoiding Latin words and phrases.
Remove existing Latin words and phrases from the documentation.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
##### ANSIBLE VERSION
all
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/62224
|
https://github.com/ansible/ansible/pull/62419
|
8b9c2533b5d6a64de1f79015ac590c119d51a5d1
|
e7436e278f8945dd73b066c47280c1a17bc18ebe
| 2019-09-12T20:03:46Z |
python
| 2019-09-26T19:12:24Z |
docs/docsite/rst/community/other_tools_and_programs.rst
|
.. _other_tools_and_programs:
########################
Other Tools And Programs
########################
.. contents::
:local:
The Ansible community uses a range of tools for working with the Ansible project. This is a list of some of the most popular of these tools.
If you know of any other tools that should be added, this list can be updated by clicking "Edit on GitHub" on the top right of this page.
***************
Popular Editors
***************
Atom
====
An open-source, free GUI text editor created and maintained by GitHub. You can keep track of git project
changes, commit from the GUI, and see what branch you are on. You can customize the themes for different colors and install syntax highlighting packages for different languages. You can install Atom on Linux, macOS and Windows. Useful Atom plugins include:
* `language-yaml <https://atom.io/packages/language-yaml>`_ - YAML highlighting for Atom (built-in).
* `linter-js-yaml <https://atom.io/packages/linter-js-yaml>`_ - parses your YAML files in Atom through js-yaml.
Emacs
=====
A free, open-source text editor and IDE that supports auto-indentation, syntax highlighting, and a built-in terminal shell (among other things).
* `yaml-mode <https://github.com/yoshiki/yaml-mode>`_ - YAML highlighting and syntax checking.
* `jinja2-mode <https://github.com/paradoxxxzero/jinja2-mode>`_ - Jinja2 highlighting and syntax checking.
* `magit-mode <https://github.com/magit/magit>`_ - Git porcelain within Emacs.
PyCharm
=======
A full IDE (integrated development environment) for Python software development. It ships with everything you need to write Python scripts and complete software, including support for YAML syntax highlighting. It's a little overkill for writing roles/playbooks, but it can be a very useful tool if you write modules and submit code for Ansible. It can also be used to debug the Ansible engine.
Sublime
=======
A closed-source, subscription GUI text editor. You can customize the GUI with themes and install packages for language highlighting and other refinements. You can install Sublime on Linux, macOS and Windows. Useful Sublime plugins include:
* `GitGutter <https://packagecontrol.io/packages/GitGutter>`_ - shows information about files in a git repository.
* `SideBarEnhancements <https://packagecontrol.io/packages/SideBarEnhancements>`_ - provides enhancements to the operations on Sidebar of Files and Folders.
* `Sublime Linter <https://packagecontrol.io/packages/SublimeLinter>`_ - a code-linting framework for Sublime Text 3.
* `Pretty YAML <https://packagecontrol.io/packages/Pretty%20YAML>`_ - prettifies YAML for Sublime Text 2 and 3.
* `Yamllint <https://packagecontrol.io/packages/SublimeLinter-contrib-yamllint>`_ - a Sublime wrapper around yamllint.
Visual Studio Code
==================
An open-source, free GUI text editor created and maintained by Microsoft. Useful Visual Studio Code plugins include:
* `YAML Support by Red Hat <https://marketplace.visualstudio.com/items?itemName=redhat.vscode-yaml>`_ - provides YAML support through yaml-language-server, with built-in Kubernetes and Kedge syntax support.
* `Ansible Syntax Highlighting Extension <https://marketplace.visualstudio.com/items?itemName=haaaad.ansible>`_ - YAML & Jinja2 support.
* `Visual Studio Code extension for Ansible <https://marketplace.visualstudio.com/items?itemName=vscoss.vscode-ansible>`_ - provides autocompletion, syntax highlighting.
vim
===
An open-source, free command-line text editor. Useful vim plugins include:
* `Ansible vim <https://github.com/pearofducks/ansible-vim>`_ - vim syntax plugin for Ansible 2.x. It supports YAML playbooks, Jinja2 templates, and Ansible's hosts files.
*****************
Development Tools
*****************
Finding related issues and PRs
==============================
There are various ways to find existing issues and pull requests (PRs):
- `PR by File <https://ansible.sivel.net/pr/byfile.html>`_ - shows a current list of all open pull requests by individual file. An essential tool for Ansible module maintainers.
- `jctanner's Ansible Tools <https://github.com/jctanner/ansible-tools>`_ - miscellaneous collection of useful helper scripts for Ansible development.
.. _validate-playbook-tools:
******************************
Tools for Validating Playbooks
******************************
- `Ansible Lint <https://github.com/ansible/ansible-lint>`_ - the official, highly configurable best-practices linter for Ansible playbooks, by Ansible.
- `Ansible Review <https://github.com/willthames/ansible-review>`_ - an extension of Ansible Lint designed for code review.
- `Molecule <https://github.com/ansible/molecule>`_ is a testing framework for Ansible plays and roles, by Ansible
- `yamllint <https://yamllint.readthedocs.io/en/stable/>`__ is a command-line utility to check syntax validity including key repetition and indentation issues.
***********
Other Tools
***********
- `Ansible cmdb <https://github.com/fboender/ansible-cmdb>`_ - takes the output of Ansible's fact gathering and converts it into a static HTML overview page containing system configuration information.
- `Ansible Inventory Grapher <https://github.com/willthames/ansible-inventory-grapher>`_ - visually displays inventory inheritance hierarchies and at what level a variable is defined in inventory.
- `Ansible Playbook Grapher <https://github.com/haidaraM/ansible-playbook-grapher>`_ - A command line tool to create a graph representing your Ansible playbook tasks and roles.
- `Ansible Shell <https://github.com/dominis/ansible-shell>`_ - an interactive shell for Ansible with built-in tab completion for all the modules.
- `Ansible Silo <https://github.com/groupon/ansible-silo>`_ - a self-contained Ansible environment via Docker.
- `Ansigenome <https://github.com/nickjj/ansigenome>`_ - a command line tool designed to help you manage your Ansible roles.
- `ARA <https://github.com/openstack/ara>`_ - records Ansible playbook runs and makes the recorded data available and intuitive for users and systems by integrating with Ansible as a callback plugin.
- `Awesome Ansible <https://github.com/jdauphant/awesome-ansible>`_ - a collaboratively curated list of awesome Ansible resources.
- `AWX <https://github.com/ansible/awx>`_ - provides a web-based user interface, REST API, and task engine built on top of Ansible. AWX is the upstream project for Red Hat Ansible Tower, part of the Red Hat Ansible Automation subscription.
- `Mitogen for Ansible <https://mitogen.networkgenomics.com/ansible_detailed.html>`_ - uses the `Mitogen <https://github.com/dw/mitogen/>`_ library to execute Ansible playbooks in a more efficient way (decreases the execution time).
- `OpsTools-ansible <https://github.com/centos-opstools/opstools-ansible>`_ - uses Ansible to configure an environment that provides the support of `OpsTools <https://wiki.centos.org/SpecialInterestGroup/OpsTools>`_, namely centralized logging and analysis, availability monitoring, and performance monitoring.
- `TD4A <https://github.com/cidrblock/td4a>`_ - a template designer for automation. TD4A is a visual design aid for building and testing jinja2 templates. It will combine data in yaml format with a jinja2 template and render the output.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,224 |
Remove Latin phrases from the docs
|
##### SUMMARY
Latin words and phrases like `e.g.` or `etc.` are easily understood by native English speakers and speakers of European languages who speak some English. They are harder to understand for speakers of Asian languages who speak some English. They are also tricky for automated translation.
Add a row to the Style Guide cheat-sheet about avoiding Latin words and phrases.
Remove existing Latin words and phrases from the documentation.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
##### ANSIBLE VERSION
all
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/62224
|
https://github.com/ansible/ansible/pull/62419
|
8b9c2533b5d6a64de1f79015ac590c119d51a5d1
|
e7436e278f8945dd73b066c47280c1a17bc18ebe
| 2019-09-12T20:03:46Z |
python
| 2019-09-26T19:12:24Z |
docs/docsite/rst/community/reporting_bugs_and_features.rst
|
.. _reporting_bugs_and_features:
**************************************
Reporting Bugs And Requesting Features
**************************************
.. contents:: Topics
.. _reporting_bugs:
Reporting a bug
===============
Ansible practices responsible disclosure - if this is a security-related bug, email `[email protected] <mailto:[email protected]>`_ instead of filing a ticket or posting to any public groups, and you will receive a prompt response.
Ansible bugs should be reported to `github.com/ansible/ansible/issues <https://github.com/ansible/ansible/issues>`_ after
signing up for a free GitHub account. Before reporting a bug, please use the bug/issue search
to see if the issue has already been reported.
Knowing your Ansible version and the exact commands you are running, and what you expect, saves time and helps us help everyone with their issues more quickly. For that reason, we provide an issue template; please fill it out as completely and as accurately as possible.
Do not use the issue tracker for "how do I do this" type questions. Those are better suited to IRC or the mailing list, where they are likely to turn into a discussion.
To be respectful of reviewers' time and allow us to help everyone efficiently, please provide minimal, well-reduced, and well-commented examples rather than sharing your entire production playbook. Include playbook snippets and output where possible.
When sharing YAML in playbooks, formatting can be preserved by using `code blocks <https://help.github.com/articles/creating-and-highlighting-code-blocks/>`_.
For multiple-file content, we encourage use of gist.github.com. Online pastebin content can expire, so it's nice to have things around for a longer term if they are referenced in a ticket.
If you are not sure if something is a bug yet, you are welcome to ask about something on the :ref:`mailing list or IRC first <communication>`.
As we are a very high volume project, if you determine that you do have a bug, please be sure to open the issue yourself to ensure we have a record of it. Don't rely on someone else in the community to file the bug report for you.
.. _request_features:
Requesting a feature
====================
The best way to get a feature into Ansible is to :ref:`submit a pull request <community_pull_requests>`.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,224 |
Remove Latin phrases from the docs
|
##### SUMMARY
Latin words and phrases like `e.g.` or `etc.` are easily understood by native English speakers and speakers of European languages who speak some English. They are harder to understand for speakers of Asian languages who speak some English. They are also tricky for automated translation.
Add a row to the Style Guide cheat-sheet about avoiding Latin words and phrases.
Remove existing Latin words and phrases from the documentation.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
##### ANSIBLE VERSION
all
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/62224
|
https://github.com/ansible/ansible/pull/62419
|
8b9c2533b5d6a64de1f79015ac590c119d51a5d1
|
e7436e278f8945dd73b066c47280c1a17bc18ebe
| 2019-09-12T20:03:46Z |
python
| 2019-09-26T19:12:24Z |
docs/docsite/rst/dev_guide/style_guide/index.rst
|
.. _style_guide:
*******************
Ansible style guide
*******************
Welcome to the Ansible style guide!
To create clear, concise, consistent, useful materials on docs.ansible.com, follow these guidelines:
.. contents::
:local:
Linguistic guidelines
=====================
We want the Ansible documentation to be:
* clear
* direct
* conversational
* easy to translate
We want reading the docs to feel like having an experienced, friendly colleague
explain how Ansible works.
Stylistic cheat-sheet
---------------------
This cheat-sheet illustrates a few rules that help achieve the "Ansible tone":
+-------------------------------+------------------------------+----------------------------------------+
| Rule | Good example | Bad example |
+===============================+==============================+========================================+
| Use active voice | You can run a task by | A task can be run by |
+-------------------------------+------------------------------+----------------------------------------+
| Use the present tense | This command creates a | This command will create a |
+-------------------------------+------------------------------+----------------------------------------+
| Address the reader | As you expand your inventory | When the number of managed nodes grows |
+-------------------------------+------------------------------+----------------------------------------+
| Use standard English | Return to this page | Hop back to this page |
+-------------------------------+------------------------------+----------------------------------------+
| Use American English | The color of the output | The colour of the output |
+-------------------------------+------------------------------+----------------------------------------+
Header case
-----------
Headers should be written in sentence case. For example, this section's title is
``Header case``, not ``Header Case`` or ``HEADER CASE``.
reStructuredText guidelines
===========================
The Ansible documentation is written in reStructuredText and processed by Sphinx.
We follow these technical or mechanical guidelines on all rST pages:
Header notation
---------------
`Section headers in reStructuredText <http://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html#sections>`_
can use a variety of notations.
Sphinx will 'learn on the fly' when creating a hierarchy of headers.
To make our documents easy to read and to edit, we follow a standard set of header notations.
We use:
* ``###`` with overline, for parts:
.. code-block:: rst
###############
Developer guide
###############
* ``***`` with overline, for chapters:
.. code-block:: rst
*******************
Ansible style guide
*******************
* ``===`` for sections:
.. code-block:: rst
Mechanical guidelines
=====================
* ``---`` for subsections:
.. code-block:: rst
Internal navigation
-------------------
* ``^^^`` for sub-subsections:
.. code-block:: rst
Adding anchors
^^^^^^^^^^^^^^
* ``"""`` for paragraphs:
.. code-block:: rst
Paragraph that needs a title
""""""""""""""""""""""""""""
Internal navigation
-------------------
`Anchors (also called labels) and links <http://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#ref-role>`_
work together to help users find related content.
Local tables of contents also help users navigate quickly to the information they need.
All internal links should use the ``:ref:`` syntax.
Every page should have at least one anchor to support internal ``:ref:`` links.
Long pages, or pages with multiple levels of headers, can also include a local TOC.
Adding anchors
^^^^^^^^^^^^^^
* Include at least one anchor on every page
* Place the main anchor above the main header
* If the file has a unique title, use that for the main page anchor::
.. _unique_page:
* You may also add anchors elsewhere on the page
Adding internal links
^^^^^^^^^^^^^^^^^^^^^
* All internal links must use ``:ref:`` syntax. These links both point to the anchor defined above:

  .. code-block:: rst

     :ref:`unique_page`
     :ref:`this page <unique_page>`
The second example adds custom text for the link.
Adding links to modules and plugins
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* Module links use the module name followed by ``_module`` for the anchor.
* Plugin links use the plugin name followed by the plugin type. For example, :ref:`enable become plugin <enable_become>`.

.. code-block:: rst

   :ref:`this module <this_module>`
   :ref:`that connection plugin <that_connection>`
Adding local TOCs
^^^^^^^^^^^^^^^^^
The page you're reading includes a `local TOC <http://docutils.sourceforge.net/docs/ref/rst/directives.html#table-of-contents>`_.
If you include a local TOC:
* place it below, not above, the main heading and (optionally) introductory text
* use the ``:local:`` directive so the page's main header is not included
* do not include a title
The syntax is:

.. code-block:: rst

   .. contents::
      :local:
More resources
==============
These pages offer more help with grammatical, stylistic, and technical rules for documentation.
.. toctree::
   :maxdepth: 1

   basic_rules
   voice_style
   trademarks
   grammar_punctuation
   spelling_word_choice
   resources
.. seealso::

   :ref:`community_documentation_contributions`
       How to contribute to the Ansible documentation
   :ref:`testing_documentation_locally`
       How to build the Ansible documentation
   `irc.freenode.net <http://irc.freenode.net>`_
       #ansible-docs IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,052 |
[nsupdate] : please use the server parameter in dns.resolver.zone_for_name
|
##### SUMMARY
The server parameter is required but is not used for zone resolution.
##### ISSUE TYPE
dns.resolver.zone_for_name uses the system resolver, so the module cannot manage DNS entries on an external DNS server unless the zone parameter is given; zone resolution fails in this case.
The server parameter should be used as the resolver.
##### COMPONENT NAME
nsupdate
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```ansible 2.7.5
config file = /home/sylvain/.ansible.cfg
configured module search path = [u'/usr/local/share/ansible']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.13 (default, Sep 26 2018, 18:42:22) [GCC 6.3.0 20170516]
```
##### CONFIGURATION
```
ANSIBLE_PIPELINING(/home/user/.ansible.cfg) = True
ANSIBLE_SSH_ARGS(/home/user/.ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60m
ANSIBLE_SSH_CONTROL_PATH(/home/user/.ansible.cfg) = ~/.ssh/master-%%C
ANY_ERRORS_FATAL(/home/user/.ansible.cfg) = True
COLOR_DIFF_ADD(/home/user/.ansible.cfg) = yellow
COLOR_DIFF_LINES(/home/user/.ansible.cfg) = yellow
COLOR_DIFF_REMOVE(/home/user/.ansible.cfg) = yellow
DEFAULT_GATHERING(/home/user/.ansible.cfg) = smart
DEFAULT_GATHER_SUBSET(/home/user/.ansible.cfg) = min,network,hardware
DEFAULT_JINJA2_EXTENSIONS(/home/user/.ansible.cfg) = jinja2.ext.do
DEFAULT_SCP_IF_SSH(/home/user/.ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/user/.ansible.cfg) = debug
DEFAULT_TIMEOUT(/home/user/.ansible.cfg) = 30
DIFF_ALWAYS(/home/user/.ansible.cfg) = True
INJECT_FACTS_AS_VARS(/home/user/.ansible.cfg) = False
INVENTORY_IGNORE_EXTS(/home/user/.ansible.cfg) = [u'.pyc', u'.pyo', u'.swp', u'.bak', u'.md', u'.txt', u'~', u'.orig', u'.ini', u'.cfg', u'.retry', u'.old', u'.dpkg-old', u'.dpkg-new']
PERSISTENT_CONTROL_PATH_DIR(/home/user/.ansible.cfg) = /home/user/.ansible/pc
RETRY_FILES_ENABLED(/home/user/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
Debian 9.9
##### STEPS TO REPRODUCE
```
- name: PTR entry
nsupdate:
key_name: "nsupdate"
key_secret: "+bFQtBCta7j2vWkjPkAFtgA=="
server: "10.0.0.1"
record: "4.0.31.172.in-addr.arpa."
type: "PTR"
value: "server1.local."
```
This task fails:
##### EXPECTED RESULTS
The zone resolver should use the server parameter (10.0.0.1).
##### ACTUAL RESULTS
The task exits with this error:
```
Zone resolver error (NoNameservers): All nameservers failed to answer the query 4.0.31.172.in-addr.arpa. IN SOA: Server 127.0.0.1 UDP port 53 answered SERVFAIL
```
|
https://github.com/ansible/ansible/issues/62052
|
https://github.com/ansible/ansible/pull/62329
|
ee91714eb2c4f65862a227f9ced9c7b656265a0f
|
75dfe6c88aee21b26e86b0aa03dacb07b627c80e
| 2019-09-10T10:17:03Z |
python
| 2019-09-28T17:34:28Z |
changelogs/fragments/62329-nsupdate-lookup-internal-zones.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,052 |
[nsupdate] : please use the server parameter in dns.resolver.zone_for_name
|
##### SUMMARY
The server parameter is required but is not used for zone resolution.
##### ISSUE TYPE
dns.resolver.zone_for_name uses the system resolver, so the module cannot manage DNS entries on an external DNS server unless the zone parameter is given; zone resolution fails in this case.
The server parameter should be used as the resolver.
##### COMPONENT NAME
nsupdate
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```ansible 2.7.5
config file = /home/sylvain/.ansible.cfg
configured module search path = [u'/usr/local/share/ansible']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.13 (default, Sep 26 2018, 18:42:22) [GCC 6.3.0 20170516]
```
##### CONFIGURATION
```
ANSIBLE_PIPELINING(/home/user/.ansible.cfg) = True
ANSIBLE_SSH_ARGS(/home/user/.ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60m
ANSIBLE_SSH_CONTROL_PATH(/home/user/.ansible.cfg) = ~/.ssh/master-%%C
ANY_ERRORS_FATAL(/home/user/.ansible.cfg) = True
COLOR_DIFF_ADD(/home/user/.ansible.cfg) = yellow
COLOR_DIFF_LINES(/home/user/.ansible.cfg) = yellow
COLOR_DIFF_REMOVE(/home/user/.ansible.cfg) = yellow
DEFAULT_GATHERING(/home/user/.ansible.cfg) = smart
DEFAULT_GATHER_SUBSET(/home/user/.ansible.cfg) = min,network,hardware
DEFAULT_JINJA2_EXTENSIONS(/home/user/.ansible.cfg) = jinja2.ext.do
DEFAULT_SCP_IF_SSH(/home/user/.ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/user/.ansible.cfg) = debug
DEFAULT_TIMEOUT(/home/user/.ansible.cfg) = 30
DIFF_ALWAYS(/home/user/.ansible.cfg) = True
INJECT_FACTS_AS_VARS(/home/user/.ansible.cfg) = False
INVENTORY_IGNORE_EXTS(/home/user/.ansible.cfg) = [u'.pyc', u'.pyo', u'.swp', u'.bak', u'.md', u'.txt', u'~', u'.orig', u'.ini', u'.cfg', u'.retry', u'.old', u'.dpkg-old', u'.dpkg-new']
PERSISTENT_CONTROL_PATH_DIR(/home/user/.ansible.cfg) = /home/user/.ansible/pc
RETRY_FILES_ENABLED(/home/user/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
Debian 9.9
##### STEPS TO REPRODUCE
```
- name: PTR entry
nsupdate:
key_name: "nsupdate"
key_secret: "+bFQtBCta7j2vWkjPkAFtgA=="
server: "10.0.0.1"
record: "4.0.31.172.in-addr.arpa."
type: "PTR"
value: "server1.local."
```
This task fails:
##### EXPECTED RESULTS
The zone resolver should use the server parameter (10.0.0.1).
##### ACTUAL RESULTS
The task exits with this error:
```
Zone resolver error (NoNameservers): All nameservers failed to answer the query 4.0.31.172.in-addr.arpa. IN SOA: Server 127.0.0.1 UDP port 53 answered SERVFAIL
```
|
https://github.com/ansible/ansible/issues/62052
|
https://github.com/ansible/ansible/pull/62329
|
ee91714eb2c4f65862a227f9ced9c7b656265a0f
|
75dfe6c88aee21b26e86b0aa03dacb07b627c80e
| 2019-09-10T10:17:03Z |
python
| 2019-09-28T17:34:28Z |
lib/ansible/modules/net_tools/nsupdate.py
|
#!/usr/bin/python
# (c) 2016, Marcin Skarbek <[email protected]>
# (c) 2016, Andreas Olsson <[email protected]>
# (c) 2017, Loic Blot <[email protected]>
#
# This module was ported from https://github.com/mskarbek/ansible-nsupdate
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: nsupdate
short_description: Manage DNS records.
description:
- Create, update and remove DNS records using DDNS updates
version_added: "2.3"
requirements:
- dnspython
author: "Loic Blot (@nerzhul)"
options:
state:
description:
- Manage DNS record.
choices: ['present', 'absent']
default: 'present'
server:
description:
- Apply DNS modification on this server.
required: true
port:
description:
- Use this TCP port when connecting to C(server).
default: 53
version_added: 2.5
key_name:
description:
- Use TSIG key name to authenticate against DNS C(server)
key_secret:
description:
- Use TSIG key secret, associated with C(key_name), to authenticate against C(server)
key_algorithm:
description:
- Specify key algorithm used by C(key_secret).
choices: ['HMAC-MD5.SIG-ALG.REG.INT', 'hmac-md5', 'hmac-sha1', 'hmac-sha224', 'hmac-sha256', 'hmac-sha384',
'hmac-sha512']
default: 'hmac-md5'
zone:
description:
- DNS record will be modified on this C(zone).
- When omitted DNS will be queried to attempt finding the correct zone.
- Starting with Ansible 2.7 this parameter is optional.
record:
description:
- Sets the DNS record to modify. When zone is omitted this has to be absolute (ending with a dot).
required: true
type:
description:
- Sets the record type.
default: 'A'
ttl:
description:
- Sets the record TTL.
default: 3600
value:
description:
- Sets the record value.
protocol:
description:
- Sets the transport protocol (TCP or UDP). TCP is the recommended, more robust option.
default: 'tcp'
choices: ['tcp', 'udp']
version_added: 2.8
'''
EXAMPLES = '''
- name: Add or modify ansible.example.org A to 192.168.1.1"
nsupdate:
key_name: "nsupdate"
key_secret: "+bFQtBCta7j2vWkjPkAFtgA=="
server: "10.1.1.1"
zone: "example.org"
record: "ansible"
value: "192.168.1.1"
- name: Add or modify ansible.example.org A to 192.168.1.1, 192.168.1.2 and 192.168.1.3"
nsupdate:
key_name: "nsupdate"
key_secret: "+bFQtBCta7j2vWkjPkAFtgA=="
server: "10.1.1.1"
zone: "example.org"
record: "ansible"
value: ["192.168.1.1", "192.168.1.2", "192.168.1.3"]
- name: Remove puppet.example.org CNAME
nsupdate:
key_name: "nsupdate"
key_secret: "+bFQtBCta7j2vWkjPkAFtgA=="
server: "10.1.1.1"
zone: "example.org"
record: "puppet"
type: "CNAME"
state: absent
- name: Add 1.1.168.192.in-addr.arpa. PTR for ansible.example.org
nsupdate:
key_name: "nsupdate"
key_secret: "+bFQtBCta7j2vWkjPkAFtgA=="
server: "10.1.1.1"
record: "1.1.168.192.in-addr.arpa."
type: "PTR"
value: "ansible.example.org."
state: present
- name: Remove 1.1.168.192.in-addr.arpa. PTR
nsupdate:
key_name: "nsupdate"
key_secret: "+bFQtBCta7j2vWkjPkAFtgA=="
server: "10.1.1.1"
record: "1.1.168.192.in-addr.arpa."
type: "PTR"
state: absent
'''
RETURN = '''
changed:
description: If module has modified record
returned: success
type: str
record:
description: DNS record
returned: success
type: str
sample: 'ansible'
ttl:
description: DNS record TTL
returned: success
type: int
sample: 86400
type:
description: DNS record type
returned: success
type: str
sample: 'CNAME'
value:
description: DNS record value(s)
returned: success
type: list
sample: '192.168.1.1'
zone:
description: DNS record zone
returned: success
type: str
sample: 'example.org.'
dns_rc:
description: dnspython return code
returned: always
type: int
sample: 4
dns_rc_str:
description: dnspython return code (string representation)
returned: always
type: str
sample: 'REFUSED'
'''
import traceback
from binascii import Error as binascii_error
from socket import error as socket_error
DNSPYTHON_IMP_ERR = None
try:
import dns.update
import dns.query
import dns.tsigkeyring
import dns.message
import dns.resolver
HAVE_DNSPYTHON = True
except ImportError:
DNSPYTHON_IMP_ERR = traceback.format_exc()
HAVE_DNSPYTHON = False
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
class RecordManager(object):
def __init__(self, module):
self.module = module
if module.params['zone'] is None:
if module.params['record'][-1] != '.':
self.module.fail_json(msg='record must be absolute when omitting zone parameter')
try:
self.zone = dns.resolver.zone_for_name(self.module.params['record']).to_text()
except (dns.exception.Timeout, dns.resolver.NoNameservers, dns.resolver.NoRootSOA) as e:
self.module.fail_json(msg='Zone resolver error (%s): %s' % (e.__class__.__name__, to_native(e)))
if self.zone is None:
self.module.fail_json(msg='Unable to find zone, dnspython returned None')
else:
self.zone = module.params['zone']
if self.zone[-1] != '.':
self.zone += '.'
if module.params['record'][-1] != '.':
self.fqdn = module.params['record'] + '.' + self.zone
else:
self.fqdn = module.params['record']
if module.params['key_name']:
try:
self.keyring = dns.tsigkeyring.from_text({
module.params['key_name']: module.params['key_secret']
})
except TypeError:
module.fail_json(msg='Missing key_secret')
except binascii_error as e:
module.fail_json(msg='TSIG key error: %s' % to_native(e))
else:
self.keyring = None
if module.params['key_algorithm'] == 'hmac-md5':
self.algorithm = 'HMAC-MD5.SIG-ALG.REG.INT'
else:
self.algorithm = module.params['key_algorithm']
if self.module.params['type'].lower() == 'txt':
self.value = list(map(self.txt_helper, self.module.params['value']))
else:
self.value = self.module.params['value']
self.dns_rc = 0
def txt_helper(self, entry):
if entry[0] == '"' and entry[-1] == '"':
return entry
return '"{text}"'.format(text=entry)
def __do_update(self, update):
response = None
try:
if self.module.params['protocol'] == 'tcp':
response = dns.query.tcp(update, self.module.params['server'], timeout=10, port=self.module.params['port'])
else:
response = dns.query.udp(update, self.module.params['server'], timeout=10, port=self.module.params['port'])
except (dns.tsig.PeerBadKey, dns.tsig.PeerBadSignature) as e:
self.module.fail_json(msg='TSIG update error (%s): %s' % (e.__class__.__name__, to_native(e)))
except (socket_error, dns.exception.Timeout) as e:
self.module.fail_json(msg='DNS server error: (%s): %s' % (e.__class__.__name__, to_native(e)))
return response
def create_or_update_record(self):
result = {'changed': False, 'failed': False}
exists = self.record_exists()
if exists in [0, 2]:
if self.module.check_mode:
self.module.exit_json(changed=True)
if exists == 0:
self.dns_rc = self.create_record()
if self.dns_rc != 0:
result['msg'] = "Failed to create DNS record (rc: %d)" % self.dns_rc
elif exists == 2:
self.dns_rc = self.modify_record()
if self.dns_rc != 0:
result['msg'] = "Failed to update DNS record (rc: %d)" % self.dns_rc
if self.dns_rc != 0:
result['failed'] = True
else:
result['changed'] = True
else:
result['changed'] = False
return result
def create_record(self):
update = dns.update.Update(self.zone, keyring=self.keyring, keyalgorithm=self.algorithm)
for entry in self.value:
try:
update.add(self.module.params['record'],
self.module.params['ttl'],
self.module.params['type'],
entry)
except AttributeError:
self.module.fail_json(msg='value needed when state=present')
except dns.exception.SyntaxError:
self.module.fail_json(msg='Invalid/malformed value')
response = self.__do_update(update)
return dns.message.Message.rcode(response)
def modify_record(self):
update = dns.update.Update(self.zone, keyring=self.keyring, keyalgorithm=self.algorithm)
update.delete(self.module.params['record'], self.module.params['type'])
for entry in self.value:
try:
update.add(self.module.params['record'],
self.module.params['ttl'],
self.module.params['type'],
entry)
except AttributeError:
self.module.fail_json(msg='value needed when state=present')
except dns.exception.SyntaxError:
self.module.fail_json(msg='Invalid/malformed value')
response = self.__do_update(update)
return dns.message.Message.rcode(response)
def remove_record(self):
result = {'changed': False, 'failed': False}
if self.record_exists() == 0:
return result
        # In check mode, if the record exists, report a simulated change.
if self.module.check_mode:
self.module.exit_json(changed=True)
update = dns.update.Update(self.zone, keyring=self.keyring, keyalgorithm=self.algorithm)
update.delete(self.module.params['record'], self.module.params['type'])
response = self.__do_update(update)
self.dns_rc = dns.message.Message.rcode(response)
if self.dns_rc != 0:
result['failed'] = True
result['msg'] = "Failed to delete record (rc: %d)" % self.dns_rc
else:
result['changed'] = True
return result
def record_exists(self):
update = dns.update.Update(self.zone, keyring=self.keyring, keyalgorithm=self.algorithm)
try:
update.present(self.module.params['record'], self.module.params['type'])
except dns.rdatatype.UnknownRdatatype as e:
self.module.fail_json(msg='Record error: {0}'.format(to_native(e)))
response = self.__do_update(update)
self.dns_rc = dns.message.Message.rcode(response)
if self.dns_rc == 0:
if self.module.params['state'] == 'absent':
return 1
for entry in self.value:
try:
update.present(self.module.params['record'], self.module.params['type'], entry)
except AttributeError:
self.module.fail_json(msg='value needed when state=present')
except dns.exception.SyntaxError:
self.module.fail_json(msg='Invalid/malformed value')
response = self.__do_update(update)
self.dns_rc = dns.message.Message.rcode(response)
if self.dns_rc == 0:
if self.ttl_changed():
return 2
else:
return 1
else:
return 2
else:
return 0
def ttl_changed(self):
query = dns.message.make_query(self.fqdn, self.module.params['type'])
try:
if self.module.params['protocol'] == 'tcp':
lookup = dns.query.tcp(query, self.module.params['server'], timeout=10, port=self.module.params['port'])
else:
lookup = dns.query.udp(query, self.module.params['server'], timeout=10, port=self.module.params['port'])
except (socket_error, dns.exception.Timeout) as e:
self.module.fail_json(msg='DNS server error: (%s): %s' % (e.__class__.__name__, to_native(e)))
current_ttl = lookup.answer[0].ttl
return current_ttl != self.module.params['ttl']
def main():
tsig_algs = ['HMAC-MD5.SIG-ALG.REG.INT', 'hmac-md5', 'hmac-sha1', 'hmac-sha224',
'hmac-sha256', 'hmac-sha384', 'hmac-sha512']
module = AnsibleModule(
argument_spec=dict(
state=dict(required=False, default='present', choices=['present', 'absent'], type='str'),
server=dict(required=True, type='str'),
port=dict(required=False, default=53, type='int'),
key_name=dict(required=False, type='str'),
key_secret=dict(required=False, type='str', no_log=True),
key_algorithm=dict(required=False, default='hmac-md5', choices=tsig_algs, type='str'),
zone=dict(required=False, default=None, type='str'),
record=dict(required=True, type='str'),
type=dict(required=False, default='A', type='str'),
ttl=dict(required=False, default=3600, type='int'),
value=dict(required=False, default=None, type='list'),
protocol=dict(required=False, default='tcp', choices=['tcp', 'udp'], type='str')
),
supports_check_mode=True
)
if not HAVE_DNSPYTHON:
module.fail_json(msg=missing_required_lib('dnspython'), exception=DNSPYTHON_IMP_ERR)
if len(module.params["record"]) == 0:
module.fail_json(msg='record cannot be empty.')
record = RecordManager(module)
result = {}
if module.params["state"] == 'absent':
result = record.remove_record()
elif module.params["state"] == 'present':
result = record.create_or_update_record()
result['dns_rc'] = record.dns_rc
result['dns_rc_str'] = dns.rcode.to_text(record.dns_rc)
if result['failed']:
module.fail_json(**result)
else:
result['record'] = dict(zone=record.zone,
record=module.params['record'],
type=module.params['type'],
ttl=module.params['ttl'],
value=record.value)
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,324 |
docker container loses networks on restarts when using with init: true
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
With `init: true` set in the `docker_container` module, a container restarted with `docker_container` loses its network settings and is attached to the default bridge again.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
* `docker_container`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.4
config file = /home/.../.../ansible.cfg
configured module search path = ['/home/.../.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Jul 16 2019, 07:12:58) [GCC 9.1.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
Default configuration without any changes.
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
* ArchLinux
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- name: create test container
hosts: localhost
gather_facts: false
become: true
tasks:
- name: create a network
docker_network:
name: special
- name: create a container with init set to true
docker_container:
name: 'test'
state: started
pull: true
init: true
image: 'traefik'
restart_policy: always
networks_cli_compatible: true
networks:
- name: special
- name: restart test container after creation
docker_container:
name: 'test'
state: started
restart: true
...
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
The container should be restarted with the same settings and the diff should show the same as with `init: false`:
```
TASK [restart test container after creation] **************************************************
--- before
+++ after
@@ -1,4 +1,4 @@
{
- "restarted": false,
+ "restarted": true,
"running": true
}
```
`docker inspect` still shows the custom network to be configured:
```
$ docker inspect test -f "{{ .NetworkSettings.Networks }}"
map[special:0xc00056a000]
```
##### ACTUAL RESULTS
```paste below
TASK [restart test container after creation] **************************************************
--- before
+++ after
@@ -1,4 +1,4 @@
{
- "init": true,
+ "init": false,
"running": true
}
```
`docker inspect` confirms that container has lost its custom network:
```
$ docker inspect test -f "{{ .NetworkSettings.Networks }}"
map[bridge:0xc0001f0f00]
```
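The before/after diff hints that the restart-only task is being compared against option defaults rather than the container's current configuration. A toy model (pure Python, hypothetical names; not docker_container internals) of how an option left unset in the task can surface as a spurious `init: true -> false` change:

```python
# Toy comparison model: a restart-only task leaves most options unset;
# if unset options fall back to their defaults during comparison, a
# container created with init=true appears to differ from the request.
DEFAULTS = {"init": False}


def effective(requested, key):
    # None (or a missing key) means "not given in this task".
    value = requested.get(key)
    return DEFAULTS[key] if value is None else value


container_state = {"init": True}  # as created
restart_task = {}                 # only name/state/restart were given

changed = effective(restart_task, "init") != container_state["init"]
print(changed)  # True: such a comparison would report init: true -> false
```

Comparing against the inspected container's own values for unspecified options would avoid the false diff.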
|
https://github.com/ansible/ansible/issues/62324
|
https://github.com/ansible/ansible/pull/62325
|
75dfe6c88aee21b26e86b0aa03dacb07b627c80e
|
fd627e3b7876b43f2999cb90349c398f26b92fa3
| 2019-09-15T10:43:33Z |
python
| 2019-09-29T12:52:47Z |
lib/ansible/modules/cloud/docker/docker_container.py
|
#!/usr/bin/python
#
# Copyright 2016 Red Hat | Ansible
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: docker_container
short_description: manage docker containers
description:
- Manage the life cycle of docker containers.
- Supports check mode. Run with --check and --diff to view config difference and list of actions to be taken.
version_added: "2.1"
options:
auto_remove:
description:
      - Enable auto-removal of the container on the daemon side when the container's process exits.
type: bool
default: no
version_added: "2.4"
blkio_weight:
description:
- Block IO (relative weight), between 10 and 1000.
type: int
capabilities:
description:
- List of capabilities to add to the container.
type: list
cap_drop:
description:
- List of capabilities to drop from the container.
type: list
version_added: "2.7"
cleanup:
description:
- Use with I(detach=false) to remove the container after successful execution.
type: bool
default: no
version_added: "2.2"
command:
description:
- Command to execute when the container starts.
A command may be either a string or a list.
- Prior to version 2.4, strings were split on commas.
type: raw
comparisons:
description:
      - Allows specifying how properties of existing containers are compared with
module options to decide whether the container should be recreated / updated
or not. Only options which correspond to the state of a container as handled
by the Docker daemon can be specified, as well as C(networks).
- Must be a dictionary specifying for an option one of the keys C(strict), C(ignore)
and C(allow_more_present).
- If C(strict) is specified, values are tested for equality, and changes always
result in updating or restarting. If C(ignore) is specified, changes are ignored.
- C(allow_more_present) is allowed only for lists, sets and dicts. If it is
specified for lists or sets, the container will only be updated or restarted if
the module option contains a value which is not present in the container's
options. If the option is specified for a dict, the container will only be updated
or restarted if the module option contains a key which isn't present in the
container's option, or if the value of a key present differs.
- The wildcard option C(*) can be used to set one of the default values C(strict)
or C(ignore) to I(all) comparisons.
- See the examples for details.
type: dict
version_added: "2.8"
cpu_period:
description:
- Limit CPU CFS (Completely Fair Scheduler) period
type: int
cpu_quota:
description:
- Limit CPU CFS (Completely Fair Scheduler) quota
type: int
cpuset_cpus:
description:
- CPUs in which to allow execution C(1,3) or C(1-3).
type: str
cpuset_mems:
description:
- Memory nodes (MEMs) in which to allow execution C(0-3) or C(0,1)
type: str
cpu_shares:
description:
- CPU shares (relative weight).
type: int
detach:
description:
- Enable detached mode to leave the container running in background.
If disabled, the task will reflect the status of the container run (failed if the command failed).
type: bool
default: yes
devices:
description:
- "List of host device bindings to add to the container. Each binding is a mapping expressed
in the format: <path_on_host>:<path_in_container>:<cgroup_permissions>"
type: list
device_read_bps:
description:
- "List of device path and read rate (bytes per second) from device."
type: list
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit. Format: <number>[<unit>]"
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)"
- "Omitting the unit defaults to bytes."
type: str
required: yes
version_added: "2.8"
device_write_bps:
description:
- "List of device and write rate (bytes per second) to device."
type: list
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit. Format: <number>[<unit>]"
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)"
- "Omitting the unit defaults to bytes."
type: str
required: yes
version_added: "2.8"
device_read_iops:
description:
- "List of device and read rate (IO per second) from device."
type: list
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit."
- "Must be a positive integer."
type: int
required: yes
version_added: "2.8"
device_write_iops:
description:
- "List of device and write rate (IO per second) to device."
type: list
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit."
- "Must be a positive integer."
type: int
required: yes
version_added: "2.8"
dns_opts:
description:
      - List of DNS options.
type: list
dns_servers:
description:
- List of custom DNS servers.
type: list
dns_search_domains:
description:
- List of custom DNS search domains.
type: list
domainname:
description:
- Container domainname.
type: str
version_added: "2.5"
env:
description:
- Dictionary of key,value pairs.
- Values which might be parsed as numbers, booleans or other types by the YAML parser must be quoted (e.g. C("true")) in order to avoid data loss.
type: dict
env_file:
description:
- Path to a file, present on the target, containing environment variables I(FOO=BAR).
- If variable also present in C(env), then C(env) value will override.
type: path
version_added: "2.2"
entrypoint:
description:
- Command that overwrites the default ENTRYPOINT of the image.
type: list
etc_hosts:
description:
- Dict of host-to-IP mappings, where each host name is a key in the dictionary.
Each host name will be added to the container's /etc/hosts file.
type: dict
exposed_ports:
description:
- List of additional container ports which informs Docker that the container
listens on the specified network ports at runtime.
If the port is already exposed using EXPOSE in a Dockerfile, it does not
need to be exposed again.
type: list
aliases:
- exposed
- expose
force_kill:
description:
- Use the kill command when stopping a running container.
type: bool
default: no
aliases:
- forcekill
groups:
description:
- List of additional group names and/or IDs that the container process will run as.
type: list
healthcheck:
description:
- 'Configure a check that is run to determine whether or not containers for this service are "healthy".
See the docs for the L(HEALTHCHECK Dockerfile instruction,https://docs.docker.com/engine/reference/builder/#healthcheck)
for details on how healthchecks work.'
- 'I(interval), I(timeout) and I(start_period) are specified as durations. They accept duration as a string in a format
that look like: C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)'
type: dict
suboptions:
test:
description:
- Command to run to check health.
- Must be either a string or a list. If it is a list, the first item must be one of C(NONE), C(CMD) or C(CMD-SHELL).
type: raw
interval:
description:
- 'Time between running the check. (default: 30s)'
type: str
timeout:
description:
- 'Maximum time to allow one check to run. (default: 30s)'
type: str
retries:
description:
          - 'Consecutive failures needed to report unhealthy. It accepts an integer value. (default: 3)'
type: int
start_period:
description:
- 'Start period for the container to initialize before starting health-retries countdown. (default: 0s)'
type: str
version_added: "2.8"
hostname:
description:
- Container hostname.
type: str
ignore_image:
description:
- When C(state) is I(present) or I(started) the module compares the configuration of an existing
container to requested configuration. The evaluation includes the image version. If
the image version in the registry does not match the container, the container will be
recreated. Stop this behavior by setting C(ignore_image) to I(True).
- I(Warning:) This option is ignored if C(image) or C(*) is used for the C(comparisons) option.
type: bool
default: no
version_added: "2.2"
image:
description:
- Repository path and tag used to create the container. If an image is not found or pull is true, the image
will be pulled from the registry. If no tag is included, C(latest) will be used.
- Can also be an image ID. If this is the case, the image is assumed to be available locally.
The C(pull) option is ignored for this case.
type: str
init:
description:
- Run an init inside the container that forwards signals and reaps processes.
This option requires Docker API >= 1.25.
type: bool
default: no
version_added: "2.6"
interactive:
description:
- Keep stdin open after a container is launched, even if not attached.
type: bool
default: no
ipc_mode:
description:
- Set the IPC mode for the container. Can be one of 'container:<name|id>' to reuse another
container's IPC namespace or 'host' to use the host's IPC namespace within the container.
type: str
keep_volumes:
description:
- Retain volumes associated with a removed container.
type: bool
default: yes
kill_signal:
description:
- Override default signal used to kill a running container.
type: str
kernel_memory:
description:
- "Kernel memory limit (format: C(<number>[<unit>])). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte). Minimum is C(4M)."
- Omitting the unit defaults to bytes.
type: str
labels:
description:
- Dictionary of key-value pairs.
type: dict
links:
description:
- List of name aliases for linked containers in the format C(container_name:alias).
- Setting this will force the container to be restarted.
type: list
log_driver:
description:
- Specify the logging driver. Docker uses I(json-file) by default.
- See L(here,https://docs.docker.com/config/containers/logging/configure/) for possible choices.
type: str
log_options:
description:
- Dictionary of options specific to the chosen I(log_driver). See L(here,https://docs.docker.com/engine/admin/logging/overview/) for details.
type: dict
aliases:
- log_opt
mac_address:
description:
- Container MAC address (e.g. 92:d0:c6:0a:29:33).
type: str
memory:
description:
- "Memory limit (format: C(<number>[<unit>])). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
default: '0'
memory_reservation:
description:
- "Memory soft limit (format: C(<number>[<unit>])). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
memory_swap:
description:
- "Total memory limit (memory + swap, format: C(<number>[<unit>])).
Number is a positive integer. Unit can be C(B) (byte), C(K) (kibibyte, 1024B),
C(M) (mebibyte), C(G) (gibibyte), C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
memory_swappiness:
description:
- Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- If not set, the value will remain the same if the container exists, and will be inherited from the host machine if it is (re-)created.
type: int
mounts:
version_added: "2.9"
type: list
description:
- 'Specification for mounts to be added to the container. More powerful alternative to I(volumes).'
suboptions:
target:
description:
- Path inside the container.
type: str
required: true
source:
description:
- Mount source (e.g. a volume name or a host path).
type: str
type:
description:
- The mount type.
- Note that C(npipe) is only supported by Docker for Windows.
type: str
choices:
- 'bind'
- 'volume'
- 'tmpfs'
- 'npipe'
default: volume
read_only:
description:
- 'Whether the mount should be read-only.'
type: bool
consistency:
description:
- 'The consistency requirement for the mount.'
type: str
choices:
- 'default'
- 'consistent'
- 'cached'
- 'delegated'
propagation:
description:
- Propagation mode. Only valid for the C(bind) type.
type: str
choices:
- 'private'
- 'rprivate'
- 'shared'
- 'rshared'
- 'slave'
- 'rslave'
no_copy:
description:
- Whether to disable copying data from the target into the volume when the volume is created. Only valid for the C(volume) type.
- The default value is C(false).
type: bool
labels:
description:
- User-defined name and labels for the volume. Only valid for the C(volume) type.
type: dict
volume_driver:
description:
- Specify the volume driver. Only valid for the C(volume) type.
- See L(here,https://docs.docker.com/storage/volumes/#use-a-volume-driver) for details.
type: str
volume_options:
description:
- Dictionary of options specific to the chosen volume_driver. See L(here,https://docs.docker.com/storage/volumes/#use-a-volume-driver)
for details.
type: dict
tmpfs_size:
description:
- "The size for the tmpfs mount in bytes. Format: C(<number>[<unit>])."
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)"
- "Omitting the unit defaults to bytes."
type: str
tmpfs_mode:
description:
- The permission mode for the tmpfs mount.
type: str
name:
description:
- Assign a name to a new container or match an existing container.
- When identifying an existing container, I(name) may be a name or a long or short container ID.
type: str
required: yes
network_mode:
description:
- Connect the container to a network. Choices are C(bridge), C(host), C(none) or C(container:<name|id>).
type: str
userns_mode:
description:
- Set the user namespace mode for the container. Currently, the only valid value is C(host).
type: str
version_added: "2.5"
networks:
description:
- List of networks the container belongs to.
- For examples of the data structure and usage see EXAMPLES below.
- To remove a container from one or more networks, use the C(purge_networks) option.
- Note that as opposed to C(docker run ...), M(docker_container) does not remove the default
network if C(networks) is specified. You need to explicitly use C(purge_networks) to enforce
the removal of the default network (and all other networks not explicitly mentioned in C(networks)).
type: list
suboptions:
name:
description:
- The network's name.
type: str
required: yes
ipv4_address:
description:
- The container's IPv4 address in this network.
type: str
ipv6_address:
description:
- The container's IPv6 address in this network.
type: str
links:
description:
- A list of containers to link to.
type: list
aliases:
description:
- List of aliases for this container in this network. These names
can be used in the network to reach this container.
type: list
version_added: "2.2"
networks_cli_compatible:
description:
- "When networks are provided to the module via the I(networks) option, the module
behaves differently than C(docker run --network): C(docker run --network other)
will create a container with network C(other) attached, but the default network
not attached. This module with C(networks: {name: other}) will create a container
with both C(default) and C(other) attached. If I(purge_networks) is set to C(yes),
the C(default) network will be removed afterwards."
- "If I(networks_cli_compatible) is set to C(yes), this module will behave as
C(docker run --network) and will I(not) add the default network if C(networks) is
specified. If C(networks) is not specified, the default network will be attached."
- "Note that docker CLI also sets C(network_mode) to the name of the first network
added if C(--network) is specified. For more compatibility with docker CLI, you
explicitly have to set C(network_mode) to the name of the first network you're
adding."
- Current value is C(no). A new default of C(yes) will be set in Ansible 2.12.
type: bool
version_added: "2.8"
oom_killer:
description:
- Whether or not to disable OOM Killer for the container.
type: bool
oom_score_adj:
description:
- An integer value containing the score given to the container in order to tune
OOM killer preferences.
type: int
version_added: "2.2"
output_logs:
description:
- If set to true, output of the container command will be printed (only effective
when I(log_driver) is set to C(json-file) or C(journald)).
type: bool
default: no
version_added: "2.7"
paused:
description:
- Use with the started state to pause running processes inside the container.
type: bool
default: no
pid_mode:
description:
- Set the PID namespace mode for the container.
- Note that Docker SDK for Python < 2.0 only supports 'host'. Newer versions of the
Docker SDK for Python (docker) allow all values supported by the docker daemon.
type: str
pids_limit:
description:
- Set PIDs limit for the container. Accepts an integer value.
- Set -1 for unlimited PIDs.
type: int
version_added: "2.8"
privileged:
description:
- Give extended privileges to the container.
type: bool
default: no
published_ports:
description:
- List of ports to publish from the container to the host.
- "Use docker CLI syntax: C(8000), C(9000:8000), or C(0.0.0.0:9000:8000), where 8000 is a
container port, 9000 is a host port, and 0.0.0.0 is a host interface."
- Port ranges can be used for source and destination ports. If two ranges with
different lengths are specified, the shorter range will be used.
- "Bind addresses must be either IPv4 or IPv6 addresses. Hostnames are I(not) allowed. This
is different from the C(docker) command line utility. Use the L(dig lookup,../lookup/dig.html)
to resolve hostnames."
- A value of C(all) will publish all exposed container ports to random host ports, ignoring
any other mappings.
- If C(networks) parameter is provided, will inspect each network to see if there exists
a bridge network with optional parameter com.docker.network.bridge.host_binding_ipv4.
If such a network is found, then published ports where no host IP address is specified
will be bound to the host IP pointed to by com.docker.network.bridge.host_binding_ipv4.
Note that the first bridge network with a com.docker.network.bridge.host_binding_ipv4
value encountered in the list of C(networks) is the one that will be used.
type: list
aliases:
- ports
pull:
description:
- If true, always pull the latest version of an image. Otherwise, will only pull an image
when missing.
- I(Note) that images are only pulled when specified by name. If the image is specified
as an image ID (hash), it cannot be pulled.
type: bool
default: no
purge_networks:
description:
- Remove the container from ALL networks not included in C(networks) parameter.
- Any default networks such as I(bridge), if not found in C(networks), will be removed as well.
type: bool
default: no
version_added: "2.2"
read_only:
description:
- Mount the container's root file system as read-only.
type: bool
default: no
recreate:
description:
- Use with present and started states to force the re-creation of an existing container.
type: bool
default: no
restart:
description:
- Use with started state to force a matching container to be stopped and restarted.
type: bool
default: no
restart_policy:
description:
- Container restart policy. Place quotes around the I(no) option.
type: str
choices:
- 'no'
- 'on-failure'
- 'always'
- 'unless-stopped'
restart_retries:
description:
- Use with restart policy to control maximum number of restart attempts.
type: int
runtime:
description:
- Runtime to use for the container.
type: str
version_added: "2.8"
shm_size:
description:
- "Size of C(/dev/shm) (format: C(<number>[<unit>])). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes. If you omit the size entirely, the system uses C(64M).
type: str
security_opts:
description:
- List of security options in the form of C("label:user:User")
type: list
state:
description:
- 'I(absent) - A container matching the specified name will be stopped and removed. Use force_kill to kill the container
rather than stopping it. Use keep_volumes to retain volumes associated with the removed container.'
- 'I(present) - Asserts the existence of a container matching the name and any provided configuration parameters. If no
container matches the name, a container will be created. If a container matches the name but the provided configuration
does not match, the container will be updated, if it can be. If it cannot be updated, it will be removed and re-created
with the requested config. Image version will be taken into account when comparing configuration. To ignore image
version use the ignore_image option. Use the recreate option to force the re-creation of the matching container. Use
force_kill to kill the container rather than stopping it. Use keep_volumes to retain volumes associated with a removed
container.'
- 'I(started) - Asserts there is a running container matching the name and any provided configuration. If no container
matches the name, a container will be created and started. If a container matching the name is found but the
configuration does not match, the container will be updated, if it can be. If it cannot be updated, it will be removed
and a new container will be created with the requested configuration and started. Image version will be taken into
account when comparing configuration. To ignore image version use the ignore_image option. Use recreate to always
re-create a matching container, even if it is running. Use restart to force a matching container to be stopped and
restarted. Use force_kill to kill a container rather than stopping it. Use keep_volumes to retain volumes associated
with a removed container.'
- 'I(stopped) - Asserts that the container is first I(present), and then if the container is running moves it to a stopped
state. Use force_kill to kill a container rather than stopping it.'
type: str
default: started
choices:
- absent
- present
- stopped
- started
stop_signal:
description:
- Override default signal used to stop the container.
type: str
stop_timeout:
description:
- Number of seconds to wait for the container to stop before sending SIGKILL.
When the container is created by this module, its C(StopTimeout) configuration
will be set to this value.
- When the container is stopped, will be used as a timeout for stopping the
container. In case the container has a custom C(StopTimeout) configuration,
the behavior depends on the version of the docker daemon. New versions of
the docker daemon will always use the container's configured C(StopTimeout)
value if it has been configured.
type: int
trust_image_content:
description:
- If C(yes), skip image verification.
type: bool
default: no
tmpfs:
description:
- Mount a tmpfs directory.
type: list
version_added: 2.4
tty:
description:
- Allocate a pseudo-TTY.
type: bool
default: no
ulimits:
description:
- "List of ulimit options. A ulimit is specified as C(nofile:262144:262144)"
type: list
sysctls:
description:
- Dictionary of key-value pairs.
type: dict
version_added: 2.4
user:
description:
- Sets the username or UID used and optionally the groupname or GID for the specified command.
- "Can be [ user | user:group | uid | uid:gid | user:gid | uid:group ]"
type: str
uts:
description:
- Set the UTS namespace mode for the container.
type: str
volumes:
description:
- List of volumes to mount within the container.
- "Use docker CLI-style syntax: C(/host:/container[:mode])"
- "Mount modes can be a comma-separated list of various modes such as C(ro), C(rw), C(consistent),
C(delegated), C(cached), C(rprivate), C(private), C(rshared), C(shared), C(rslave), C(slave), and
C(nocopy). Note that the docker daemon might not support all modes and combinations of such modes."
- SELinux hosts can additionally use C(z) or C(Z) to use a shared or
private label for the volume.
- "Note that Ansible 2.7 and earlier only supported one mode, which had to be one of C(ro), C(rw),
C(z), and C(Z)."
type: list
volume_driver:
description:
- The container volume driver.
type: str
volumes_from:
description:
- List of container names or IDs to get volumes from.
type: list
working_dir:
description:
- Path to the working directory.
type: str
version_added: "2.4"
extends_documentation_fragment:
- docker
- docker.docker_py_1_documentation
author:
- "Cove Schneider (@cove)"
- "Joshua Conner (@joshuaconner)"
- "Pavel Antonov (@softzilla)"
- "Thomas Steinbach (@ThomasSteinbach)"
- "Philippe Jandot (@zfil)"
- "Daan Oosterveld (@dusdanig)"
- "Chris Houseknecht (@chouseknecht)"
- "Kassian Sun (@kassiansun)"
- "Felix Fontein (@felixfontein)"
requirements:
- "L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) >= 1.8.0 (use L(docker-py,https://pypi.org/project/docker-py/) for Python 2.6)"
- "Docker API >= 1.20"
'''
EXAMPLES = '''
- name: Create a data container
docker_container:
name: mydata
image: busybox
volumes:
- /data
- name: Re-create a redis container
docker_container:
name: myredis
image: redis
command: redis-server --appendonly yes
state: present
recreate: yes
exposed_ports:
- 6379
volumes_from:
- mydata
- name: Restart a container
docker_container:
name: myapplication
image: someuser/appimage
state: started
restart: yes
links:
- "myredis:aliasedredis"
devices:
- "/dev/sda:/dev/xvda:rwm"
ports:
- "8080:9000"
- "127.0.0.1:8081:9001/udp"
env:
SECRET_KEY: "ssssh"
# Values which might be parsed as numbers, booleans or other types by the YAML parser need to be quoted
BOOLEAN_KEY: "yes"
- name: Container present
docker_container:
name: mycontainer
state: present
image: ubuntu:14.04
command: sleep infinity
- name: Stop a container
docker_container:
name: mycontainer
state: stopped
- name: Start 4 load-balanced containers
docker_container:
name: "container{{ item }}"
recreate: yes
image: someuser/anotherappimage
command: sleep 1d
with_sequence: count=4
- name: Remove container
docker_container:
name: ohno
state: absent
- name: Syslogging output
docker_container:
name: myservice
image: busybox
log_driver: syslog
log_options:
syslog-address: tcp://my-syslog-server:514
syslog-facility: daemon
# NOTE: in Docker 1.13+ the "syslog-tag" option was renamed to "tag".
# For older Docker installs, use "syslog-tag" instead.
tag: myservice
- name: Create db container and connect to network
docker_container:
name: db_test
image: "postgres:latest"
networks:
- name: "{{ docker_network_name }}"
- name: Start container, connect to network and link
docker_container:
name: sleeper
image: ubuntu:14.04
networks:
- name: TestingNet
ipv4_address: "172.1.1.100"
aliases:
- sleepyzz
links:
- db_test:db
- name: TestingNet2
- name: Start a container with a command
docker_container:
name: sleepy
image: ubuntu:14.04
command: ["sleep", "infinity"]
- name: Add container to networks
docker_container:
name: sleepy
networks:
- name: TestingNet
ipv4_address: 172.1.1.18
links:
- sleeper
- name: TestingNet2
ipv4_address: 172.1.10.20
- name: Update network with aliases
docker_container:
name: sleepy
networks:
- name: TestingNet
aliases:
- sleepyz
- zzzz
- name: Remove container from one network
docker_container:
name: sleepy
networks:
- name: TestingNet2
purge_networks: yes
- name: Remove container from all networks
docker_container:
name: sleepy
purge_networks: yes
- name: Start a container and use an env file
docker_container:
name: agent
image: jenkinsci/ssh-slave
env_file: /var/tmp/jenkins/agent.env
- name: Create a container with limited capabilities
docker_container:
name: sleepy
image: ubuntu:16.04
command: sleep infinity
capabilities:
- sys_time
cap_drop:
- all
- name: Finer container restart/update control
docker_container:
name: test
image: ubuntu:18.04
env:
arg1: "true"
arg2: "whatever"
volumes:
- /tmp:/tmp
comparisons:
image: ignore # don't restart containers with older versions of the image
env: strict # we want precisely this environment
volumes: allow_more_present # if there are more volumes, that's ok, as long as `/tmp:/tmp` is there
- name: Finer container restart/update control II
docker_container:
name: test
image: ubuntu:18.04
env:
arg1: "true"
arg2: "whatever"
comparisons:
'*': ignore # by default, ignore *all* options (including image)
env: strict # except for environment variables; there, we want to be strict
- name: Start container with healthstatus
docker_container:
name: nginx-proxy
image: nginx:1.13
state: started
healthcheck:
# Check if nginx server is healthy by curl'ing the server.
# If this fails or times out, the healthcheck fails.
test: ["CMD", "curl", "--fail", "http://nginx.host.com"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 30s
- name: Remove healthcheck from container
docker_container:
name: nginx-proxy
image: nginx:1.13
state: started
healthcheck:
# The "NONE" check needs to be specified
test: ["NONE"]
- name: Start container with block device read limit
docker_container:
name: test
image: ubuntu:18.04
state: started
device_read_bps:
# Limit read rate for /dev/sda to 20 mebibytes per second
- path: /dev/sda
rate: 20M
device_read_iops:
# Limit read rate for /dev/sdb to 300 IO per second
- path: /dev/sdb
rate: 300
'''
RETURN = '''
container:
description:
- Facts representing the current state of the container. Matches the docker inspection output.
- Note that facts are part of the registered vars since Ansible 2.8. For compatibility reasons, the facts
are also accessible directly as C(docker_container). Note that the returned fact will be removed in Ansible 2.12.
- Before 2.3 this was C(ansible_docker_container) but was renamed in 2.3 to C(docker_container) due to
conflicts with the connection plugin.
- Empty if C(state) is I(absent).
- If I(detached) is C(false), will include C(Output) attribute containing any output from the container run.
returned: always
type: dict
sample: '{
"AppArmorProfile": "",
"Args": [],
"Config": {
"AttachStderr": false,
"AttachStdin": false,
"AttachStdout": false,
"Cmd": [
"/usr/bin/supervisord"
],
"Domainname": "",
"Entrypoint": null,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"ExposedPorts": {
"443/tcp": {},
"80/tcp": {}
},
"Hostname": "8e47bf643eb9",
"Image": "lnmp_nginx:v1",
"Labels": {},
"OnBuild": null,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": {
"/tmp/lnmp/nginx-sites/logs/": {}
},
...
}'
'''
import os
import re
import shlex
import traceback
from distutils.version import LooseVersion
from ansible.module_utils.common.text.formatters import human_to_bytes
from ansible.module_utils.docker.common import (
AnsibleDockerClient,
DifferenceTracker,
DockerBaseClass,
compare_generic,
is_image_name_id,
sanitize_result,
clean_dict_booleans_for_docker_api,
omit_none_from_dict,
parse_healthcheck,
DOCKER_COMMON_ARGS,
RequestException,
)
from ansible.module_utils.six import string_types
try:
from docker import utils
from ansible.module_utils.docker.common import docker_version
if LooseVersion(docker_version) >= LooseVersion('1.10.0'):
from docker.types import Ulimit, LogConfig
from docker import types as docker_types
else:
from docker.utils.types import Ulimit, LogConfig
from docker.errors import DockerException, APIError, NotFound
except Exception:
# missing Docker SDK for Python handled in ansible.module_utils.docker.common
pass
REQUIRES_CONVERSION_TO_BYTES = [
'kernel_memory',
'memory',
'memory_reservation',
'memory_swap',
'shm_size'
]
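The options listed above all take Docker's C(<number>[<unit>]) size format; the module converts them with ansible.module_utils.common.text.formatters.human_to_bytes. A minimal standalone sketch of that conversion (hypothetical simplified version; the real helper also accepts suffix spellings like C(MB) and does stricter validation):

```python
# Simplified sketch of the size conversion applied to the options above.
UNITS = {'B': 1, 'K': 1 << 10, 'M': 1 << 20, 'G': 1 << 30, 'T': 1 << 40, 'P': 1 << 50}

def to_bytes_sketch(value):
    value = str(value).strip()
    if value and value[-1].upper() in UNITS:
        return int(value[:-1]) * UNITS[value[-1].upper()]
    # Omitting the unit defaults to bytes, as documented above.
    return int(value)

print(to_bytes_sketch('4M'))   # 4194304
print(to_bytes_sketch('512'))  # 512
```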
def is_volume_permissions(mode):
for part in mode.split(','):
if part not in ('rw', 'ro', 'z', 'Z', 'consistent', 'delegated', 'cached', 'rprivate', 'private', 'rshared', 'shared', 'rslave', 'slave', 'nocopy'):
return False
return True
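For reference, the validator above accepts any comma-separated combination of the documented mount modes; anything else (for example, a path) is treated as not being a mode string. An equivalent standalone form:

```python
# Standalone equivalent of is_volume_permissions above.
VALID_MODES = ('rw', 'ro', 'z', 'Z', 'consistent', 'delegated', 'cached',
               'rprivate', 'private', 'rshared', 'shared', 'rslave', 'slave', 'nocopy')

def is_volume_permissions(mode):
    # Every comma-separated part must be a recognized Docker mount mode.
    return all(part in VALID_MODES for part in mode.split(','))

print(is_volume_permissions('ro,z'))      # True
print(is_volume_permissions('/srv/www'))  # False -> a path, not a mode string
```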
def parse_port_range(range_or_port, client):
'''
Parses a string containing either a single port or a range of ports.
Returns a list of integers covering each port in the range.
'''
if '-' in range_or_port:
try:
start, end = [int(port) for port in range_or_port.split('-')]
except Exception:
client.fail('Invalid port range: "{0}"'.format(range_or_port))
if end < start:
client.fail('Invalid port range: "{0}"'.format(range_or_port))
return list(range(start, end + 1))
else:
try:
return [int(range_or_port)]
except Exception:
client.fail('Invalid port: "{0}"'.format(range_or_port))
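A standalone sketch of the same parsing, with the module's C(client.fail) swapped for a plain ValueError so it can run outside Ansible:

```python
def parse_port_range(range_or_port):
    # 'A-B' expands to every port in the inclusive range; a bare port
    # becomes a one-element list.
    if '-' in range_or_port:
        try:
            start, end = [int(port) for port in range_or_port.split('-')]
        except Exception:
            raise ValueError('Invalid port range: "{0}"'.format(range_or_port))
        if end < start:
            raise ValueError('Invalid port range: "{0}"'.format(range_or_port))
        return list(range(start, end + 1))
    try:
        return [int(range_or_port)]
    except Exception:
        raise ValueError('Invalid port: "{0}"'.format(range_or_port))

print(parse_port_range('8000'))       # [8000]
print(parse_port_range('8000-8002'))  # [8000, 8001, 8002]
```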
def split_colon_ipv6(text, client):
'''
Split string by ':', while keeping IPv6 addresses in square brackets in one component.
'''
if '[' not in text:
return text.split(':')
start = 0
result = []
while start < len(text):
i = text.find('[', start)
if i < 0:
result.extend(text[start:].split(':'))
break
j = text.find(']', i)
if j < 0:
client.fail('Cannot find closing "]" in input "{0}" for opening "[" at index {1}!'.format(text, i + 1))
result.extend(text[start:i].split(':'))
k = text.find(':', j)
if k < 0:
result[-1] += text[i:]
start = len(text)
else:
result[-1] += text[i:k]
if k == len(text):
result.append('')
break
start = k + 1
return result
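The splitter above is what lets published-port strings carry bracketed IPv6 bind addresses. A standalone sketch (again with C(client.fail) replaced by ValueError):

```python
def split_colon_ipv6(text):
    # Split on ':' while keeping bracketed IPv6 addresses in one component.
    if '[' not in text:
        return text.split(':')
    start = 0
    result = []
    while start < len(text):
        i = text.find('[', start)
        if i < 0:
            result.extend(text[start:].split(':'))
            break
        j = text.find(']', i)
        if j < 0:
            raise ValueError('Cannot find closing "]" in "{0}"'.format(text))
        result.extend(text[start:i].split(':'))
        k = text.find(':', j)
        if k < 0:
            result[-1] += text[i:]
            start = len(text)
        else:
            result[-1] += text[i:k]
            start = k + 1
    return result

print(split_colon_ipv6('[::1]:8080:80'))      # ['[::1]', '8080', '80']
print(split_colon_ipv6('0.0.0.0:9000:8000'))  # ['0.0.0.0', '9000', '8000']
```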
class TaskParameters(DockerBaseClass):
'''
Access and parse module parameters
'''
def __init__(self, client):
super(TaskParameters, self).__init__()
self.client = client
self.auto_remove = None
self.blkio_weight = None
self.capabilities = None
self.cap_drop = None
self.cleanup = None
self.command = None
self.cpu_period = None
self.cpu_quota = None
self.cpuset_cpus = None
self.cpuset_mems = None
self.cpu_shares = None
self.detach = None
self.debug = None
self.devices = None
self.device_read_bps = None
self.device_write_bps = None
self.device_read_iops = None
self.device_write_iops = None
self.dns_servers = None
self.dns_opts = None
self.dns_search_domains = None
self.domainname = None
self.env = None
self.env_file = None
self.entrypoint = None
self.etc_hosts = None
self.exposed_ports = None
self.force_kill = None
self.groups = None
self.healthcheck = None
self.hostname = None
self.ignore_image = None
self.image = None
self.init = None
self.interactive = None
self.ipc_mode = None
self.keep_volumes = None
self.kernel_memory = None
self.kill_signal = None
self.labels = None
self.links = None
self.log_driver = None
self.output_logs = None
self.log_options = None
self.mac_address = None
self.memory = None
self.memory_reservation = None
self.memory_swap = None
self.memory_swappiness = None
self.mounts = None
self.name = None
self.network_mode = None
self.userns_mode = None
self.networks = None
self.networks_cli_compatible = None
self.oom_killer = None
self.oom_score_adj = None
self.paused = None
self.pid_mode = None
self.pids_limit = None
self.privileged = None
self.purge_networks = None
self.pull = None
self.read_only = None
self.recreate = None
self.restart = None
self.restart_retries = None
self.restart_policy = None
self.runtime = None
self.shm_size = None
self.security_opts = None
self.state = None
self.stop_signal = None
self.stop_timeout = None
self.tmpfs = None
self.trust_image_content = None
self.tty = None
self.user = None
self.uts = None
self.volumes = None
self.volume_binds = dict()
self.volumes_from = None
self.volume_driver = None
self.working_dir = None
for key, value in client.module.params.items():
setattr(self, key, value)
self.comparisons = client.comparisons
# If state is 'absent', parameters do not have to be parsed or interpreted.
# Only the container's name is needed.
if self.state == 'absent':
return
if self.groups:
# In case integers are passed as groups, we need to convert them to
# strings as docker internally treats them as strings.
self.groups = [str(g) for g in self.groups]
for param_name in REQUIRES_CONVERSION_TO_BYTES:
if client.module.params.get(param_name):
try:
setattr(self, param_name, human_to_bytes(client.module.params.get(param_name)))
except ValueError as exc:
self.fail("Failed to convert %s to bytes: %s" % (param_name, exc))
self.publish_all_ports = False
self.published_ports = self._parse_publish_ports()
if self.published_ports in ('all', 'ALL'):
self.publish_all_ports = True
self.published_ports = None
self.ports = self._parse_exposed_ports(self.published_ports)
self.log("expose ports:")
self.log(self.ports, pretty_print=True)
self.links = self._parse_links(self.links)
if self.volumes:
self.volumes = self._expand_host_paths()
self.tmpfs = self._parse_tmpfs()
self.env = self._get_environment()
self.ulimits = self._parse_ulimits()
self.sysctls = self._parse_sysctls()
self.log_config = self._parse_log_config()
try:
self.healthcheck, self.disable_healthcheck = parse_healthcheck(self.healthcheck)
except ValueError as e:
self.fail(str(e))
self.exp_links = None
self.volume_binds = self._get_volume_binds(self.volumes)
self.pid_mode = self._replace_container_names(self.pid_mode)
self.ipc_mode = self._replace_container_names(self.ipc_mode)
self.network_mode = self._replace_container_names(self.network_mode)
self.log("volumes:")
self.log(self.volumes, pretty_print=True)
self.log("volume binds:")
self.log(self.volume_binds, pretty_print=True)
if self.networks:
for network in self.networks:
network['id'] = self._get_network_id(network['name'])
if not network['id']:
self.fail("Parameter error: network named %s could not be found. Does it exist?" % network['name'])
if network.get('links'):
network['links'] = self._parse_links(network['links'])
if self.mac_address:
# Ensure the MAC address uses colons instead of hyphens for later comparison
self.mac_address = self.mac_address.replace('-', ':')
if self.entrypoint:
# convert from list to str.
self.entrypoint = ' '.join([str(x) for x in self.entrypoint])
if self.command:
# convert from list to str
if isinstance(self.command, list):
self.command = ' '.join([str(x) for x in self.command])
self.mounts_opt, self.expected_mounts = self._process_mounts()
self._check_mount_target_collisions()
for param_name in ["device_read_bps", "device_write_bps"]:
if client.module.params.get(param_name):
self._process_rate_bps(option=param_name)
for param_name in ["device_read_iops", "device_write_iops"]:
if client.module.params.get(param_name):
self._process_rate_iops(option=param_name)
def fail(self, msg):
self.client.fail(msg)
@property
def update_parameters(self):
'''
Returns parameters used to update a container
'''
update_parameters = dict(
blkio_weight='blkio_weight',
cpu_period='cpu_period',
cpu_quota='cpu_quota',
cpu_shares='cpu_shares',
cpuset_cpus='cpuset_cpus',
cpuset_mems='cpuset_mems',
mem_limit='memory',
mem_reservation='memory_reservation',
memswap_limit='memory_swap',
kernel_memory='kernel_memory',
)
result = dict()
for key, value in update_parameters.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
result[key] = getattr(self, value)
return result
@property
def create_parameters(self):
'''
Returns parameters used to create a container
'''
create_params = dict(
command='command',
domainname='domainname',
hostname='hostname',
user='user',
detach='detach',
stdin_open='interactive',
tty='tty',
ports='ports',
environment='env',
name='name',
entrypoint='entrypoint',
mac_address='mac_address',
labels='labels',
stop_signal='stop_signal',
working_dir='working_dir',
stop_timeout='stop_timeout',
healthcheck='healthcheck',
)
if self.client.docker_py_version < LooseVersion('3.0'):
# cpu_shares and volume_driver moved to create_host_config in > 3
create_params['cpu_shares'] = 'cpu_shares'
create_params['volume_driver'] = 'volume_driver'
result = dict(
host_config=self._host_config(),
volumes=self._get_mounts(),
)
for key, value in create_params.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
result[key] = getattr(self, value)
if self.networks_cli_compatible and self.networks:
network = self.networks[0]
params = dict()
for para in ('ipv4_address', 'ipv6_address', 'links', 'aliases'):
if network.get(para):
params[para] = network[para]
network_config = dict()
network_config[network['name']] = self.client.create_endpoint_config(**params)
result['networking_config'] = self.client.create_networking_config(network_config)
return result
def _expand_host_paths(self):
new_vols = []
for vol in self.volumes:
if ':' in vol:
if len(vol.split(':')) == 3:
host, container, mode = vol.split(':')
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if re.match(r'[.~]', host):
host = os.path.abspath(os.path.expanduser(host))
new_vols.append("%s:%s:%s" % (host, container, mode))
continue
elif len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]) and re.match(r'[.~]', parts[0]):
host = os.path.abspath(os.path.expanduser(parts[0]))
new_vols.append("%s:%s:rw" % (host, parts[1]))
continue
new_vols.append(vol)
return new_vols
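A standalone sketch of the expansion above, dropping the class plumbing and the invalid-mode failure path: relative (C(.)) or home-relative (C(~)) host paths become absolute, and a two-part host mapping gains the default C(rw) mode.

```python
import os
import re

VALID_MODES = ('rw', 'ro', 'z', 'Z', 'consistent', 'delegated', 'cached',
               'rprivate', 'private', 'rshared', 'shared', 'rslave', 'slave', 'nocopy')

def is_volume_permissions(mode):
    return all(part in VALID_MODES for part in mode.split(','))

def expand_host_paths(volumes):
    # Sketch of _expand_host_paths: absolutize '.'/'~' host paths in
    # 'host:container[:mode]' entries; leave named volumes untouched.
    new_vols = []
    for vol in volumes:
        parts = vol.split(':')
        if len(parts) == 3:
            host, container, mode = parts
            if re.match(r'[.~]', host):
                host = os.path.abspath(os.path.expanduser(host))
            new_vols.append("%s:%s:%s" % (host, container, mode))
        elif len(parts) == 2 and not is_volume_permissions(parts[1]) and re.match(r'[.~]', parts[0]):
            host = os.path.abspath(os.path.expanduser(parts[0]))
            new_vols.append("%s:%s:rw" % (host, parts[1]))
        else:
            new_vols.append(vol)
    return new_vols

print(expand_host_paths(['/data', 'myvol:/data:ro']))
```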
def _get_mounts(self):
'''
Return a list of container mounts.
:return:
'''
result = []
if self.volumes:
for vol in self.volumes:
if ':' in vol:
if len(vol.split(':')) == 3:
dummy, container, dummy = vol.split(':')
result.append(container)
continue
if len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]):
result.append(parts[1])
continue
result.append(vol)
self.log("mounts:")
self.log(result, pretty_print=True)
return result
def _host_config(self):
'''
Returns parameters used to create a HostConfig object
'''
host_config_params = dict(
port_bindings='published_ports',
publish_all_ports='publish_all_ports',
links='links',
privileged='privileged',
dns='dns_servers',
dns_opt='dns_opts',
dns_search='dns_search_domains',
binds='volume_binds',
volumes_from='volumes_from',
network_mode='network_mode',
userns_mode='userns_mode',
cap_add='capabilities',
cap_drop='cap_drop',
extra_hosts='etc_hosts',
read_only='read_only',
ipc_mode='ipc_mode',
security_opt='security_opts',
ulimits='ulimits',
sysctls='sysctls',
log_config='log_config',
mem_limit='memory',
memswap_limit='memory_swap',
mem_swappiness='memory_swappiness',
oom_score_adj='oom_score_adj',
oom_kill_disable='oom_killer',
shm_size='shm_size',
group_add='groups',
devices='devices',
pid_mode='pid_mode',
tmpfs='tmpfs',
init='init',
uts_mode='uts',
runtime='runtime',
auto_remove='auto_remove',
device_read_bps='device_read_bps',
device_write_bps='device_write_bps',
device_read_iops='device_read_iops',
device_write_iops='device_write_iops',
pids_limit='pids_limit',
mounts='mounts',
)
if self.client.docker_py_version >= LooseVersion('1.9') and self.client.docker_api_version >= LooseVersion('1.22'):
# blkio_weight can always be updated, but can only be set on creation
# when Docker SDK for Python and Docker API are new enough
host_config_params['blkio_weight'] = 'blkio_weight'
if self.client.docker_py_version >= LooseVersion('3.0'):
# cpu_shares and volume_driver moved to create_host_config in > 3
host_config_params['cpu_shares'] = 'cpu_shares'
host_config_params['volume_driver'] = 'volume_driver'
params = dict()
for key, value in host_config_params.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
params[key] = getattr(self, value)
if self.restart_policy:
params['restart_policy'] = dict(Name=self.restart_policy,
MaximumRetryCount=self.restart_retries)
if 'mounts' in params:
params['mounts'] = self.mounts_opt
return self.client.create_host_config(**params)
@property
def default_host_ip(self):
ip = '0.0.0.0'
if not self.networks:
return ip
for net in self.networks:
if net.get('name'):
try:
network = self.client.inspect_network(net['name'])
if network.get('Driver') == 'bridge' and \
network.get('Options', {}).get('com.docker.network.bridge.host_binding_ipv4'):
ip = network['Options']['com.docker.network.bridge.host_binding_ipv4']
break
except NotFound as nfe:
self.client.fail(
"Cannot inspect the network '{0}' to determine the default IP: {1}".format(net['name'], nfe),
exception=traceback.format_exc()
)
return ip
def _parse_publish_ports(self):
'''
Parse ports from docker CLI syntax
'''
if self.published_ports is None:
return None
if 'all' in self.published_ports:
return 'all'
default_ip = self.default_host_ip
binds = {}
for port in self.published_ports:
parts = split_colon_ipv6(str(port), self.client)
container_port = parts[-1]
protocol = ''
if '/' in container_port:
container_port, protocol = parts[-1].split('/')
container_ports = parse_port_range(container_port, self.client)
p_len = len(parts)
if p_len == 1:
port_binds = len(container_ports) * [(default_ip,)]
elif p_len == 2:
port_binds = [(default_ip, port) for port in parse_port_range(parts[0], self.client)]
elif p_len == 3:
# We only allow IPv4 and IPv6 addresses for the bind address
ipaddr = parts[0]
if not re.match(r'^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$', parts[0]) and not re.match(r'^\[[0-9a-fA-F:]+\]$', ipaddr):
self.fail(('Bind addresses for published ports must be IPv4 or IPv6 addresses, not hostnames. '
'Use the dig lookup to resolve hostnames. (Found hostname: {0})').format(ipaddr))
if re.match(r'^\[[0-9a-fA-F:]+\]$', ipaddr):
ipaddr = ipaddr[1:-1]
if parts[1]:
port_binds = [(ipaddr, port) for port in parse_port_range(parts[1], self.client)]
else:
port_binds = len(container_ports) * [(ipaddr,)]
for bind, container_port in zip(port_binds, container_ports):
idx = '{0}/{1}'.format(container_port, protocol) if protocol else container_port
if idx in binds:
old_bind = binds[idx]
if isinstance(old_bind, list):
old_bind.append(bind)
else:
binds[idx] = [old_bind, bind]
else:
binds[idx] = bind
return binds
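The core of this CLI-style syntax can be sketched for a single `[ip:][host_port:]container_port[/proto]` entry. This simplified version deliberately skips port ranges, bracketed IPv6 addresses, and the duplicate-key merging that the method above performs:

```python
def parse_publish(port_spec, default_ip='0.0.0.0'):
    # Returns (key, bind) suitable for a port_bindings-style dict.
    parts = str(port_spec).split(':')
    container = parts[-1]
    proto = ''
    if '/' in container:
        container, proto = container.split('/')
    # The key carries the protocol only when one was given explicitly.
    key = '%s/%s' % (container, proto) if proto else container
    if len(parts) == 1:
        bind = (default_ip,)                      # "80" -> ephemeral host port
    elif len(parts) == 2:
        bind = (default_ip, int(parts[0]))        # "8080:80"
    else:
        # "ip:host:container"; an empty host port means ephemeral on that ip.
        bind = (parts[0], int(parts[1])) if parts[1] else (parts[0],)
    return key, bind
```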
def _get_volume_binds(self, volumes):
'''
Extract host bindings, if any, from list of volume mapping strings.
:return: dictionary of bind mappings
'''
result = dict()
if volumes:
for vol in volumes:
host = None
if ':' in vol:
parts = vol.split(':')
if len(parts) == 3:
host, container, mode = parts
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
elif len(parts) == 2:
if not is_volume_permissions(parts[1]):
host, container, mode = (vol.split(':') + ['rw'])
if host is not None:
result[host] = dict(
bind=container,
mode=mode
)
return result
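The bind-extraction rule can be shown standalone. As above, the real method uses `is_volume_permissions` for the mode check; the `rw`/`ro` pair here is a simplifying assumption:

```python
def get_volume_binds(volumes):
    # Only "host:container[:mode]" entries produce a bind mapping;
    # mode defaults to "rw".  Anonymous volumes are skipped.
    result = {}
    for vol in volumes or []:
        parts = vol.split(':')
        if len(parts) == 3:
            host, container, mode = parts
        elif len(parts) == 2 and parts[1] not in ('rw', 'ro'):
            host, container, mode = parts[0], parts[1], 'rw'
        else:
            continue
        result[host] = {'bind': container, 'mode': mode}
    return result
```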
def _parse_exposed_ports(self, published_ports):
'''
Parse exposed ports from docker CLI-style ports syntax.
'''
exposed = []
if self.exposed_ports:
for port in self.exposed_ports:
port = str(port).strip()
protocol = 'tcp'
match = re.search(r'(/.+$)', port)
if match:
protocol = match.group(1).replace('/', '')
port = re.sub(r'/.+$', '', port)
exposed.append((port, protocol))
if published_ports:
# Any published port should also be exposed
for publish_port in published_ports:
match = False
if isinstance(publish_port, string_types) and '/' in publish_port:
port, protocol = publish_port.split('/')
port = int(port)
else:
protocol = 'tcp'
port = int(publish_port)
for exposed_port in exposed:
if exposed_port[1] != protocol:
continue
if isinstance(exposed_port[0], string_types) and '-' in exposed_port[0]:
start_port, end_port = exposed_port[0].split('-')
if int(start_port) <= port <= int(end_port):
match = True
elif exposed_port[0] == port:
match = True
if not match:
exposed.append((port, protocol))
return exposed
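The published-vs-exposed matching rule used in the loop above — where an exposed entry may be a single port or a `"low-high"` range string — can be isolated into a small predicate for illustration:

```python
def port_matches(exposed, port, protocol='tcp'):
    # exposed is a (port, proto) tuple as built by the parser above;
    # port may be an int or a "low-high" range string.
    exp_port, exp_proto = exposed
    if exp_proto != protocol:
        return False
    if isinstance(exp_port, str) and '-' in exp_port:
        low, high = exp_port.split('-')
        return int(low) <= port <= int(high)
    return exp_port == port
```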
@staticmethod
def _parse_links(links):
'''
Turn links into a dictionary
'''
if links is None:
return None
result = []
for link in links:
parsed_link = link.split(':', 1)
if len(parsed_link) == 2:
result.append((parsed_link[0], parsed_link[1]))
else:
result.append((parsed_link[0], parsed_link[0]))
return result
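The same parsing, written as a standalone function: `"name:alias"` becomes `(name, alias)`, and a bare `"name"` is aliased to itself.

```python
def parse_links(links):
    if links is None:
        return None
    result = []
    for link in links:
        parts = link.split(':', 1)
        # A link without an explicit alias links to its own name.
        result.append((parts[0], parts[1] if len(parts) == 2 else parts[0]))
    return result
```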
def _parse_ulimits(self):
'''
Turn ulimits into an array of Ulimit objects
'''
if self.ulimits is None:
return None
results = []
for limit in self.ulimits:
limits = dict()
pieces = limit.split(':')
if len(pieces) >= 2:
limits['name'] = pieces[0]
limits['soft'] = int(pieces[1])
limits['hard'] = int(pieces[1])
if len(pieces) == 3:
limits['hard'] = int(pieces[2])
try:
results.append(Ulimit(**limits))
except ValueError as exc:
self.fail("Error parsing ulimits value %s - %s" % (limit, exc))
return results
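The `"name:soft[:hard]"` parsing can be sketched without the `Ulimit` wrapper; the hard limit defaults to the soft limit when only two fields are given:

```python
def parse_ulimit(limit):
    pieces = limit.split(':')
    name, soft = pieces[0], int(pieces[1])
    # Hard limit falls back to the soft limit if omitted.
    hard = int(pieces[2]) if len(pieces) == 3 else soft
    return {'name': name, 'soft': soft, 'hard': hard}
```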
def _parse_sysctls(self):
'''
Return the sysctls option as a plain hash; values pass through unchanged
'''
return self.sysctls
def _parse_log_config(self):
'''
Create a LogConfig object
'''
if self.log_driver is None:
return None
options = dict(
Type=self.log_driver,
Config=dict()
)
if self.log_options is not None:
options['Config'] = dict()
for k, v in self.log_options.items():
if not isinstance(v, string_types):
self.client.module.warn(
"Non-string value found for log_options option '%s'. The value is automatically converted to '%s'. "
"If this is not correct, or you want to avoid such warnings, please quote the value." % (k, str(v))
)
v = str(v)
self.log_options[k] = v
options['Config'][k] = v
try:
return LogConfig(**options)
except ValueError as exc:
self.fail('Error parsing logging options - %s' % (exc))
def _parse_tmpfs(self):
'''
Turn tmpfs entries into a hash mapping each path to its options string
'''
result = dict()
if self.tmpfs is None:
return result
for tmpfs_spec in self.tmpfs:
split_spec = tmpfs_spec.split(":", 1)
if len(split_spec) > 1:
result[split_spec[0]] = split_spec[1]
else:
result[split_spec[0]] = ""
return result
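Standalone version of the same transformation: each `"path[:options]"` entry becomes a `{path: options}` mapping, with `""` standing in for missing options.

```python
def parse_tmpfs(specs):
    result = {}
    for spec in specs or []:
        parts = spec.split(':', 1)
        result[parts[0]] = parts[1] if len(parts) > 1 else ''
    return result
```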
def _get_environment(self):
"""
If environment file is combined with explicit environment variables, the explicit environment variables
take precedence.
"""
final_env = {}
if self.env_file:
parsed_env_file = utils.parse_env_file(self.env_file)
for name, value in parsed_env_file.items():
final_env[name] = str(value)
if self.env:
for name, value in self.env.items():
if not isinstance(value, string_types):
self.fail("Non-string value found for env option. Ambiguous env options must be "
"wrapped in quotes to avoid them being interpreted. Key: %s" % (name, ))
final_env[name] = str(value)
return final_env
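The precedence rule documented above can be reduced to a small merge helper: env-file variables are applied first, explicit `env` entries override them, and every value is coerced to a string as the Docker API expects.

```python
def merge_env(env_file_vars, env_vars):
    final_env = {}
    for name, value in (env_file_vars or {}).items():
        final_env[name] = str(value)
    for name, value in (env_vars or {}).items():
        # Explicit environment variables win over the env file.
        final_env[name] = str(value)
    return final_env
```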
def _get_network_id(self, network_name):
network_id = None
try:
for network in self.client.networks(names=[network_name]):
if network['Name'] == network_name:
network_id = network['Id']
break
except Exception as exc:
self.fail("Error getting network id for %s - %s" % (network_name, str(exc)))
return network_id
def _process_mounts(self):
if self.mounts is None:
return None, None
mounts_list = []
mounts_expected = []
for mount in self.mounts:
target = mount['target']
datatype = mount['type']
mount_dict = dict(mount)
# Sanity checks (so we don't wait for docker-py to barf on input)
if mount_dict.get('source') is None and datatype != 'tmpfs':
self.client.fail('source must be specified for mount "{0}" of type "{1}"'.format(target, datatype))
mount_option_types = dict(
volume_driver='volume',
volume_options='volume',
propagation='bind',
no_copy='volume',
labels='volume',
tmpfs_size='tmpfs',
tmpfs_mode='tmpfs',
)
for option, req_datatype in mount_option_types.items():
if mount_dict.get(option) is not None and datatype != req_datatype:
self.client.fail('{0} cannot be specified for mount "{1}" of type "{2}" (needs type "{3}")'.format(option, target, datatype, req_datatype))
# Handle volume_driver and volume_options
volume_driver = mount_dict.pop('volume_driver')
volume_options = mount_dict.pop('volume_options')
if volume_driver:
if volume_options:
volume_options = clean_dict_booleans_for_docker_api(volume_options)
mount_dict['driver_config'] = docker_types.DriverConfig(name=volume_driver, options=volume_options)
if mount_dict['labels']:
mount_dict['labels'] = clean_dict_booleans_for_docker_api(mount_dict['labels'])
if mount_dict.get('tmpfs_size') is not None:
try:
mount_dict['tmpfs_size'] = human_to_bytes(mount_dict['tmpfs_size'])
except ValueError as exc:
self.fail('Failed to convert tmpfs_size of mount "{0}" to bytes: {1}'.format(target, exc))
if mount_dict.get('tmpfs_mode') is not None:
try:
mount_dict['tmpfs_mode'] = int(mount_dict['tmpfs_mode'], 8)
except Exception as dummy:
self.client.fail('tmpfs_mode of mount "{0}" is not an octal string!'.format(target))
# Fill expected mount dict
mount_expected = dict(mount)
mount_expected['tmpfs_size'] = mount_dict['tmpfs_size']
mount_expected['tmpfs_mode'] = mount_dict['tmpfs_mode']
# Add result to lists
mounts_list.append(docker_types.Mount(**mount_dict))
mounts_expected.append(omit_none_from_dict(mount_expected))
return mounts_list, mounts_expected
def _process_rate_bps(self, option):
"""
Format device_read_bps and device_write_bps option
"""
devices_list = []
for v in getattr(self, option):
device_dict = dict((x.title(), y) for x, y in v.items())
device_dict['Rate'] = human_to_bytes(device_dict['Rate'])
devices_list.append(device_dict)
setattr(self, option, devices_list)
def _process_rate_iops(self, option):
"""
Format device_read_iops and device_write_iops option
"""
devices_list = []
for v in getattr(self, option):
device_dict = dict((x.title(), y) for x, y in v.items())
devices_list.append(device_dict)
setattr(self, option, devices_list)
def _replace_container_names(self, mode):
"""
Parse IPC and PID modes. If they contain a container name, replace
with the container's ID.
"""
if mode is None or not mode.startswith('container:'):
return mode
container_name = mode[len('container:'):]
# Try to inspect container to see whether this is an ID or a
# name (and in the latter case, retrieve its ID)
container = self.client.get_container(container_name)
if container is None:
# If we can't find the container, issue a warning and continue with
# what the user specified.
self.client.module.warn('Cannot find a container with name or ID "{0}"'.format(container_name))
return mode
return 'container:{0}'.format(container['Id'])
def _check_mount_target_collisions(self):
last = dict()
def f(t, name):
if t in last:
if name == last[t]:
self.client.fail('The mount point "{0}" appears twice in the {1} option'.format(t, name))
else:
self.client.fail('The mount point "{0}" appears both in the {1} and {2} option'.format(t, name, last[t]))
last[t] = name
if self.expected_mounts:
for t in [m['target'] for m in self.expected_mounts]:
f(t, 'mounts')
if self.volumes:
for v in self.volumes:
vs = v.split(':')
f(vs[0 if len(vs) == 1 else 1], 'volumes')
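The collision check above can be sketched as a plain function that returns the first duplicated container path instead of calling `fail`. Volume strings contribute their container part (the middle of `"host:container:mode"`, or the whole string for anonymous volumes):

```python
def find_mount_collision(mount_targets, volume_strings):
    seen = set()
    for target in mount_targets:
        if target in seen:
            return target
        seen.add(target)
    for vol in volume_strings:
        parts = vol.split(':')
        target = parts[0] if len(parts) == 1 else parts[1]
        if target in seen:
            return target
        seen.add(target)
    return None
```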
class Container(DockerBaseClass):
def __init__(self, container, parameters):
super(Container, self).__init__()
self.raw = container
self.Id = None
self.container = container
if container:
self.Id = container['Id']
self.Image = container['Image']
self.log(self.container, pretty_print=True)
self.parameters = parameters
self.parameters.expected_links = None
self.parameters.expected_ports = None
self.parameters.expected_exposed = None
self.parameters.expected_volumes = None
self.parameters.expected_ulimits = None
self.parameters.expected_sysctls = None
self.parameters.expected_etc_hosts = None
self.parameters.expected_env = None
self.parameters_map = dict()
self.parameters_map['expected_links'] = 'links'
self.parameters_map['expected_ports'] = 'expected_ports'
self.parameters_map['expected_exposed'] = 'exposed_ports'
self.parameters_map['expected_volumes'] = 'volumes'
self.parameters_map['expected_ulimits'] = 'ulimits'
self.parameters_map['expected_sysctls'] = 'sysctls'
self.parameters_map['expected_etc_hosts'] = 'etc_hosts'
self.parameters_map['expected_env'] = 'env'
self.parameters_map['expected_entrypoint'] = 'entrypoint'
self.parameters_map['expected_binds'] = 'volumes'
self.parameters_map['expected_cmd'] = 'command'
self.parameters_map['expected_devices'] = 'devices'
self.parameters_map['expected_healthcheck'] = 'healthcheck'
self.parameters_map['expected_mounts'] = 'mounts'
def fail(self, msg):
self.parameters.client.fail(msg)
@property
def exists(self):
return bool(self.container)
@property
def running(self):
if self.container and self.container.get('State'):
if self.container['State'].get('Running') and not self.container['State'].get('Ghost', False):
return True
return False
@property
def paused(self):
if self.container and self.container.get('State'):
return self.container['State'].get('Paused', False)
return False
def _compare(self, a, b, compare):
'''
Compare values a and b as described in compare.
'''
return compare_generic(a, b, compare['comparison'], compare['type'])
def _decode_mounts(self, mounts):
if not mounts:
return mounts
result = []
empty_dict = dict()
for mount in mounts:
res = dict()
res['type'] = mount.get('Type')
res['source'] = mount.get('Source')
res['target'] = mount.get('Target')
res['read_only'] = mount.get('ReadOnly', False) # golang's omitempty for bool returns None for False
res['consistency'] = mount.get('Consistency')
res['propagation'] = mount.get('BindOptions', empty_dict).get('Propagation')
res['no_copy'] = mount.get('VolumeOptions', empty_dict).get('NoCopy', False)
res['labels'] = mount.get('VolumeOptions', empty_dict).get('Labels', empty_dict)
res['volume_driver'] = mount.get('VolumeOptions', empty_dict).get('DriverConfig', empty_dict).get('Name')
res['volume_options'] = mount.get('VolumeOptions', empty_dict).get('DriverConfig', empty_dict).get('Options', empty_dict)
res['tmpfs_size'] = mount.get('TmpfsOptions', empty_dict).get('SizeBytes')
res['tmpfs_mode'] = mount.get('TmpfsOptions', empty_dict).get('Mode')
result.append(res)
return result
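The decoding pattern for a single inspect entry: nested sections are read with chained `.get(..., {})` so that missing blocks (e.g. no `VolumeOptions` on a bind mount) simply yield defaults rather than raising. A trimmed sketch covering a few of the fields:

```python
def decode_mount(mount):
    empty = {}
    return {
        'type': mount.get('Type'),
        'source': mount.get('Source'),
        'target': mount.get('Target'),
        # golang's omitempty drops False, so default explicitly.
        'read_only': mount.get('ReadOnly', False),
        'propagation': mount.get('BindOptions', empty).get('Propagation'),
        'no_copy': mount.get('VolumeOptions', empty).get('NoCopy', False),
    }
```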
def has_different_configuration(self, image):
'''
Diff parameters vs existing container config. Returns tuple: (True | False, List of differences)
'''
self.log('Starting has_different_configuration')
self.parameters.expected_entrypoint = self._get_expected_entrypoint()
self.parameters.expected_links = self._get_expected_links()
self.parameters.expected_ports = self._get_expected_ports()
self.parameters.expected_exposed = self._get_expected_exposed(image)
self.parameters.expected_volumes = self._get_expected_volumes(image)
self.parameters.expected_binds = self._get_expected_binds(image)
self.parameters.expected_ulimits = self._get_expected_ulimits(self.parameters.ulimits)
self.parameters.expected_sysctls = self._get_expected_sysctls(self.parameters.sysctls)
self.parameters.expected_etc_hosts = self._convert_simple_dict_to_list('etc_hosts')
self.parameters.expected_env = self._get_expected_env(image)
self.parameters.expected_cmd = self._get_expected_cmd()
self.parameters.expected_devices = self._get_expected_devices()
self.parameters.expected_healthcheck = self._get_expected_healthcheck()
if not self.container.get('HostConfig'):
self.fail("has_config_diff: Error parsing container properties. HostConfig missing.")
if not self.container.get('Config'):
self.fail("has_config_diff: Error parsing container properties. Config missing.")
if not self.container.get('NetworkSettings'):
self.fail("has_config_diff: Error parsing container properties. NetworkSettings missing.")
host_config = self.container['HostConfig']
log_config = host_config.get('LogConfig', dict())
restart_policy = host_config.get('RestartPolicy', dict())
config = self.container['Config']
network = self.container['NetworkSettings']
# The previous version of the docker module ignored the detach state by
# assuming if the container was running, it must have been detached.
detach = not (config.get('AttachStderr') and config.get('AttachStdout'))
# "ExposedPorts": null returns None type & causes AttributeError - PR #5517
if config.get('ExposedPorts') is not None:
expected_exposed = [self._normalize_port(p) for p in config.get('ExposedPorts', dict()).keys()]
else:
expected_exposed = []
# Map parameters to container inspect results
config_mapping = dict(
expected_cmd=config.get('Cmd'),
domainname=config.get('Domainname'),
hostname=config.get('Hostname'),
user=config.get('User'),
detach=detach,
init=host_config.get('Init'),
interactive=config.get('OpenStdin'),
capabilities=host_config.get('CapAdd'),
cap_drop=host_config.get('CapDrop'),
expected_devices=host_config.get('Devices'),
dns_servers=host_config.get('Dns'),
dns_opts=host_config.get('DnsOptions'),
dns_search_domains=host_config.get('DnsSearch'),
expected_env=(config.get('Env') or []),
expected_entrypoint=config.get('Entrypoint'),
expected_etc_hosts=host_config['ExtraHosts'],
expected_exposed=expected_exposed,
groups=host_config.get('GroupAdd'),
ipc_mode=host_config.get("IpcMode"),
labels=config.get('Labels'),
expected_links=host_config.get('Links'),
mac_address=network.get('MacAddress'),
memory_swappiness=host_config.get('MemorySwappiness'),
network_mode=host_config.get('NetworkMode'),
userns_mode=host_config.get('UsernsMode'),
oom_killer=host_config.get('OomKillDisable'),
oom_score_adj=host_config.get('OomScoreAdj'),
pid_mode=host_config.get('PidMode'),
privileged=host_config.get('Privileged'),
expected_ports=host_config.get('PortBindings'),
read_only=host_config.get('ReadonlyRootfs'),
restart_policy=restart_policy.get('Name'),
runtime=host_config.get('Runtime'),
shm_size=host_config.get('ShmSize'),
security_opts=host_config.get("SecurityOpt"),
stop_signal=config.get("StopSignal"),
tmpfs=host_config.get('Tmpfs'),
tty=config.get('Tty'),
expected_ulimits=host_config.get('Ulimits'),
expected_sysctls=host_config.get('Sysctls'),
uts=host_config.get('UTSMode'),
expected_volumes=config.get('Volumes'),
expected_binds=host_config.get('Binds'),
volume_driver=host_config.get('VolumeDriver'),
volumes_from=host_config.get('VolumesFrom'),
working_dir=config.get('WorkingDir'),
publish_all_ports=host_config.get('PublishAllPorts'),
expected_healthcheck=config.get('Healthcheck'),
disable_healthcheck=(not config.get('Healthcheck') or config.get('Healthcheck').get('Test') == ['NONE']),
device_read_bps=host_config.get('BlkioDeviceReadBps'),
device_write_bps=host_config.get('BlkioDeviceWriteBps'),
device_read_iops=host_config.get('BlkioDeviceReadIOps'),
device_write_iops=host_config.get('BlkioDeviceWriteIOps'),
pids_limit=host_config.get('PidsLimit'),
# According to https://github.com/moby/moby/, support for HostConfig.Mounts
# has been included at least since v17.03.0-ce, which has API version 1.26.
# The previous tag, v1.9.1, has API version 1.21 and does not have
# HostConfig.Mounts. Whether API 1.25 supports it is unclear.
expected_mounts=self._decode_mounts(host_config.get('Mounts')),
)
# Options which don't make sense without their accompanying option
if self.parameters.restart_policy:
config_mapping['restart_retries'] = restart_policy.get('MaximumRetryCount')
if self.parameters.log_driver:
config_mapping['log_driver'] = log_config.get('Type')
config_mapping['log_options'] = log_config.get('Config')
if self.parameters.client.option_minimal_versions['auto_remove']['supported']:
# auto_remove is only supported in Docker SDK for Python >= 2.0.0; unfortunately
# it has a default value, that's why we have to jump through the hoops here
config_mapping['auto_remove'] = host_config.get('AutoRemove')
if self.parameters.client.option_minimal_versions['stop_timeout']['supported']:
# stop_timeout is only supported in Docker SDK for Python >= 2.1. Note that
# stop_timeout has a hybrid role, in that it used to be something only used
# for stopping containers, and is now also used as a container property.
# That's why it needs special handling here.
config_mapping['stop_timeout'] = config.get('StopTimeout')
if self.parameters.client.docker_api_version < LooseVersion('1.22'):
# For docker API < 1.22, update_container() is not supported. Thus
# we need to handle all limits which are usually handled by
# update_container() as configuration changes which require a container
# restart.
config_mapping.update(dict(
blkio_weight=host_config.get('BlkioWeight'),
cpu_period=host_config.get('CpuPeriod'),
cpu_quota=host_config.get('CpuQuota'),
cpu_shares=host_config.get('CpuShares'),
cpuset_cpus=host_config.get('CpusetCpus'),
cpuset_mems=host_config.get('CpusetMems'),
kernel_memory=host_config.get("KernelMemory"),
memory=host_config.get('Memory'),
memory_reservation=host_config.get('MemoryReservation'),
memory_swap=host_config.get('MemorySwap'),
))
differences = DifferenceTracker()
for key, value in config_mapping.items():
minimal_version = self.parameters.client.option_minimal_versions.get(key, {})
if not minimal_version.get('supported', True):
continue
compare = self.parameters.client.comparisons[self.parameters_map.get(key, key)]
self.log('check differences %s %s vs %s (%s)' % (key, getattr(self.parameters, key), str(value), compare))
if getattr(self.parameters, key, None) is not None:
match = self._compare(getattr(self.parameters, key), value, compare)
if not match:
# no match. record the differences
p = getattr(self.parameters, key)
c = value
if compare['type'] == 'set':
# Since the order does not matter, sort so that the diff output is better.
if p is not None:
p = sorted(p)
if c is not None:
c = sorted(c)
elif compare['type'] == 'set(dict)':
# Since the order does not matter, sort so that the diff output is better.
if key == 'expected_mounts':
# For selected values, use one entry as key
def sort_key_fn(x):
return x['target']
else:
# We sort the list of dictionaries by using the sorted items of a dict as its key.
def sort_key_fn(x):
return sorted((a, str(b)) for a, b in x.items())
if p is not None:
p = sorted(p, key=sort_key_fn)
if c is not None:
c = sorted(c, key=sort_key_fn)
differences.add(key, parameter=p, active=c)
has_differences = not differences.empty
return has_differences, differences
def has_different_resource_limits(self):
'''
Diff parameters and container resource limits
'''
if not self.container.get('HostConfig'):
self.fail("limits_differ_from_container: Error parsing container properties. HostConfig missing.")
if self.parameters.client.docker_api_version < LooseVersion('1.22'):
# update_container() call not supported
return False, []
host_config = self.container['HostConfig']
config_mapping = dict(
blkio_weight=host_config.get('BlkioWeight'),
cpu_period=host_config.get('CpuPeriod'),
cpu_quota=host_config.get('CpuQuota'),
cpu_shares=host_config.get('CpuShares'),
cpuset_cpus=host_config.get('CpusetCpus'),
cpuset_mems=host_config.get('CpusetMems'),
kernel_memory=host_config.get("KernelMemory"),
memory=host_config.get('Memory'),
memory_reservation=host_config.get('MemoryReservation'),
memory_swap=host_config.get('MemorySwap'),
)
differences = DifferenceTracker()
for key, value in config_mapping.items():
if getattr(self.parameters, key, None):
compare = self.parameters.client.comparisons[self.parameters_map.get(key, key)]
match = self._compare(getattr(self.parameters, key), value, compare)
if not match:
# no match. record the differences
differences.add(key, parameter=getattr(self.parameters, key), active=value)
different = not differences.empty
return different, differences
def has_network_differences(self):
'''
Check if the container is connected to requested networks with expected options: links, aliases, ipv4, ipv6
'''
different = False
differences = []
if not self.parameters.networks:
return different, differences
if not self.container.get('NetworkSettings'):
self.fail("has_missing_networks: Error parsing container properties. NetworkSettings missing.")
connected_networks = self.container['NetworkSettings']['Networks']
for network in self.parameters.networks:
if connected_networks.get(network['name'], None) is None:
different = True
differences.append(dict(
parameter=network,
container=None
))
else:
diff = False
if network.get('ipv4_address') and network['ipv4_address'] != connected_networks[network['name']].get('IPAddress'):
diff = True
if network.get('ipv6_address') and network['ipv6_address'] != connected_networks[network['name']].get('GlobalIPv6Address'):
diff = True
if network.get('aliases'):
if not compare_generic(network['aliases'], connected_networks[network['name']].get('Aliases'), 'allow_more_present', 'set'):
diff = True
if network.get('links'):
expected_links = []
for link, alias in network['links']:
expected_links.append("%s:%s" % (link, alias))
if not compare_generic(expected_links, connected_networks[network['name']].get('Links'), 'allow_more_present', 'set'):
diff = True
if diff:
different = True
differences.append(dict(
parameter=network,
container=dict(
name=network['name'],
ipv4_address=connected_networks[network['name']].get('IPAddress'),
ipv6_address=connected_networks[network['name']].get('GlobalIPv6Address'),
aliases=connected_networks[network['name']].get('Aliases'),
links=connected_networks[network['name']].get('Links')
)
))
return different, differences
def has_extra_networks(self):
'''
Check if the container is connected to non-requested networks
'''
extra_networks = []
extra = False
if not self.container.get('NetworkSettings'):
self.fail("has_extra_networks: Error parsing container properties. NetworkSettings missing.")
connected_networks = self.container['NetworkSettings'].get('Networks')
if connected_networks:
for network, network_config in connected_networks.items():
keep = False
if self.parameters.networks:
for expected_network in self.parameters.networks:
if expected_network['name'] == network:
keep = True
if not keep:
extra = True
extra_networks.append(dict(name=network, id=network_config['NetworkID']))
return extra, extra_networks
def _get_expected_devices(self):
if not self.parameters.devices:
return None
expected_devices = []
for device in self.parameters.devices:
parts = device.split(':')
if len(parts) == 1:
expected_devices.append(
dict(
CgroupPermissions='rwm',
PathInContainer=parts[0],
PathOnHost=parts[0]
))
elif len(parts) == 2:
expected_devices.append(
dict(
CgroupPermissions='rwm',
PathInContainer=parts[1],
PathOnHost=parts[0]
)
)
else:
expected_devices.append(
dict(
CgroupPermissions=parts[2],
PathInContainer=parts[1],
PathOnHost=parts[0]
))
return expected_devices
def _get_expected_entrypoint(self):
if not self.parameters.entrypoint:
return None
return shlex.split(self.parameters.entrypoint)
def _get_expected_ports(self):
if not self.parameters.published_ports:
return None
expected_bound_ports = {}
for container_port, config in self.parameters.published_ports.items():
if isinstance(container_port, int):
container_port = "%s/tcp" % container_port
if len(config) == 1:
if isinstance(config[0], int):
expected_bound_ports[container_port] = [{'HostIp': "0.0.0.0", 'HostPort': config[0]}]
else:
expected_bound_ports[container_port] = [{'HostIp': config[0], 'HostPort': ""}]
elif isinstance(config[0], tuple):
expected_bound_ports[container_port] = []
for host_ip, host_port in config:
expected_bound_ports[container_port].append({'HostIp': host_ip, 'HostPort': str(host_port)})
else:
expected_bound_ports[container_port] = [{'HostIp': config[0], 'HostPort': str(config[1])}]
return expected_bound_ports
def _get_expected_links(self):
if self.parameters.links is None:
return None
self.log('parameter links:')
self.log(self.parameters.links, pretty_print=True)
exp_links = []
for link, alias in self.parameters.links:
exp_links.append("/%s:%s/%s" % (link, ('/' + self.parameters.name), alias))
return exp_links
def _get_expected_binds(self, image):
self.log('_get_expected_binds')
image_vols = []
if image:
image_vols = self._get_image_binds(image[self.parameters.client.image_inspect_source].get('Volumes'))
param_vols = []
if self.parameters.volumes:
for vol in self.parameters.volumes:
host = None
if ':' in vol:
if len(vol.split(':')) == 3:
host, container, mode = vol.split(':')
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]):
host, container, mode = vol.split(':') + ['rw']
if host:
param_vols.append("%s:%s:%s" % (host, container, mode))
result = list(set(image_vols + param_vols))
self.log("expected_binds:")
self.log(result, pretty_print=True)
return result
def _get_image_binds(self, volumes):
'''
Convert array of binds to array of strings with format host_path:container_path:mode
:param volumes: array of bind dicts
:return: array of strings
'''
results = []
if isinstance(volumes, dict):
results += self._get_bind_from_dict(volumes)
elif isinstance(volumes, list):
for vol in volumes:
results += self._get_bind_from_dict(vol)
return results
@staticmethod
def _get_bind_from_dict(volume_dict):
results = []
if volume_dict:
for host_path, config in volume_dict.items():
if isinstance(config, dict) and config.get('bind'):
container_path = config.get('bind')
mode = config.get('mode', 'rw')
results.append("%s:%s:%s" % (host_path, container_path, mode))
return results
def _get_expected_volumes(self, image):
self.log('_get_expected_volumes')
expected_vols = dict()
if image and image[self.parameters.client.image_inspect_source].get('Volumes'):
expected_vols.update(image[self.parameters.client.image_inspect_source].get('Volumes'))
if self.parameters.volumes:
for vol in self.parameters.volumes:
container = None
if ':' in vol:
if len(vol.split(':')) == 3:
dummy, container, mode = vol.split(':')
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]):
dummy, container, mode = vol.split(':') + ['rw']
new_vol = dict()
if container:
new_vol[container] = dict()
else:
new_vol[vol] = dict()
expected_vols.update(new_vol)
if not expected_vols:
expected_vols = None
self.log("expected_volumes:")
self.log(expected_vols, pretty_print=True)
return expected_vols
def _get_expected_env(self, image):
self.log('_get_expected_env')
expected_env = dict()
if image and image[self.parameters.client.image_inspect_source].get('Env'):
for env_var in image[self.parameters.client.image_inspect_source]['Env']:
parts = env_var.split('=', 1)
expected_env[parts[0]] = parts[1]
if self.parameters.env:
expected_env.update(self.parameters.env)
param_env = []
for key, value in expected_env.items():
param_env.append("%s=%s" % (key, value))
return param_env
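A simplified sketch of the env-merging logic above (function name hypothetical): image-provided variables form the base, task-level `env` parameters override them, and the single-split on `=` preserves values that themselves contain `=`:

```python
def expected_env(image_env, param_env):
    # Image-provided variables first, then task parameters override them.
    merged = {}
    for env_var in image_env or []:
        key, value = env_var.split('=', 1)  # split once: values may contain '='
        merged[key] = value
    merged.update(param_env or {})
    return ["%s=%s" % (k, v) for k, v in merged.items()]

print(sorted(expected_env(['PATH=/usr/bin', 'MODE=debug'], {'MODE': 'prod'})))
# → ['MODE=prod', 'PATH=/usr/bin']
```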
def _get_expected_exposed(self, image):
self.log('_get_expected_exposed')
image_ports = []
if image:
image_exposed_ports = image[self.parameters.client.image_inspect_source].get('ExposedPorts') or {}
image_ports = [self._normalize_port(p) for p in image_exposed_ports.keys()]
param_ports = []
if self.parameters.ports:
param_ports = [str(p[0]) + '/' + p[1] for p in self.parameters.ports]
result = list(set(image_ports + param_ports))
self.log(result, pretty_print=True)
return result
def _get_expected_ulimits(self, config_ulimits):
self.log('_get_expected_ulimits')
if config_ulimits is None:
return None
results = []
for limit in config_ulimits:
results.append(dict(
Name=limit.name,
Soft=limit.soft,
Hard=limit.hard
))
return results
def _get_expected_sysctls(self, config_sysctls):
self.log('_get_expected_sysctls')
if config_sysctls is None:
return None
result = dict()
for key, value in config_sysctls.items():
result[key] = str(value)
return result
def _get_expected_cmd(self):
self.log('_get_expected_cmd')
if not self.parameters.command:
return None
return shlex.split(self.parameters.command)
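The `command` parameter arrives as a single string, while the Docker API expects an argv-style list; `shlex.split` handles shell-style quoting during that conversion:

```python
import shlex

# Quoted segments stay together as one argument, as a shell would parse them.
print(shlex.split('sh -c "echo hello world"'))
# → ['sh', '-c', 'echo hello world']
```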
def _convert_simple_dict_to_list(self, param_name, join_with=':'):
if getattr(self.parameters, param_name, None) is None:
return None
results = []
for key, value in getattr(self.parameters, param_name).items():
results.append("%s%s%s" % (key, join_with, value))
return results
def _normalize_port(self, port):
if '/' not in port:
return port + '/tcp'
return port
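Docker reports exposed ports in `<port>/<protocol>` form; `_normalize_port` above defaults bare port numbers to TCP so comparisons line up. A standalone sketch:

```python
def normalize_port(port):
    # "80" -> "80/tcp"; "53/udp" is already normalized and passes through.
    return port if '/' in port else port + '/tcp'

print(normalize_port('80'))
# → 80/tcp
```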
def _get_expected_healthcheck(self):
self.log('_get_expected_healthcheck')
expected_healthcheck = dict()
if self.parameters.healthcheck:
expected_healthcheck.update([(k.title().replace("_", ""), v)
for k, v in self.parameters.healthcheck.items()])
return expected_healthcheck
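The key transformation above maps the module's snake_case healthcheck options onto the CamelCase keys the Docker API uses (e.g. `start_period` becomes `StartPeriod`). A minimal sketch with a hypothetical helper name:

```python
def to_api_keys(healthcheck):
    # 'start_period' -> 'Start_Period' via title(), then '_' is stripped.
    return dict((k.title().replace("_", ""), v) for k, v in healthcheck.items())

print(to_api_keys({'start_period': '30s', 'retries': 3}))
# → {'StartPeriod': '30s', 'Retries': 3}
```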
class ContainerManager(DockerBaseClass):
'''
Perform container management tasks
'''
def __init__(self, client):
super(ContainerManager, self).__init__()
if client.module.params.get('log_options') and not client.module.params.get('log_driver'):
client.module.warn('log_options is ignored when log_driver is not specified')
if client.module.params.get('healthcheck') and not client.module.params.get('healthcheck').get('test'):
client.module.warn('healthcheck is ignored when test is not specified')
if client.module.params.get('restart_retries') is not None and not client.module.params.get('restart_policy'):
client.module.warn('restart_retries is ignored when restart_policy is not specified')
self.client = client
self.parameters = TaskParameters(client)
self.check_mode = self.client.check_mode
self.results = {'changed': False, 'actions': []}
self.diff = {}
self.diff_tracker = DifferenceTracker()
self.facts = {}
state = self.parameters.state
if state in ('stopped', 'started', 'present'):
self.present(state)
elif state == 'absent':
self.absent()
if not self.check_mode and not self.parameters.debug:
self.results.pop('actions')
if self.client.module._diff or self.parameters.debug:
self.diff['before'], self.diff['after'] = self.diff_tracker.get_before_after()
self.results['diff'] = self.diff
if self.facts:
self.results['ansible_facts'] = {'docker_container': self.facts}
self.results['container'] = self.facts
def present(self, state):
container = self._get_container(self.parameters.name)
was_running = container.running
was_paused = container.paused
container_created = False
# If the image parameter was passed then we need to deal with the image
# version comparison. Otherwise we handle this depending on whether
# the container already runs or not; in the former case, in case the
# container needs to be restarted, we use the existing container's
# image ID.
image = self._get_image()
self.log(image, pretty_print=True)
if not container.exists:
# New container
self.log('No container found')
if not self.parameters.image:
self.fail('Cannot create container when image is not specified!')
self.diff_tracker.add('exists', parameter=True, active=False)
new_container = self.container_create(self.parameters.image, self.parameters.create_parameters)
if new_container:
container = new_container
container_created = True
else:
# Existing container
different, differences = container.has_different_configuration(image)
image_different = False
if self.parameters.comparisons['image']['comparison'] == 'strict':
image_different = self._image_is_different(image, container)
if image_different or different or self.parameters.recreate:
self.diff_tracker.merge(differences)
self.diff['differences'] = differences.get_legacy_docker_container_diffs()
if image_different:
self.diff['image_different'] = True
self.log("differences")
self.log(differences.get_legacy_docker_container_diffs(), pretty_print=True)
image_to_use = self.parameters.image
if not image_to_use and container and container.Image:
image_to_use = container.Image
if not image_to_use:
self.fail('Cannot recreate container when image is not specified or cannot be extracted from current container!')
if container.running:
self.container_stop(container.Id)
self.container_remove(container.Id)
new_container = self.container_create(image_to_use, self.parameters.create_parameters)
if new_container:
container = new_container
container_created = True
if container and container.exists:
container = self.update_limits(container)
container = self.update_networks(container, container_created)
if state == 'started' and not container.running:
self.diff_tracker.add('running', parameter=True, active=was_running)
container = self.container_start(container.Id)
elif state == 'started' and self.parameters.restart:
self.diff_tracker.add('running', parameter=True, active=was_running)
self.diff_tracker.add('restarted', parameter=True, active=False)
container = self.container_restart(container.Id)
elif state == 'stopped' and container.running:
self.diff_tracker.add('running', parameter=False, active=was_running)
self.container_stop(container.Id)
container = self._get_container(container.Id)
if state == 'started' and container.paused != self.parameters.paused:
self.diff_tracker.add('paused', parameter=self.parameters.paused, active=was_paused)
if not self.check_mode:
try:
if self.parameters.paused:
self.client.pause(container=container.Id)
else:
self.client.unpause(container=container.Id)
except Exception as exc:
self.fail("Error %s container %s: %s" % (
"pausing" if self.parameters.paused else "unpausing", container.Id, str(exc)
))
container = self._get_container(container.Id)
self.results['changed'] = True
self.results['actions'].append(dict(set_paused=self.parameters.paused))
self.facts = container.raw
def absent(self):
container = self._get_container(self.parameters.name)
if container.exists:
if container.running:
self.diff_tracker.add('running', parameter=False, active=True)
self.container_stop(container.Id)
self.diff_tracker.add('exists', parameter=False, active=True)
self.container_remove(container.Id)
def fail(self, msg, **kwargs):
self.client.fail(msg, **kwargs)
def _output_logs(self, msg):
self.client.module.log(msg=msg)
def _get_container(self, container):
'''
Expects container ID or Name. Returns a container object
'''
return Container(self.client.get_container(container), self.parameters)
def _get_image(self):
if not self.parameters.image:
self.log('No image specified')
return None
if is_image_name_id(self.parameters.image):
image = self.client.find_image_by_id(self.parameters.image)
else:
repository, tag = utils.parse_repository_tag(self.parameters.image)
if not tag:
tag = "latest"
image = self.client.find_image(repository, tag)
if not self.check_mode:
if not image or self.parameters.pull:
self.log("Pull the image.")
image, alreadyToLatest = self.client.pull_image(repository, tag)
if alreadyToLatest:
self.results['changed'] = False
else:
self.results['changed'] = True
self.results['actions'].append(dict(pulled_image="%s:%s" % (repository, tag)))
self.log("image")
self.log(image, pretty_print=True)
return image
def _image_is_different(self, image, container):
if image and image.get('Id'):
if container and container.Image:
if image.get('Id') != container.Image:
self.diff_tracker.add('image', parameter=image.get('Id'), active=container.Image)
return True
return False
def update_limits(self, container):
limits_differ, different_limits = container.has_different_resource_limits()
if limits_differ:
self.log("limit differences:")
self.log(different_limits.get_legacy_docker_container_diffs(), pretty_print=True)
self.diff_tracker.merge(different_limits)
if limits_differ and not self.check_mode:
self.container_update(container.Id, self.parameters.update_parameters)
return self._get_container(container.Id)
return container
def update_networks(self, container, container_created):
updated_container = container
if self.parameters.comparisons['networks']['comparison'] != 'ignore' or container_created:
has_network_differences, network_differences = container.has_network_differences()
if has_network_differences:
if self.diff.get('differences'):
self.diff['differences'].append(dict(network_differences=network_differences))
else:
self.diff['differences'] = [dict(network_differences=network_differences)]
for netdiff in network_differences:
self.diff_tracker.add(
'network.{0}'.format(netdiff['parameter']['name']),
parameter=netdiff['parameter'],
active=netdiff['container']
)
self.results['changed'] = True
updated_container = self._add_networks(container, network_differences)
if (self.parameters.comparisons['networks']['comparison'] == 'strict' and self.parameters.networks is not None) or self.parameters.purge_networks:
has_extra_networks, extra_networks = container.has_extra_networks()
if has_extra_networks:
if self.diff.get('differences'):
self.diff['differences'].append(dict(purge_networks=extra_networks))
else:
self.diff['differences'] = [dict(purge_networks=extra_networks)]
for extra_network in extra_networks:
self.diff_tracker.add(
'network.{0}'.format(extra_network['name']),
active=extra_network
)
self.results['changed'] = True
updated_container = self._purge_networks(container, extra_networks)
return updated_container
def _add_networks(self, container, differences):
for diff in differences:
# remove the container from the network, if connected
if diff.get('container'):
self.results['actions'].append(dict(removed_from_network=diff['parameter']['name']))
if not self.check_mode:
try:
self.client.disconnect_container_from_network(container.Id, diff['parameter']['id'])
except Exception as exc:
self.fail("Error disconnecting container from network %s - %s" % (diff['parameter']['name'],
str(exc)))
# connect to the network
params = dict()
for para in ('ipv4_address', 'ipv6_address', 'links', 'aliases'):
if diff['parameter'].get(para):
params[para] = diff['parameter'][para]
self.results['actions'].append(dict(added_to_network=diff['parameter']['name'], network_parameters=params))
if not self.check_mode:
try:
self.log("Connecting container to network %s" % diff['parameter']['id'])
self.log(params, pretty_print=True)
self.client.connect_container_to_network(container.Id, diff['parameter']['id'], **params)
except Exception as exc:
self.fail("Error connecting container to network %s - %s" % (diff['parameter']['name'], str(exc)))
return self._get_container(container.Id)
def _purge_networks(self, container, networks):
for network in networks:
self.results['actions'].append(dict(removed_from_network=network['name']))
if not self.check_mode:
try:
self.client.disconnect_container_from_network(container.Id, network['name'])
except Exception as exc:
self.fail("Error disconnecting container from network %s - %s" % (network['name'],
str(exc)))
return self._get_container(container.Id)
def container_create(self, image, create_parameters):
self.log("create container")
self.log("image: %s parameters:" % image)
self.log(create_parameters, pretty_print=True)
self.results['actions'].append(dict(created="Created container", create_parameters=create_parameters))
self.results['changed'] = True
new_container = None
if not self.check_mode:
try:
new_container = self.client.create_container(image, **create_parameters)
self.client.report_warnings(new_container)
except Exception as exc:
self.fail("Error creating container: %s" % str(exc))
return self._get_container(new_container['Id'])
return new_container
def container_start(self, container_id):
self.log("start container %s" % (container_id))
self.results['actions'].append(dict(started=container_id))
self.results['changed'] = True
if not self.check_mode:
try:
self.client.start(container=container_id)
except Exception as exc:
self.fail("Error starting container %s: %s" % (container_id, str(exc)))
if not self.parameters.detach:
if self.client.docker_py_version >= LooseVersion('3.0'):
status = self.client.wait(container_id)['StatusCode']
else:
status = self.client.wait(container_id)
if self.parameters.auto_remove:
output = "Cannot retrieve result as auto_remove is enabled"
if self.parameters.output_logs:
self.client.module.warn('Cannot output_logs if auto_remove is enabled!')
else:
config = self.client.inspect_container(container_id)
logging_driver = config['HostConfig']['LogConfig']['Type']
if logging_driver in ('json-file', 'journald'):
output = self.client.logs(container_id, stdout=True, stderr=True, stream=False, timestamps=False)
if self.parameters.output_logs:
self._output_logs(msg=output)
else:
output = "Result logged using `%s` driver" % logging_driver
if status != 0:
self.fail(output, status=status)
if self.parameters.cleanup:
self.container_remove(container_id, force=True)
insp = self._get_container(container_id)
if insp.raw:
insp.raw['Output'] = output
else:
insp.raw = dict(Output=output)
return insp
return self._get_container(container_id)
def container_remove(self, container_id, link=False, force=False):
volume_state = (not self.parameters.keep_volumes)
        self.log("remove container %s v:%s link:%s force:%s" % (container_id, volume_state, link, force))
self.results['actions'].append(dict(removed=container_id, volume_state=volume_state, link=link, force=force))
self.results['changed'] = True
response = None
if not self.check_mode:
count = 0
while True:
try:
response = self.client.remove_container(container_id, v=volume_state, link=link, force=force)
except NotFound as dummy:
pass
except APIError as exc:
if 'Unpause the container before stopping or killing' in exc.explanation:
# New docker daemon versions do not allow containers to be removed
# if they are paused. Make sure we don't end up in an infinite loop.
if count == 3:
self.fail("Error removing container %s (tried to unpause three times): %s" % (container_id, str(exc)))
count += 1
# Unpause
try:
self.client.unpause(container=container_id)
except Exception as exc2:
self.fail("Error unpausing container %s for removal: %s" % (container_id, str(exc2)))
# Now try again
continue
if 'removal of container ' in exc.explanation and ' is already in progress' in exc.explanation:
pass
else:
self.fail("Error removing container %s: %s" % (container_id, str(exc)))
except Exception as exc:
self.fail("Error removing container %s: %s" % (container_id, str(exc)))
# We only loop when explicitly requested by 'continue'
break
return response
def container_update(self, container_id, update_parameters):
if update_parameters:
self.log("update container %s" % (container_id))
self.log(update_parameters, pretty_print=True)
self.results['actions'].append(dict(updated=container_id, update_parameters=update_parameters))
self.results['changed'] = True
if not self.check_mode and callable(getattr(self.client, 'update_container')):
try:
result = self.client.update_container(container_id, **update_parameters)
self.client.report_warnings(result)
except Exception as exc:
self.fail("Error updating container %s: %s" % (container_id, str(exc)))
return self._get_container(container_id)
def container_kill(self, container_id):
self.results['actions'].append(dict(killed=container_id, signal=self.parameters.kill_signal))
self.results['changed'] = True
response = None
if not self.check_mode:
try:
if self.parameters.kill_signal:
response = self.client.kill(container_id, signal=self.parameters.kill_signal)
else:
response = self.client.kill(container_id)
except Exception as exc:
self.fail("Error killing container %s: %s" % (container_id, exc))
return response
def container_restart(self, container_id):
self.results['actions'].append(dict(restarted=container_id, timeout=self.parameters.stop_timeout))
self.results['changed'] = True
if not self.check_mode:
try:
if self.parameters.stop_timeout:
dummy = self.client.restart(container_id, timeout=self.parameters.stop_timeout)
else:
dummy = self.client.restart(container_id)
except Exception as exc:
self.fail("Error restarting container %s: %s" % (container_id, str(exc)))
return self._get_container(container_id)
def container_stop(self, container_id):
if self.parameters.force_kill:
self.container_kill(container_id)
return
self.results['actions'].append(dict(stopped=container_id, timeout=self.parameters.stop_timeout))
self.results['changed'] = True
response = None
if not self.check_mode:
count = 0
while True:
try:
if self.parameters.stop_timeout:
response = self.client.stop(container_id, timeout=self.parameters.stop_timeout)
else:
response = self.client.stop(container_id)
except APIError as exc:
if 'Unpause the container before stopping or killing' in exc.explanation:
# New docker daemon versions do not allow containers to be removed
# if they are paused. Make sure we don't end up in an infinite loop.
if count == 3:
self.fail("Error removing container %s (tried to unpause three times): %s" % (container_id, str(exc)))
count += 1
# Unpause
try:
self.client.unpause(container=container_id)
except Exception as exc2:
self.fail("Error unpausing container %s for removal: %s" % (container_id, str(exc2)))
# Now try again
continue
self.fail("Error stopping container %s: %s" % (container_id, str(exc)))
except Exception as exc:
self.fail("Error stopping container %s: %s" % (container_id, str(exc)))
# We only loop when explicitly requested by 'continue'
break
return response
def detect_ipvX_address_usage(client):
'''
Helper function to detect whether any specified network uses ipv4_address or ipv6_address
'''
for network in client.module.params.get("networks") or []:
if network.get('ipv4_address') is not None or network.get('ipv6_address') is not None:
return True
return False
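The detection above can be sketched compactly with `any()` (helper name hypothetical); it only treats a network as using a static address when the key is present and not `None`:

```python
def uses_static_ip(networks):
    return any(net.get('ipv4_address') is not None or net.get('ipv6_address') is not None
               for net in networks or [])

print(uses_static_ip([{'name': 'foonet', 'ipv4_address': '172.16.44.11'}]))
# → True
```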
class AnsibleDockerClientContainer(AnsibleDockerClient):
# A list of module options which are not docker container properties
__NON_CONTAINER_PROPERTY_OPTIONS = tuple([
'env_file', 'force_kill', 'keep_volumes', 'ignore_image', 'name', 'pull', 'purge_networks',
'recreate', 'restart', 'state', 'trust_image_content', 'networks', 'cleanup', 'kill_signal',
'output_logs', 'paused'
] + list(DOCKER_COMMON_ARGS.keys()))
def _parse_comparisons(self):
comparisons = {}
comp_aliases = {}
# Put in defaults
explicit_types = dict(
command='list',
devices='set(dict)',
dns_search_domains='list',
dns_servers='list',
env='set',
entrypoint='list',
etc_hosts='set',
mounts='set(dict)',
networks='set(dict)',
ulimits='set(dict)',
device_read_bps='set(dict)',
device_write_bps='set(dict)',
device_read_iops='set(dict)',
device_write_iops='set(dict)',
)
all_options = set() # this is for improving user feedback when a wrong option was specified for comparison
default_values = dict(
stop_timeout='ignore',
)
for option, data in self.module.argument_spec.items():
all_options.add(option)
for alias in data.get('aliases', []):
all_options.add(alias)
# Ignore options which aren't used as container properties
if option in self.__NON_CONTAINER_PROPERTY_OPTIONS and option != 'networks':
continue
# Determine option type
if option in explicit_types:
datatype = explicit_types[option]
elif data['type'] == 'list':
datatype = 'set'
elif data['type'] == 'dict':
datatype = 'dict'
else:
datatype = 'value'
# Determine comparison type
if option in default_values:
comparison = default_values[option]
elif datatype in ('list', 'value'):
comparison = 'strict'
else:
comparison = 'allow_more_present'
comparisons[option] = dict(type=datatype, comparison=comparison, name=option)
# Keep track of aliases
comp_aliases[option] = option
for alias in data.get('aliases', []):
comp_aliases[alias] = option
# Process legacy ignore options
if self.module.params['ignore_image']:
comparisons['image']['comparison'] = 'ignore'
if self.module.params['purge_networks']:
comparisons['networks']['comparison'] = 'strict'
# Process options
if self.module.params.get('comparisons'):
# If '*' appears in comparisons, process it first
if '*' in self.module.params['comparisons']:
value = self.module.params['comparisons']['*']
if value not in ('strict', 'ignore'):
self.fail("The wildcard can only be used with comparison modes 'strict' and 'ignore'!")
for option, v in comparisons.items():
if option == 'networks':
# `networks` is special: only update if
# some value is actually specified
if self.module.params['networks'] is None:
continue
v['comparison'] = value
# Now process all other comparisons.
comp_aliases_used = {}
for key, value in self.module.params['comparisons'].items():
if key == '*':
continue
# Find main key
key_main = comp_aliases.get(key)
if key_main is None:
                    if key in all_options:
self.fail("The module option '%s' cannot be specified in the comparisons dict, "
"since it does not correspond to container's state!" % key)
self.fail("Unknown module option '%s' in comparisons dict!" % key)
if key_main in comp_aliases_used:
self.fail("Both '%s' and '%s' (aliases of %s) are specified in comparisons dict!" % (key, comp_aliases_used[key_main], key_main))
comp_aliases_used[key_main] = key
# Check value and update accordingly
if value in ('strict', 'ignore'):
comparisons[key_main]['comparison'] = value
elif value == 'allow_more_present':
if comparisons[key_main]['type'] == 'value':
self.fail("Option '%s' is a value and not a set/list/dict, so its comparison cannot be %s" % (key, value))
comparisons[key_main]['comparison'] = value
else:
self.fail("Unknown comparison mode '%s'!" % value)
# Add implicit options
comparisons['publish_all_ports'] = dict(type='value', comparison='strict', name='published_ports')
comparisons['expected_ports'] = dict(type='dict', comparison=comparisons['published_ports']['comparison'], name='expected_ports')
comparisons['disable_healthcheck'] = dict(type='value',
comparison='ignore' if comparisons['healthcheck']['comparison'] == 'ignore' else 'strict',
name='disable_healthcheck')
# Check legacy values
if self.module.params['ignore_image'] and comparisons['image']['comparison'] != 'ignore':
self.module.warn('The ignore_image option has been overridden by the comparisons option!')
if self.module.params['purge_networks'] and comparisons['networks']['comparison'] != 'strict':
self.module.warn('The purge_networks option has been overridden by the comparisons option!')
self.comparisons = comparisons
def _get_additional_minimal_versions(self):
stop_timeout_supported = self.docker_api_version >= LooseVersion('1.25')
stop_timeout_needed_for_update = self.module.params.get("stop_timeout") is not None and self.module.params.get('state') != 'absent'
if stop_timeout_supported:
stop_timeout_supported = self.docker_py_version >= LooseVersion('2.1')
if stop_timeout_needed_for_update and not stop_timeout_supported:
# We warn (instead of fail) since in older versions, stop_timeout was not used
# to update the container's configuration, but only when stopping a container.
self.module.warn("Docker SDK for Python's version is %s. Minimum version required is 2.1 to update "
"the container's stop_timeout configuration. "
"If you use the 'docker-py' module, you have to switch to the 'docker' Python package." % (docker_version,))
else:
if stop_timeout_needed_for_update and not stop_timeout_supported:
# We warn (instead of fail) since in older versions, stop_timeout was not used
# to update the container's configuration, but only when stopping a container.
self.module.warn("Docker API version is %s. Minimum version required is 1.25 to set or "
"update the container's stop_timeout configuration." % (self.docker_api_version_str,))
self.option_minimal_versions['stop_timeout']['supported'] = stop_timeout_supported
def __init__(self, **kwargs):
option_minimal_versions = dict(
# internal options
log_config=dict(),
publish_all_ports=dict(),
ports=dict(),
volume_binds=dict(),
name=dict(),
# normal options
device_read_bps=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_read_iops=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_write_bps=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_write_iops=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
dns_opts=dict(docker_api_version='1.21', docker_py_version='1.10.0'),
ipc_mode=dict(docker_api_version='1.25'),
mac_address=dict(docker_api_version='1.25'),
oom_score_adj=dict(docker_api_version='1.22'),
shm_size=dict(docker_api_version='1.22'),
stop_signal=dict(docker_api_version='1.21'),
tmpfs=dict(docker_api_version='1.22'),
volume_driver=dict(docker_api_version='1.21'),
memory_reservation=dict(docker_api_version='1.21'),
kernel_memory=dict(docker_api_version='1.21'),
auto_remove=dict(docker_py_version='2.1.0', docker_api_version='1.25'),
healthcheck=dict(docker_py_version='2.0.0', docker_api_version='1.24'),
init=dict(docker_py_version='2.2.0', docker_api_version='1.25'),
runtime=dict(docker_py_version='2.4.0', docker_api_version='1.25'),
sysctls=dict(docker_py_version='1.10.0', docker_api_version='1.24'),
userns_mode=dict(docker_py_version='1.10.0', docker_api_version='1.23'),
uts=dict(docker_py_version='3.5.0', docker_api_version='1.25'),
pids_limit=dict(docker_py_version='1.10.0', docker_api_version='1.23'),
mounts=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
# specials
ipvX_address_supported=dict(docker_py_version='1.9.0', detect_usage=detect_ipvX_address_usage,
usage_msg='ipv4_address or ipv6_address in networks'),
stop_timeout=dict(), # see _get_additional_minimal_versions()
)
super(AnsibleDockerClientContainer, self).__init__(
option_minimal_versions=option_minimal_versions,
option_minimal_versions_ignore_params=self.__NON_CONTAINER_PROPERTY_OPTIONS,
**kwargs
)
self.image_inspect_source = 'Config'
if self.docker_api_version < LooseVersion('1.21'):
self.image_inspect_source = 'ContainerConfig'
self._get_additional_minimal_versions()
self._parse_comparisons()
def main():
argument_spec = dict(
auto_remove=dict(type='bool', default=False),
blkio_weight=dict(type='int'),
capabilities=dict(type='list', elements='str'),
cap_drop=dict(type='list', elements='str'),
cleanup=dict(type='bool', default=False),
command=dict(type='raw'),
comparisons=dict(type='dict'),
cpu_period=dict(type='int'),
cpu_quota=dict(type='int'),
cpuset_cpus=dict(type='str'),
cpuset_mems=dict(type='str'),
cpu_shares=dict(type='int'),
detach=dict(type='bool', default=True),
devices=dict(type='list', elements='str'),
device_read_bps=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='str'),
)),
device_write_bps=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='str'),
)),
device_read_iops=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='int'),
)),
device_write_iops=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='int'),
)),
dns_servers=dict(type='list', elements='str'),
dns_opts=dict(type='list', elements='str'),
dns_search_domains=dict(type='list', elements='str'),
domainname=dict(type='str'),
entrypoint=dict(type='list', elements='str'),
env=dict(type='dict'),
env_file=dict(type='path'),
etc_hosts=dict(type='dict'),
exposed_ports=dict(type='list', elements='str', aliases=['exposed', 'expose']),
force_kill=dict(type='bool', default=False, aliases=['forcekill']),
groups=dict(type='list', elements='str'),
healthcheck=dict(type='dict', options=dict(
test=dict(type='raw'),
interval=dict(type='str'),
timeout=dict(type='str'),
start_period=dict(type='str'),
retries=dict(type='int'),
)),
hostname=dict(type='str'),
ignore_image=dict(type='bool', default=False),
image=dict(type='str'),
init=dict(type='bool', default=False),
interactive=dict(type='bool', default=False),
ipc_mode=dict(type='str'),
keep_volumes=dict(type='bool', default=True),
kernel_memory=dict(type='str'),
kill_signal=dict(type='str'),
labels=dict(type='dict'),
links=dict(type='list', elements='str'),
log_driver=dict(type='str'),
log_options=dict(type='dict', aliases=['log_opt']),
mac_address=dict(type='str'),
memory=dict(type='str', default='0'),
memory_reservation=dict(type='str'),
memory_swap=dict(type='str'),
memory_swappiness=dict(type='int'),
mounts=dict(type='list', elements='dict', options=dict(
target=dict(type='str', required=True),
source=dict(type='str'),
type=dict(type='str', choices=['bind', 'volume', 'tmpfs', 'npipe'], default='volume'),
read_only=dict(type='bool'),
consistency=dict(type='str', choices=['default', 'consistent', 'cached', 'delegated']),
propagation=dict(type='str', choices=['private', 'rprivate', 'shared', 'rshared', 'slave', 'rslave']),
no_copy=dict(type='bool'),
labels=dict(type='dict'),
volume_driver=dict(type='str'),
volume_options=dict(type='dict'),
tmpfs_size=dict(type='str'),
tmpfs_mode=dict(type='str'),
)),
name=dict(type='str', required=True),
network_mode=dict(type='str'),
networks=dict(type='list', elements='dict', options=dict(
name=dict(type='str', required=True),
ipv4_address=dict(type='str'),
ipv6_address=dict(type='str'),
aliases=dict(type='list', elements='str'),
links=dict(type='list', elements='str'),
)),
networks_cli_compatible=dict(type='bool'),
oom_killer=dict(type='bool'),
oom_score_adj=dict(type='int'),
output_logs=dict(type='bool', default=False),
paused=dict(type='bool', default=False),
pid_mode=dict(type='str'),
pids_limit=dict(type='int'),
privileged=dict(type='bool', default=False),
published_ports=dict(type='list', elements='str', aliases=['ports']),
pull=dict(type='bool', default=False),
purge_networks=dict(type='bool', default=False),
read_only=dict(type='bool', default=False),
recreate=dict(type='bool', default=False),
restart=dict(type='bool', default=False),
restart_policy=dict(type='str', choices=['no', 'on-failure', 'always', 'unless-stopped']),
restart_retries=dict(type='int'),
runtime=dict(type='str'),
security_opts=dict(type='list', elements='str'),
shm_size=dict(type='str'),
state=dict(type='str', default='started', choices=['absent', 'present', 'started', 'stopped']),
stop_signal=dict(type='str'),
stop_timeout=dict(type='int'),
sysctls=dict(type='dict'),
tmpfs=dict(type='list', elements='str'),
trust_image_content=dict(type='bool', default=False),
tty=dict(type='bool', default=False),
ulimits=dict(type='list', elements='str'),
user=dict(type='str'),
userns_mode=dict(type='str'),
uts=dict(type='str'),
volume_driver=dict(type='str'),
volumes=dict(type='list', elements='str'),
volumes_from=dict(type='list', elements='str'),
working_dir=dict(type='str'),
)
required_if = [
('state', 'present', ['image'])
]
client = AnsibleDockerClientContainer(
argument_spec=argument_spec,
required_if=required_if,
supports_check_mode=True,
min_docker_api_version='1.20',
)
if client.module.params['networks_cli_compatible'] is None and client.module.params['networks']:
client.module.deprecate(
'Please note that docker_container handles networks slightly different than docker CLI. '
'If you specify networks, the default network will still be attached as the first network. '
'(You can specify purge_networks to remove all networks not explicitly listed.) '
'This behavior will change in Ansible 2.12. You can change the behavior now by setting '
'the new `networks_cli_compatible` option to `yes`, and remove this warning by setting '
'it to `no`',
version='2.12'
)
try:
cm = ContainerManager(client)
client.module.exit_json(**sanitize_result(cm.results))
except DockerException as e:
client.fail('An unexpected docker error occurred: {0}'.format(e), exception=traceback.format_exc())
except RequestException as e:
client.fail('An unexpected requests error occurred when docker-py tried to talk to the docker daemon: {0}'.format(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,873 |
docker_container not idempotent
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
docker_container is not idempotent when specifying an IP address
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = /home/pkueck/projects/cix.de/cix-ansible/ansible.cfg
configured module search path = ['/home/xxx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Jul 9 2019, 16:32:37) [GCC 9.1.1 20190503 (Red Hat 9.1.1-1)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
n/a
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Fedora 30, CentOS 7
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: myhost
gather_facts: no
tasks:
- docker_network:
name: "foonet"
ipam_config:
- subnet: 172.16.44.0/24
- docker_container:
name: "foo"
state: present
image: centos
networks:
- name: foonet
ipv4_address: "172.16.44.11"
networks_cli_compatible: yes
loop: [1,2]
```
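A minimal sketch (not the module's actual code, all names illustrative) of why the second loop iteration reports a change: the module's desired network config carries `""` for an unset address, while `docker inspect` reports the address the daemon actually assigned, so plain dict equality flags a spurious difference unless empty values are normalized first.

```python
def naive_changed(desired, actual):
    # Straight dict equality: any mismatch, including "" vs None, counts as a change.
    return desired != actual

def normalized_changed(desired, actual):
    # Treat None and "" both as "unspecified" before comparing.
    def norm(cfg):
        return {k: (None if v in (None, "") else v) for k, v in cfg.items()}
    return norm(desired) != norm(actual)

desired = {"name": "foonet", "ipv4_address": "172.16.44.11", "ipv6_address": None}
actual = {"name": "foonet", "ipv4_address": "172.16.44.11", "ipv6_address": ""}

print(naive_changed(desired, actual))       # True: spurious change reported
print(normalized_changed(desired, actual))  # False: idempotent
```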
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
container created on first loop cycle, not touched on second one
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [myhost] ***************
TASK [docker_network] *******
ok: [myhost]
TASK [docker_container] *****
--- before
+++ after
@@ -1,9 +1,10 @@
{
- "exists": false,
+ "exists": true,
"network.foonet": {
"aliases": null,
- "ipv4_address": "",
- "ipv6_address": "",
+ "id": "27e539c3b9af0160f02a0532ad081d1bd1ef3dd93e652095ec96d6dfd95bf2fb",
+ "ipv4_address": "172.16.44.11",
+ "ipv6_address": null,
"links": null,
"name": "foonet"
}
changed: [myhost] => (item=1)
--- before
+++ after
@@ -1,10 +1,9 @@
{
"network.foonet": {
- "aliases": [
- "44cfded131d7"
- ],
- "ipv4_address": "",
- "ipv6_address": "",
+ "aliases": null,
+ "id": "27e539c3b9af0160f02a0532ad081d1bd1ef3dd93e652095ec96d6dfd95bf2fb",
+ "ipv4_address": "172.16.44.11",
+ "ipv6_address": null,
"links": null,
"name": "foonet"
}
changed: [myhost] => (item=2)
```
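The diffs above show the module comparing its own unset values (`ipv4_address: ""`, auto-generated `aliases`) against what the daemon reports. A hedged sketch of one way to make that comparison idempotent — only compare options the task actually set, ignoring unset values — follows; this is illustrative, the real fix landed in ansible/ansible#62928:

```python
def network_differs(requested, current):
    # Compare only the keys the user explicitly set in the task.
    for key, want in requested.items():
        if want in (None, ""):
            continue  # option not set by the user; nothing to compare
        if current.get(key) != want:
            return True
    return False

# item=2: the container already has the requested address attached.
requested = {"name": "foonet", "ipv4_address": "172.16.44.11", "ipv6_address": ""}
current = {"name": "foonet", "ipv4_address": "172.16.44.11",
           "ipv6_address": "", "aliases": ["44cfded131d7"]}

print(network_differs(requested, current))  # False: no change reported
```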
|
https://github.com/ansible/ansible/issues/62873
|
https://github.com/ansible/ansible/pull/62928
|
a79f7e575a9576f804007ed979aa6c1aa731dd2d
|
62c0cae29a393859522fcb391562dc1edd73ce53
| 2019-09-26T13:27:41Z |
python
| 2019-09-30T08:47:02Z |
changelogs/fragments/62928-docker_container-ip-address-idempotency.yml
| |
lib/ansible/modules/cloud/docker/docker_container.py
|
#!/usr/bin/python
#
# Copyright 2016 Red Hat | Ansible
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: docker_container
short_description: manage docker containers
description:
- Manage the life cycle of docker containers.
- Supports check mode. Run with C(--check) and C(--diff) to view config difference and list of actions to be taken.
version_added: "2.1"
notes:
- For most config changes, the container needs to be recreated, i.e. the existing container has to be destroyed and
a new one created. This can cause unexpected data loss and downtime. You can use the I(comparisons) option to
prevent this.
- If the module needs to recreate the container, it will only use the options provided to the module to create the
new container (except I(image)). Therefore, always specify *all* options relevant to the container.
- When I(restart) is set to C(true), the module will only restart the container if no config changes are detected.
Please note that several options have default values; if the container to be restarted uses different values for
these options, it will be recreated instead. The options with default values which can cause this are I(auto_remove),
I(detach), I(init), I(interactive), I(memory), I(paused), I(privileged), I(read_only) and I(tty).
options:
auto_remove:
description:
- Enable auto-removal of the container on the daemon side when the container's process exits.
type: bool
default: no
version_added: "2.4"
blkio_weight:
description:
- Block IO (relative weight), between 10 and 1000.
type: int
capabilities:
description:
- List of capabilities to add to the container.
type: list
cap_drop:
description:
- List of capabilities to drop from the container.
type: list
version_added: "2.7"
cleanup:
description:
- Use with I(detach=false) to remove the container after successful execution.
type: bool
default: no
version_added: "2.2"
command:
description:
- Command to execute when the container starts.
A command may be either a string or a list.
- Prior to version 2.4, strings were split on commas.
type: raw
comparisons:
description:
- Allows to specify how properties of existing containers are compared with
module options to decide whether the container should be recreated / updated
or not. Only options which correspond to the state of a container as handled
by the Docker daemon can be specified, as well as C(networks).
- Must be a dictionary specifying for an option one of the keys C(strict), C(ignore)
and C(allow_more_present).
- If C(strict) is specified, values are tested for equality, and changes always
result in updating or restarting. If C(ignore) is specified, changes are ignored.
- C(allow_more_present) is allowed only for lists, sets and dicts. If it is
specified for lists or sets, the container will only be updated or restarted if
the module option contains a value which is not present in the container's
options. If the option is specified for a dict, the container will only be updated
or restarted if the module option contains a key which isn't present in the
container's option, or if the value of a key present differs.
- The wildcard option C(*) can be used to set one of the default values C(strict)
or C(ignore) to I(all) comparisons.
- See the examples for details.
type: dict
version_added: "2.8"
cpu_period:
description:
- Limit CPU CFS (Completely Fair Scheduler) period
type: int
cpu_quota:
description:
- Limit CPU CFS (Completely Fair Scheduler) quota
type: int
cpuset_cpus:
description:
- CPUs in which to allow execution C(1,3) or C(1-3).
type: str
cpuset_mems:
description:
- Memory nodes (MEMs) in which to allow execution C(0-3) or C(0,1)
type: str
cpu_shares:
description:
- CPU shares (relative weight).
type: int
detach:
description:
- Enable detached mode to leave the container running in background.
If disabled, the task will reflect the status of the container run (failed if the command failed).
type: bool
default: yes
devices:
description:
- "List of host device bindings to add to the container. Each binding is a mapping expressed
in the format: <path_on_host>:<path_in_container>:<cgroup_permissions>"
type: list
device_read_bps:
description:
- "List of device path and read rate (bytes per second) from device."
type: list
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit. Format: <number>[<unit>]"
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)"
- "Omitting the unit defaults to bytes."
type: str
required: yes
version_added: "2.8"
device_write_bps:
description:
- "List of device and write rate (bytes per second) to device."
type: list
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device write limit. Format: <number>[<unit>]"
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)"
- "Omitting the unit defaults to bytes."
type: str
required: yes
version_added: "2.8"
device_read_iops:
description:
- "List of device and read rate (IO per second) from device."
type: list
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit."
- "Must be a positive integer."
type: int
required: yes
version_added: "2.8"
device_write_iops:
description:
- "List of device and write rate (IO per second) to device."
type: list
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device write limit."
- "Must be a positive integer."
type: int
required: yes
version_added: "2.8"
dns_opts:
description:
- List of DNS options.
type: list
dns_servers:
description:
- List of custom DNS servers.
type: list
dns_search_domains:
description:
- List of custom DNS search domains.
type: list
domainname:
description:
- Container domainname.
type: str
version_added: "2.5"
env:
description:
- Dictionary of key-value pairs.
- Values which might be parsed as numbers, booleans or other types by the YAML parser must be quoted (e.g. C("true")) in order to avoid data loss.
type: dict
env_file:
description:
- Path to a file, present on the target, containing environment variables I(FOO=BAR).
- If a variable is also present in C(env), the C(env) value will override it.
type: path
version_added: "2.2"
entrypoint:
description:
- Command that overwrites the default ENTRYPOINT of the image.
type: list
etc_hosts:
description:
- Dict of host-to-IP mappings, where each host name is a key in the dictionary.
Each host name will be added to the container's /etc/hosts file.
type: dict
exposed_ports:
description:
- List of additional container ports which informs Docker that the container
listens on the specified network ports at runtime.
If the port is already exposed using EXPOSE in a Dockerfile, it does not
need to be exposed again.
type: list
aliases:
- exposed
- expose
force_kill:
description:
- Use the kill command when stopping a running container.
type: bool
default: no
aliases:
- forcekill
groups:
description:
- List of additional group names and/or IDs that the container process will run as.
type: list
healthcheck:
description:
- 'Configure a check that is run to determine whether or not containers for this service are "healthy".
See the docs for the L(HEALTHCHECK Dockerfile instruction,https://docs.docker.com/engine/reference/builder/#healthcheck)
for details on how healthchecks work.'
- 'I(interval), I(timeout) and I(start_period) are specified as durations. They accept duration as a string in a format
that looks like: C(5h34m56s), C(1m30s), etc. The supported units are C(us), C(ms), C(s), C(m) and C(h).'
type: dict
suboptions:
test:
description:
- Command to run to check health.
- Must be either a string or a list. If it is a list, the first item must be one of C(NONE), C(CMD) or C(CMD-SHELL).
type: raw
interval:
description:
- 'Time between running the check. (default: 30s)'
type: str
timeout:
description:
- 'Maximum time to allow one check to run. (default: 30s)'
type: str
retries:
description:
- 'Consecutive failures needed to report unhealthy. Accepts an integer value. (default: 3)'
type: int
start_period:
description:
- 'Start period for the container to initialize before starting health-retries countdown. (default: 0s)'
type: str
version_added: "2.8"
hostname:
description:
- Container hostname.
type: str
ignore_image:
description:
- When C(state) is I(present) or I(started) the module compares the configuration of an existing
container to requested configuration. The evaluation includes the image version. If
the image version in the registry does not match the container, the container will be
recreated. Stop this behavior by setting C(ignore_image) to I(True).
- I(Warning:) This option is ignored if C(image) or C(*) is used for the C(comparisons) option.
type: bool
default: no
version_added: "2.2"
image:
description:
- Repository path and tag used to create the container. If an image is not found or pull is true, the image
will be pulled from the registry. If no tag is included, C(latest) will be used.
- Can also be an image ID. If this is the case, the image is assumed to be available locally.
The C(pull) option is ignored for this case.
type: str
init:
description:
- Run an init inside the container that forwards signals and reaps processes.
This option requires Docker API >= 1.25.
type: bool
default: no
version_added: "2.6"
interactive:
description:
- Keep stdin open after a container is launched, even if not attached.
type: bool
default: no
ipc_mode:
description:
- Set the IPC mode for the container. Can be one of 'container:<name|id>' to reuse another
container's IPC namespace or 'host' to use the host's IPC namespace within the container.
type: str
keep_volumes:
description:
- Retain volumes associated with a removed container.
type: bool
default: yes
kill_signal:
description:
- Override default signal used to kill a running container.
type: str
kernel_memory:
description:
- "Kernel memory limit (format: C(<number>[<unit>])). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte). Minimum is C(4M)."
- Omitting the unit defaults to bytes.
type: str
labels:
description:
- Dictionary of key-value pairs.
type: dict
links:
description:
- List of name aliases for linked containers in the format C(container_name:alias).
- Setting this will force container to be restarted.
type: list
log_driver:
description:
- Specify the logging driver. Docker uses I(json-file) by default.
- See L(here,https://docs.docker.com/config/containers/logging/configure/) for possible choices.
type: str
log_options:
description:
- Dictionary of options specific to the chosen log_driver. See https://docs.docker.com/engine/admin/logging/overview/
for details.
type: dict
aliases:
- log_opt
mac_address:
description:
- Container MAC address (e.g. 92:d0:c6:0a:29:33)
type: str
memory:
description:
- "Memory limit (format: C(<number>[<unit>])). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
default: '0'
memory_reservation:
description:
- "Memory soft limit (format: C(<number>[<unit>])). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
memory_swap:
description:
- "Total memory limit (memory + swap, format: C(<number>[<unit>])).
Number is a positive integer. Unit can be C(B) (byte), C(K) (kibibyte, 1024B),
C(M) (mebibyte), C(G) (gibibyte), C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
memory_swappiness:
description:
- Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- If not set, the value will remain the same if the container exists and will be inherited from the host machine if it is (re-)created.
type: int
mounts:
version_added: "2.9"
type: list
description:
- 'Specification for mounts to be added to the container. More powerful alternative to I(volumes).'
suboptions:
target:
description:
- Path inside the container.
type: str
required: true
source:
description:
- Mount source (e.g. a volume name or a host path).
type: str
type:
description:
- The mount type.
- Note that C(npipe) is only supported by Docker for Windows.
type: str
choices:
- 'bind'
- 'volume'
- 'tmpfs'
- 'npipe'
default: volume
read_only:
description:
- 'Whether the mount should be read-only.'
type: bool
consistency:
description:
- 'The consistency requirement for the mount.'
type: str
choices:
- 'default'
- 'consistent'
- 'cached'
- 'delegated'
propagation:
description:
- Propagation mode. Only valid for the C(bind) type.
type: str
choices:
- 'private'
- 'rprivate'
- 'shared'
- 'rshared'
- 'slave'
- 'rslave'
no_copy:
description:
- False if the volume should be populated with the data from the target. Only valid for the C(volume) type.
- The default value is C(false).
type: bool
labels:
description:
- User-defined name and labels for the volume. Only valid for the C(volume) type.
type: dict
volume_driver:
description:
- Specify the volume driver. Only valid for the C(volume) type.
- See L(here,https://docs.docker.com/storage/volumes/#use-a-volume-driver) for details.
type: str
volume_options:
description:
- Dictionary of options specific to the chosen volume_driver. See L(here,https://docs.docker.com/storage/volumes/#use-a-volume-driver)
for details.
type: dict
tmpfs_size:
description:
- "The size for the tmpfs mount in bytes. Format: <number>[<unit>]"
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)"
- "Omitting the unit defaults to bytes."
type: str
tmpfs_mode:
description:
- The permission mode for the tmpfs mount.
type: str
name:
description:
- Assign a name to a new container or match an existing container.
- When identifying an existing container name may be a name or a long or short container ID.
type: str
required: yes
network_mode:
description:
- Connect the container to a network. Choices are "bridge", "host", "none" or "container:<name|id>"
type: str
userns_mode:
description:
- Set the user namespace mode for the container. Currently, the only valid value is C(host).
type: str
version_added: "2.5"
networks:
description:
- List of networks the container belongs to.
- For examples of the data structure and usage see EXAMPLES below.
- To remove a container from one or more networks, use the C(purge_networks) option.
- Note that as opposed to C(docker run ...), M(docker_container) does not remove the default
network if C(networks) is specified. You need to explicitly use C(purge_networks) to enforce
the removal of the default network (and all other networks not explicitly mentioned in C(networks)).
type: list
suboptions:
name:
description:
- The network's name.
type: str
required: yes
ipv4_address:
description:
- The container's IPv4 address in this network.
type: str
ipv6_address:
description:
- The container's IPv6 address in this network.
type: str
links:
description:
- A list of containers to link to.
type: list
aliases:
description:
- List of aliases for this container in this network. These names
can be used in the network to reach this container.
type: list
version_added: "2.2"
networks_cli_compatible:
description:
- "When networks are provided to the module via the I(networks) option, the module
behaves differently than C(docker run --network): C(docker run --network other)
will create a container with network C(other) attached, but the default network
not attached. This module with C(networks: {name: other}) will create a container
with both C(default) and C(other) attached. If I(purge_networks) is set to C(yes),
the C(default) network will be removed afterwards."
- "If I(networks_cli_compatible) is set to C(yes), this module will behave as
C(docker run --network) and will I(not) add the default network if C(networks) is
specified. If C(networks) is not specified, the default network will be attached."
- "Note that docker CLI also sets C(network_mode) to the name of the first network
added if C(--network) is specified. For more compatibility with docker CLI, you
explicitly have to set C(network_mode) to the name of the first network you're
adding."
- Current value is C(no). A new default of C(yes) will be set in Ansible 2.12.
type: bool
version_added: "2.8"
oom_killer:
description:
- Whether or not to disable OOM Killer for the container.
type: bool
oom_score_adj:
description:
- An integer value containing the score given to the container in order to tune
OOM killer preferences.
type: int
version_added: "2.2"
output_logs:
description:
- If set to true, output of the container command will be printed (only effective
when log_driver is set to json-file or journald).
type: bool
default: no
version_added: "2.7"
paused:
description:
- Use with the started state to pause running processes inside the container.
type: bool
default: no
pid_mode:
description:
- Set the PID namespace mode for the container.
- Note that Docker SDK for Python < 2.0 only supports 'host'. Newer versions of the
Docker SDK for Python (docker) allow all values supported by the docker daemon.
type: str
pids_limit:
description:
- Set PIDs limit for the container. It accepts an integer value.
- Set -1 for unlimited PIDs.
type: int
version_added: "2.8"
privileged:
description:
- Give extended privileges to the container.
type: bool
default: no
published_ports:
description:
- List of ports to publish from the container to the host.
- "Use docker CLI syntax: C(8000), C(9000:8000), or C(0.0.0.0:9000:8000), where 8000 is a
container port, 9000 is a host port, and 0.0.0.0 is a host interface."
- Port ranges can be used for source and destination ports. If two ranges with
different lengths are specified, the shorter range will be used.
- "Bind addresses must be either IPv4 or IPv6 addresses. Hostnames are I(not) allowed. This
is different from the C(docker) command line utility. Use the L(dig lookup,../lookup/dig.html)
to resolve hostnames."
- A value of C(all) will publish all exposed container ports to random host ports, ignoring
any other mappings.
- If C(networks) parameter is provided, will inspect each network to see if there exists
a bridge network with optional parameter com.docker.network.bridge.host_binding_ipv4.
If such a network is found, then published ports where no host IP address is specified
will be bound to the host IP pointed to by com.docker.network.bridge.host_binding_ipv4.
Note that the first bridge network with a com.docker.network.bridge.host_binding_ipv4
value encountered in the list of C(networks) is the one that will be used.
type: list
aliases:
- ports
pull:
description:
- If true, always pull the latest version of an image. Otherwise, will only pull an image
when missing.
- I(Note) that images are only pulled when specified by name. If the image is specified
as an image ID (hash), it cannot be pulled.
type: bool
default: no
purge_networks:
description:
- Remove the container from ALL networks not included in C(networks) parameter.
- Any default networks such as I(bridge), if not found in C(networks), will be removed as well.
type: bool
default: no
version_added: "2.2"
read_only:
description:
- Mount the container's root file system as read-only.
type: bool
default: no
recreate:
description:
- Use with present and started states to force the re-creation of an existing container.
type: bool
default: no
restart:
description:
- Use with started state to force a matching container to be stopped and restarted.
type: bool
default: no
restart_policy:
description:
- Container restart policy. Place quotes around I(no) option.
type: str
choices:
- 'no'
- 'on-failure'
- 'always'
- 'unless-stopped'
restart_retries:
description:
- Use with restart policy to control maximum number of restart attempts.
type: int
runtime:
description:
- Runtime to use for the container.
type: str
version_added: "2.8"
shm_size:
description:
- "Size of C(/dev/shm) (format: C(<number>[<unit>])). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes. If you omit the size entirely, the system uses C(64M).
type: str
security_opts:
description:
- List of security options in the form of C("label:user:User")
type: list
state:
description:
- 'I(absent) - A container matching the specified name will be stopped and removed. Use force_kill to kill the container
rather than stopping it. Use keep_volumes to retain volumes associated with the removed container.'
- 'I(present) - Asserts the existence of a container matching the name and any provided configuration parameters. If no
container matches the name, a container will be created. If a container matches the name but the provided configuration
does not match, the container will be updated, if it can be. If it cannot be updated, it will be removed and re-created
with the requested config. Image version will be taken into account when comparing configuration. To ignore image
version use the ignore_image option. Use the recreate option to force the re-creation of the matching container. Use
force_kill to kill the container rather than stopping it. Use keep_volumes to retain volumes associated with a removed
container.'
- 'I(started) - Asserts there is a running container matching the name and any provided configuration. If no container
matches the name, a container will be created and started. If a container matching the name is found but the
configuration does not match, the container will be updated, if it can be. If it cannot be updated, it will be removed
and a new container will be created with the requested configuration and started. Image version will be taken into
account when comparing configuration. To ignore image version use the ignore_image option. Use recreate to always
re-create a matching container, even if it is running. Use restart to force a matching container to be stopped and
restarted. Use force_kill to kill a container rather than stopping it. Use keep_volumes to retain volumes associated
with a removed container.'
- 'I(stopped) - Asserts that the container is first I(present), and then if the container is running moves it to a stopped
state. Use force_kill to kill a container rather than stopping it.'
type: str
default: started
choices:
- absent
- present
- stopped
- started
stop_signal:
description:
- Override default signal used to stop the container.
type: str
stop_timeout:
description:
- Number of seconds to wait for the container to stop before sending SIGKILL.
When the container is created by this module, its C(StopTimeout) configuration
will be set to this value.
- When the container is stopped, will be used as a timeout for stopping the
container. In case the container has a custom C(StopTimeout) configuration,
the behavior depends on the version of the docker daemon. New versions of
the docker daemon will always use the container's configured C(StopTimeout)
value if it has been configured.
type: int
trust_image_content:
description:
- If C(yes), skip image verification.
type: bool
default: no
tmpfs:
description:
- Mount a tmpfs directory
type: list
version_added: 2.4
tty:
description:
- Allocate a pseudo-TTY.
type: bool
default: no
ulimits:
description:
- "List of ulimit options. A ulimit is specified as C(nofile:262144:262144)"
type: list
sysctls:
description:
- Dictionary of key-value pairs.
type: dict
version_added: 2.4
user:
description:
- Sets the username or UID used and optionally the groupname or GID for the specified command.
- "Can be [ user | user:group | uid | uid:gid | user:gid | uid:group ]"
type: str
uts:
description:
- Set the UTS namespace mode for the container.
type: str
volumes:
description:
- List of volumes to mount within the container.
- "Use docker CLI-style syntax: C(/host:/container[:mode])"
- "Mount modes can be a comma-separated list of various modes such as C(ro), C(rw), C(consistent),
C(delegated), C(cached), C(rprivate), C(private), C(rshared), C(shared), C(rslave), C(slave), and
C(nocopy). Note that the docker daemon might not support all modes and combinations of such modes."
- SELinux hosts can additionally use C(z) or C(Z) to use a shared or
private label for the volume.
- "Note that Ansible 2.7 and earlier only supported one mode, which had to be one of C(ro), C(rw),
C(z), and C(Z)."
type: list
volume_driver:
description:
- The container volume driver.
type: str
volumes_from:
description:
- List of container names or Ids to get volumes from.
type: list
working_dir:
description:
- Path to the working directory.
type: str
version_added: "2.4"
extends_documentation_fragment:
- docker
- docker.docker_py_1_documentation
author:
- "Cove Schneider (@cove)"
- "Joshua Conner (@joshuaconner)"
- "Pavel Antonov (@softzilla)"
- "Thomas Steinbach (@ThomasSteinbach)"
- "Philippe Jandot (@zfil)"
- "Daan Oosterveld (@dusdanig)"
- "Chris Houseknecht (@chouseknecht)"
- "Kassian Sun (@kassiansun)"
- "Felix Fontein (@felixfontein)"
requirements:
- "L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) >= 1.8.0 (use L(docker-py,https://pypi.org/project/docker-py/) for Python 2.6)"
- "Docker API >= 1.20"
'''
EXAMPLES = '''
- name: Create a data container
docker_container:
name: mydata
image: busybox
volumes:
- /data
- name: Re-create a redis container
docker_container:
name: myredis
image: redis
command: redis-server --appendonly yes
state: present
recreate: yes
exposed_ports:
- 6379
volumes_from:
- mydata
- name: Restart a container
docker_container:
name: myapplication
image: someuser/appimage
state: started
restart: yes
links:
- "myredis:aliasedredis"
devices:
- "/dev/sda:/dev/xvda:rwm"
ports:
- "8080:9000"
- "127.0.0.1:8081:9001/udp"
env:
SECRET_KEY: "ssssh"
# Values which might be parsed as numbers, booleans or other types by the YAML parser need to be quoted
BOOLEAN_KEY: "yes"
- name: Container present
docker_container:
name: mycontainer
state: present
image: ubuntu:14.04
command: sleep infinity
- name: Stop a container
docker_container:
name: mycontainer
state: stopped
- name: Start 4 load-balanced containers
docker_container:
name: "container{{ item }}"
recreate: yes
image: someuser/anotherappimage
command: sleep 1d
with_sequence: count=4
- name: Remove container
docker_container:
name: ohno
state: absent
- name: Syslogging output
docker_container:
name: myservice
image: busybox
log_driver: syslog
log_options:
syslog-address: tcp://my-syslog-server:514
syslog-facility: daemon
# NOTE: in Docker 1.13+ the "syslog-tag" option was renamed to "tag";
# with older docker installs, use "syslog-tag" instead
tag: myservice
- name: Create db container and connect to network
docker_container:
name: db_test
image: "postgres:latest"
networks:
- name: "{{ docker_network_name }}"
- name: Start container, connect to network and link
docker_container:
name: sleeper
image: ubuntu:14.04
networks:
- name: TestingNet
ipv4_address: "172.1.1.100"
aliases:
- sleepyzz
links:
- db_test:db
- name: TestingNet2
- name: Start a container with a command
docker_container:
name: sleepy
image: ubuntu:14.04
command: ["sleep", "infinity"]
- name: Add container to networks
docker_container:
name: sleepy
networks:
- name: TestingNet
ipv4_address: 172.1.1.18
links:
- sleeper
- name: TestingNet2
ipv4_address: 172.1.10.20
- name: Update network with aliases
docker_container:
name: sleepy
networks:
- name: TestingNet
aliases:
- sleepyz
- zzzz
- name: Remove container from one network
docker_container:
name: sleepy
networks:
- name: TestingNet2
purge_networks: yes
- name: Remove container from all networks
docker_container:
name: sleepy
purge_networks: yes
- name: Start a container and use an env file
docker_container:
name: agent
image: jenkinsci/ssh-slave
env_file: /var/tmp/jenkins/agent.env
- name: Create a container with limited capabilities
docker_container:
name: sleepy
image: ubuntu:16.04
command: sleep infinity
capabilities:
- sys_time
cap_drop:
- all
- name: Finer container restart/update control
docker_container:
name: test
image: ubuntu:18.04
env:
arg1: "true"
arg2: "whatever"
volumes:
- /tmp:/tmp
comparisons:
image: ignore # don't restart containers with older versions of the image
env: strict # we want precisely this environment
volumes: allow_more_present # if there are more volumes, that's ok, as long as `/tmp:/tmp` is there
- name: Finer container restart/update control II
docker_container:
name: test
image: ubuntu:18.04
env:
arg1: "true"
arg2: "whatever"
comparisons:
'*': ignore # by default, ignore *all* options (including image)
env: strict # except for environment variables; there, we want to be strict
- name: Start container with healthstatus
docker_container:
name: nginx-proxy
image: nginx:1.13
state: started
healthcheck:
# Check if nginx server is healthy by curl'ing the server.
# If this fails or times out, the healthcheck fails.
test: ["CMD", "curl", "--fail", "http://nginx.host.com"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 30s
- name: Remove healthcheck from container
docker_container:
name: nginx-proxy
image: nginx:1.13
state: started
healthcheck:
# The "NONE" check needs to be specified
test: ["NONE"]
- name: Start container with block device read limit
docker_container:
name: test
image: ubuntu:18.04
state: started
device_read_bps:
# Limit read rate for /dev/sda to 20 mebibytes per second
- path: /dev/sda
rate: 20M
device_read_iops:
# Limit read rate for /dev/sdb to 300 IO per second
- path: /dev/sdb
rate: 300
'''
RETURN = '''
container:
description:
- Facts representing the current state of the container. Matches the docker inspection output.
- Note that facts are part of the registered vars since Ansible 2.8. For compatibility reasons, the facts
are also accessible directly as C(docker_container). Note that the returned fact will be removed in Ansible 2.12.
- Before 2.3 this was C(ansible_docker_container) but was renamed in 2.3 to C(docker_container) due to
conflicts with the connection plugin.
- Empty if C(state) is C(absent).
- If I(detach) is C(false), will include an C(Output) attribute containing any output from the container run.
returned: always
type: dict
sample: '{
"AppArmorProfile": "",
"Args": [],
"Config": {
"AttachStderr": false,
"AttachStdin": false,
"AttachStdout": false,
"Cmd": [
"/usr/bin/supervisord"
],
"Domainname": "",
"Entrypoint": null,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"ExposedPorts": {
"443/tcp": {},
"80/tcp": {}
},
"Hostname": "8e47bf643eb9",
"Image": "lnmp_nginx:v1",
"Labels": {},
"OnBuild": null,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": {
"/tmp/lnmp/nginx-sites/logs/": {}
},
...
}'
'''
import os
import re
import shlex
import traceback
from distutils.version import LooseVersion
from ansible.module_utils.common.text.formatters import human_to_bytes
from ansible.module_utils.docker.common import (
AnsibleDockerClient,
DifferenceTracker,
DockerBaseClass,
compare_generic,
is_image_name_id,
sanitize_result,
clean_dict_booleans_for_docker_api,
omit_none_from_dict,
parse_healthcheck,
DOCKER_COMMON_ARGS,
RequestException,
)
from ansible.module_utils.six import string_types
try:
from docker import utils
from ansible.module_utils.docker.common import docker_version
if LooseVersion(docker_version) >= LooseVersion('1.10.0'):
from docker.types import Ulimit, LogConfig
from docker import types as docker_types
else:
from docker.utils.types import Ulimit, LogConfig
from docker.errors import DockerException, APIError, NotFound
except Exception:
# missing Docker SDK for Python handled in ansible.module_utils.docker.common
pass
REQUIRES_CONVERSION_TO_BYTES = [
'kernel_memory',
'memory',
'memory_reservation',
'memory_swap',
'shm_size'
]
def is_volume_permissions(mode):
for part in mode.split(','):
if part not in ('rw', 'ro', 'z', 'Z', 'consistent', 'delegated', 'cached', 'rprivate', 'private', 'rshared', 'shared', 'rslave', 'slave', 'nocopy'):
return False
return True
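The mode validator above checks each comma-separated component of a volume mode string. A standalone restatement of that logic (`is_volume_permissions_sketch` is a hypothetical name used here for illustration, not part of the module):

```python
# Recognized volume mode flags, mirroring the tuple checked above.
VALID_VOLUME_MODES = ('rw', 'ro', 'z', 'Z', 'consistent', 'delegated',
                      'cached', 'rprivate', 'private', 'rshared', 'shared',
                      'rslave', 'slave', 'nocopy')


def is_volume_permissions_sketch(mode):
    # Every comma-separated component must be a recognized mode flag.
    return all(part in VALID_VOLUME_MODES for part in mode.split(','))
```

This is why a two-part volume spec like `./data:/data` is not mistaken for `container:mode` — the second component fails this check and is treated as a container path instead.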
def parse_port_range(range_or_port, client):
'''
Parses a string containing either a single port or a range of ports.
Returns a list with one integer for each port in the range.
'''
if '-' in range_or_port:
try:
start, end = [int(port) for port in range_or_port.split('-')]
except Exception:
client.fail('Invalid port range: "{0}"'.format(range_or_port))
if end < start:
client.fail('Invalid port range: "{0}"'.format(range_or_port))
return list(range(start, end + 1))
else:
try:
return [int(range_or_port)]
except Exception:
client.fail('Invalid port: "{0}"'.format(range_or_port))
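The same parsing can be sketched without the `client` error plumbing — raising `ValueError` where the module calls `client.fail()` (`parse_port_range_sketch` is a hypothetical name for illustration):

```python
def parse_port_range_sketch(range_or_port):
    # '8000-8002' -> [8000, 8001, 8002]; '80' -> [80].
    if '-' in range_or_port:
        start, end = [int(port) for port in range_or_port.split('-')]
        if end < start:
            raise ValueError('Invalid port range: "%s"' % range_or_port)
        return list(range(start, end + 1))
    return [int(range_or_port)]
```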
def split_colon_ipv6(text, client):
'''
Split string by ':', while keeping IPv6 addresses in square brackets in one component.
'''
if '[' not in text:
return text.split(':')
start = 0
result = []
while start < len(text):
i = text.find('[', start)
if i < 0:
result.extend(text[start:].split(':'))
break
j = text.find(']', i)
if j < 0:
client.fail('Cannot find closing "]" in input "{0}" for opening "[" at index {1}!'.format(text, i + 1))
result.extend(text[start:i].split(':'))
k = text.find(':', j)
if k < 0:
result[-1] += text[i:]
start = len(text)
else:
result[-1] += text[i:k]
if k == len(text):
result.append('')
break
start = k + 1
return result
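The find/scan loop above keeps bracketed IPv6 addresses in one component. The intended behavior can be sketched more compactly by tracking bracket depth (`split_colon_ipv6_sketch` is a hypothetical name; the module's version also reports unbalanced brackets via `client.fail()`):

```python
def split_colon_ipv6_sketch(text):
    # Split on ':' only outside '[...]', so '[::1]:8080:80'
    # becomes ['[::1]', '8080', '80'].
    parts = []
    buf = ''
    depth = 0
    for ch in text:
        if ch == '[':
            depth += 1
        elif ch == ']':
            depth -= 1
        if ch == ':' and depth == 0:
            parts.append(buf)
            buf = ''
        else:
            buf += ch
    parts.append(buf)
    return parts
```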
class TaskParameters(DockerBaseClass):
'''
Access and parse module parameters
'''
def __init__(self, client):
super(TaskParameters, self).__init__()
self.client = client
self.auto_remove = None
self.blkio_weight = None
self.capabilities = None
self.cap_drop = None
self.cleanup = None
self.command = None
self.cpu_period = None
self.cpu_quota = None
self.cpuset_cpus = None
self.cpuset_mems = None
self.cpu_shares = None
self.detach = None
self.debug = None
self.devices = None
self.device_read_bps = None
self.device_write_bps = None
self.device_read_iops = None
self.device_write_iops = None
self.dns_servers = None
self.dns_opts = None
self.dns_search_domains = None
self.domainname = None
self.env = None
self.env_file = None
self.entrypoint = None
self.etc_hosts = None
self.exposed_ports = None
self.force_kill = None
self.groups = None
self.healthcheck = None
self.hostname = None
self.ignore_image = None
self.image = None
self.init = None
self.interactive = None
self.ipc_mode = None
self.keep_volumes = None
self.kernel_memory = None
self.kill_signal = None
self.labels = None
self.links = None
self.log_driver = None
self.output_logs = None
self.log_options = None
self.mac_address = None
self.memory = None
self.memory_reservation = None
self.memory_swap = None
self.memory_swappiness = None
self.mounts = None
self.name = None
self.network_mode = None
self.userns_mode = None
self.networks = None
self.networks_cli_compatible = None
self.oom_killer = None
self.oom_score_adj = None
self.paused = None
self.pid_mode = None
self.pids_limit = None
self.privileged = None
self.purge_networks = None
self.pull = None
self.read_only = None
self.recreate = None
self.restart = None
self.restart_retries = None
self.restart_policy = None
self.runtime = None
self.shm_size = None
self.security_opts = None
self.state = None
self.stop_signal = None
self.stop_timeout = None
self.tmpfs = None
self.trust_image_content = None
self.tty = None
self.user = None
self.uts = None
self.volumes = None
self.volume_binds = dict()
self.volumes_from = None
self.volume_driver = None
self.working_dir = None
for key, value in client.module.params.items():
setattr(self, key, value)
self.comparisons = client.comparisons
# If state is 'absent', parameters do not have to be parsed or interpreted.
# Only the container's name is needed.
if self.state == 'absent':
return
if self.groups:
# In case integers are passed as groups, we need to convert them to
# strings as docker internally treats them as strings.
self.groups = [str(g) for g in self.groups]
for param_name in REQUIRES_CONVERSION_TO_BYTES:
if client.module.params.get(param_name):
try:
setattr(self, param_name, human_to_bytes(client.module.params.get(param_name)))
except ValueError as exc:
self.fail("Failed to convert %s to bytes: %s" % (param_name, exc))
self.publish_all_ports = False
self.published_ports = self._parse_publish_ports()
if self.published_ports in ('all', 'ALL'):
self.publish_all_ports = True
self.published_ports = None
self.ports = self._parse_exposed_ports(self.published_ports)
self.log("expose ports:")
self.log(self.ports, pretty_print=True)
self.links = self._parse_links(self.links)
if self.volumes:
self.volumes = self._expand_host_paths()
self.tmpfs = self._parse_tmpfs()
self.env = self._get_environment()
self.ulimits = self._parse_ulimits()
self.sysctls = self._parse_sysctls()
self.log_config = self._parse_log_config()
try:
self.healthcheck, self.disable_healthcheck = parse_healthcheck(self.healthcheck)
except ValueError as e:
self.fail(str(e))
self.exp_links = None
self.volume_binds = self._get_volume_binds(self.volumes)
self.pid_mode = self._replace_container_names(self.pid_mode)
self.ipc_mode = self._replace_container_names(self.ipc_mode)
self.network_mode = self._replace_container_names(self.network_mode)
self.log("volumes:")
self.log(self.volumes, pretty_print=True)
self.log("volume binds:")
self.log(self.volume_binds, pretty_print=True)
if self.networks:
for network in self.networks:
network['id'] = self._get_network_id(network['name'])
if not network['id']:
self.fail("Parameter error: network named %s could not be found. Does it exist?" % network['name'])
if network.get('links'):
network['links'] = self._parse_links(network['links'])
if self.mac_address:
# Ensure the MAC address uses colons instead of hyphens for later comparison
self.mac_address = self.mac_address.replace('-', ':')
if self.entrypoint:
# convert from list to str.
self.entrypoint = ' '.join([str(x) for x in self.entrypoint])
if self.command:
# convert from list to str
if isinstance(self.command, list):
self.command = ' '.join([str(x) for x in self.command])
self.mounts_opt, self.expected_mounts = self._process_mounts()
self._check_mount_target_collisions()
for param_name in ["device_read_bps", "device_write_bps"]:
if client.module.params.get(param_name):
self._process_rate_bps(option=param_name)
for param_name in ["device_read_iops", "device_write_iops"]:
if client.module.params.get(param_name):
self._process_rate_iops(option=param_name)
def fail(self, msg):
self.client.fail(msg)
@property
def update_parameters(self):
'''
Returns parameters used to update a container
'''
update_parameters = dict(
blkio_weight='blkio_weight',
cpu_period='cpu_period',
cpu_quota='cpu_quota',
cpu_shares='cpu_shares',
cpuset_cpus='cpuset_cpus',
cpuset_mems='cpuset_mems',
mem_limit='memory',
mem_reservation='memory_reservation',
memswap_limit='memory_swap',
kernel_memory='kernel_memory',
)
result = dict()
for key, value in update_parameters.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
result[key] = getattr(self, value)
return result
@property
def create_parameters(self):
'''
Returns parameters used to create a container
'''
create_params = dict(
command='command',
domainname='domainname',
hostname='hostname',
user='user',
detach='detach',
stdin_open='interactive',
tty='tty',
ports='ports',
environment='env',
name='name',
entrypoint='entrypoint',
mac_address='mac_address',
labels='labels',
stop_signal='stop_signal',
working_dir='working_dir',
stop_timeout='stop_timeout',
healthcheck='healthcheck',
)
if self.client.docker_py_version < LooseVersion('3.0'):
# cpu_shares and volume_driver moved to create_host_config in Docker SDK for Python 3.0 and later
create_params['cpu_shares'] = 'cpu_shares'
create_params['volume_driver'] = 'volume_driver'
result = dict(
host_config=self._host_config(),
volumes=self._get_mounts(),
)
for key, value in create_params.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
result[key] = getattr(self, value)
if self.networks_cli_compatible and self.networks:
network = self.networks[0]
params = dict()
for para in ('ipv4_address', 'ipv6_address', 'links', 'aliases'):
if network.get(para):
params[para] = network[para]
network_config = dict()
network_config[network['name']] = self.client.create_endpoint_config(**params)
result['networking_config'] = self.client.create_networking_config(network_config)
return result
def _expand_host_paths(self):
new_vols = []
for vol in self.volumes:
if ':' in vol:
if len(vol.split(':')) == 3:
host, container, mode = vol.split(':')
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if re.match(r'[.~]', host):
host = os.path.abspath(os.path.expanduser(host))
new_vols.append("%s:%s:%s" % (host, container, mode))
continue
elif len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]) and re.match(r'[.~]', parts[0]):
host = os.path.abspath(os.path.expanduser(parts[0]))
new_vols.append("%s:%s:rw" % (host, parts[1]))
continue
new_vols.append(vol)
return new_vols
def _get_mounts(self):
'''
Return the list of container paths to be used as mounts.
'''
result = []
if self.volumes:
for vol in self.volumes:
if ':' in vol:
if len(vol.split(':')) == 3:
dummy, container, dummy = vol.split(':')
result.append(container)
continue
if len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]):
result.append(parts[1])
continue
result.append(vol)
self.log("mounts:")
self.log(result, pretty_print=True)
return result
def _host_config(self):
'''
Returns parameters used to create a HostConfig object
'''
host_config_params = dict(
port_bindings='published_ports',
publish_all_ports='publish_all_ports',
links='links',
privileged='privileged',
dns='dns_servers',
dns_opt='dns_opts',
dns_search='dns_search_domains',
binds='volume_binds',
volumes_from='volumes_from',
network_mode='network_mode',
userns_mode='userns_mode',
cap_add='capabilities',
cap_drop='cap_drop',
extra_hosts='etc_hosts',
read_only='read_only',
ipc_mode='ipc_mode',
security_opt='security_opts',
ulimits='ulimits',
sysctls='sysctls',
log_config='log_config',
mem_limit='memory',
memswap_limit='memory_swap',
mem_swappiness='memory_swappiness',
oom_score_adj='oom_score_adj',
oom_kill_disable='oom_killer',
shm_size='shm_size',
group_add='groups',
devices='devices',
pid_mode='pid_mode',
tmpfs='tmpfs',
init='init',
uts_mode='uts',
runtime='runtime',
auto_remove='auto_remove',
device_read_bps='device_read_bps',
device_write_bps='device_write_bps',
device_read_iops='device_read_iops',
device_write_iops='device_write_iops',
pids_limit='pids_limit',
mounts='mounts',
)
if self.client.docker_py_version >= LooseVersion('1.9') and self.client.docker_api_version >= LooseVersion('1.22'):
# blkio_weight can always be updated, but can only be set on creation
# when Docker SDK for Python and Docker API are new enough
host_config_params['blkio_weight'] = 'blkio_weight'
if self.client.docker_py_version >= LooseVersion('3.0'):
# cpu_shares and volume_driver moved to create_host_config in Docker SDK for Python 3.0 and later
host_config_params['cpu_shares'] = 'cpu_shares'
host_config_params['volume_driver'] = 'volume_driver'
params = dict()
for key, value in host_config_params.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
params[key] = getattr(self, value)
if self.restart_policy:
params['restart_policy'] = dict(Name=self.restart_policy,
MaximumRetryCount=self.restart_retries)
if 'mounts' in params:
params['mounts'] = self.mounts_opt
return self.client.create_host_config(**params)
@property
def default_host_ip(self):
ip = '0.0.0.0'
if not self.networks:
return ip
for net in self.networks:
if net.get('name'):
try:
network = self.client.inspect_network(net['name'])
if network.get('Driver') == 'bridge' and \
network.get('Options', {}).get('com.docker.network.bridge.host_binding_ipv4'):
ip = network['Options']['com.docker.network.bridge.host_binding_ipv4']
break
except NotFound as nfe:
self.client.fail(
"Cannot inspect the network '{0}' to determine the default IP: {1}".format(net['name'], nfe),
exception=traceback.format_exc()
)
return ip
def _parse_publish_ports(self):
'''
Parse ports from docker CLI syntax
'''
if self.published_ports is None:
return None
if 'all' in self.published_ports:
return 'all'
default_ip = self.default_host_ip
binds = {}
for port in self.published_ports:
parts = split_colon_ipv6(str(port), self.client)
container_port = parts[-1]
protocol = ''
if '/' in container_port:
container_port, protocol = parts[-1].split('/')
container_ports = parse_port_range(container_port, self.client)
p_len = len(parts)
if p_len == 1:
port_binds = len(container_ports) * [(default_ip,)]
elif p_len == 2:
port_binds = [(default_ip, port) for port in parse_port_range(parts[0], self.client)]
elif p_len == 3:
# We only allow IPv4 and IPv6 addresses for the bind address
ipaddr = parts[0]
if not re.match(r'^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$', parts[0]) and not re.match(r'^\[[0-9a-fA-F:]+\]$', ipaddr):
self.fail(('Bind addresses for published ports must be IPv4 or IPv6 addresses, not hostnames. '
'Use the dig lookup to resolve hostnames. (Found hostname: {0})').format(ipaddr))
if re.match(r'^\[[0-9a-fA-F:]+\]$', ipaddr):
ipaddr = ipaddr[1:-1]
if parts[1]:
port_binds = [(ipaddr, port) for port in parse_port_range(parts[1], self.client)]
else:
port_binds = len(container_ports) * [(ipaddr,)]
for bind, container_port in zip(port_binds, container_ports):
idx = '{0}/{1}'.format(container_port, protocol) if protocol else container_port
if idx in binds:
old_bind = binds[idx]
if isinstance(old_bind, list):
old_bind.append(bind)
else:
binds[idx] = [old_bind, bind]
else:
binds[idx] = bind
return binds
def _get_volume_binds(self, volumes):
'''
Extract host bindings, if any, from list of volume mapping strings.
:return: dictionary of bind mappings
'''
result = dict()
if volumes:
for vol in volumes:
host = None
if ':' in vol:
parts = vol.split(':')
if len(parts) == 3:
host, container, mode = parts
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
elif len(parts) == 2:
if not is_volume_permissions(parts[1]):
host, container, mode = (vol.split(':') + ['rw'])
if host is not None:
result[host] = dict(
bind=container,
mode=mode
)
return result
def _parse_exposed_ports(self, published_ports):
'''
Parse exposed ports from docker CLI-style ports syntax.
'''
exposed = []
if self.exposed_ports:
for port in self.exposed_ports:
port = str(port).strip()
protocol = 'tcp'
match = re.search(r'(/.+$)', port)
if match:
protocol = match.group(1).replace('/', '')
port = re.sub(r'/.+$', '', port)
exposed.append((port, protocol))
if published_ports:
# Any published port should also be exposed
for publish_port in published_ports:
match = False
if isinstance(publish_port, string_types) and '/' in publish_port:
port, protocol = publish_port.split('/')
port = int(port)
else:
protocol = 'tcp'
port = int(publish_port)
for exposed_port in exposed:
if exposed_port[1] != protocol:
continue
if isinstance(exposed_port[0], string_types) and '-' in exposed_port[0]:
start_port, end_port = exposed_port[0].split('-')
if int(start_port) <= port <= int(end_port):
match = True
elif exposed_port[0] == port:
match = True
if not match:
exposed.append((port, protocol))
return exposed
@staticmethod
def _parse_links(links):
'''
Turn links into a list of (name, alias) tuples
'''
if links is None:
return None
result = []
for link in links:
parsed_link = link.split(':', 1)
if len(parsed_link) == 2:
result.append((parsed_link[0], parsed_link[1]))
else:
result.append((parsed_link[0], parsed_link[0]))
return result
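The link normalization above maps `'db_test:db'` to `('db_test', 'db')` and aliases a bare name to itself. A standalone sketch of the same rule (`parse_links_sketch` is a hypothetical name for illustration):

```python
def parse_links_sketch(links):
    # 'db_test:db' -> ('db_test', 'db'); a bare name aliases to itself.
    result = []
    for link in links:
        name, dummy, alias = link.partition(':')
        result.append((name, alias or name))
    return result
```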
def _parse_ulimits(self):
'''
Turn ulimits into a list of Ulimit objects
'''
if self.ulimits is None:
return None
results = []
for limit in self.ulimits:
limits = dict()
pieces = limit.split(':')
if len(pieces) >= 2:
limits['name'] = pieces[0]
limits['soft'] = int(pieces[1])
limits['hard'] = int(pieces[1])
if len(pieces) == 3:
limits['hard'] = int(pieces[2])
try:
results.append(Ulimit(**limits))
except ValueError as exc:
self.fail("Error parsing ulimits value %s - %s" % (limit, exc))
return results
def _parse_sysctls(self):
'''
Return the sysctls dict unchanged; the docker API expects a plain mapping of sysctl names to values
'''
return self.sysctls
def _parse_log_config(self):
'''
Create a LogConfig object
'''
if self.log_driver is None:
return None
options = dict(
Type=self.log_driver,
Config=dict()
)
if self.log_options is not None:
options['Config'] = dict()
for k, v in self.log_options.items():
if not isinstance(v, string_types):
self.client.module.warn(
"Non-string value found for log_options option '%s'. The value is automatically converted to '%s'. "
"If this is not correct, or you want to avoid such warnings, please quote the value." % (k, str(v))
)
v = str(v)
self.log_options[k] = v
options['Config'][k] = v
try:
return LogConfig(**options)
except ValueError as exc:
self.fail('Error parsing logging options - %s' % (exc))
def _parse_tmpfs(self):
'''
Turn the tmpfs list into a dict mapping each path to its mount options
'''
result = dict()
if self.tmpfs is None:
return result
for tmpfs_spec in self.tmpfs:
split_spec = tmpfs_spec.split(":", 1)
if len(split_spec) > 1:
result[split_spec[0]] = split_spec[1]
else:
result[split_spec[0]] = ""
return result
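The split performed above maps each tmpfs spec to its options, defaulting to an empty string. A standalone sketch (`parse_tmpfs_sketch` is a hypothetical name for illustration):

```python
def parse_tmpfs_sketch(specs):
    # '/run:rw,size=64m' -> {'/run': 'rw,size=64m'}; a bare path maps to ''.
    result = {}
    for spec in specs:
        path, dummy, options = spec.partition(':')
        result[path] = options
    return result
```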
def _get_environment(self):
"""
If environment file is combined with explicit environment variables, the explicit environment variables
take precedence.
"""
final_env = {}
if self.env_file:
parsed_env_file = utils.parse_env_file(self.env_file)
for name, value in parsed_env_file.items():
final_env[name] = str(value)
if self.env:
for name, value in self.env.items():
if not isinstance(value, string_types):
self.fail("Non-string value found for env option. Ambiguous env options must be "
"wrapped in quotes to avoid them being interpreted. Key: %s" % (name, ))
final_env[name] = str(value)
return final_env
def _get_network_id(self, network_name):
network_id = None
try:
for network in self.client.networks(names=[network_name]):
if network['Name'] == network_name:
network_id = network['Id']
break
except Exception as exc:
self.fail("Error getting network id for %s - %s" % (network_name, str(exc)))
return network_id
def _process_mounts(self):
if self.mounts is None:
return None, None
mounts_list = []
mounts_expected = []
for mount in self.mounts:
target = mount['target']
datatype = mount['type']
mount_dict = dict(mount)
# Sanity checks (so we don't wait for docker-py to barf on input)
if mount_dict.get('source') is None and datatype != 'tmpfs':
self.client.fail('source must be specified for mount "{0}" of type "{1}"'.format(target, datatype))
mount_option_types = dict(
volume_driver='volume',
volume_options='volume',
propagation='bind',
no_copy='volume',
labels='volume',
tmpfs_size='tmpfs',
tmpfs_mode='tmpfs',
)
for option, req_datatype in mount_option_types.items():
if mount_dict.get(option) is not None and datatype != req_datatype:
self.client.fail('{0} cannot be specified for mount "{1}" of type "{2}" (needs type "{3}")'.format(option, target, datatype, req_datatype))
# Handle volume_driver and volume_options
volume_driver = mount_dict.pop('volume_driver')
volume_options = mount_dict.pop('volume_options')
if volume_driver:
if volume_options:
volume_options = clean_dict_booleans_for_docker_api(volume_options)
mount_dict['driver_config'] = docker_types.DriverConfig(name=volume_driver, options=volume_options)
if mount_dict['labels']:
mount_dict['labels'] = clean_dict_booleans_for_docker_api(mount_dict['labels'])
if mount_dict.get('tmpfs_size') is not None:
try:
mount_dict['tmpfs_size'] = human_to_bytes(mount_dict['tmpfs_size'])
except ValueError as exc:
self.fail('Failed to convert tmpfs_size of mount "{0}" to bytes: {1}'.format(target, exc))
if mount_dict.get('tmpfs_mode') is not None:
try:
mount_dict['tmpfs_mode'] = int(mount_dict['tmpfs_mode'], 8)
except Exception as dummy:
self.client.fail('tmpfs_mode of mount "{0}" is not an octal string!'.format(target))
# Fill expected mount dict
mount_expected = dict(mount)
mount_expected['tmpfs_size'] = mount_dict['tmpfs_size']
mount_expected['tmpfs_mode'] = mount_dict['tmpfs_mode']
# Add result to lists
mounts_list.append(docker_types.Mount(**mount_dict))
mounts_expected.append(omit_none_from_dict(mount_expected))
return mounts_list, mounts_expected
def _process_rate_bps(self, option):
"""
Convert device_read_bps / device_write_bps entries to the docker API format: title-cased keys, with the rate converted to bytes
"""
devices_list = []
for v in getattr(self, option):
device_dict = dict((x.title(), y) for x, y in v.items())
device_dict['Rate'] = human_to_bytes(device_dict['Rate'])
devices_list.append(device_dict)
setattr(self, option, devices_list)
def _process_rate_iops(self, option):
"""
Convert device_read_iops / device_write_iops entries to the docker API format (title-cased keys)
"""
devices_list = []
for v in getattr(self, option):
device_dict = dict((x.title(), y) for x, y in v.items())
devices_list.append(device_dict)
setattr(self, option, devices_list)
def _replace_container_names(self, mode):
"""
Parse IPC and PID modes. If they contain a container name, replace
with the container's ID.
"""
if mode is None or not mode.startswith('container:'):
return mode
container_name = mode[len('container:'):]
# Try to inspect container to see whether this is an ID or a
# name (and in the latter case, retrieve its ID)
container = self.client.get_container(container_name)
if container is None:
# If we can't find the container, issue a warning and continue with
# what the user specified.
self.client.module.warn('Cannot find a container with name or ID "{0}"'.format(container_name))
return mode
return 'container:{0}'.format(container['Id'])
def _check_mount_target_collisions(self):
last = dict()
def f(t, name):
if t in last:
if name == last[t]:
self.client.fail('The mount point "{0}" appears twice in the {1} option'.format(t, name))
else:
self.client.fail('The mount point "{0}" appears both in the {1} and {2} option'.format(t, name, last[t]))
last[t] = name
if self.expected_mounts:
for t in [m['target'] for m in self.expected_mounts]:
f(t, 'mounts')
if self.volumes:
for v in self.volumes:
vs = v.split(':')
f(vs[0 if len(vs) == 1 else 1], 'volumes')
class Container(DockerBaseClass):
def __init__(self, container, parameters):
super(Container, self).__init__()
self.raw = container
self.Id = None
self.container = container
if container:
self.Id = container['Id']
self.Image = container['Image']
self.log(self.container, pretty_print=True)
self.parameters = parameters
self.parameters.expected_links = None
self.parameters.expected_ports = None
self.parameters.expected_exposed = None
self.parameters.expected_volumes = None
self.parameters.expected_ulimits = None
self.parameters.expected_sysctls = None
self.parameters.expected_etc_hosts = None
self.parameters.expected_env = None
self.parameters_map = dict()
self.parameters_map['expected_links'] = 'links'
self.parameters_map['expected_ports'] = 'expected_ports'
self.parameters_map['expected_exposed'] = 'exposed_ports'
self.parameters_map['expected_volumes'] = 'volumes'
self.parameters_map['expected_ulimits'] = 'ulimits'
self.parameters_map['expected_sysctls'] = 'sysctls'
self.parameters_map['expected_etc_hosts'] = 'etc_hosts'
self.parameters_map['expected_env'] = 'env'
self.parameters_map['expected_entrypoint'] = 'entrypoint'
self.parameters_map['expected_binds'] = 'volumes'
self.parameters_map['expected_cmd'] = 'command'
self.parameters_map['expected_devices'] = 'devices'
self.parameters_map['expected_healthcheck'] = 'healthcheck'
self.parameters_map['expected_mounts'] = 'mounts'
def fail(self, msg):
self.parameters.client.fail(msg)
@property
def exists(self):
return bool(self.container)
@property
def running(self):
if self.container and self.container.get('State'):
if self.container['State'].get('Running') and not self.container['State'].get('Ghost', False):
return True
return False
@property
def paused(self):
if self.container and self.container.get('State'):
return self.container['State'].get('Paused', False)
return False
def _compare(self, a, b, compare):
'''
Compare values a and b as described in compare.
'''
return compare_generic(a, b, compare['comparison'], compare['type'])
def _decode_mounts(self, mounts):
if not mounts:
return mounts
result = []
empty_dict = dict()
for mount in mounts:
res = dict()
res['type'] = mount.get('Type')
res['source'] = mount.get('Source')
res['target'] = mount.get('Target')
res['read_only'] = mount.get('ReadOnly', False) # golang's omitempty for bool returns None for False
res['consistency'] = mount.get('Consistency')
res['propagation'] = mount.get('BindOptions', empty_dict).get('Propagation')
res['no_copy'] = mount.get('VolumeOptions', empty_dict).get('NoCopy', False)
res['labels'] = mount.get('VolumeOptions', empty_dict).get('Labels', empty_dict)
res['volume_driver'] = mount.get('VolumeOptions', empty_dict).get('DriverConfig', empty_dict).get('Name')
res['volume_options'] = mount.get('VolumeOptions', empty_dict).get('DriverConfig', empty_dict).get('Options', empty_dict)
res['tmpfs_size'] = mount.get('TmpfsOptions', empty_dict).get('SizeBytes')
res['tmpfs_mode'] = mount.get('TmpfsOptions', empty_dict).get('Mode')
result.append(res)
return result
def has_different_configuration(self, image):
'''
Diff parameters vs existing container config. Returns tuple: (True | False, List of differences)
'''
self.log('Starting has_different_configuration')
self.parameters.expected_entrypoint = self._get_expected_entrypoint()
self.parameters.expected_links = self._get_expected_links()
self.parameters.expected_ports = self._get_expected_ports()
self.parameters.expected_exposed = self._get_expected_exposed(image)
self.parameters.expected_volumes = self._get_expected_volumes(image)
self.parameters.expected_binds = self._get_expected_binds(image)
self.parameters.expected_ulimits = self._get_expected_ulimits(self.parameters.ulimits)
self.parameters.expected_sysctls = self._get_expected_sysctls(self.parameters.sysctls)
self.parameters.expected_etc_hosts = self._convert_simple_dict_to_list('etc_hosts')
self.parameters.expected_env = self._get_expected_env(image)
self.parameters.expected_cmd = self._get_expected_cmd()
self.parameters.expected_devices = self._get_expected_devices()
self.parameters.expected_healthcheck = self._get_expected_healthcheck()
if not self.container.get('HostConfig'):
    self.fail("has_different_configuration: Error parsing container properties. HostConfig missing.")
if not self.container.get('Config'):
    self.fail("has_different_configuration: Error parsing container properties. Config missing.")
if not self.container.get('NetworkSettings'):
    self.fail("has_different_configuration: Error parsing container properties. NetworkSettings missing.")
host_config = self.container['HostConfig']
log_config = host_config.get('LogConfig', dict())
restart_policy = host_config.get('RestartPolicy', dict())
config = self.container['Config']
network = self.container['NetworkSettings']
# The previous version of the docker module ignored the detach state by
# assuming if the container was running, it must have been detached.
detach = not (config.get('AttachStderr') and config.get('AttachStdout'))
# "ExposedPorts": null returns None type & causes AttributeError - PR #5517
if config.get('ExposedPorts') is not None:
expected_exposed = [self._normalize_port(p) for p in config.get('ExposedPorts', dict()).keys()]
else:
expected_exposed = []
# Map parameters to container inspect results
config_mapping = dict(
expected_cmd=config.get('Cmd'),
domainname=config.get('Domainname'),
hostname=config.get('Hostname'),
user=config.get('User'),
detach=detach,
init=host_config.get('Init'),
interactive=config.get('OpenStdin'),
capabilities=host_config.get('CapAdd'),
cap_drop=host_config.get('CapDrop'),
expected_devices=host_config.get('Devices'),
dns_servers=host_config.get('Dns'),
dns_opts=host_config.get('DnsOptions'),
dns_search_domains=host_config.get('DnsSearch'),
expected_env=(config.get('Env') or []),
expected_entrypoint=config.get('Entrypoint'),
expected_etc_hosts=host_config.get('ExtraHosts'),
expected_exposed=expected_exposed,
groups=host_config.get('GroupAdd'),
ipc_mode=host_config.get("IpcMode"),
labels=config.get('Labels'),
expected_links=host_config.get('Links'),
mac_address=network.get('MacAddress'),
memory_swappiness=host_config.get('MemorySwappiness'),
network_mode=host_config.get('NetworkMode'),
userns_mode=host_config.get('UsernsMode'),
oom_killer=host_config.get('OomKillDisable'),
oom_score_adj=host_config.get('OomScoreAdj'),
pid_mode=host_config.get('PidMode'),
privileged=host_config.get('Privileged'),
expected_ports=host_config.get('PortBindings'),
read_only=host_config.get('ReadonlyRootfs'),
restart_policy=restart_policy.get('Name'),
runtime=host_config.get('Runtime'),
shm_size=host_config.get('ShmSize'),
security_opts=host_config.get("SecurityOpt"),
stop_signal=config.get("StopSignal"),
tmpfs=host_config.get('Tmpfs'),
tty=config.get('Tty'),
expected_ulimits=host_config.get('Ulimits'),
expected_sysctls=host_config.get('Sysctls'),
uts=host_config.get('UTSMode'),
expected_volumes=config.get('Volumes'),
expected_binds=host_config.get('Binds'),
volume_driver=host_config.get('VolumeDriver'),
volumes_from=host_config.get('VolumesFrom'),
working_dir=config.get('WorkingDir'),
publish_all_ports=host_config.get('PublishAllPorts'),
expected_healthcheck=config.get('Healthcheck'),
disable_healthcheck=(not config.get('Healthcheck') or config.get('Healthcheck').get('Test') == ['NONE']),
device_read_bps=host_config.get('BlkioDeviceReadBps'),
device_write_bps=host_config.get('BlkioDeviceWriteBps'),
device_read_iops=host_config.get('BlkioDeviceReadIOps'),
device_write_iops=host_config.get('BlkioDeviceWriteIOps'),
pids_limit=host_config.get('PidsLimit'),
# According to https://github.com/moby/moby/, support for HostConfig.Mounts
# has been included at least since v17.03.0-ce, which has API version 1.26.
# The previous tag, v1.9.1, has API version 1.21 and does not have
# HostConfig.Mounts. Whether API version 1.25 already supports it is unclear.
expected_mounts=self._decode_mounts(host_config.get('Mounts')),
)
# Options which don't make sense without their accompanying option
if self.parameters.restart_policy:
config_mapping['restart_retries'] = restart_policy.get('MaximumRetryCount')
if self.parameters.log_driver:
config_mapping['log_driver'] = log_config.get('Type')
config_mapping['log_options'] = log_config.get('Config')
if self.parameters.client.option_minimal_versions['auto_remove']['supported']:
# auto_remove is only supported in Docker SDK for Python >= 2.0.0; unfortunately
# it has a default value, that's why we have to jump through the hoops here
config_mapping['auto_remove'] = host_config.get('AutoRemove')
if self.parameters.client.option_minimal_versions['stop_timeout']['supported']:
# stop_timeout is only supported in Docker SDK for Python >= 2.1. Note that
# stop_timeout has a hybrid role, in that it used to be something only used
# for stopping containers, and is now also used as a container property.
# That's why it needs special handling here.
config_mapping['stop_timeout'] = config.get('StopTimeout')
if self.parameters.client.docker_api_version < LooseVersion('1.22'):
# For docker API < 1.22, update_container() is not supported. Thus
# we need to handle all limits which are usually handled by
# update_container() as configuration changes which require a container
# restart.
config_mapping.update(dict(
blkio_weight=host_config.get('BlkioWeight'),
cpu_period=host_config.get('CpuPeriod'),
cpu_quota=host_config.get('CpuQuota'),
cpu_shares=host_config.get('CpuShares'),
cpuset_cpus=host_config.get('CpusetCpus'),
cpuset_mems=host_config.get('CpusetMems'),
kernel_memory=host_config.get("KernelMemory"),
memory=host_config.get('Memory'),
memory_reservation=host_config.get('MemoryReservation'),
memory_swap=host_config.get('MemorySwap'),
))
differences = DifferenceTracker()
for key, value in config_mapping.items():
minimal_version = self.parameters.client.option_minimal_versions.get(key, {})
if not minimal_version.get('supported', True):
continue
compare = self.parameters.client.comparisons[self.parameters_map.get(key, key)]
self.log('check differences %s %s vs %s (%s)' % (key, getattr(self.parameters, key), str(value), compare))
if getattr(self.parameters, key, None) is not None:
match = self._compare(getattr(self.parameters, key), value, compare)
if not match:
# no match. record the differences
p = getattr(self.parameters, key)
c = value
if compare['type'] == 'set':
# Since the order does not matter, sort so that the diff output is better.
if p is not None:
p = sorted(p)
if c is not None:
c = sorted(c)
elif compare['type'] == 'set(dict)':
# Since the order does not matter, sort so that the diff output is better.
if key == 'expected_mounts':
# For selected values, use one entry as key
def sort_key_fn(x):
return x['target']
else:
# We sort the list of dictionaries by using the sorted items of a dict as its key.
def sort_key_fn(x):
return sorted((a, str(b)) for a, b in x.items())
if p is not None:
p = sorted(p, key=sort_key_fn)
if c is not None:
c = sorted(c, key=sort_key_fn)
differences.add(key, parameter=p, active=c)
has_differences = not differences.empty
return has_differences, differences
def has_different_resource_limits(self):
'''
Diff parameters and container resource limits
'''
if not self.container.get('HostConfig'):
self.fail("has_different_resource_limits: Error parsing container properties. HostConfig missing.")
if self.parameters.client.docker_api_version < LooseVersion('1.22'):
# update_container() call not supported
return False, []
host_config = self.container['HostConfig']
config_mapping = dict(
blkio_weight=host_config.get('BlkioWeight'),
cpu_period=host_config.get('CpuPeriod'),
cpu_quota=host_config.get('CpuQuota'),
cpu_shares=host_config.get('CpuShares'),
cpuset_cpus=host_config.get('CpusetCpus'),
cpuset_mems=host_config.get('CpusetMems'),
kernel_memory=host_config.get("KernelMemory"),
memory=host_config.get('Memory'),
memory_reservation=host_config.get('MemoryReservation'),
memory_swap=host_config.get('MemorySwap'),
)
differences = DifferenceTracker()
for key, value in config_mapping.items():
if getattr(self.parameters, key, None) is not None:
compare = self.parameters.client.comparisons[self.parameters_map.get(key, key)]
match = self._compare(getattr(self.parameters, key), value, compare)
if not match:
# no match. record the differences
differences.add(key, parameter=getattr(self.parameters, key), active=value)
different = not differences.empty
return different, differences
def has_network_differences(self):
'''
Check if the container is connected to requested networks with expected options: links, aliases, ipv4, ipv6
'''
different = False
differences = []
if not self.parameters.networks:
return different, differences
if not self.container.get('NetworkSettings'):
self.fail("has_network_differences: Error parsing container properties. NetworkSettings missing.")
connected_networks = self.container['NetworkSettings']['Networks']
for network in self.parameters.networks:
if connected_networks.get(network['name'], None) is None:
different = True
differences.append(dict(
parameter=network,
container=None
))
else:
diff = False
if network.get('ipv4_address') and network['ipv4_address'] != connected_networks[network['name']].get('IPAddress'):
diff = True
if network.get('ipv6_address') and network['ipv6_address'] != connected_networks[network['name']].get('GlobalIPv6Address'):
diff = True
if network.get('aliases'):
if not compare_generic(network['aliases'], connected_networks[network['name']].get('Aliases'), 'allow_more_present', 'set'):
diff = True
if network.get('links'):
expected_links = []
for link, alias in network['links']:
expected_links.append("%s:%s" % (link, alias))
if not compare_generic(expected_links, connected_networks[network['name']].get('Links'), 'allow_more_present', 'set'):
diff = True
if diff:
different = True
differences.append(dict(
parameter=network,
container=dict(
name=network['name'],
ipv4_address=connected_networks[network['name']].get('IPAddress'),
ipv6_address=connected_networks[network['name']].get('GlobalIPv6Address'),
aliases=connected_networks[network['name']].get('Aliases'),
links=connected_networks[network['name']].get('Links')
)
))
return different, differences
def has_extra_networks(self):
'''
Check if the container is connected to non-requested networks
'''
extra_networks = []
extra = False
if not self.container.get('NetworkSettings'):
self.fail("has_extra_networks: Error parsing container properties. NetworkSettings missing.")
connected_networks = self.container['NetworkSettings'].get('Networks')
if connected_networks:
for network, network_config in connected_networks.items():
keep = False
if self.parameters.networks:
for expected_network in self.parameters.networks:
if expected_network['name'] == network:
keep = True
if not keep:
extra = True
extra_networks.append(dict(name=network, id=network_config['NetworkID']))
return extra, extra_networks
def _get_expected_devices(self):
if not self.parameters.devices:
return None
expected_devices = []
for device in self.parameters.devices:
parts = device.split(':')
if len(parts) == 1:
expected_devices.append(
dict(
CgroupPermissions='rwm',
PathInContainer=parts[0],
PathOnHost=parts[0]
))
elif len(parts) == 2:
expected_devices.append(
dict(
CgroupPermissions='rwm',
PathInContainer=parts[1],
PathOnHost=parts[0]
)
)
else:
expected_devices.append(
dict(
CgroupPermissions=parts[2],
PathInContainer=parts[1],
PathOnHost=parts[0]
))
return expected_devices
def _get_expected_entrypoint(self):
if not self.parameters.entrypoint:
return None
return shlex.split(self.parameters.entrypoint)
def _get_expected_ports(self):
if not self.parameters.published_ports:
return None
expected_bound_ports = {}
for container_port, config in self.parameters.published_ports.items():
if isinstance(container_port, int):
container_port = "%s/tcp" % container_port
if len(config) == 1:
if isinstance(config[0], int):
expected_bound_ports[container_port] = [{'HostIp': "0.0.0.0", 'HostPort': config[0]}]
else:
expected_bound_ports[container_port] = [{'HostIp': config[0], 'HostPort': ""}]
elif isinstance(config[0], tuple):
expected_bound_ports[container_port] = []
for host_ip, host_port in config:
expected_bound_ports[container_port].append({'HostIp': host_ip, 'HostPort': str(host_port)})
else:
expected_bound_ports[container_port] = [{'HostIp': config[0], 'HostPort': str(config[1])}]
return expected_bound_ports
def _get_expected_links(self):
if self.parameters.links is None:
return None
self.log('parameter links:')
self.log(self.parameters.links, pretty_print=True)
exp_links = []
for link, alias in self.parameters.links:
exp_links.append("/%s:%s/%s" % (link, ('/' + self.parameters.name), alias))
return exp_links
def _get_expected_binds(self, image):
self.log('_get_expected_binds')
image_vols = []
if image:
image_vols = self._get_image_binds(image[self.parameters.client.image_inspect_source].get('Volumes'))
param_vols = []
if self.parameters.volumes:
for vol in self.parameters.volumes:
host = None
if ':' in vol:
    parts = vol.split(':')
    if len(parts) == 3:
        host, container, mode = parts
        if not is_volume_permissions(mode):
            self.fail('Found invalid volumes mode: {0}'.format(mode))
    elif len(parts) == 2:
        if not is_volume_permissions(parts[1]):
            host, container, mode = parts + ['rw']
if host:
param_vols.append("%s:%s:%s" % (host, container, mode))
result = list(set(image_vols + param_vols))
self.log("expected_binds:")
self.log(result, pretty_print=True)
return result
def _get_image_binds(self, volumes):
'''
Convert array of binds to array of strings with format host_path:container_path:mode
:param volumes: array of bind dicts
:return: array of strings
'''
results = []
if isinstance(volumes, dict):
results += self._get_bind_from_dict(volumes)
elif isinstance(volumes, list):
for vol in volumes:
results += self._get_bind_from_dict(vol)
return results
@staticmethod
def _get_bind_from_dict(volume_dict):
results = []
if volume_dict:
for host_path, config in volume_dict.items():
if isinstance(config, dict) and config.get('bind'):
container_path = config.get('bind')
mode = config.get('mode', 'rw')
results.append("%s:%s:%s" % (host_path, container_path, mode))
return results
def _get_expected_volumes(self, image):
self.log('_get_expected_volumes')
expected_vols = dict()
if image and image[self.parameters.client.image_inspect_source].get('Volumes'):
expected_vols.update(image[self.parameters.client.image_inspect_source].get('Volumes'))
if self.parameters.volumes:
for vol in self.parameters.volumes:
container = None
if ':' in vol:
    parts = vol.split(':')
    if len(parts) == 3:
        dummy, container, mode = parts
        if not is_volume_permissions(mode):
            self.fail('Found invalid volumes mode: {0}'.format(mode))
    elif len(parts) == 2:
        if not is_volume_permissions(parts[1]):
            dummy, container, mode = parts + ['rw']
new_vol = dict()
if container:
new_vol[container] = dict()
else:
new_vol[vol] = dict()
expected_vols.update(new_vol)
if not expected_vols:
expected_vols = None
self.log("expected_volumes:")
self.log(expected_vols, pretty_print=True)
return expected_vols
def _get_expected_env(self, image):
self.log('_get_expected_env')
expected_env = dict()
if image and image[self.parameters.client.image_inspect_source].get('Env'):
for env_var in image[self.parameters.client.image_inspect_source]['Env']:
parts = env_var.split('=', 1)
expected_env[parts[0]] = parts[1]
if self.parameters.env:
expected_env.update(self.parameters.env)
param_env = []
for key, value in expected_env.items():
param_env.append("%s=%s" % (key, value))
return param_env
def _get_expected_exposed(self, image):
self.log('_get_expected_exposed')
image_ports = []
if image:
image_exposed_ports = image[self.parameters.client.image_inspect_source].get('ExposedPorts') or {}
image_ports = [self._normalize_port(p) for p in image_exposed_ports.keys()]
param_ports = []
if self.parameters.ports:
param_ports = [str(p[0]) + '/' + p[1] for p in self.parameters.ports]
result = list(set(image_ports + param_ports))
self.log(result, pretty_print=True)
return result
def _get_expected_ulimits(self, config_ulimits):
self.log('_get_expected_ulimits')
if config_ulimits is None:
return None
results = []
for limit in config_ulimits:
results.append(dict(
Name=limit.name,
Soft=limit.soft,
Hard=limit.hard
))
return results
def _get_expected_sysctls(self, config_sysctls):
self.log('_get_expected_sysctls')
if config_sysctls is None:
return None
result = dict()
for key, value in config_sysctls.items():
result[key] = str(value)
return result
def _get_expected_cmd(self):
self.log('_get_expected_cmd')
if not self.parameters.command:
return None
return shlex.split(self.parameters.command)
def _convert_simple_dict_to_list(self, param_name, join_with=':'):
if getattr(self.parameters, param_name, None) is None:
return None
results = []
for key, value in getattr(self.parameters, param_name).items():
results.append("%s%s%s" % (key, join_with, value))
return results
def _normalize_port(self, port):
if '/' not in port:
return port + '/tcp'
return port
def _get_expected_healthcheck(self):
self.log('_get_expected_healthcheck')
expected_healthcheck = dict()
if self.parameters.healthcheck:
expected_healthcheck.update([(k.title().replace("_", ""), v)
for k, v in self.parameters.healthcheck.items()])
return expected_healthcheck
class ContainerManager(DockerBaseClass):
'''
Perform container management tasks
'''
def __init__(self, client):
super(ContainerManager, self).__init__()
if client.module.params.get('log_options') and not client.module.params.get('log_driver'):
client.module.warn('log_options is ignored when log_driver is not specified')
if client.module.params.get('healthcheck') and not client.module.params.get('healthcheck').get('test'):
client.module.warn('healthcheck is ignored when test is not specified')
if client.module.params.get('restart_retries') is not None and not client.module.params.get('restart_policy'):
client.module.warn('restart_retries is ignored when restart_policy is not specified')
self.client = client
self.parameters = TaskParameters(client)
self.check_mode = self.client.check_mode
self.results = {'changed': False, 'actions': []}
self.diff = {}
self.diff_tracker = DifferenceTracker()
self.facts = {}
state = self.parameters.state
if state in ('stopped', 'started', 'present'):
self.present(state)
elif state == 'absent':
self.absent()
if not self.check_mode and not self.parameters.debug:
self.results.pop('actions')
if self.client.module._diff or self.parameters.debug:
self.diff['before'], self.diff['after'] = self.diff_tracker.get_before_after()
self.results['diff'] = self.diff
if self.facts:
self.results['ansible_facts'] = {'docker_container': self.facts}
self.results['container'] = self.facts
def present(self, state):
container = self._get_container(self.parameters.name)
was_running = container.running
was_paused = container.paused
container_created = False
# If the image parameter was passed then we need to deal with the image
# version comparison. Otherwise we handle this depending on whether
# the container already runs or not; in the former case, in case the
# container needs to be restarted, we use the existing container's
# image ID.
image = self._get_image()
self.log(image, pretty_print=True)
if not container.exists:
# New container
self.log('No container found')
if not self.parameters.image:
self.fail('Cannot create container when image is not specified!')
self.diff_tracker.add('exists', parameter=True, active=False)
new_container = self.container_create(self.parameters.image, self.parameters.create_parameters)
if new_container:
container = new_container
container_created = True
else:
# Existing container
different, differences = container.has_different_configuration(image)
image_different = False
if self.parameters.comparisons['image']['comparison'] == 'strict':
image_different = self._image_is_different(image, container)
if image_different or different or self.parameters.recreate:
self.diff_tracker.merge(differences)
self.diff['differences'] = differences.get_legacy_docker_container_diffs()
if image_different:
self.diff['image_different'] = True
self.log("differences")
self.log(differences.get_legacy_docker_container_diffs(), pretty_print=True)
image_to_use = self.parameters.image
if not image_to_use and container and container.Image:
image_to_use = container.Image
if not image_to_use:
self.fail('Cannot recreate container when image is not specified or cannot be extracted from current container!')
if container.running:
self.container_stop(container.Id)
self.container_remove(container.Id)
new_container = self.container_create(image_to_use, self.parameters.create_parameters)
if new_container:
container = new_container
container_created = True
if container and container.exists:
container = self.update_limits(container)
container = self.update_networks(container, container_created)
if state == 'started' and not container.running:
self.diff_tracker.add('running', parameter=True, active=was_running)
container = self.container_start(container.Id)
elif state == 'started' and self.parameters.restart:
self.diff_tracker.add('running', parameter=True, active=was_running)
self.diff_tracker.add('restarted', parameter=True, active=False)
container = self.container_restart(container.Id)
elif state == 'stopped' and container.running:
self.diff_tracker.add('running', parameter=False, active=was_running)
self.container_stop(container.Id)
container = self._get_container(container.Id)
if state == 'started' and container.paused != self.parameters.paused:
self.diff_tracker.add('paused', parameter=self.parameters.paused, active=was_paused)
if not self.check_mode:
try:
if self.parameters.paused:
self.client.pause(container=container.Id)
else:
self.client.unpause(container=container.Id)
except Exception as exc:
self.fail("Error %s container %s: %s" % (
"pausing" if self.parameters.paused else "unpausing", container.Id, str(exc)
))
container = self._get_container(container.Id)
self.results['changed'] = True
self.results['actions'].append(dict(set_paused=self.parameters.paused))
self.facts = container.raw
def absent(self):
container = self._get_container(self.parameters.name)
if container.exists:
if container.running:
self.diff_tracker.add('running', parameter=False, active=True)
self.container_stop(container.Id)
self.diff_tracker.add('exists', parameter=False, active=True)
self.container_remove(container.Id)
def fail(self, msg, **kwargs):
self.client.fail(msg, **kwargs)
def _output_logs(self, msg):
self.client.module.log(msg=msg)
def _get_container(self, container):
'''
Expects container ID or Name. Returns a container object
'''
return Container(self.client.get_container(container), self.parameters)
def _get_image(self):
if not self.parameters.image:
self.log('No image specified')
return None
if is_image_name_id(self.parameters.image):
image = self.client.find_image_by_id(self.parameters.image)
else:
repository, tag = utils.parse_repository_tag(self.parameters.image)
if not tag:
tag = "latest"
image = self.client.find_image(repository, tag)
if not self.check_mode:
if not image or self.parameters.pull:
self.log("Pull the image.")
image, alreadyToLatest = self.client.pull_image(repository, tag)
if alreadyToLatest:
self.results['changed'] = False
else:
self.results['changed'] = True
self.results['actions'].append(dict(pulled_image="%s:%s" % (repository, tag)))
self.log("image")
self.log(image, pretty_print=True)
return image
def _image_is_different(self, image, container):
if image and image.get('Id'):
if container and container.Image:
if image.get('Id') != container.Image:
self.diff_tracker.add('image', parameter=image.get('Id'), active=container.Image)
return True
return False
def update_limits(self, container):
limits_differ, different_limits = container.has_different_resource_limits()
if limits_differ:
self.log("limit differences:")
self.log(different_limits.get_legacy_docker_container_diffs(), pretty_print=True)
self.diff_tracker.merge(different_limits)
if limits_differ and not self.check_mode:
self.container_update(container.Id, self.parameters.update_parameters)
return self._get_container(container.Id)
return container
def update_networks(self, container, container_created):
updated_container = container
if self.parameters.comparisons['networks']['comparison'] != 'ignore' or container_created:
has_network_differences, network_differences = container.has_network_differences()
if has_network_differences:
if self.diff.get('differences'):
self.diff['differences'].append(dict(network_differences=network_differences))
else:
self.diff['differences'] = [dict(network_differences=network_differences)]
for netdiff in network_differences:
self.diff_tracker.add(
'network.{0}'.format(netdiff['parameter']['name']),
parameter=netdiff['parameter'],
active=netdiff['container']
)
self.results['changed'] = True
updated_container = self._add_networks(container, network_differences)
if (self.parameters.comparisons['networks']['comparison'] == 'strict' and self.parameters.networks is not None) or self.parameters.purge_networks:
has_extra_networks, extra_networks = container.has_extra_networks()
if has_extra_networks:
if self.diff.get('differences'):
self.diff['differences'].append(dict(purge_networks=extra_networks))
else:
self.diff['differences'] = [dict(purge_networks=extra_networks)]
for extra_network in extra_networks:
self.diff_tracker.add(
'network.{0}'.format(extra_network['name']),
active=extra_network
)
self.results['changed'] = True
updated_container = self._purge_networks(container, extra_networks)
return updated_container
def _add_networks(self, container, differences):
for diff in differences:
# remove the container from the network, if connected
if diff.get('container'):
self.results['actions'].append(dict(removed_from_network=diff['parameter']['name']))
if not self.check_mode:
try:
self.client.disconnect_container_from_network(container.Id, diff['parameter']['id'])
except Exception as exc:
self.fail("Error disconnecting container from network %s - %s" % (diff['parameter']['name'],
str(exc)))
# connect to the network
params = dict()
for para in ('ipv4_address', 'ipv6_address', 'links', 'aliases'):
if diff['parameter'].get(para):
params[para] = diff['parameter'][para]
self.results['actions'].append(dict(added_to_network=diff['parameter']['name'], network_parameters=params))
if not self.check_mode:
try:
self.log("Connecting container to network %s" % diff['parameter']['id'])
self.log(params, pretty_print=True)
self.client.connect_container_to_network(container.Id, diff['parameter']['id'], **params)
except Exception as exc:
self.fail("Error connecting container to network %s - %s" % (diff['parameter']['name'], str(exc)))
return self._get_container(container.Id)
def _purge_networks(self, container, networks):
for network in networks:
self.results['actions'].append(dict(removed_from_network=network['name']))
if not self.check_mode:
try:
self.client.disconnect_container_from_network(container.Id, network['name'])
except Exception as exc:
self.fail("Error disconnecting container from network %s - %s" % (network['name'],
str(exc)))
return self._get_container(container.Id)
def container_create(self, image, create_parameters):
self.log("create container")
self.log("image: %s parameters:" % image)
self.log(create_parameters, pretty_print=True)
self.results['actions'].append(dict(created="Created container", create_parameters=create_parameters))
self.results['changed'] = True
new_container = None
if not self.check_mode:
try:
new_container = self.client.create_container(image, **create_parameters)
self.client.report_warnings(new_container)
except Exception as exc:
self.fail("Error creating container: %s" % str(exc))
return self._get_container(new_container['Id'])
return new_container
def container_start(self, container_id):
self.log("start container %s" % (container_id))
self.results['actions'].append(dict(started=container_id))
self.results['changed'] = True
if not self.check_mode:
try:
self.client.start(container=container_id)
except Exception as exc:
self.fail("Error starting container %s: %s" % (container_id, str(exc)))
if not self.parameters.detach:
if self.client.docker_py_version >= LooseVersion('3.0'):
status = self.client.wait(container_id)['StatusCode']
else:
status = self.client.wait(container_id)
if self.parameters.auto_remove:
output = "Cannot retrieve result as auto_remove is enabled"
if self.parameters.output_logs:
self.client.module.warn('Cannot output_logs if auto_remove is enabled!')
else:
config = self.client.inspect_container(container_id)
logging_driver = config['HostConfig']['LogConfig']['Type']
if logging_driver in ('json-file', 'journald'):
output = self.client.logs(container_id, stdout=True, stderr=True, stream=False, timestamps=False)
if self.parameters.output_logs:
self._output_logs(msg=output)
else:
output = "Result logged using `%s` driver" % logging_driver
if status != 0:
self.fail(output, status=status)
if self.parameters.cleanup:
self.container_remove(container_id, force=True)
insp = self._get_container(container_id)
if insp.raw:
insp.raw['Output'] = output
else:
insp.raw = dict(Output=output)
return insp
return self._get_container(container_id)
def container_remove(self, container_id, link=False, force=False):
volume_state = (not self.parameters.keep_volumes)
self.log("remove container container:%s v:%s link:%s force:%s" % (container_id, volume_state, link, force))
self.results['actions'].append(dict(removed=container_id, volume_state=volume_state, link=link, force=force))
self.results['changed'] = True
response = None
if not self.check_mode:
count = 0
while True:
try:
response = self.client.remove_container(container_id, v=volume_state, link=link, force=force)
except NotFound as dummy:
pass
except APIError as exc:
if 'Unpause the container before stopping or killing' in exc.explanation:
# New docker daemon versions do not allow containers to be removed
# if they are paused. Make sure we don't end up in an infinite loop.
if count == 3:
self.fail("Error removing container %s (tried to unpause three times): %s" % (container_id, str(exc)))
count += 1
# Unpause
try:
self.client.unpause(container=container_id)
except Exception as exc2:
self.fail("Error unpausing container %s for removal: %s" % (container_id, str(exc2)))
# Now try again
continue
if 'removal of container ' in exc.explanation and ' is already in progress' in exc.explanation:
pass
else:
self.fail("Error removing container %s: %s" % (container_id, str(exc)))
except Exception as exc:
self.fail("Error removing container %s: %s" % (container_id, str(exc)))
# We only loop when explicitly requested by 'continue'
break
return response
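The remove path above retries up to three times, unpausing the container before each retry. That bounded retry-with-remediation pattern can be sketched on its own; the names below are illustrative and not part of the module:

```python
def retry_with_remediation(action, remediate, is_retryable, max_remediations=3):
    """Run `action`; on a retryable failure run `remediate` and retry,
    giving up after `max_remediations` remediation attempts."""
    count = 0
    while True:
        try:
            return action()
        except Exception as exc:
            if not is_retryable(exc):
                raise
            if count == max_remediations:
                raise RuntimeError("gave up after %d remediations: %s" % (count, exc))
            count += 1
            remediate()
```

The `continue`/`break` loop in `container_remove` is this pattern inlined, with "unpause" as the remediation step.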
def container_update(self, container_id, update_parameters):
if update_parameters:
self.log("update container %s" % (container_id))
self.log(update_parameters, pretty_print=True)
self.results['actions'].append(dict(updated=container_id, update_parameters=update_parameters))
self.results['changed'] = True
if not self.check_mode and callable(getattr(self.client, 'update_container')):
try:
result = self.client.update_container(container_id, **update_parameters)
self.client.report_warnings(result)
except Exception as exc:
self.fail("Error updating container %s: %s" % (container_id, str(exc)))
return self._get_container(container_id)
def container_kill(self, container_id):
self.results['actions'].append(dict(killed=container_id, signal=self.parameters.kill_signal))
self.results['changed'] = True
response = None
if not self.check_mode:
try:
if self.parameters.kill_signal:
response = self.client.kill(container_id, signal=self.parameters.kill_signal)
else:
response = self.client.kill(container_id)
except Exception as exc:
self.fail("Error killing container %s: %s" % (container_id, exc))
return response
def container_restart(self, container_id):
self.results['actions'].append(dict(restarted=container_id, timeout=self.parameters.stop_timeout))
self.results['changed'] = True
if not self.check_mode:
try:
if self.parameters.stop_timeout:
dummy = self.client.restart(container_id, timeout=self.parameters.stop_timeout)
else:
dummy = self.client.restart(container_id)
except Exception as exc:
self.fail("Error restarting container %s: %s" % (container_id, str(exc)))
return self._get_container(container_id)
def container_stop(self, container_id):
if self.parameters.force_kill:
self.container_kill(container_id)
return
self.results['actions'].append(dict(stopped=container_id, timeout=self.parameters.stop_timeout))
self.results['changed'] = True
response = None
if not self.check_mode:
count = 0
while True:
try:
if self.parameters.stop_timeout:
response = self.client.stop(container_id, timeout=self.parameters.stop_timeout)
else:
response = self.client.stop(container_id)
except APIError as exc:
if 'Unpause the container before stopping or killing' in exc.explanation:
# New docker daemon versions do not allow containers to be stopped
# if they are paused. Make sure we don't end up in an infinite loop.
if count == 3:
self.fail("Error stopping container %s (tried to unpause three times): %s" % (container_id, str(exc)))
count += 1
# Unpause
try:
self.client.unpause(container=container_id)
except Exception as exc2:
self.fail("Error unpausing container %s for stopping: %s" % (container_id, str(exc2)))
# Now try again
continue
self.fail("Error stopping container %s: %s" % (container_id, str(exc)))
except Exception as exc:
self.fail("Error stopping container %s: %s" % (container_id, str(exc)))
# We only loop when explicitly requested by 'continue'
break
return response
def detect_ipvX_address_usage(client):
'''
Helper function to detect whether any specified network uses ipv4_address or ipv6_address
'''
for network in client.module.params.get("networks") or []:
if network.get('ipv4_address') is not None or network.get('ipv6_address') is not None:
return True
return False
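The helper above can be exercised in isolation with a minimal stand-in for the client object; the `Fake*` classes below are hypothetical test scaffolding, not part of the module:

```python
def detect_ipvX_address_usage(client):
    # Same logic as the helper above, copied so this sketch is self-contained.
    for network in client.module.params.get("networks") or []:
        if network.get('ipv4_address') is not None or network.get('ipv6_address') is not None:
            return True
    return False

class FakeModule(object):
    def __init__(self, params):
        self.params = params

class FakeClient(object):
    def __init__(self, params):
        self.module = FakeModule(params)

# No networks, or networks without static addresses: nothing detected.
assert detect_ipvX_address_usage(FakeClient({"networks": None})) is False
assert detect_ipvX_address_usage(FakeClient({"networks": [{"name": "foonet"}]})) is False
# An explicit ipv4_address triggers the minimal-version requirement.
assert detect_ipvX_address_usage(
    FakeClient({"networks": [{"name": "foonet", "ipv4_address": "172.16.44.11"}]})
) is True
```

This is the `detect_usage` hook wired into `ipvX_address_supported` in `option_minimal_versions` below.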
class AnsibleDockerClientContainer(AnsibleDockerClient):
# A list of module options which are not docker container properties
__NON_CONTAINER_PROPERTY_OPTIONS = tuple([
'env_file', 'force_kill', 'keep_volumes', 'ignore_image', 'name', 'pull', 'purge_networks',
'recreate', 'restart', 'state', 'trust_image_content', 'networks', 'cleanup', 'kill_signal',
'output_logs', 'paused'
] + list(DOCKER_COMMON_ARGS.keys()))
def _parse_comparisons(self):
comparisons = {}
comp_aliases = {}
# Put in defaults
explicit_types = dict(
command='list',
devices='set(dict)',
dns_search_domains='list',
dns_servers='list',
env='set',
entrypoint='list',
etc_hosts='set',
mounts='set(dict)',
networks='set(dict)',
ulimits='set(dict)',
device_read_bps='set(dict)',
device_write_bps='set(dict)',
device_read_iops='set(dict)',
device_write_iops='set(dict)',
)
all_options = set() # this is for improving user feedback when a wrong option was specified for comparison
default_values = dict(
stop_timeout='ignore',
)
for option, data in self.module.argument_spec.items():
all_options.add(option)
for alias in data.get('aliases', []):
all_options.add(alias)
# Ignore options which aren't used as container properties
if option in self.__NON_CONTAINER_PROPERTY_OPTIONS and option != 'networks':
continue
# Determine option type
if option in explicit_types:
datatype = explicit_types[option]
elif data['type'] == 'list':
datatype = 'set'
elif data['type'] == 'dict':
datatype = 'dict'
else:
datatype = 'value'
# Determine comparison type
if option in default_values:
comparison = default_values[option]
elif datatype in ('list', 'value'):
comparison = 'strict'
else:
comparison = 'allow_more_present'
comparisons[option] = dict(type=datatype, comparison=comparison, name=option)
# Keep track of aliases
comp_aliases[option] = option
for alias in data.get('aliases', []):
comp_aliases[alias] = option
# Process legacy ignore options
if self.module.params['ignore_image']:
comparisons['image']['comparison'] = 'ignore'
if self.module.params['purge_networks']:
comparisons['networks']['comparison'] = 'strict'
# Process options
if self.module.params.get('comparisons'):
# If '*' appears in comparisons, process it first
if '*' in self.module.params['comparisons']:
value = self.module.params['comparisons']['*']
if value not in ('strict', 'ignore'):
self.fail("The wildcard can only be used with comparison modes 'strict' and 'ignore'!")
for option, v in comparisons.items():
if option == 'networks':
# `networks` is special: only update if
# some value is actually specified
if self.module.params['networks'] is None:
continue
v['comparison'] = value
# Now process all other comparisons.
comp_aliases_used = {}
for key, value in self.module.params['comparisons'].items():
if key == '*':
continue
# Find main key
key_main = comp_aliases.get(key)
if key_main is None:
if key in all_options:
self.fail("The module option '%s' cannot be specified in the comparisons dict, "
"since it does not correspond to the container's state!" % key)
self.fail("Unknown module option '%s' in comparisons dict!" % key)
if key_main in comp_aliases_used:
self.fail("Both '%s' and '%s' (aliases of %s) are specified in comparisons dict!" % (key, comp_aliases_used[key_main], key_main))
comp_aliases_used[key_main] = key
# Check value and update accordingly
if value in ('strict', 'ignore'):
comparisons[key_main]['comparison'] = value
elif value == 'allow_more_present':
if comparisons[key_main]['type'] == 'value':
self.fail("Option '%s' is a value and not a set/list/dict, so its comparison cannot be %s" % (key, value))
comparisons[key_main]['comparison'] = value
else:
self.fail("Unknown comparison mode '%s'!" % value)
# Add implicit options
comparisons['publish_all_ports'] = dict(type='value', comparison='strict', name='published_ports')
comparisons['expected_ports'] = dict(type='dict', comparison=comparisons['published_ports']['comparison'], name='expected_ports')
comparisons['disable_healthcheck'] = dict(type='value',
comparison='ignore' if comparisons['healthcheck']['comparison'] == 'ignore' else 'strict',
name='disable_healthcheck')
# Check legacy values
if self.module.params['ignore_image'] and comparisons['image']['comparison'] != 'ignore':
self.module.warn('The ignore_image option has been overridden by the comparisons option!')
if self.module.params['purge_networks'] and comparisons['networks']['comparison'] != 'strict':
self.module.warn('The purge_networks option has been overridden by the comparisons option!')
self.comparisons = comparisons
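The precedence implemented above (built-in defaults, then the `'*'` wildcard, then explicit per-option entries) can be sketched in isolation. The function below is illustrative only; it skips the alias handling and type checking of the real method:

```python
def resolve_comparisons(defaults, user_comparisons):
    """Apply the '*' wildcard first, then explicit per-option entries.

    `defaults` maps option name -> default comparison mode.
    """
    result = dict(defaults)
    wildcard = user_comparisons.get('*')
    if wildcard is not None:
        if wildcard not in ('strict', 'ignore'):
            raise ValueError("wildcard only supports 'strict' and 'ignore'")
        for option in result:
            result[option] = wildcard
    for option, mode in user_comparisons.items():
        if option != '*':
            result[option] = mode
    return result
```

With this precedence, `comparisons: {'*': ignore, env: strict}` ignores everything except `env`, which mirrors how the module lets users opt individual properties back in.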
def _get_additional_minimal_versions(self):
stop_timeout_supported = self.docker_api_version >= LooseVersion('1.25')
stop_timeout_needed_for_update = self.module.params.get("stop_timeout") is not None and self.module.params.get('state') != 'absent'
if stop_timeout_supported:
stop_timeout_supported = self.docker_py_version >= LooseVersion('2.1')
if stop_timeout_needed_for_update and not stop_timeout_supported:
# We warn (instead of fail) since in older versions, stop_timeout was not used
# to update the container's configuration, but only when stopping a container.
self.module.warn("Docker SDK for Python's version is %s. Minimum version required is 2.1 to update "
"the container's stop_timeout configuration. "
"If you use the 'docker-py' module, you have to switch to the 'docker' Python package." % (docker_version,))
else:
if stop_timeout_needed_for_update and not stop_timeout_supported:
# We warn (instead of fail) since in older versions, stop_timeout was not used
# to update the container's configuration, but only when stopping a container.
self.module.warn("Docker API version is %s. Minimum version required is 1.25 to set or "
"update the container's stop_timeout configuration." % (self.docker_api_version_str,))
self.option_minimal_versions['stop_timeout']['supported'] = stop_timeout_supported
def __init__(self, **kwargs):
option_minimal_versions = dict(
# internal options
log_config=dict(),
publish_all_ports=dict(),
ports=dict(),
volume_binds=dict(),
name=dict(),
# normal options
device_read_bps=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_read_iops=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_write_bps=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_write_iops=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
dns_opts=dict(docker_api_version='1.21', docker_py_version='1.10.0'),
ipc_mode=dict(docker_api_version='1.25'),
mac_address=dict(docker_api_version='1.25'),
oom_score_adj=dict(docker_api_version='1.22'),
shm_size=dict(docker_api_version='1.22'),
stop_signal=dict(docker_api_version='1.21'),
tmpfs=dict(docker_api_version='1.22'),
volume_driver=dict(docker_api_version='1.21'),
memory_reservation=dict(docker_api_version='1.21'),
kernel_memory=dict(docker_api_version='1.21'),
auto_remove=dict(docker_py_version='2.1.0', docker_api_version='1.25'),
healthcheck=dict(docker_py_version='2.0.0', docker_api_version='1.24'),
init=dict(docker_py_version='2.2.0', docker_api_version='1.25'),
runtime=dict(docker_py_version='2.4.0', docker_api_version='1.25'),
sysctls=dict(docker_py_version='1.10.0', docker_api_version='1.24'),
userns_mode=dict(docker_py_version='1.10.0', docker_api_version='1.23'),
uts=dict(docker_py_version='3.5.0', docker_api_version='1.25'),
pids_limit=dict(docker_py_version='1.10.0', docker_api_version='1.23'),
mounts=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
# specials
ipvX_address_supported=dict(docker_py_version='1.9.0', detect_usage=detect_ipvX_address_usage,
usage_msg='ipv4_address or ipv6_address in networks'),
stop_timeout=dict(), # see _get_additional_minimal_versions()
)
super(AnsibleDockerClientContainer, self).__init__(
option_minimal_versions=option_minimal_versions,
option_minimal_versions_ignore_params=self.__NON_CONTAINER_PROPERTY_OPTIONS,
**kwargs
)
self.image_inspect_source = 'Config'
if self.docker_api_version < LooseVersion('1.21'):
self.image_inspect_source = 'ContainerConfig'
self._get_additional_minimal_versions()
self._parse_comparisons()
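The `option_minimal_versions` table above gates each option on both a minimum Docker API version and a minimum Docker SDK for Python version. A minimal sketch of that gating idea, using plain integer tuples instead of `LooseVersion` (all names here are illustrative):

```python
def version_tuple(version):
    """'1.25' -> (1, 25); enough for the dotted-integer versions used here."""
    return tuple(int(part) for part in version.split('.'))

def option_supported(api_version, sdk_version, min_api=None, min_sdk=None):
    """An option is usable only if both the daemon API and the Python SDK
    meet their respective minimums (a missing minimum means no constraint)."""
    if min_api is not None and version_tuple(api_version) < version_tuple(min_api):
        return False
    if min_sdk is not None and version_tuple(sdk_version) < version_tuple(min_sdk):
        return False
    return True
```

For example, `auto_remove` above requires SDK 2.1.0 and API 1.25, so a daemon speaking API 1.24 fails the check regardless of the SDK version.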
def main():
argument_spec = dict(
auto_remove=dict(type='bool', default=False),
blkio_weight=dict(type='int'),
capabilities=dict(type='list', elements='str'),
cap_drop=dict(type='list', elements='str'),
cleanup=dict(type='bool', default=False),
command=dict(type='raw'),
comparisons=dict(type='dict'),
cpu_period=dict(type='int'),
cpu_quota=dict(type='int'),
cpuset_cpus=dict(type='str'),
cpuset_mems=dict(type='str'),
cpu_shares=dict(type='int'),
detach=dict(type='bool', default=True),
devices=dict(type='list', elements='str'),
device_read_bps=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='str'),
)),
device_write_bps=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='str'),
)),
device_read_iops=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='int'),
)),
device_write_iops=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='int'),
)),
dns_servers=dict(type='list', elements='str'),
dns_opts=dict(type='list', elements='str'),
dns_search_domains=dict(type='list', elements='str'),
domainname=dict(type='str'),
entrypoint=dict(type='list', elements='str'),
env=dict(type='dict'),
env_file=dict(type='path'),
etc_hosts=dict(type='dict'),
exposed_ports=dict(type='list', elements='str', aliases=['exposed', 'expose']),
force_kill=dict(type='bool', default=False, aliases=['forcekill']),
groups=dict(type='list', elements='str'),
healthcheck=dict(type='dict', options=dict(
test=dict(type='raw'),
interval=dict(type='str'),
timeout=dict(type='str'),
start_period=dict(type='str'),
retries=dict(type='int'),
)),
hostname=dict(type='str'),
ignore_image=dict(type='bool', default=False),
image=dict(type='str'),
init=dict(type='bool', default=False),
interactive=dict(type='bool', default=False),
ipc_mode=dict(type='str'),
keep_volumes=dict(type='bool', default=True),
kernel_memory=dict(type='str'),
kill_signal=dict(type='str'),
labels=dict(type='dict'),
links=dict(type='list', elements='str'),
log_driver=dict(type='str'),
log_options=dict(type='dict', aliases=['log_opt']),
mac_address=dict(type='str'),
memory=dict(type='str', default='0'),
memory_reservation=dict(type='str'),
memory_swap=dict(type='str'),
memory_swappiness=dict(type='int'),
mounts=dict(type='list', elements='dict', options=dict(
target=dict(type='str', required=True),
source=dict(type='str'),
type=dict(type='str', choices=['bind', 'volume', 'tmpfs', 'npipe'], default='volume'),
read_only=dict(type='bool'),
consistency=dict(type='str', choices=['default', 'consistent', 'cached', 'delegated']),
propagation=dict(type='str', choices=['private', 'rprivate', 'shared', 'rshared', 'slave', 'rslave']),
no_copy=dict(type='bool'),
labels=dict(type='dict'),
volume_driver=dict(type='str'),
volume_options=dict(type='dict'),
tmpfs_size=dict(type='str'),
tmpfs_mode=dict(type='str'),
)),
name=dict(type='str', required=True),
network_mode=dict(type='str'),
networks=dict(type='list', elements='dict', options=dict(
name=dict(type='str', required=True),
ipv4_address=dict(type='str'),
ipv6_address=dict(type='str'),
aliases=dict(type='list', elements='str'),
links=dict(type='list', elements='str'),
)),
networks_cli_compatible=dict(type='bool'),
oom_killer=dict(type='bool'),
oom_score_adj=dict(type='int'),
output_logs=dict(type='bool', default=False),
paused=dict(type='bool', default=False),
pid_mode=dict(type='str'),
pids_limit=dict(type='int'),
privileged=dict(type='bool', default=False),
published_ports=dict(type='list', elements='str', aliases=['ports']),
pull=dict(type='bool', default=False),
purge_networks=dict(type='bool', default=False),
read_only=dict(type='bool', default=False),
recreate=dict(type='bool', default=False),
restart=dict(type='bool', default=False),
restart_policy=dict(type='str', choices=['no', 'on-failure', 'always', 'unless-stopped']),
restart_retries=dict(type='int'),
runtime=dict(type='str'),
security_opts=dict(type='list', elements='str'),
shm_size=dict(type='str'),
state=dict(type='str', default='started', choices=['absent', 'present', 'started', 'stopped']),
stop_signal=dict(type='str'),
stop_timeout=dict(type='int'),
sysctls=dict(type='dict'),
tmpfs=dict(type='list', elements='str'),
trust_image_content=dict(type='bool', default=False),
tty=dict(type='bool', default=False),
ulimits=dict(type='list', elements='str'),
user=dict(type='str'),
userns_mode=dict(type='str'),
uts=dict(type='str'),
volume_driver=dict(type='str'),
volumes=dict(type='list', elements='str'),
volumes_from=dict(type='list', elements='str'),
working_dir=dict(type='str'),
)
required_if = [
('state', 'present', ['image'])
]
client = AnsibleDockerClientContainer(
argument_spec=argument_spec,
required_if=required_if,
supports_check_mode=True,
min_docker_api_version='1.20',
)
if client.module.params['networks_cli_compatible'] is None and client.module.params['networks']:
client.module.deprecate(
'Please note that docker_container handles networks slightly different than docker CLI. '
'If you specify networks, the default network will still be attached as the first network. '
'(You can specify purge_networks to remove all networks not explicitly listed.) '
'This behavior will change in Ansible 2.12. You can change the behavior now by setting '
'the new `networks_cli_compatible` option to `yes`, and remove this warning by setting '
'it to `no`',
version='2.12'
)
try:
cm = ContainerManager(client)
client.module.exit_json(**sanitize_result(cm.results))
except DockerException as e:
client.fail('An unexpected docker error occurred: {0}'.format(e), exception=traceback.format_exc())
except RequestException as e:
client.fail('An unexpected requests error occurred when docker-py tried to talk to the docker daemon: {0}'.format(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,873 |
docker_container not idempotent
|
##### SUMMARY
docker_container is not idempotent when specifying an ip address
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
```paste below
ansible 2.8.5
config file = /home/pkueck/projects/cix.de/cix-ansible/ansible.cfg
configured module search path = ['/home/xxx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Jul 9 2019, 16:32:37) [GCC 9.1.1 20190503 (Red Hat 9.1.1-1)]
```
##### CONFIGURATION
```paste below
n/a
```
##### OS / ENVIRONMENT
Fedora 30, CentOS 7
##### STEPS TO REPRODUCE
```yaml
---
- hosts: myhost
gather_facts: no
tasks:
- docker_network:
name: "foonet"
ipam_config:
- subnet: 172.16.44.0/24
- docker_container:
name: "foo"
state: present
image: centos
networks:
- name: foonet
ipv4_address: "172.16.44.11"
networks_cli_compatible: yes
loop: [1,2]
```
##### EXPECTED RESULTS
container created on first loop cycle, not touched on second one
##### ACTUAL RESULTS
```paste below
PLAY [myhost] ***************
TASK [docker_network] *******
ok: [myhost]
TASK [docker_container] *****
--- before
+++ after
@@ -1,9 +1,10 @@
{
- "exists": false,
+ "exists": true,
"network.foonet": {
"aliases": null,
- "ipv4_address": "",
- "ipv6_address": "",
+ "id": "27e539c3b9af0160f02a0532ad081d1bd1ef3dd93e652095ec96d6dfd95bf2fb",
+ "ipv4_address": "172.16.44.11",
+ "ipv6_address": null,
"links": null,
"name": "foonet"
}
changed: [myhost] => (item=1)
--- before
+++ after
@@ -1,10 +1,9 @@
{
"network.foonet": {
- "aliases": [
- "44cfded131d7"
- ],
- "ipv4_address": "",
- "ipv6_address": "",
+ "aliases": null,
+ "id": "27e539c3b9af0160f02a0532ad081d1bd1ef3dd93e652095ec96d6dfd95bf2fb",
+ "ipv4_address": "172.16.44.11",
+ "ipv6_address": null,
"links": null,
"name": "foonet"
}
changed: [myhost] => (item=2)
```
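The diff above compares the requested `ipv4_address` against an empty string reported for the running container, so the check never converges. Robust idempotency checks generally normalize such "unset" representations before diffing. A minimal sketch of that idea (illustrative only, not the actual change from the linked PR):

```python
def normalize(value):
    # Docker inspect reports unset addresses as '' while the module uses None;
    # treat both as "not set" for comparison purposes.
    return None if value in ('', None) else value

def network_settings_equal(requested, actual, keys=('ipv4_address', 'ipv6_address')):
    """True when requested and observed network settings match after
    normalizing the two 'unset' spellings."""
    return all(normalize(requested.get(key)) == normalize(actual.get(key)) for key in keys)
```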
|
https://github.com/ansible/ansible/issues/62873
|
https://github.com/ansible/ansible/pull/62928
|
a79f7e575a9576f804007ed979aa6c1aa731dd2d
|
62c0cae29a393859522fcb391562dc1edd73ce53
| 2019-09-26T13:27:41Z |
python
| 2019-09-30T08:47:02Z |
test/integration/targets/docker_container/tasks/main.yml
|
---
# Create random name prefix (for containers, networks, ...)
- name: Create random container name prefix
set_fact:
cname_prefix: "{{ 'ansible-test-%0x' % ((2**32) | random) }}"
cnames: []
dnetworks: []
- debug:
msg: "Using container name prefix {{ cname_prefix }}"
# Run the tests
- block:
- include_tasks: run-test.yml
with_fileglob:
- "tests/*.yml"
always:
- name: "Make sure all containers are removed"
docker_container:
name: "{{ item }}"
state: absent
force_kill: yes
with_items: "{{ cnames }}"
diff: no
- name: "Make sure all networks are removed"
docker_network:
name: "{{ item }}"
state: absent
force: yes
with_items: "{{ dnetworks }}"
when: docker_py_version is version('1.10.0', '>=')
diff: no
when: docker_py_version is version('1.8.0', '>=') and docker_api_version is version('1.20', '>=')
- fail: msg="Too old docker / docker-py version to run all docker_container tests!"
when: not(docker_py_version is version('3.5.0', '>=') and docker_api_version is version('1.25', '>=')) and (ansible_distribution != 'CentOS' or ansible_distribution_major_version|int > 6)
|
test/integration/targets/docker_container/tasks/tests/network.yml
|
---
- name: Registering container name
set_fact:
cname: "{{ cname_prefix ~ '-network' }}"
cname_h1: "{{ cname_prefix ~ '-network-h1' }}"
nname_1: "{{ cname_prefix ~ '-network-1' }}"
nname_2: "{{ cname_prefix ~ '-network-2' }}"
- name: Registering container name
set_fact:
cnames: "{{ cnames + [cname, cname_h1] }}"
dnetworks: "{{ dnetworks + [nname_1, nname_2] }}"
- name: Create networks
docker_network:
name: "{{ network_name }}"
state: present
loop:
- "{{ nname_1 }}"
- "{{ nname_2 }}"
loop_control:
loop_var: network_name
when: docker_py_version is version('1.10.0', '>=')
####################################################################
## network_mode ####################################################
####################################################################
- name: network_mode
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
network_mode: host
register: network_mode_1
- name: network_mode (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
network_mode: host
register: network_mode_2
- name: network_mode (change)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
network_mode: none
force_kill: yes
register: network_mode_3
- name: network_mode (container mode setup)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname_h1 }}"
state: started
register: cname_h1_id
- name: network_mode (container mode)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
network_mode: "container:{{ cname_h1_id.container.Id }}"
force_kill: yes
register: network_mode_4
- name: network_mode (container mode idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
network_mode: "container:{{ cname_h1 }}"
register: network_mode_5
- name: cleanup
docker_container:
name: "{{ container_name }}"
state: absent
force_kill: yes
loop:
- "{{ cname }}"
- "{{ cname_h1 }}"
loop_control:
loop_var: container_name
diff: no
- assert:
that:
- network_mode_1 is changed
- network_mode_1.container.HostConfig.NetworkMode == 'host'
- network_mode_2 is not changed
- network_mode_2.container.HostConfig.NetworkMode == 'host'
- network_mode_3 is changed
- network_mode_3.container.HostConfig.NetworkMode == 'none'
- network_mode_4 is changed
- network_mode_4.container.HostConfig.NetworkMode == 'container:' ~ cname_h1_id.container.Id
- network_mode_5 is not changed
- network_mode_5.container.HostConfig.NetworkMode == 'container:' ~ cname_h1_id.container.Id
####################################################################
## networks, purge_networks for networks_cli_compatible=no #########
####################################################################
- block:
- name: networks_cli_compatible=no, networks w/o purge_networks
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
networks:
- name: "{{ nname_1 }}"
- name: "{{ nname_2 }}"
networks_cli_compatible: no
register: networks_1
- name: networks_cli_compatible=no, networks w/o purge_networks
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
networks:
- name: "{{ nname_1 }}"
- name: "{{ nname_2 }}"
networks_cli_compatible: no
register: networks_2
- name: networks_cli_compatible=no, networks, purge_networks
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
purge_networks: yes
networks:
- name: bridge
- name: "{{ nname_1 }}"
networks_cli_compatible: no
force_kill: yes
register: networks_3
- name: networks_cli_compatible=no, networks, purge_networks (idempotency)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
purge_networks: yes
networks:
- name: "{{ nname_1 }}"
- name: bridge
networks_cli_compatible: no
register: networks_4
- name: networks_cli_compatible=no, networks (less networks)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
networks:
- name: bridge
networks_cli_compatible: no
register: networks_5
- name: networks_cli_compatible=no, networks, purge_networks (less networks)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
purge_networks: yes
networks:
- name: bridge
networks_cli_compatible: no
force_kill: yes
register: networks_6
- name: networks_cli_compatible=no, networks, purge_networks (more networks)
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
purge_networks: yes
networks:
- name: bridge
- name: "{{ nname_2 }}"
networks_cli_compatible: no
force_kill: yes
register: networks_7
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
# networks_1 has networks 'default'/'bridge', nname_1, nname_2
- networks_1 is changed
- networks_1.container.NetworkSettings.Networks | length == 3
- nname_1 in networks_1.container.NetworkSettings.Networks
- nname_2 in networks_1.container.NetworkSettings.Networks
- "'default' in networks_1.container.NetworkSettings.Networks or 'bridge' in networks_1.container.NetworkSettings.Networks"
# networks_2 has networks 'default'/'bridge', nname_1, nname_2
- networks_2 is not changed
- networks_2.container.NetworkSettings.Networks | length == 3
- nname_1 in networks_2.container.NetworkSettings.Networks
- nname_2 in networks_2.container.NetworkSettings.Networks
- "'default' in networks_2.container.NetworkSettings.Networks or 'bridge' in networks_2.container.NetworkSettings.Networks"
# networks_3 has networks 'bridge', nname_1
- networks_3 is changed
- networks_3.container.NetworkSettings.Networks | length == 2
- nname_1 in networks_3.container.NetworkSettings.Networks
- "'default' in networks_3.container.NetworkSettings.Networks or 'bridge' in networks_3.container.NetworkSettings.Networks"
# networks_4 has networks 'bridge', nname_1
- networks_4 is not changed
- networks_4.container.NetworkSettings.Networks | length == 2
- nname_1 in networks_4.container.NetworkSettings.Networks
- "'default' in networks_4.container.NetworkSettings.Networks or 'bridge' in networks_4.container.NetworkSettings.Networks"
# networks_5 has networks 'bridge', nname_1
- networks_5 is not changed
- networks_5.container.NetworkSettings.Networks | length == 2
- nname_1 in networks_5.container.NetworkSettings.Networks
- "'default' in networks_5.container.NetworkSettings.Networks or 'bridge' in networks_5.container.NetworkSettings.Networks"
# networks_6 has networks 'bridge'
- networks_6 is changed
- networks_6.container.NetworkSettings.Networks | length == 1
- "'default' in networks_6.container.NetworkSettings.Networks or 'bridge' in networks_6.container.NetworkSettings.Networks"
# networks_7 has networks 'bridge', nname_2
- networks_7 is changed
- networks_7.container.NetworkSettings.Networks | length == 2
- nname_2 in networks_7.container.NetworkSettings.Networks
- "'default' in networks_7.container.NetworkSettings.Networks or 'bridge' in networks_7.container.NetworkSettings.Networks"
when: docker_py_version is version('1.10.0', '>=')
####################################################################
## networks for networks_cli_compatible=yes ########################
####################################################################
- block:
- name: networks_cli_compatible=yes, networks specified
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
networks:
- name: "{{ nname_1 }}"
aliases:
- alias1
- alias2
- name: "{{ nname_2 }}"
networks_cli_compatible: yes
register: networks_1
- name: networks_cli_compatible=yes, networks specified
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
networks:
- name: "{{ nname_1 }}"
- name: "{{ nname_2 }}"
networks_cli_compatible: yes
register: networks_2
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- name: networks_cli_compatible=yes, empty networks list specified
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
networks: []
networks_cli_compatible: yes
register: networks_3
- name: networks_cli_compatible=yes, empty networks list specified
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
networks: []
networks_cli_compatible: yes
register: networks_4
- name: networks_cli_compatible=yes, empty networks list specified, purge_networks
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
networks: []
networks_cli_compatible: yes
purge_networks: yes
force_kill: yes
register: networks_5
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- name: networks_cli_compatible=yes, networks not specified
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
networks_cli_compatible: yes
force_kill: yes
register: networks_6
- name: networks_cli_compatible=yes, networks not specified
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
networks_cli_compatible: yes
register: networks_7
- name: networks_cli_compatible=yes, networks not specified, purge_networks
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
networks_cli_compatible: yes
purge_networks: yes
force_kill: yes
register: networks_8
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- debug: var=networks_3
- assert:
that:
# networks_1 has networks nname_1, nname_2
- networks_1 is changed
- networks_1.container.NetworkSettings.Networks | length == 2
- nname_1 in networks_1.container.NetworkSettings.Networks
- nname_2 in networks_1.container.NetworkSettings.Networks
# networks_2 has networks nname_1, nname_2
- networks_2 is not changed
- networks_2.container.NetworkSettings.Networks | length == 2
- nname_1 in networks_2.container.NetworkSettings.Networks
- nname_2 in networks_2.container.NetworkSettings.Networks
# networks_3 has networks 'bridge'
- networks_3 is changed
- networks_3.container.NetworkSettings.Networks | length == 1
- "'default' in networks_3.container.NetworkSettings.Networks or 'bridge' in networks_3.container.NetworkSettings.Networks"
# networks_4 has networks 'bridge'
- networks_4 is not changed
- networks_4.container.NetworkSettings.Networks | length == 1
- "'default' in networks_4.container.NetworkSettings.Networks or 'bridge' in networks_4.container.NetworkSettings.Networks"
# networks_5 has no networks
- networks_5 is changed
- networks_5.container.NetworkSettings.Networks | length == 0
# networks_6 has networks 'bridge'
- networks_6 is changed
- networks_6.container.NetworkSettings.Networks | length == 1
- "'default' in networks_6.container.NetworkSettings.Networks or 'bridge' in networks_6.container.NetworkSettings.Networks"
# networks_7 has networks 'bridge'
- networks_7 is not changed
- networks_7.container.NetworkSettings.Networks | length == 1
- "'default' in networks_7.container.NetworkSettings.Networks or 'bridge' in networks_7.container.NetworkSettings.Networks"
# networks_8 has no networks
- networks_8 is changed
- networks_8.container.NetworkSettings.Networks | length == 0
when: docker_py_version is version('1.10.0', '>=')
####################################################################
## networks with comparisons #######################################
####################################################################
- block:
- name: create container with one network
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
networks:
- name: "{{ nname_1 }}"
networks_cli_compatible: yes
register: networks_1
- name: different networks, comparisons=ignore
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
networks:
- name: "{{ nname_2 }}"
networks_cli_compatible: yes
comparisons:
networks: ignore
register: networks_2
- name: less networks, comparisons=ignore
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
networks: []
networks_cli_compatible: yes
comparisons:
networks: ignore
register: networks_3
- name: less networks, comparisons=allow_more_present
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
networks: []
networks_cli_compatible: yes
comparisons:
networks: allow_more_present
register: networks_4
- name: different networks, comparisons=allow_more_present
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
networks:
- name: "{{ nname_2 }}"
networks_cli_compatible: yes
comparisons:
networks: allow_more_present
force_kill: yes
register: networks_5
- name: different networks, comparisons=strict
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
networks:
- name: "{{ nname_2 }}"
networks_cli_compatible: yes
comparisons:
networks: strict
force_kill: yes
register: networks_6
- name: less networks, comparisons=strict
docker_container:
image: alpine:3.8
command: '/bin/sh -c "sleep 10m"'
name: "{{ cname }}"
state: started
networks: []
networks_cli_compatible: yes
comparisons:
networks: strict
force_kill: yes
register: networks_7
- name: cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
# networks_1 has networks nname_1
- networks_1 is changed
- networks_1.container.NetworkSettings.Networks | length == 1
- nname_1 in networks_1.container.NetworkSettings.Networks
# networks_2 has networks nname_1
- networks_2 is not changed
- networks_2.container.NetworkSettings.Networks | length == 1
- nname_1 in networks_2.container.NetworkSettings.Networks
# networks_3 has networks nname_1
- networks_3 is not changed
- networks_3.container.NetworkSettings.Networks | length == 1
- nname_1 in networks_3.container.NetworkSettings.Networks
# networks_4 has networks nname_1
- networks_4 is not changed
- networks_4.container.NetworkSettings.Networks | length == 1
- nname_1 in networks_4.container.NetworkSettings.Networks
# networks_5 has networks nname_1, nname_2
- networks_5 is changed
- networks_5.container.NetworkSettings.Networks | length == 2
- nname_1 in networks_5.container.NetworkSettings.Networks
- nname_2 in networks_5.container.NetworkSettings.Networks
# networks_6 has networks nname_2
- networks_6 is changed
- networks_6.container.NetworkSettings.Networks | length == 1
- nname_2 in networks_6.container.NetworkSettings.Networks
# networks_7 has no networks
- networks_7 is changed
- networks_7.container.NetworkSettings.Networks | length == 0
when: docker_py_version is version('1.10.0', '>=')
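The comparison modes exercised above can be condensed into one hedged playbook sketch (container, image, and network names below are placeholders, not values from these tests):

```yaml
# Sketch only: attach "appnet" while tolerating extra, manually attached
# networks. All names here are placeholders.
- name: ensure the app network is attached without detaching others
  docker_container:
    name: myapp
    image: alpine:3.8
    command: '/bin/sh -c "sleep 10m"'
    state: started
    networks:
      - name: appnet
    networks_cli_compatible: yes
    comparisons:
      networks: allow_more_present
```

With `comparisons: {networks: strict}` instead, the same task would also disconnect any network not listed, as the networks_6/networks_7 cases above demonstrate.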
####################################################################
####################################################################
####################################################################
- name: Delete networks
docker_network:
name: "{{ network_name }}"
state: absent
force: yes
loop:
- "{{ nname_1 }}"
- "{{ nname_2 }}"
loop_control:
loop_var: network_name
when: docker_py_version is version('1.10.0', '>=')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,775 |
Add 'restart' state/parameter to the ovirt_vm module
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Add 'restart' state or restart option to the `ovirt_vm` module.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ovirt_vm
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
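A hypothetical task showing how the requested state might be used (the `state: restart` value is this request's proposal, not an option the module ships with):

```yaml
# Hypothetical: 'state: restart' does not exist in ovirt_vm at the time
# of this request; this sketches the proposed behavior only.
- name: Restart a virtual machine
  ovirt_vm:
    auth: "{{ ovirt_auth }}"
    name: myvm
    cluster: mycluster
    state: restart
```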
|
https://github.com/ansible/ansible/issues/62775
|
https://github.com/ansible/ansible/pull/62785
|
44a6c69562ac8ec2b80de1362b6b7b00b581f725
|
9aff5f600733b485da71ba520641493e5f182cb1
| 2019-09-24T08:03:33Z |
python
| 2019-09-30T14:46:19Z |
lib/ansible/modules/cloud/ovirt/ovirt_vm.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ovirt_vm
short_description: Module to manage Virtual Machines in oVirt/RHV
version_added: "2.2"
author:
- Ondra Machacek (@machacekondra)
description:
- This module manages the whole lifecycle of the Virtual Machine (VM) in oVirt/RHV.
- Since a VM can hold many states in oVirt/RHV, please see the notes to learn how the states of the VM are handled.
options:
name:
description:
- Name of the Virtual Machine to manage.
- If the VM doesn't exist, C(name) is required. Otherwise C(id) or C(name) can be used.
id:
description:
- ID of the Virtual Machine to manage.
state:
description:
- Should the Virtual Machine be running/stopped/present/absent/suspended/next_run/registered/exported.
When C(state) is I(registered) and the unregistered VM's name
belongs to a VM that is already registered in the engine in the same DC,
registration of the unregistered VM fails.
- I(present) state will create/update VM and don't change its state if it already exists.
- I(running) state will create/update VM and start it.
- I(next_run) state updates the VM and if the VM has next run configuration it will be rebooted.
- Please check I(notes) for a more detailed description of states.
- I(exported) state will export the VM to export domain or as OVA.
- I(registered) is supported since 2.4.
choices: [ absent, next_run, present, registered, running, stopped, suspended, exported ]
default: present
cluster:
description:
- Name of the cluster, where Virtual Machine should be created.
- Required if creating VM.
allow_partial_import:
description:
- Boolean indication whether to allow partial registration of Virtual Machine when C(state) is registered.
type: bool
version_added: "2.4"
vnic_profile_mappings:
description:
- "Mapper which maps an external virtual NIC profile to one that exists in the engine when C(state) is registered.
vnic_profile is described by the following dictionary:"
suboptions:
source_network_name:
description:
- The network name of the source network.
source_profile_name:
description:
- The profile name related to the source network.
target_profile_id:
description:
- The id of the target profile id to be mapped to in the engine.
version_added: "2.5"
cluster_mappings:
description:
- "Mapper which maps cluster name between VM's OVF and the destination cluster this VM should be registered to,
relevant when C(state) is registered.
Cluster mapping is described by the following dictionary:"
suboptions:
source_name:
description:
- The name of the source cluster.
dest_name:
description:
- The name of the destination cluster.
version_added: "2.5"
role_mappings:
description:
- "Mapper which maps role name between VM's OVF and the destination role this VM should be registered to,
relevant when C(state) is registered.
Role mapping is described by the following dictionary:"
suboptions:
source_name:
description:
- The name of the source role.
dest_name:
description:
- The name of the destination role.
version_added: "2.5"
domain_mappings:
description:
- "Mapper which maps aaa domain name between VM's OVF and the destination aaa domain this VM should be registered to,
relevant when C(state) is registered.
The aaa domain mapping is described by the following dictionary:"
suboptions:
source_name:
description:
- The name of the source aaa domain.
dest_name:
description:
- The name of the destination aaa domain.
version_added: "2.5"
affinity_group_mappings:
description:
- "Mapper which maps affinity name between VM's OVF and the destination affinity this VM should be registered to,
relevant when C(state) is registered."
version_added: "2.5"
affinity_label_mappings:
description:
- "Mapper which maps affinity label name between VM's OVF and the destination label this VM should be registered to,
relevant when C(state) is registered."
version_added: "2.5"
lun_mappings:
description:
- "Mapper which maps lun between VM's OVF and the destination lun this VM should contain, relevant when C(state) is registered.
lun_mappings is described by the following dictionary:
- C(logical_unit_id): The logical unit number to identify a logical unit,
- C(logical_unit_port): The port being used to connect with the LUN disk.
- C(logical_unit_portal): The portal being used to connect with the LUN disk.
- C(logical_unit_address): The address of the block storage host.
- C(logical_unit_target): The iSCSI specification located on an iSCSI server
- C(logical_unit_username): Username to be used to connect to the block storage host.
- C(logical_unit_password): Password to be used to connect to the block storage host.
- C(storage_type): The storage type which the LUN reside on (iscsi or fcp)"
version_added: "2.5"
reassign_bad_macs:
description:
- "Boolean indication whether to reassign bad macs when C(state) is registered."
type: bool
version_added: "2.5"
template:
description:
- Name of the template, which should be used to create Virtual Machine.
- Required if creating VM.
- If template is not specified and VM doesn't exist, VM will be created from I(Blank) template.
template_version:
description:
- Version number of the template to be used for VM.
- By default the latest available version of the template is used.
version_added: "2.3"
use_latest_template_version:
description:
- Specify if the latest template version should be used when running a stateless VM.
- If this parameter is set to I(yes) stateless VM is created.
type: bool
version_added: "2.3"
storage_domain:
description:
- Name of the storage domain where all template disks should be created.
- This parameter is considered only when C(template) is provided.
- IMPORTANT - This parameter is not idempotent, if the VM exists and you specify different storage domain,
disk won't move.
version_added: "2.4"
disk_format:
description:
- Specify format of the disk.
- If C(cow) format is used, disk will be created as sparse, so space will be allocated for the volume as needed, also known as I(thin provision).
- If C(raw) format is used, disk storage will be allocated right away, also known as I(preallocated).
- Note that this option isn't idempotent as it's not currently possible to change format of the disk via API.
- This parameter is considered only when C(template) and C(storage domain) is provided.
choices: [ cow, raw ]
default: cow
version_added: "2.4"
memory:
description:
- Amount of memory of the Virtual Machine. Prefix uses IEC 60027-2 standard (for example 1GiB, 1024MiB).
- Default value is set by engine.
memory_guaranteed:
description:
- Amount of minimal guaranteed memory of the Virtual Machine.
Prefix uses IEC 60027-2 standard (for example 1GiB, 1024MiB).
- C(memory_guaranteed) parameter can't be lower than C(memory) parameter.
- Default value is set by engine.
memory_max:
description:
- Upper bound of virtual machine memory up to which memory hot-plug can be performed.
Prefix uses IEC 60027-2 standard (for example 1GiB, 1024MiB).
- Default value is set by engine.
version_added: "2.5"
cpu_shares:
description:
- Set a CPU shares for this Virtual Machine.
- Default value is set by oVirt/RHV engine.
cpu_cores:
description:
- Number of virtual CPU cores of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
cpu_sockets:
description:
- Number of virtual CPU sockets of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
cpu_threads:
description:
- Number of virtual CPU threads of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
version_added: "2.5"
type:
description:
- Type of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
- I(high_performance) is supported since Ansible 2.5 and oVirt/RHV 4.2.
choices: [ desktop, server, high_performance ]
quota_id:
description:
- "Virtual Machine quota ID to be used for disk. By default quota is chosen by oVirt/RHV engine."
version_added: "2.5"
operating_system:
description:
- Operating system of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
- "Possible values: debian_7, freebsd, freebsdx64, other, other_linux,
other_linux_ppc64, other_ppc64, rhel_3, rhel_4, rhel_4x64, rhel_5, rhel_5x64,
rhel_6, rhel_6x64, rhel_6_ppc64, rhel_7x64, rhel_7_ppc64, sles_11, sles_11_ppc64,
ubuntu_12_04, ubuntu_12_10, ubuntu_13_04, ubuntu_13_10, ubuntu_14_04, ubuntu_14_04_ppc64,
windows_10, windows_10x64, windows_2003, windows_2003x64, windows_2008, windows_2008x64,
windows_2008r2x64, windows_2008R2x64, windows_2012x64, windows_2012R2x64, windows_7,
windows_7x64, windows_8, windows_8x64, windows_xp"
boot_devices:
description:
- List of boot devices which should be used to boot. For example C([ cdrom, hd ]).
- Default value is set by oVirt/RHV engine.
choices: [ cdrom, hd, network ]
boot_menu:
description:
- "I(True) enable menu to select boot device, I(False) to disable it. By default is chosen by oVirt/RHV engine."
type: bool
version_added: "2.5"
usb_support:
description:
- "I(True) enable USB support, I(False) to disable it. By default is chosen by oVirt/RHV engine."
type: bool
version_added: "2.5"
serial_console:
description:
- "I(True) enable VirtIO serial console, I(False) to disable it. By default is chosen by oVirt/RHV engine."
type: bool
version_added: "2.5"
sso:
description:
- "I(True) enable Single Sign On by Guest Agent, I(False) to disable it. By default is chosen by oVirt/RHV engine."
type: bool
version_added: "2.5"
host:
description:
- Specify host where Virtual Machine should be running. By default the host is chosen by engine scheduler.
- This parameter is used only when C(state) is I(running) or I(present).
high_availability:
description:
- If I(yes) Virtual Machine will be set as highly available.
- If I(no) Virtual Machine won't be set as highly available.
- If no value is passed, default value is set by oVirt/RHV engine.
type: bool
high_availability_priority:
description:
- Indicates the priority of the virtual machine inside the run and migration queues.
Virtual machines with higher priorities will be started and migrated before virtual machines with lower
priorities. The value is an integer between 0 and 100. The higher the value, the higher the priority.
- If no value is passed, default value is set by oVirt/RHV engine.
version_added: "2.5"
lease:
description:
- Name of the storage domain this virtual machine lease reside on. Pass an empty string to remove the lease.
- NOTE - Supported since oVirt 4.1.
version_added: "2.4"
custom_compatibility_version:
description:
- "Enables a virtual machine to be customized to its own compatibility version. If
'C(custom_compatibility_version)' is set, it overrides the cluster's compatibility version
for this particular virtual machine."
version_added: "2.7"
host_devices:
description:
- Single Root I/O Virtualization - a technology that allows a single device to expose multiple endpoints that can be passed to VMs.
- host_devices is a list of dictionaries, each containing the name and state of a device.
version_added: "2.7"
delete_protected:
description:
- If I(yes) Virtual Machine will be set as delete protected.
- If I(no) Virtual Machine won't be set as delete protected.
- If no value is passed, default value is set by oVirt/RHV engine.
type: bool
stateless:
description:
- If I(yes) Virtual Machine will be set as stateless.
- If I(no) Virtual Machine will be unset as stateless.
- If no value is passed, default value is set by oVirt/RHV engine.
type: bool
clone:
description:
- If I(yes) then the disks of the created virtual machine will be cloned and independent of the template.
- This parameter is used only when C(state) is I(running) or I(present) and VM didn't exist before.
type: bool
default: 'no'
clone_permissions:
description:
- If I(yes) then the permissions of the template (only the direct ones, not the inherited ones)
will be copied to the created virtual machine.
- This parameter is used only when C(state) is I(running) or I(present) and VM didn't exist before.
type: bool
default: 'no'
cd_iso:
description:
- ISO file from ISO storage domain which should be attached to Virtual Machine.
- If you pass empty string the CD will be ejected from VM.
- If used with C(state) I(running) or I(present) and VM is running the CD will be attached to VM.
- If used with C(state) I(running) or I(present) and VM is down the CD will be attached to VM persistently.
force:
description:
- Please check I(Synopsis) for a more detailed description of the force parameter; it can behave differently
in different situations.
type: bool
default: 'no'
nics:
description:
- List of NICs, which should be attached to Virtual Machine. NIC is described by the following dictionary.
suboptions:
name:
description:
- Name of the NIC.
profile_name:
description:
- Profile name where NIC should be attached.
interface:
description:
- Type of the network interface.
choices: ['virtio', 'e1000', 'rtl8139']
default: 'virtio'
mac_address:
description:
- Custom MAC address of the network interface, by default it's obtained from MAC pool.
- "NOTE - This parameter is used only when C(state) is I(running) or I(present) and is able to only create NICs.
To manage NICs of the VM in more depth please use M(ovirt_nic) module instead."
disks:
description:
- List of disks, which should be attached to Virtual Machine. Disk is described by the following dictionary.
suboptions:
name:
description:
- Name of the disk. Either C(name) or C(id) is required.
id:
description:
- ID of the disk. Either C(name) or C(id) is required.
interface:
description:
- Interface of the disk.
choices: ['virtio', 'ide']
default: 'virtio'
bootable:
description:
- I(True) if the disk should be bootable, default is non bootable.
type: bool
activate:
description:
- I(True) if the disk should be activated, default is activated.
- "NOTE - This parameter is used only when C(state) is I(running) or I(present) and is able to only attach disks.
To manage disks of the VM in more depth please use M(ovirt_disk) module instead."
type: bool
sysprep:
description:
- Dictionary with values for Windows Virtual Machine initialization using sysprep.
suboptions:
host_name:
description:
- Hostname to be set to Virtual Machine when deployed.
active_directory_ou:
description:
- Active Directory Organizational Unit, to be used for login of user.
org_name:
description:
- Organization name to be set to Windows Virtual Machine.
domain:
description:
- Domain to be set to Windows Virtual Machine.
timezone:
description:
- Timezone to be set to Windows Virtual Machine.
ui_language:
description:
- UI language of the Windows Virtual Machine.
system_locale:
description:
- System localization of the Windows Virtual Machine.
input_locale:
description:
- Input localization of the Windows Virtual Machine.
windows_license_key:
description:
- License key to be set to Windows Virtual Machine.
user_name:
description:
- Username to be used to set the password for the Windows Virtual Machine.
root_password:
description:
- Password to be set for username to Windows Virtual Machine.
cloud_init:
description:
- Dictionary with values for Unix-like Virtual Machine initialization using cloud init.
suboptions:
host_name:
description:
- Hostname to be set to Virtual Machine when deployed.
timezone:
description:
- Timezone to be set to Virtual Machine when deployed.
user_name:
description:
- Username to be used to set password to Virtual Machine when deployed.
root_password:
description:
- Password to be set for user specified by C(user_name) parameter.
authorized_ssh_keys:
description:
- Use this SSH keys to login to Virtual Machine.
regenerate_ssh_keys:
description:
- If I(True) SSH keys will be regenerated on Virtual Machine.
type: bool
custom_script:
description:
- Cloud-init script which will be executed on Virtual Machine when deployed.
- This is appended to the end of the cloud-init script generated by any other options.
dns_servers:
description:
- DNS servers to be configured on Virtual Machine.
dns_search:
description:
- DNS search domains to be configured on Virtual Machine.
nic_boot_protocol:
description:
- Set boot protocol of the network interface of Virtual Machine.
choices: ['none', 'dhcp', 'static']
nic_ip_address:
description:
- If boot protocol is static, set this IP address to network interface of Virtual Machine.
nic_netmask:
description:
- If boot protocol is static, set this netmask to network interface of Virtual Machine.
nic_gateway:
description:
- If boot protocol is static, set this gateway to network interface of Virtual Machine.
nic_boot_protocol_v6:
description:
- Set boot protocol of the network interface of Virtual Machine.
choices: ['none', 'dhcp', 'static']
version_added: "2.9"
nic_ip_address_v6:
description:
- If boot protocol is static, set this IP address to network interface of Virtual Machine.
version_added: "2.9"
nic_netmask_v6:
description:
- If boot protocol is static, set this netmask to network interface of Virtual Machine.
version_added: "2.9"
nic_gateway_v6:
description:
- If boot protocol is static, set this gateway to network interface of Virtual Machine.
- For IPv6 addresses the value is an integer in the range of 0-128, which represents the subnet prefix.
version_added: "2.9"
nic_name:
description:
- Set name to network interface of Virtual Machine.
nic_on_boot:
description:
- If I(True) network interface will be set to start on boot.
type: bool
cloud_init_nics:
description:
- List of dictionaries representing network interfaces to be setup by cloud init.
- This option is used when the user needs to set up more network interfaces via cloud init.
- If one network interface is enough, user should use C(cloud_init) I(nic_*) parameters. C(cloud_init) I(nic_*) parameters
are merged with C(cloud_init_nics) parameters.
suboptions:
nic_boot_protocol:
description:
- Set boot protocol of the network interface of Virtual Machine. Can be one of C(none), C(dhcp) or C(static).
nic_ip_address:
description:
- If boot protocol is static, set this IP address to network interface of Virtual Machine.
nic_netmask:
description:
- If boot protocol is static, set this netmask to network interface of Virtual Machine.
nic_gateway:
description:
- If boot protocol is static, set this gateway to network interface of Virtual Machine.
nic_boot_protocol_v6:
description:
- Set boot protocol of the network interface of Virtual Machine. Can be one of C(none), C(dhcp) or C(static).
version_added: "2.9"
nic_ip_address_v6:
description:
- If boot protocol is static, set this IP address to network interface of Virtual Machine.
version_added: "2.9"
nic_netmask_v6:
description:
- If boot protocol is static, set this netmask to network interface of Virtual Machine.
version_added: "2.9"
nic_gateway_v6:
description:
- If boot protocol is static, set this gateway to network interface of Virtual Machine.
- For IPv6 addresses the value is an integer in the range of 0-128, which represents the subnet prefix.
version_added: "2.9"
nic_name:
description:
- Set name to network interface of Virtual Machine.
nic_on_boot:
description:
- If I(True) network interface will be set to start on boot.
type: bool
version_added: "2.3"
cloud_init_persist:
description:
- "If I(yes) the C(cloud_init) or C(sysprep) parameters will be saved for the virtual machine
and the virtual machine won't be started as run-once."
type: bool
version_added: "2.5"
aliases: [ 'sysprep_persist' ]
default: 'no'
kernel_params_persist:
description:
- "If I(true) C(kernel_params), C(initrd_path) and C(kernel_path) will persist in virtual machine configuration,
if I(False) it will be used for run once."
- Usable with oVirt 4.3 and lower; removed in oVirt 4.4.
type: bool
version_added: "2.8"
kernel_path:
description:
- Path to a kernel image used to boot the virtual machine.
- Kernel image must be stored on either the ISO domain or on the host's storage.
- Usable with oVirt 4.3 and lower; removed in oVirt 4.4.
version_added: "2.3"
initrd_path:
description:
- Path to an initial ramdisk to be used with the kernel specified by C(kernel_path) option.
- Ramdisk image must be stored on either the ISO domain or on the host's storage.
- Usable with oVirt 4.3 and lower; removed in oVirt 4.4.
version_added: "2.3"
kernel_params:
description:
- Kernel command line parameters (formatted as string) to be used with the kernel specified by C(kernel_path) option.
- Usable with oVirt 4.3 and lower; removed in oVirt 4.4.
version_added: "2.3"
instance_type:
description:
- Name of virtual machine's hardware configuration.
- By default no instance type is used.
version_added: "2.3"
description:
description:
- Description of the Virtual Machine.
version_added: "2.3"
comment:
description:
- Comment of the Virtual Machine.
version_added: "2.3"
timezone:
description:
- Sets time zone offset of the guest hardware clock.
- For example C(Etc/GMT)
version_added: "2.3"
serial_policy:
description:
- Specify a serial number policy for the Virtual Machine.
- Following options are supported.
- C(vm) - Sets the Virtual Machine's UUID as its serial number.
- C(host) - Sets the host's UUID as the Virtual Machine's serial number.
- C(custom) - Allows you to specify a custom serial number in C(serial_policy_value).
choices: ['vm', 'host', 'custom']
version_added: "2.3"
serial_policy_value:
description:
- Allows you to specify a custom serial number.
- This parameter is used only when C(serial_policy) is I(custom).
version_added: "2.3"
vmware:
description:
- Dictionary of values to be used to connect to VMware and import
a virtual machine to oVirt.
suboptions:
username:
description:
- The username to authenticate against the VMware.
password:
description:
- The password to authenticate against the VMware.
url:
description:
- The URL to be passed to the I(virt-v2v) tool for conversion.
- For example I(vpx://wmware_user@vcenter-host/DataCenter/Cluster/esxi-host?no_verify=1)
drivers_iso:
description:
- The name of the ISO containing drivers that can be used during the I(virt-v2v) conversion process.
sparse:
description:
- Specifies the disk allocation policy of the resulting virtual machine. I(true) for sparse, I(false) for preallocated.
type: bool
default: true
storage_domain:
description:
- Specifies the target storage domain for converted disks. This is a required parameter.
version_added: "2.3"
xen:
description:
- Dictionary of values to be used to connect to XEN and import
a virtual machine to oVirt.
suboptions:
url:
description:
- The URL to be passed to the I(virt-v2v) tool for conversion.
- For example I(xen+ssh://[email protected]). This is a required parameter.
drivers_iso:
description:
- The name of the ISO containing drivers that can be used during the I(virt-v2v) conversion process.
sparse:
description:
- Specifies the disk allocation policy of the resulting virtual machine. I(true) for sparse, I(false) for preallocated.
type: bool
default: true
storage_domain:
description:
- Specifies the target storage domain for converted disks. This is a required parameter.
version_added: "2.3"
kvm:
description:
- Dictionary of values to be used to connect to kvm and import
a virtual machine to oVirt.
suboptions:
name:
description:
- The name of the KVM virtual machine.
username:
description:
- The username to authenticate against the KVM.
password:
description:
- The password to authenticate against the KVM.
url:
description:
- The URL to be passed to the I(virt-v2v) tool for conversion.
- For example I(qemu:///system). This is a required parameter.
drivers_iso:
description:
- The name of the ISO containing drivers that can be used during the I(virt-v2v) conversion process.
sparse:
description:
- Specifies the disk allocation policy of the resulting virtual machine. I(true) for sparse, I(false) for preallocated.
type: bool
default: true
storage_domain:
description:
- Specifies the target storage domain for converted disks. This is a required parameter.
version_added: "2.3"
cpu_mode:
description:
- "CPU mode of the virtual machine. It can be some of the following: I(host_passthrough), I(host_model) or I(custom)."
- "For I(host_passthrough) CPU type you need to set C(placement_policy) to I(pinned)."
- "If no value is passed, default value is set by oVirt/RHV engine."
version_added: "2.5"
placement_policy:
description:
- "The configuration of the virtual machine's placement policy."
- "If no value is passed, default value is set by oVirt/RHV engine."
- "Placement policy can be one of the following values:"
suboptions:
migratable:
description:
- "Allow manual and automatic migration."
pinned:
description:
- "Do not allow migration."
user_migratable:
description:
- "Allow manual migration only."
version_added: "2.5"
ticket:
description:
- "If I(true), in addition return I(remote_vv_file) inside I(vm) dictionary, which contains compatible
content for remote-viewer application. Works only C(state) is I(running)."
version_added: "2.7"
type: bool
cpu_pinning:
description:
- "CPU Pinning topology to map virtual machine CPU to host CPU."
- "CPU Pinning topology is a list of dictionary which can have following values:"
suboptions:
cpu:
description:
- "Number of the host CPU."
vcpu:
description:
- "Number of the virtual machine CPU."
version_added: "2.5"
soundcard_enabled:
description:
- "If I(true), the sound card is added to the virtual machine."
type: bool
version_added: "2.5"
smartcard_enabled:
description:
- "If I(true), use smart card authentication."
type: bool
version_added: "2.5"
io_threads:
description:
- "Number of IO threads used by virtual machine. I(0) means IO threading disabled."
version_added: "2.5"
ballooning_enabled:
description:
- "If I(true), use memory ballooning."
- "Memory balloon is a guest device, which may be used to re-distribute / reclaim the host memory
based on VM needs in a dynamic way. In this way it's possible to create memory over commitment states."
type: bool
version_added: "2.5"
numa_tune_mode:
description:
- "Set how the memory allocation for NUMA nodes of this VM is applied (relevant if NUMA nodes are set for this VM)."
- "It can be one of the following: I(interleave), I(preferred) or I(strict)."
- "If no value is passed, default value is set by oVirt/RHV engine."
choices: ['interleave', 'preferred', 'strict']
version_added: "2.6"
numa_nodes:
description:
- "List of vNUMA Nodes to set for this VM and pin them to assigned host's physical NUMA node."
- "Each vNUMA node is described by following dictionary:"
suboptions:
index:
description:
- "The index of this NUMA node (mandatory)."
memory:
description:
- "Memory size of the NUMA node in MiB (mandatory)."
cores:
description:
- "list of VM CPU cores indexes to be included in this NUMA node (mandatory)."
numa_node_pins:
description:
- "list of physical NUMA node indexes to pin this virtual NUMA node to."
version_added: "2.6"
rng_device:
description:
- "Random number generator (RNG). You can choose of one the following devices I(urandom), I(random) or I(hwrng)."
- "In order to select I(hwrng), you must have it enabled on cluster first."
- "/dev/urandom is used for cluster version >= 4.1, and /dev/random for cluster version <= 4.0"
version_added: "2.5"
custom_properties:
description:
- "Properties sent to VDSM to configure various hooks."
- "Custom properties is a list of dictionary which can have following values:"
suboptions:
name:
description:
- "Name of the custom property. For example: I(hugepages), I(vhost), I(sap_agent), etc."
regexp:
description:
- "Regular expression to set for custom property."
value:
description:
- "Value to set for custom property."
version_added: "2.5"
watchdog:
description:
- "Assign watchdog device for the virtual machine."
- "Watchdogs is a dictionary which can have following values:"
suboptions:
model:
description:
- "Model of the watchdog device. For example: I(i6300esb), I(diag288) or I(null)."
action:
description:
- "Watchdog action to be performed when watchdog is triggered. For example: I(none), I(reset), I(poweroff), I(pause) or I(dump)."
version_added: "2.5"
graphical_console:
description:
- "Assign graphical console to the virtual machine."
suboptions:
headless_mode:
description:
- If I(true), disable the graphics console for this virtual machine.
type: bool
protocol:
description:
- Graphical protocol, a list of I(spice), I(vnc), or both.
version_added: "2.5"
exclusive:
description:
- "When C(state) is I(exported) this parameter indicates if the existing VM with the
same name should be overwritten."
version_added: "2.8"
type: bool
export_domain:
description:
- "When C(state) is I(exported)this parameter specifies the name of the export storage domain."
version_added: "2.8"
export_ova:
description:
- Dictionary of values to be used to export VM as OVA.
suboptions:
host:
description:
- The name of the destination host where the OVA has to be exported.
directory:
description:
- The name of the directory where the OVA has to be exported.
filename:
description:
- The name of the exported OVA file.
version_added: "2.8"
force_migrate:
description:
- If I(true), the VM will migrate when I(placement_policy=user_migratable) but not when I(placement_policy=pinned).
version_added: "2.8"
type: bool
migrate:
description:
- "If I(true), the VM will migrate to any available host."
version_added: "2.8"
type: bool
next_run:
description:
- "If I(true), the update will not be applied to the VM immediately and will be only applied when virtual machine is restarted."
- NOTE - If there are multiple next run configuration changes on the VM, the first change may get reverted if this option is not passed.
version_added: "2.8"
type: bool
snapshot_name:
description:
- "Snapshot to clone VM from."
- "Snapshot with description specified should exist."
- "You have to specify C(snapshot_vm) parameter with virtual machine name of this snapshot."
version_added: "2.9"
snapshot_vm:
description:
- "Source VM to clone VM from."
- "VM should have snapshot specified by C(snapshot)."
- "If C(snapshot_name) specified C(snapshot_vm) is required."
version_added: "2.9"
custom_emulated_machine:
description:
- "Sets the value of the custom_emulated_machine attribute."
version_added: "2.10"
notes:
- If the VM is in I(UNASSIGNED) or I(UNKNOWN) state before any operation, the module will fail.
If the VM is in I(IMAGE_LOCKED) state before any operation, we try to wait for the VM to be I(DOWN).
If the VM is in I(SAVING_STATE) state before any operation, we try to wait for the VM to be I(SUSPENDED).
If the VM is in I(POWERING_DOWN) state before any operation, we try to wait for the VM to be I(UP) or I(DOWN). The VM can
get into I(UP) state from I(POWERING_DOWN) state when there is no ACPI or guest agent running inside the VM, or
if the shutdown operation fails.
When the user specifies the I(UP) C(state), we always wait for the VM to be in I(UP) state in case the VM is I(MIGRATING),
I(REBOOTING), I(POWERING_UP), I(RESTORING_STATE) or I(WAIT_FOR_LAUNCH). In other states we run the start operation on the VM.
When the user specifies the I(stopped) C(state) and passes the C(force) parameter set to I(true), we forcibly stop the VM in
any state. If the user does not pass the C(force) parameter, we always wait for the VM to be in I(UP) state in case the VM is
I(MIGRATING), I(REBOOTING), I(POWERING_UP), I(RESTORING_STATE) or I(WAIT_FOR_LAUNCH). If the VM is in I(PAUSED) or
I(SUSPENDED) state, we start the VM first. Then we gracefully shut down the VM.
When the user specifies the I(suspended) C(state), we always wait for the VM to be in I(UP) state in case the VM is I(MIGRATING),
I(REBOOTING), I(POWERING_UP), I(RESTORING_STATE) or I(WAIT_FOR_LAUNCH). If the VM is in I(PAUSED) or I(DOWN) state,
we start the VM first. Then we suspend the VM.
When the user specifies the I(absent) C(state), we forcibly stop the VM in any state and remove it.
extends_documentation_fragment: ovirt
'''
EXAMPLES = '''
# Examples don't contain auth parameter for simplicity,
# look at ovirt_auth module to see how to reuse authentication:
- name: Creates a new Virtual Machine from template named 'rhel7_template'
ovirt_vm:
state: present
name: myvm
template: rhel7_template
cluster: mycluster
- name: Register VM
ovirt_vm:
state: registered
storage_domain: mystorage
cluster: mycluster
name: myvm
- name: Register VM using id
ovirt_vm:
state: registered
storage_domain: mystorage
cluster: mycluster
id: 1111-1111-1111-1111
- name: Register VM, allowing partial import
ovirt_vm:
state: registered
storage_domain: mystorage
allow_partial_import: "True"
cluster: mycluster
id: 1111-1111-1111-1111
- name: Register VM with vnic profile mappings and reassign bad macs
ovirt_vm:
state: registered
storage_domain: mystorage
cluster: mycluster
id: 1111-1111-1111-1111
vnic_profile_mappings:
- source_network_name: mynetwork
source_profile_name: mynetwork
target_profile_id: 3333-3333-3333-3333
- source_network_name: mynetwork2
source_profile_name: mynetwork2
target_profile_id: 4444-4444-4444-4444
reassign_bad_macs: "True"
- name: Register VM with mappings
ovirt_vm:
state: registered
storage_domain: mystorage
cluster: mycluster
id: 1111-1111-1111-1111
role_mappings:
- source_name: Role_A
dest_name: Role_B
domain_mappings:
- source_name: Domain_A
dest_name: Domain_B
lun_mappings:
- source_storage_type: iscsi
source_logical_unit_id: 1IET_000d0001
source_logical_unit_port: 3260
source_logical_unit_portal: 1
source_logical_unit_address: 10.34.63.203
source_logical_unit_target: iqn.2016-08-09.brq.str-01:omachace
dest_storage_type: iscsi
dest_logical_unit_id: 1IET_000d0002
dest_logical_unit_port: 3260
dest_logical_unit_portal: 1
dest_logical_unit_address: 10.34.63.204
dest_logical_unit_target: iqn.2016-08-09.brq.str-02:omachace
affinity_group_mappings:
- source_name: Affinity_A
dest_name: Affinity_B
affinity_label_mappings:
- source_name: Label_A
dest_name: Label_B
cluster_mappings:
- source_name: cluster_A
dest_name: cluster_B
- name: Creates a stateless VM which will always use latest template version
ovirt_vm:
name: myvm
template: rhel7
cluster: mycluster
use_latest_template_version: true
# Creates a new server rhel7 Virtual Machine from Blank template
# on brq01 cluster with 2GiB memory and 2 vcpu cores/sockets
# and attach bootable disk with name rhel7_disk and attach virtio NIC
- ovirt_vm:
state: present
cluster: brq01
name: myvm
memory: 2GiB
cpu_cores: 2
cpu_sockets: 2
cpu_shares: 1024
type: server
operating_system: rhel_7x64
disks:
- name: rhel7_disk
bootable: True
nics:
- name: nic1
# Change VM Name
- ovirt_vm:
id: 00000000-0000-0000-0000-000000000000
name: "new_vm_name"
- name: Run VM with cloud init
ovirt_vm:
name: rhel7
template: rhel7
cluster: Default
memory: 1GiB
high_availability: true
high_availability_priority: 50 # Available from Ansible 2.5
cloud_init:
nic_boot_protocol: static
nic_ip_address: 10.34.60.86
nic_netmask: 255.255.252.0
nic_gateway: 10.34.63.254
nic_name: eth1
nic_on_boot: true
host_name: example.com
custom_script: |
write_files:
- content: |
Hello, world!
path: /tmp/greeting.txt
permissions: '0644'
user_name: root
root_password: super_password
- name: Run VM with cloud init, with multiple network interfaces
ovirt_vm:
name: rhel7_4
template: rhel7
cluster: mycluster
cloud_init_nics:
- nic_name: eth0
nic_boot_protocol: dhcp
nic_on_boot: true
- nic_name: eth1
nic_boot_protocol: static
nic_ip_address: 10.34.60.86
nic_netmask: 255.255.252.0
nic_gateway: 10.34.63.254
nic_on_boot: true
# IP version 6 parameters are supported since ansible 2.9
- nic_name: eth2
nic_boot_protocol_v6: static
nic_ip_address_v6: '2620:52:0:2282:b898:1f69:6512:36c5'
nic_gateway_v6: '2620:52:0:2282:b898:1f69:6512:36c9'
nic_netmask_v6: '120'
nic_on_boot: true
- nic_name: eth3
nic_on_boot: true
nic_boot_protocol_v6: dhcp
- name: Run VM with sysprep
ovirt_vm:
name: windows2012R2_AD
template: windows2012R2
cluster: Default
memory: 3GiB
high_availability: true
sysprep:
host_name: windowsad.example.com
user_name: Administrator
root_password: SuperPassword123
- name: Migrate/Run VM to/on host named 'host1'
ovirt_vm:
state: running
name: myvm
host: host1
- name: Migrate VM to any available host
ovirt_vm:
state: running
name: myvm
migrate: true
- name: Change VMs CD
ovirt_vm:
name: myvm
cd_iso: drivers.iso
- name: Eject VMs CD
ovirt_vm:
name: myvm
cd_iso: ''
- name: Boot VM from CD
ovirt_vm:
name: myvm
cd_iso: centos7_x64.iso
boot_devices:
- cdrom
- name: Stop vm
ovirt_vm:
state: stopped
name: myvm
- name: Upgrade memory to already created VM
ovirt_vm:
name: myvm
memory: 4GiB
- name: Hot plug memory to already created and running VM (VM won't be restarted)
ovirt_vm:
name: myvm
memory: 4GiB
# Create/update a VM to run with two vNUMA nodes and pin them to physical NUMA nodes as follows:
# vnuma index 0-> numa index 0, vnuma index 1-> numa index 1
- name: Create a VM to run with two vNUMA nodes
ovirt_vm:
name: myvm
cluster: mycluster
numa_tune_mode: "interleave"
numa_nodes:
- index: 0
cores: [0]
memory: 20
numa_node_pins: [0]
- index: 1
cores: [1]
memory: 30
numa_node_pins: [1]
- name: Update an existing VM to run without previously created vNUMA nodes (i.e. remove all vNUMA nodes+NUMA pinning setting)
ovirt_vm:
name: myvm
cluster: mycluster
state: "present"
numa_tune_mode: "interleave"
numa_nodes:
- index: -1
# When a change to the VM needs a restart of the VM, use the next_run state.
# The VM will be updated and rebooted if there are any changes.
# If the present state were used, the VM wouldn't be restarted.
- ovirt_vm:
state: next_run
name: myvm
boot_devices:
- network
- name: Import virtual machine from VMware
ovirt_vm:
state: stopped
cluster: mycluster
name: vmware_win10
timeout: 1800
poll_interval: 30
vmware:
url: vpx://[email protected]/Folder1/Cluster1/2.3.4.5?no_verify=1
name: windows10
storage_domain: mynfs
username: user
password: password
- name: Create vm from template and create all disks on specific storage domain
ovirt_vm:
name: vm_test
cluster: mycluster
template: mytemplate
storage_domain: mynfs
nics:
- name: nic1
- name: Remove VM, if VM is running it will be stopped
ovirt_vm:
state: absent
name: myvm
# Defining a specific quota for a VM:
# Since Ansible 2.5
- ovirt_quotas_facts:
data_center: Default
name: myquota
- ovirt_vm:
name: myvm
sso: False
boot_menu: True
usb_support: True
serial_console: True
quota_id: "{{ ovirt_quotas[0]['id'] }}"
- name: Create a VM that has the console configured for both Spice and VNC
ovirt_vm:
name: myvm
template: mytemplate
cluster: mycluster
graphical_console:
protocol:
- spice
- vnc
# Execute remote viewer to VM
- block:
- name: Create a ticket for console for a running VM
ovirt_vm:
name: myvm
ticket: true
state: running
register: myvm
- name: Save ticket to file
copy:
content: "{{ myvm.vm.remote_vv_file }}"
dest: ~/vvfile.vv
- name: Run remote viewer with file
command: remote-viewer ~/vvfile.vv
# Default value of host_device state is present
- name: Attach host devices to virtual machine
ovirt_vm:
name: myvm
host: myhost
placement_policy: pinned
host_devices:
- name: pci_0000_00_06_0
- name: pci_0000_00_07_0
state: absent
- name: pci_0000_00_08_0
state: present
- name: Export the VM as OVA
ovirt_vm:
name: myvm
state: exported
cluster: mycluster
export_ova:
host: myhost
filename: myvm.ova
directory: /tmp/
- name: Clone VM from snapshot
ovirt_vm:
snapshot_vm: myvm
snapshot_name: myvm_snap
name: myvm_clone
state: present
'''
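Several examples above pass memory as human-readable sizes such as I(2GiB); the module converts these to bytes with the C(convert_to_bytes) helper imported from C(ansible.module_utils.ovirt). The sketch below is an illustrative standalone re-implementation of that conversion, not the module's actual code:

```python
# Minimal sketch of converting human-readable sizes such as "2GiB" to bytes,
# analogous in spirit to the convert_to_bytes helper the module imports.
# Illustrative only; the real helper lives in ansible.module_utils.ovirt.

UNITS = {'B': 1, 'KiB': 2 ** 10, 'MiB': 2 ** 20, 'GiB': 2 ** 30, 'TiB': 2 ** 40}


def to_bytes(size):
    # Match the longest unit suffix first, e.g. "2GiB" -> (2, "GiB").
    for suffix, factor in sorted(UNITS.items(), key=lambda kv: -len(kv[0])):
        if size.endswith(suffix):
            return int(size[:-len(suffix)]) * factor
    # No recognised suffix: assume the value is already in bytes.
    return int(size)


assert to_bytes('2GiB') == 2147483648
```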
RETURN = '''
id:
description: ID of the VM which is managed
returned: On success if VM is found.
type: str
sample: 7de90f31-222c-436c-a1ca-7e655bd5b60c
vm:
description: "Dictionary of all the VM attributes. VM attributes can be found on your oVirt/RHV instance
at following url: http://ovirt.github.io/ovirt-engine-api-model/master/#types/vm.
Additionally when user sent ticket=true, this module will return also remote_vv_file
parameter in vm dictionary, which contains remote-viewer compatible file to open virtual
machine console. Please note that this file contains sensible information."
returned: On success if VM is found.
type: dict
'''
import traceback
try:
import ovirtsdk4.types as otypes
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ovirt import (
BaseModule,
check_params,
check_sdk,
convert_to_bytes,
create_connection,
equal,
get_dict_of_struct,
get_entity,
get_link_name,
get_id_by_name,
ovirt_full_argument_spec,
search_by_attributes,
search_by_name,
wait,
engine_supported,
)
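The C(equal) helper imported above drives most of the idempotency checks in C(update_check) below: a desired value of C(None) means the user did not specify the option, so it never counts as a difference. The following is an assumption-based illustrative re-implementation of that pattern, not the actual C(ansible.module_utils.ovirt) code:

```python
# Sketch of the idempotency comparison pattern: None on the desired side
# means "not specified by the user" and therefore never forces an update.


def equal_sketch(desired, current, ignore_case=False):
    if desired is None:
        # Option not given in the playbook: keep whatever the entity has.
        return True
    if ignore_case and isinstance(desired, str) and isinstance(current, str):
        return desired.lower() == current.lower()
    return desired == current


assert equal_sketch(None, 'anything')  # unspecified -> no change needed
```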
class VmsModule(BaseModule):
def __init__(self, *args, **kwargs):
super(VmsModule, self).__init__(*args, **kwargs)
self._initialization = None
self._is_new = False
def __get_template_with_version(self):
"""
oVirt/RHV in version 4.1 doesn't support searching by template+version_number,
so we need to list all templates with a specific name and then iterate
through their versions until we find the one we are looking for.
"""
template = None
templates_service = self._connection.system_service().templates_service()
if self.param('template'):
clusters_service = self._connection.system_service().clusters_service()
cluster = search_by_name(clusters_service, self.param('cluster'))
data_center = self._connection.follow_link(cluster.data_center)
templates = templates_service.list(
search='name=%s and datacenter=%s' % (self.param('template'), data_center.name)
)
if self.param('template_version'):
templates = [
t for t in templates
if t.version.version_number == self.param('template_version')
]
if not templates:
raise ValueError(
"Template with name '%s' and version '%s' in data center '%s' was not found'" % (
self.param('template'),
self.param('template_version'),
data_center.name
)
)
template = sorted(templates, key=lambda t: t.version.version_number, reverse=True)[0]
elif self._is_new:
# If template isn't specified and VM is about to be created specify default template:
template = templates_service.template_service('00000000-0000-0000-0000-000000000000').get()
return template
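The version-selection step above sorts the filtered templates by C(version_number) in descending order and takes the first one; the same idea can be expressed with C(max). A standalone sketch, using C(SimpleNamespace) to stand in for the SDK's template objects (an assumption for this illustration; the real objects come from C(ovirtsdk4)):

```python
# Pick the template carrying the highest version number, equivalent to
# sorted(templates, key=lambda t: t.version.version_number, reverse=True)[0].
from types import SimpleNamespace


def latest_template(templates):
    return max(templates, key=lambda t: t.version.version_number)


templates = [
    SimpleNamespace(name='rhel7', version=SimpleNamespace(version_number=1)),
    SimpleNamespace(name='rhel7', version=SimpleNamespace(version_number=3)),
    SimpleNamespace(name='rhel7', version=SimpleNamespace(version_number=2)),
]
assert latest_template(templates).version.version_number == 3
```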
def __get_storage_domain_and_all_template_disks(self, template):
if self.param('template') is None:
return None
if self.param('storage_domain') is None:
return None
disks = list()
for att in self._connection.follow_link(template.disk_attachments):
disks.append(
otypes.DiskAttachment(
disk=otypes.Disk(
id=att.disk.id,
format=otypes.DiskFormat(self.param('disk_format')),
storage_domains=[
otypes.StorageDomain(
id=get_id_by_name(
self._connection.system_service().storage_domains_service(),
self.param('storage_domain')
)
)
]
)
)
)
return disks
def __get_snapshot(self):
if self.param('snapshot_vm') is None:
return None
if self.param('snapshot_name') is None:
return None
vms_service = self._connection.system_service().vms_service()
vm_id = get_id_by_name(vms_service, self.param('snapshot_vm'))
vm_service = vms_service.vm_service(vm_id)
snaps_service = vm_service.snapshots_service()
snaps = snaps_service.list()
snap = next(
(s for s in snaps if s.description == self.param('snapshot_name')),
None
)
return snap
def __get_cluster(self):
if self.param('cluster') is not None:
return self.param('cluster')
elif self.param('snapshot_name') is not None and self.param('snapshot_vm') is not None:
vms_service = self._connection.system_service().vms_service()
vm = search_by_name(vms_service, self.param('snapshot_vm'))
return self._connection.system_service().clusters_service().cluster_service(vm.cluster.id).get().name
def build_entity(self):
template = self.__get_template_with_version()
cluster = self.__get_cluster()
snapshot = self.__get_snapshot()
disk_attachments = self.__get_storage_domain_and_all_template_disks(template)
return otypes.Vm(
id=self.param('id'),
name=self.param('name'),
cluster=otypes.Cluster(
name=cluster
) if cluster else None,
disk_attachments=disk_attachments,
template=otypes.Template(
id=template.id,
) if template else None,
use_latest_template_version=self.param('use_latest_template_version'),
stateless=self.param('stateless') or self.param('use_latest_template_version'),
delete_protected=self.param('delete_protected'),
custom_emulated_machine=self.param('custom_emulated_machine'),
bios=(
otypes.Bios(boot_menu=otypes.BootMenu(enabled=self.param('boot_menu')))
) if self.param('boot_menu') is not None else None,
console=(
otypes.Console(enabled=self.param('serial_console'))
) if self.param('serial_console') is not None else None,
usb=(
otypes.Usb(enabled=self.param('usb_support'))
) if self.param('usb_support') is not None else None,
sso=(
otypes.Sso(
methods=[otypes.Method(id=otypes.SsoMethod.GUEST_AGENT)] if self.param('sso') else []
)
) if self.param('sso') is not None else None,
quota=otypes.Quota(id=self._module.params.get('quota_id')) if self.param('quota_id') is not None else None,
high_availability=otypes.HighAvailability(
enabled=self.param('high_availability'),
priority=self.param('high_availability_priority'),
) if self.param('high_availability') is not None or self.param('high_availability_priority') else None,
lease=otypes.StorageDomainLease(
storage_domain=otypes.StorageDomain(
id=get_id_by_name(
service=self._connection.system_service().storage_domains_service(),
name=self.param('lease')
) if self.param('lease') else None
)
) if self.param('lease') is not None else None,
cpu=otypes.Cpu(
topology=otypes.CpuTopology(
cores=self.param('cpu_cores'),
sockets=self.param('cpu_sockets'),
threads=self.param('cpu_threads'),
) if any((
self.param('cpu_cores'),
self.param('cpu_sockets'),
self.param('cpu_threads')
)) else None,
cpu_tune=otypes.CpuTune(
vcpu_pins=[
otypes.VcpuPin(vcpu=int(pin['vcpu']), cpu_set=str(pin['cpu'])) for pin in self.param('cpu_pinning')
],
) if self.param('cpu_pinning') else None,
mode=otypes.CpuMode(self.param('cpu_mode')) if self.param('cpu_mode') else None,
) if any((
self.param('cpu_cores'),
self.param('cpu_sockets'),
self.param('cpu_threads'),
self.param('cpu_mode'),
self.param('cpu_pinning')
)) else None,
cpu_shares=self.param('cpu_shares'),
os=otypes.OperatingSystem(
type=self.param('operating_system'),
boot=otypes.Boot(
devices=[
otypes.BootDevice(dev) for dev in self.param('boot_devices')
],
) if self.param('boot_devices') else None,
cmdline=self.param('kernel_params') if self.param('kernel_params_persist') else None,
initrd=self.param('initrd_path') if self.param('kernel_params_persist') else None,
kernel=self.param('kernel_path') if self.param('kernel_params_persist') else None,
) if (
self.param('operating_system') or self.param('boot_devices') or self.param('kernel_params_persist')
) else None,
type=otypes.VmType(
self.param('type')
) if self.param('type') else None,
memory=convert_to_bytes(
self.param('memory')
) if self.param('memory') else None,
memory_policy=otypes.MemoryPolicy(
guaranteed=convert_to_bytes(self.param('memory_guaranteed')),
ballooning=self.param('ballooning_enabled'),
max=convert_to_bytes(self.param('memory_max')),
) if any((
self.param('memory_guaranteed'),
self.param('ballooning_enabled') is not None,
self.param('memory_max')
)) else None,
instance_type=otypes.InstanceType(
id=get_id_by_name(
self._connection.system_service().instance_types_service(),
self.param('instance_type'),
),
) if self.param('instance_type') else None,
custom_compatibility_version=otypes.Version(
major=self._get_major(self.param('custom_compatibility_version')),
minor=self._get_minor(self.param('custom_compatibility_version')),
) if self.param('custom_compatibility_version') is not None else None,
description=self.param('description'),
comment=self.param('comment'),
time_zone=otypes.TimeZone(
name=self.param('timezone'),
) if self.param('timezone') else None,
serial_number=otypes.SerialNumber(
policy=otypes.SerialNumberPolicy(self.param('serial_policy')),
value=self.param('serial_policy_value'),
) if (
self.param('serial_policy') is not None or
self.param('serial_policy_value') is not None
) else None,
placement_policy=otypes.VmPlacementPolicy(
affinity=otypes.VmAffinity(self.param('placement_policy')),
hosts=[
otypes.Host(name=self.param('host')),
] if self.param('host') else None,
) if self.param('placement_policy') else None,
soundcard_enabled=self.param('soundcard_enabled'),
display=otypes.Display(
smartcard_enabled=self.param('smartcard_enabled')
) if self.param('smartcard_enabled') is not None else None,
io=otypes.Io(
threads=self.param('io_threads'),
) if self.param('io_threads') is not None else None,
numa_tune_mode=otypes.NumaTuneMode(
self.param('numa_tune_mode')
) if self.param('numa_tune_mode') else None,
rng_device=otypes.RngDevice(
source=otypes.RngSource(self.param('rng_device')),
) if self.param('rng_device') else None,
custom_properties=[
otypes.CustomProperty(
name=cp.get('name'),
regexp=cp.get('regexp'),
value=str(cp.get('value')),
) for cp in self.param('custom_properties') if cp
] if self.param('custom_properties') is not None else None,
initialization=self.get_initialization() if self.param('cloud_init_persist') else None,
snapshots=[otypes.Snapshot(id=snapshot.id)] if snapshot is not None else None,
)
def _get_export_domain_service(self):
provider_name = self._module.params['export_domain']
export_sds_service = self._connection.system_service().storage_domains_service()
export_sd_id = get_id_by_name(export_sds_service, provider_name)
return export_sds_service.service(export_sd_id)
def post_export_action(self, entity):
self._service = self._get_export_domain_service().vms_service()
def update_check(self, entity):
res = self._update_check(entity)
if entity.next_run_configuration_exists:
res = res and self._update_check(self._service.service(entity.id).get(next_run=True))
return res
def _update_check(self, entity):
def check_cpu_pinning():
if self.param('cpu_pinning'):
current = []
if entity.cpu.cpu_tune:
current = [(str(pin.cpu_set), int(pin.vcpu)) for pin in entity.cpu.cpu_tune.vcpu_pins]
passed = [(str(pin['cpu']), int(pin['vcpu'])) for pin in self.param('cpu_pinning')]
return sorted(current) == sorted(passed)
return True
def check_custom_properties():
if self.param('custom_properties'):
current = []
if entity.custom_properties:
current = [(cp.name, cp.regexp, str(cp.value)) for cp in entity.custom_properties]
passed = [(cp.get('name'), cp.get('regexp'), str(cp.get('value'))) for cp in self.param('custom_properties') if cp]
return sorted(current) == sorted(passed)
return True
def check_host():
if self.param('host') is not None:
return self.param('host') in [self._connection.follow_link(host).name for host in getattr(entity.placement_policy, 'hosts', None) or []]
return True
def check_custom_compatibility_version():
if self.param('custom_compatibility_version') is not None:
return (self._get_minor(self.param('custom_compatibility_version')) == self._get_minor(entity.custom_compatibility_version) and
self._get_major(self.param('custom_compatibility_version')) == self._get_major(entity.custom_compatibility_version))
return True
cpu_mode = getattr(entity.cpu, 'mode')
vm_display = entity.display
return (
check_cpu_pinning() and
check_custom_properties() and
check_host() and
check_custom_compatibility_version() and
not self.param('cloud_init_persist') and
not self.param('kernel_params_persist') and
equal(self.param('cluster'), get_link_name(self._connection, entity.cluster)) and equal(convert_to_bytes(self.param('memory')), entity.memory) and
equal(convert_to_bytes(self.param('memory_guaranteed')), entity.memory_policy.guaranteed) and
equal(convert_to_bytes(self.param('memory_max')), entity.memory_policy.max) and
equal(self.param('cpu_cores'), entity.cpu.topology.cores) and
equal(self.param('cpu_sockets'), entity.cpu.topology.sockets) and
equal(self.param('cpu_threads'), entity.cpu.topology.threads) and
equal(self.param('cpu_mode'), str(cpu_mode) if cpu_mode else None) and
equal(self.param('type'), str(entity.type)) and
equal(self.param('name'), str(entity.name)) and
equal(self.param('operating_system'), str(entity.os.type)) and
equal(self.param('boot_menu'), entity.bios.boot_menu.enabled) and
equal(self.param('soundcard_enabled'), entity.soundcard_enabled) and
equal(self.param('smartcard_enabled'), getattr(vm_display, 'smartcard_enabled', False)) and
equal(self.param('io_threads'), entity.io.threads) and
equal(self.param('ballooning_enabled'), entity.memory_policy.ballooning) and
equal(self.param('serial_console'), getattr(entity.console, 'enabled', None)) and
equal(self.param('usb_support'), entity.usb.enabled) and
equal(self.param('sso'), True if entity.sso.methods else False) and
equal(self.param('quota_id'), getattr(entity.quota, 'id', None)) and
equal(self.param('high_availability'), entity.high_availability.enabled) and
equal(self.param('high_availability_priority'), entity.high_availability.priority) and
equal(self.param('lease'), get_link_name(self._connection, getattr(entity.lease, 'storage_domain', None))) and
equal(self.param('stateless'), entity.stateless) and
equal(self.param('cpu_shares'), entity.cpu_shares) and
equal(self.param('delete_protected'), entity.delete_protected) and
equal(self.param('custom_emulated_machine'), entity.custom_emulated_machine) and
equal(self.param('use_latest_template_version'), entity.use_latest_template_version) and
equal(self.param('boot_devices'), [str(dev) for dev in getattr(entity.os.boot, 'devices', [])]) and
equal(self.param('instance_type'), get_link_name(self._connection, entity.instance_type), ignore_case=True) and
equal(self.param('description'), entity.description) and
equal(self.param('comment'), entity.comment) and
equal(self.param('timezone'), getattr(entity.time_zone, 'name', None)) and
equal(self.param('serial_policy'), str(getattr(entity.serial_number, 'policy', None))) and
equal(self.param('serial_policy_value'), getattr(entity.serial_number, 'value', None)) and
equal(self.param('placement_policy'), str(entity.placement_policy.affinity) if entity.placement_policy else None) and
equal(self.param('numa_tune_mode'), str(entity.numa_tune_mode)) and
equal(self.param('rng_device'), str(entity.rng_device.source) if entity.rng_device else None)
)
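The C(check_cpu_pinning) and C(check_custom_properties) helpers above normalise both the current and the desired settings into tuples and compare them sorted, so a mere reordering of the list does not trigger a spurious update. A standalone sketch of that pattern (illustrative, with hypothetical input dictionaries):

```python
# Order-insensitive comparison of CPU pinning lists: both sides are
# normalised to (cpu_set, vcpu) tuples and compared after sorting.


def pinning_matches(current_pins, desired_pins):
    current = sorted((str(p['cpu']), int(p['vcpu'])) for p in current_pins)
    desired = sorted((str(p['cpu']), int(p['vcpu'])) for p in desired_pins)
    return current == desired


assert pinning_matches(
    [{'cpu': '0', 'vcpu': 1}, {'cpu': '1', 'vcpu': 0}],
    [{'cpu': '1', 'vcpu': 0}, {'cpu': '0', 'vcpu': 1}],
)  # order differs, content matches
```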
def pre_create(self, entity):
# Mark if entity exists before touching it:
if entity is None:
self._is_new = True
def post_update(self, entity):
self.post_present(entity.id)
def post_present(self, entity_id):
# After creation of the VM, attach disks and NICs:
entity = self._service.service(entity_id).get()
self.__attach_disks(entity)
self.__attach_nics(entity)
self._attach_cd(entity)
self.changed = self.__attach_numa_nodes(entity)
self.changed = self.__attach_watchdog(entity)
self.changed = self.__attach_graphical_console(entity)
self.changed = self.__attach_host_devices(entity)
def pre_remove(self, entity):
# Forcibly stop the VM, if it's not in DOWN state:
if entity.status != otypes.VmStatus.DOWN:
if not self._module.check_mode:
self.changed = self.action(
action='stop',
action_condition=lambda vm: vm.status != otypes.VmStatus.DOWN,
wait_condition=lambda vm: vm.status == otypes.VmStatus.DOWN,
)['changed']
def __suspend_shutdown_common(self, vm_service):
if vm_service.get().status in [
otypes.VmStatus.MIGRATING,
otypes.VmStatus.POWERING_UP,
otypes.VmStatus.REBOOT_IN_PROGRESS,
otypes.VmStatus.WAIT_FOR_LAUNCH,
otypes.VmStatus.UP,
otypes.VmStatus.RESTORING_STATE,
]:
self._wait_for_UP(vm_service)
def _pre_shutdown_action(self, entity):
vm_service = self._service.vm_service(entity.id)
self.__suspend_shutdown_common(vm_service)
if entity.status in [otypes.VmStatus.SUSPENDED, otypes.VmStatus.PAUSED]:
vm_service.start()
self._wait_for_UP(vm_service)
return vm_service.get()
def _pre_suspend_action(self, entity):
vm_service = self._service.vm_service(entity.id)
self.__suspend_shutdown_common(vm_service)
if entity.status in [otypes.VmStatus.PAUSED, otypes.VmStatus.DOWN]:
vm_service.start()
self._wait_for_UP(vm_service)
return vm_service.get()
def _post_start_action(self, entity):
vm_service = self._service.service(entity.id)
self._wait_for_UP(vm_service)
self._attach_cd(vm_service.get())
def _attach_cd(self, entity):
cd_iso = self.param('cd_iso')
if cd_iso is not None:
vm_service = self._service.service(entity.id)
current = vm_service.get().status == otypes.VmStatus.UP and self.param('state') == 'running'
cdroms_service = vm_service.cdroms_service()
cdrom_device = cdroms_service.list()[0]
cdrom_service = cdroms_service.cdrom_service(cdrom_device.id)
cdrom = cdrom_service.get(current=current)
if getattr(cdrom.file, 'id', '') != cd_iso:
if not self._module.check_mode:
cdrom_service.update(
cdrom=otypes.Cdrom(
file=otypes.File(id=cd_iso)
),
current=current,
)
self.changed = True
return entity
def _migrate_vm(self, entity):
vm_host = self.param('host')
vm_service = self._service.vm_service(entity.id)
# The VM can only be migrated once it is UP:
if entity.status == otypes.VmStatus.UP:
if vm_host is not None:
hosts_service = self._connection.system_service().hosts_service()
current_vm_host = hosts_service.host_service(entity.host.id).get().name
if vm_host != current_vm_host:
if not self._module.check_mode:
vm_service.migrate(host=otypes.Host(name=vm_host), force=self.param('force_migrate'))
self._wait_for_UP(vm_service)
self.changed = True
elif self.param('migrate'):
if not self._module.check_mode:
vm_service.migrate(force=self.param('force_migrate'))
self._wait_for_UP(vm_service)
self.changed = True
return entity
def _wait_for_UP(self, vm_service):
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.UP,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
def _wait_for_vm_disks(self, vm_service):
disks_service = self._connection.system_service().disks_service()
for da in vm_service.disk_attachments_service().list():
disk_service = disks_service.disk_service(da.disk.id)
wait(
service=disk_service,
condition=lambda disk: disk.status == otypes.DiskStatus.OK if disk.storage_type == otypes.DiskStorageType.IMAGE else True,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
def wait_for_down(self, vm):
"""
This function first waits for the VM to reach the DOWN status.
For stateless VMs, it then finds the active snapshot and waits until
its state is OK and the stateless snapshot is removed.
"""
vm_service = self._service.vm_service(vm.id)
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.DOWN,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
if vm.stateless:
snapshots_service = vm_service.snapshots_service()
snapshots = snapshots_service.list()
snap_active = [
snap for snap in snapshots
if snap.snapshot_type == otypes.SnapshotType.ACTIVE
][0]
snap_stateless = [
snap for snap in snapshots
if snap.snapshot_type == otypes.SnapshotType.STATELESS
]
# The stateless snapshot may already have been removed:
if snap_stateless:
"""
We need to wait for the active snapshot to be removed, as it is the
current stateless snapshot. Then we need to wait for the stateless
snapshot to be ready for use, because it will become the active snapshot.
"""
wait(
service=snapshots_service.snapshot_service(snap_active.id),
condition=lambda snap: snap is None,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
wait(
service=snapshots_service.snapshot_service(snap_stateless[0].id),
condition=lambda snap: snap.snapshot_status == otypes.SnapshotStatus.OK,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
return True
def __attach_graphical_console(self, entity):
graphical_console = self.param('graphical_console')
if not graphical_console:
return False
vm_service = self._service.service(entity.id)
gcs_service = vm_service.graphics_consoles_service()
graphical_consoles = gcs_service.list()
# Remove all graphical consoles if there are any:
if bool(graphical_console.get('headless_mode')):
if not self._module.check_mode:
for gc in graphical_consoles:
gcs_service.console_service(gc.id).remove()
return len(graphical_consoles) > 0
# If there are no graphical consoles yet, add the requested ones:
protocol = graphical_console.get('protocol')
if isinstance(protocol, str):
protocol = [protocol]
current_protocols = [str(gc.protocol) for gc in graphical_consoles]
if not current_protocols:
if not self._module.check_mode:
for p in protocol:
gcs_service.add(
otypes.GraphicsConsole(
protocol=otypes.GraphicsType(p),
)
)
return True
# Update consoles:
if sorted(protocol) != sorted(current_protocols):
if not self._module.check_mode:
for gc in graphical_consoles:
gcs_service.console_service(gc.id).remove()
for p in protocol:
gcs_service.add(
otypes.GraphicsConsole(
protocol=otypes.GraphicsType(p),
)
)
return True
def __attach_disks(self, entity):
if not self.param('disks'):
return
vm_service = self._service.service(entity.id)
disks_service = self._connection.system_service().disks_service()
disk_attachments_service = vm_service.disk_attachments_service()
self._wait_for_vm_disks(vm_service)
for disk in self.param('disks'):
# If disk ID is not specified, find disk by name:
disk_id = disk.get('id')
if disk_id is None:
disk_id = getattr(
search_by_name(
service=disks_service,
name=disk.get('name')
),
'id',
None
)
# Attach disk to VM:
disk_attachment = disk_attachments_service.attachment_service(disk_id)
if get_entity(disk_attachment) is None:
if not self._module.check_mode:
disk_attachments_service.add(
otypes.DiskAttachment(
disk=otypes.Disk(
id=disk_id,
),
active=disk.get('activate', True),
interface=otypes.DiskInterface(
disk.get('interface', 'virtio')
),
bootable=disk.get('bootable', False),
)
)
self.changed = True
def __get_vnic_profile_id(self, nic):
"""
Return the VNIC profile ID looked up by name. Because several VNIC
profiles can share the same name, the cluster is used as an additional filter criterion.
"""
vnics_service = self._connection.system_service().vnic_profiles_service()
clusters_service = self._connection.system_service().clusters_service()
cluster = search_by_name(clusters_service, self.param('cluster'))
profiles = [
profile for profile in vnics_service.list()
if profile.name == nic.get('profile_name')
]
cluster_networks = [
net.id for net in self._connection.follow_link(cluster.networks)
]
try:
return next(
profile.id for profile in profiles
if profile.network.id in cluster_networks
)
except StopIteration:
raise Exception(
"Profile '%s' was not found in cluster '%s'" % (
nic.get('profile_name'),
self.param('cluster')
)
)
def __attach_numa_nodes(self, entity):
updated = False
numa_nodes_service = self._service.service(entity.id).numa_nodes_service()
if len(self.param('numa_nodes')) > 0:
# Remove all existing virtual numa nodes before adding new ones
existed_numa_nodes = numa_nodes_service.list()
existed_numa_nodes.sort(reverse=len(existed_numa_nodes) > 1 and existed_numa_nodes[1].index > existed_numa_nodes[0].index)
for current_numa_node in existed_numa_nodes:
numa_nodes_service.node_service(current_numa_node.id).remove()
updated = True
for numa_node in self.param('numa_nodes'):
if numa_node is None or numa_node.get('index') is None or numa_node.get('cores') is None or numa_node.get('memory') is None:
continue
numa_nodes_service.add(
otypes.VirtualNumaNode(
index=numa_node.get('index'),
memory=numa_node.get('memory'),
cpu=otypes.Cpu(
cores=[
otypes.Core(
index=core
) for core in numa_node.get('cores')
],
),
numa_node_pins=[
otypes.NumaNodePin(
index=pin
) for pin in numa_node.get('numa_node_pins')
] if numa_node.get('numa_node_pins') is not None else None,
)
)
updated = True
return updated
def __attach_watchdog(self, entity):
watchdogs_service = self._service.service(entity.id).watchdogs_service()
watchdog = self.param('watchdog')
if watchdog is not None:
current_watchdog = next(iter(watchdogs_service.list()), None)
if watchdog.get('model') is None and current_watchdog:
watchdogs_service.watchdog_service(current_watchdog.id).remove()
return True
elif watchdog.get('model') is not None and current_watchdog is None:
watchdogs_service.add(
otypes.Watchdog(
model=otypes.WatchdogModel(watchdog.get('model').lower()),
action=otypes.WatchdogAction(watchdog.get('action')),
)
)
return True
elif current_watchdog is not None:
if (
str(current_watchdog.model).lower() != watchdog.get('model').lower() or
str(current_watchdog.action).lower() != watchdog.get('action').lower()
):
watchdogs_service.watchdog_service(current_watchdog.id).update(
otypes.Watchdog(
model=otypes.WatchdogModel(watchdog.get('model')),
action=otypes.WatchdogAction(watchdog.get('action')),
)
)
return True
return False
def __attach_nics(self, entity):
# Attach NICs to VM, if specified:
nics_service = self._service.service(entity.id).nics_service()
for nic in self.param('nics'):
if search_by_name(nics_service, nic.get('name')) is None:
if not self._module.check_mode:
nics_service.add(
otypes.Nic(
name=nic.get('name'),
interface=otypes.NicInterface(
nic.get('interface', 'virtio')
),
vnic_profile=otypes.VnicProfile(
id=self.__get_vnic_profile_id(nic),
) if nic.get('profile_name') else None,
mac=otypes.Mac(
address=nic.get('mac_address')
) if nic.get('mac_address') else None,
)
)
self.changed = True
def get_initialization(self):
if self._initialization is not None:
return self._initialization
sysprep = self.param('sysprep')
cloud_init = self.param('cloud_init')
cloud_init_nics = self.param('cloud_init_nics') or []
if cloud_init is not None:
cloud_init_nics.append(cloud_init)
if cloud_init or cloud_init_nics:
self._initialization = otypes.Initialization(
nic_configurations=[
otypes.NicConfiguration(
boot_protocol=otypes.BootProtocol(
nic.pop('nic_boot_protocol').lower()
) if nic.get('nic_boot_protocol') else None,
ipv6_boot_protocol=otypes.BootProtocol(
nic.pop('nic_boot_protocol_v6').lower()
) if nic.get('nic_boot_protocol_v6') else None,
name=nic.pop('nic_name', None),
on_boot=nic.pop('nic_on_boot', None),
ip=otypes.Ip(
address=nic.pop('nic_ip_address', None),
netmask=nic.pop('nic_netmask', None),
gateway=nic.pop('nic_gateway', None),
version=otypes.IpVersion('v4')
) if (
nic.get('nic_gateway') is not None or
nic.get('nic_netmask') is not None or
nic.get('nic_ip_address') is not None
) else None,
ipv6=otypes.Ip(
address=nic.pop('nic_ip_address_v6', None),
netmask=nic.pop('nic_netmask_v6', None),
gateway=nic.pop('nic_gateway_v6', None),
version=otypes.IpVersion('v6')
) if (
nic.get('nic_gateway_v6') is not None or
nic.get('nic_netmask_v6') is not None or
nic.get('nic_ip_address_v6') is not None
) else None,
)
for nic in cloud_init_nics
if (
nic.get('nic_boot_protocol_v6') is not None or
nic.get('nic_ip_address_v6') is not None or
nic.get('nic_gateway_v6') is not None or
nic.get('nic_netmask_v6') is not None or
nic.get('nic_gateway') is not None or
nic.get('nic_netmask') is not None or
nic.get('nic_ip_address') is not None or
nic.get('nic_boot_protocol') is not None or
nic.get('nic_on_boot') is not None
)
] if cloud_init_nics else None,
**cloud_init
)
elif sysprep:
self._initialization = otypes.Initialization(
**sysprep
)
return self._initialization
def __attach_host_devices(self, entity):
vm_service = self._service.service(entity.id)
host_devices_service = vm_service.host_devices_service()
host_devices = self.param('host_devices')
updated = False
if host_devices:
device_names = [dev.name for dev in host_devices_service.list()]
for device in host_devices:
device_name = device.get('name')
state = device.get('state', 'present')
if state == 'absent' and device_name in device_names:
updated = True
if not self._module.check_mode:
device_id = get_id_by_name(host_devices_service, device.get('name'))
host_devices_service.device_service(device_id).remove()
elif state == 'present' and device_name not in device_names:
updated = True
if not self._module.check_mode:
host_devices_service.add(
otypes.HostDevice(
name=device.get('name'),
)
)
return updated
def _get_role_mappings(module):
roleMappings = list()
for roleMapping in module.params['role_mappings']:
roleMappings.append(
otypes.RegistrationRoleMapping(
from_=otypes.Role(
name=roleMapping['source_name'],
) if roleMapping['source_name'] else None,
to=otypes.Role(
name=roleMapping['dest_name'],
) if roleMapping['dest_name'] else None,
)
)
return roleMappings
def _get_affinity_group_mappings(module):
affinityGroupMappings = list()
for affinityGroupMapping in module.params['affinity_group_mappings']:
affinityGroupMappings.append(
otypes.RegistrationAffinityGroupMapping(
from_=otypes.AffinityGroup(
name=affinityGroupMapping['source_name'],
) if affinityGroupMapping['source_name'] else None,
to=otypes.AffinityGroup(
name=affinityGroupMapping['dest_name'],
) if affinityGroupMapping['dest_name'] else None,
)
)
return affinityGroupMappings
def _get_affinity_label_mappings(module):
affinityLabelMappings = list()
for affinityLabelMapping in module.params['affinity_label_mappings']:
affinityLabelMappings.append(
otypes.RegistrationAffinityLabelMapping(
from_=otypes.AffinityLabel(
name=affinityLabelMapping['source_name'],
) if affinityLabelMapping['source_name'] else None,
to=otypes.AffinityLabel(
name=affinityLabelMapping['dest_name'],
) if affinityLabelMapping['dest_name'] else None,
)
)
return affinityLabelMappings
def _get_domain_mappings(module):
domainMappings = list()
for domainMapping in module.params['domain_mappings']:
domainMappings.append(
otypes.RegistrationDomainMapping(
from_=otypes.Domain(
name=domainMapping['source_name'],
) if domainMapping['source_name'] else None,
to=otypes.Domain(
name=domainMapping['dest_name'],
) if domainMapping['dest_name'] else None,
)
)
return domainMappings
def _get_lun_mappings(module):
lunMappings = list()
for lunMapping in module.params['lun_mappings']:
lunMappings.append(
otypes.RegistrationLunMapping(
from_=otypes.Disk(
lun_storage=otypes.HostStorage(
type=otypes.StorageType(lunMapping['source_storage_type'])
if (lunMapping['source_storage_type'] in
['iscsi', 'fcp']) else None,
logical_units=[
otypes.LogicalUnit(
id=lunMapping['source_logical_unit_id'],
)
],
),
) if lunMapping['source_logical_unit_id'] else None,
to=otypes.Disk(
lun_storage=otypes.HostStorage(
type=otypes.StorageType(lunMapping['dest_storage_type'])
if (lunMapping['dest_storage_type'] in
['iscsi', 'fcp']) else None,
logical_units=[
otypes.LogicalUnit(
id=lunMapping['dest_logical_unit_id'],
port=lunMapping['dest_logical_unit_port'],
portal=lunMapping['dest_logical_unit_portal'],
address=lunMapping['dest_logical_unit_address'],
target=lunMapping['dest_logical_unit_target'],
password=lunMapping['dest_logical_unit_password'],
username=lunMapping['dest_logical_unit_username'],
)
],
),
) if lunMapping['dest_logical_unit_id'] else None,
),
)
return lunMappings
def _get_cluster_mappings(module):
clusterMappings = list()
for clusterMapping in module.params['cluster_mappings']:
clusterMappings.append(
otypes.RegistrationClusterMapping(
from_=otypes.Cluster(
name=clusterMapping['source_name'],
),
to=otypes.Cluster(
name=clusterMapping['dest_name'],
) if clusterMapping['dest_name'] else None,
)
)
return clusterMappings
def _get_vnic_profile_mappings(module):
vnicProfileMappings = list()
for vnicProfileMapping in module.params['vnic_profile_mappings']:
vnicProfileMappings.append(
otypes.VnicProfileMapping(
source_network_name=vnicProfileMapping['source_network_name'],
source_network_profile_name=vnicProfileMapping['source_profile_name'],
target_vnic_profile=otypes.VnicProfile(
id=vnicProfileMapping['target_profile_id'],
) if vnicProfileMapping['target_profile_id'] else None,
)
)
return vnicProfileMappings
def import_vm(module, connection):
vms_service = connection.system_service().vms_service()
if search_by_name(vms_service, module.params['name']) is not None:
return False
events_service = connection.system_service().events_service()
last_event = events_service.list(max=1)[0]
external_type = [
tmp for tmp in ['kvm', 'xen', 'vmware']
if module.params[tmp] is not None
][0]
external_vm = module.params[external_type]
imports_service = connection.system_service().external_vm_imports_service()
imported_vm = imports_service.add(
otypes.ExternalVmImport(
vm=otypes.Vm(
name=module.params['name']
),
name=external_vm.get('name'),
username=external_vm.get('username', 'test'),
password=external_vm.get('password', 'test'),
provider=otypes.ExternalVmProviderType(external_type),
url=external_vm.get('url'),
cluster=otypes.Cluster(
name=module.params['cluster'],
) if module.params['cluster'] else None,
storage_domain=otypes.StorageDomain(
name=external_vm.get('storage_domain'),
) if external_vm.get('storage_domain') else None,
sparse=external_vm.get('sparse', True),
host=otypes.Host(
name=module.params['host'],
) if module.params['host'] else None,
)
)
# Wait until an event with code 1152 appears for our VM:
vms_service = connection.system_service().vms_service()
wait(
service=vms_service.vm_service(imported_vm.vm.id),
condition=lambda vm: len([
event
for event in events_service.list(
from_=int(last_event.id),
search='type=1152 and vm.id=%s' % vm.id,
)
]) > 0 if vm is not None else False,
fail_condition=lambda vm: vm is None,
timeout=module.params['timeout'],
poll_interval=module.params['poll_interval'],
)
return True
def check_deprecated_params(module, connection):
if engine_supported(connection, '4.4') and \
(module.params.get('kernel_params_persist') is not None or
module.params.get('kernel_path') is not None or
module.params.get('initrd_path') is not None or
module.params.get('kernel_params') is not None):
module.warn("Parameters 'kernel_params_persist', 'kernel_path', 'initrd_path', 'kernel_params' are not supported since oVirt 4.4.")
def control_state(vm, vms_service, module):
if vm is None:
return
force = module.params['force']
state = module.params['state']
vm_service = vms_service.vm_service(vm.id)
if vm.status == otypes.VmStatus.IMAGE_LOCKED:
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.DOWN,
)
elif vm.status == otypes.VmStatus.SAVING_STATE:
# Result state is SUSPENDED, we should wait to be suspended:
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.SUSPENDED,
)
elif (
vm.status == otypes.VmStatus.UNASSIGNED or
vm.status == otypes.VmStatus.UNKNOWN
):
# Invalid states:
module.fail_json(msg="Not possible to control VM, if it's in '{0}' status".format(vm.status))
elif vm.status == otypes.VmStatus.POWERING_DOWN:
if (force and state == 'stopped') or state == 'absent':
vm_service.stop()
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.DOWN,
)
else:
# If VM is powering down, wait to be DOWN or UP.
# VM can end in UP state in case there is no GA
# or ACPI on the VM or shutdown operation crashed:
wait(
service=vm_service,
condition=lambda vm: vm.status in [otypes.VmStatus.DOWN, otypes.VmStatus.UP],
)
def main():
argument_spec = ovirt_full_argument_spec(
state=dict(type='str', default='present', choices=['absent', 'next_run', 'present', 'registered', 'running', 'stopped', 'suspended', 'exported']),
name=dict(type='str'),
id=dict(type='str'),
cluster=dict(type='str'),
allow_partial_import=dict(type='bool'),
template=dict(type='str'),
template_version=dict(type='int'),
use_latest_template_version=dict(type='bool'),
storage_domain=dict(type='str'),
disk_format=dict(type='str', default='cow', choices=['cow', 'raw']),
disks=dict(type='list', default=[]),
memory=dict(type='str'),
memory_guaranteed=dict(type='str'),
memory_max=dict(type='str'),
cpu_sockets=dict(type='int'),
cpu_cores=dict(type='int'),
cpu_shares=dict(type='int'),
cpu_threads=dict(type='int'),
type=dict(type='str', choices=['server', 'desktop', 'high_performance']),
operating_system=dict(type='str'),
cd_iso=dict(type='str'),
boot_devices=dict(type='list', choices=['cdrom', 'hd', 'network']),
vnic_profile_mappings=dict(default=[], type='list'),
cluster_mappings=dict(default=[], type='list'),
role_mappings=dict(default=[], type='list'),
affinity_group_mappings=dict(default=[], type='list'),
affinity_label_mappings=dict(default=[], type='list'),
lun_mappings=dict(default=[], type='list'),
domain_mappings=dict(default=[], type='list'),
reassign_bad_macs=dict(default=None, type='bool'),
boot_menu=dict(type='bool'),
serial_console=dict(type='bool'),
usb_support=dict(type='bool'),
sso=dict(type='bool'),
quota_id=dict(type='str'),
high_availability=dict(type='bool'),
high_availability_priority=dict(type='int'),
lease=dict(type='str'),
stateless=dict(type='bool'),
delete_protected=dict(type='bool'),
custom_emulated_machine=dict(type='str'),
force=dict(type='bool', default=False),
nics=dict(type='list', default=[]),
cloud_init=dict(type='dict'),
cloud_init_nics=dict(type='list', default=[]),
cloud_init_persist=dict(type='bool', default=False, aliases=['sysprep_persist']),
kernel_params_persist=dict(type='bool', default=False),
sysprep=dict(type='dict'),
host=dict(type='str'),
clone=dict(type='bool', default=False),
clone_permissions=dict(type='bool', default=False),
kernel_path=dict(type='str'),
initrd_path=dict(type='str'),
kernel_params=dict(type='str'),
instance_type=dict(type='str'),
description=dict(type='str'),
comment=dict(type='str'),
timezone=dict(type='str'),
serial_policy=dict(type='str', choices=['vm', 'host', 'custom']),
serial_policy_value=dict(type='str'),
vmware=dict(type='dict'),
xen=dict(type='dict'),
kvm=dict(type='dict'),
cpu_mode=dict(type='str'),
placement_policy=dict(type='str'),
custom_compatibility_version=dict(type='str'),
ticket=dict(type='bool', default=None),
cpu_pinning=dict(type='list'),
soundcard_enabled=dict(type='bool', default=None),
smartcard_enabled=dict(type='bool', default=None),
io_threads=dict(type='int', default=None),
ballooning_enabled=dict(type='bool', default=None),
rng_device=dict(type='str'),
numa_tune_mode=dict(type='str', choices=['interleave', 'preferred', 'strict']),
numa_nodes=dict(type='list', default=[]),
custom_properties=dict(type='list'),
watchdog=dict(type='dict'),
host_devices=dict(type='list'),
graphical_console=dict(type='dict'),
exclusive=dict(type='bool'),
export_domain=dict(default=None),
export_ova=dict(type='dict'),
force_migrate=dict(type='bool'),
migrate=dict(type='bool', default=None),
next_run=dict(type='bool'),
snapshot_name=dict(type='str'),
snapshot_vm=dict(type='str'),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
required_one_of=[['id', 'name']],
required_if=[
('state', 'registered', ['storage_domain']),
],
required_together=[['snapshot_name', 'snapshot_vm']]
)
check_sdk(module)
check_params(module)
try:
state = module.params['state']
auth = module.params.pop('auth')
connection = create_connection(auth)
check_deprecated_params(module, connection)
vms_service = connection.system_service().vms_service()
vms_module = VmsModule(
connection=connection,
module=module,
service=vms_service,
)
vm = vms_module.search_entity(list_params={'all_content': True})
# Boolean variable to mark if vm existed before module was executed
vm_existed = True if vm else False
control_state(vm, vms_service, module)
if state in ('present', 'running', 'next_run'):
if module.params['xen'] or module.params['kvm'] or module.params['vmware']:
vms_module.changed = import_vm(module, connection)
# In case of wait=false and state=running, wait for the VM to be created.
# If the VM doesn't exist, wait for the VM DOWN state;
# otherwise don't wait for any state, just update the VM:
ret = vms_module.create(
entity=vm,
result_state=otypes.VmStatus.DOWN if vm is None else None,
update_params={'next_run': module.params['next_run']} if module.params['next_run'] is not None else None,
clone=module.params['clone'],
clone_permissions=module.params['clone_permissions'],
_wait=True if not module.params['wait'] and state == 'running' else module.params['wait'],
)
# If VM is going to be created and check_mode is on, return now:
if module.check_mode and ret.get('id') is None:
module.exit_json(**ret)
vms_module.post_present(ret['id'])
# Run the VM if it was just created, else don't run it:
if state == 'running':
def kernel_persist_check():
return ((module.params.get('kernel_params') or
module.params.get('initrd_path') or
module.params.get('kernel_path'))
and not module.params.get('cloud_init_persist'))
initialization = vms_module.get_initialization()
ret = vms_module.action(
action='start',
post_action=vms_module._post_start_action,
action_condition=lambda vm: (
vm.status not in [
otypes.VmStatus.MIGRATING,
otypes.VmStatus.POWERING_UP,
otypes.VmStatus.REBOOT_IN_PROGRESS,
otypes.VmStatus.WAIT_FOR_LAUNCH,
otypes.VmStatus.UP,
otypes.VmStatus.RESTORING_STATE,
]
),
wait_condition=lambda vm: vm.status == otypes.VmStatus.UP,
# Start action kwargs:
use_cloud_init=True if not module.params.get('cloud_init_persist') and module.params.get('cloud_init') else None,
use_sysprep=True if not module.params.get('cloud_init_persist') and module.params.get('sysprep') else None,
vm=otypes.Vm(
placement_policy=otypes.VmPlacementPolicy(
hosts=[otypes.Host(name=module.params['host'])]
) if module.params['host'] else None,
initialization=initialization,
os=otypes.OperatingSystem(
cmdline=module.params.get('kernel_params'),
initrd=module.params.get('initrd_path'),
kernel=module.params.get('kernel_path'),
) if (kernel_persist_check()) else None,
) if (
kernel_persist_check() or
module.params.get('host') or
initialization is not None
and not module.params.get('cloud_init_persist')
) else None,
)
if module.params['ticket']:
vm_service = vms_service.vm_service(ret['id'])
graphics_consoles_service = vm_service.graphics_consoles_service()
graphics_console = graphics_consoles_service.list()[0]
console_service = graphics_consoles_service.console_service(graphics_console.id)
ticket = console_service.remote_viewer_connection_file()
if ticket:
ret['vm']['remote_vv_file'] = ticket
if state == 'next_run':
# Apply next run configuration, if needed:
vm = vms_service.vm_service(ret['id']).get()
if vm.next_run_configuration_exists:
ret = vms_module.action(
action='reboot',
entity=vm,
action_condition=lambda vm: vm.status == otypes.VmStatus.UP,
wait_condition=lambda vm: vm.status == otypes.VmStatus.UP,
)
# Allow migrating the VM when state is present.
if vm_existed:
vms_module._migrate_vm(vm)
ret['changed'] = vms_module.changed
elif state == 'stopped':
if module.params['xen'] or module.params['kvm'] or module.params['vmware']:
vms_module.changed = import_vm(module, connection)
ret = vms_module.create(
entity=vm,
result_state=otypes.VmStatus.DOWN if vm is None else None,
clone=module.params['clone'],
clone_permissions=module.params['clone_permissions'],
)
if module.params['force']:
ret = vms_module.action(
action='stop',
action_condition=lambda vm: vm.status != otypes.VmStatus.DOWN,
wait_condition=vms_module.wait_for_down,
)
else:
ret = vms_module.action(
action='shutdown',
pre_action=vms_module._pre_shutdown_action,
action_condition=lambda vm: vm.status != otypes.VmStatus.DOWN,
wait_condition=vms_module.wait_for_down,
)
vms_module.post_present(ret['id'])
elif state == 'suspended':
ret = vms_module.create(
entity=vm,
result_state=otypes.VmStatus.DOWN if vm is None else None,
clone=module.params['clone'],
clone_permissions=module.params['clone_permissions'],
)
vms_module.post_present(ret['id'])
ret = vms_module.action(
action='suspend',
pre_action=vms_module._pre_suspend_action,
action_condition=lambda vm: vm.status != otypes.VmStatus.SUSPENDED,
wait_condition=lambda vm: vm.status == otypes.VmStatus.SUSPENDED,
)
elif state == 'absent':
ret = vms_module.remove()
elif state == 'registered':
storage_domains_service = connection.system_service().storage_domains_service()
# Find the storage domain with unregistered VM:
sd_id = get_id_by_name(storage_domains_service, module.params['storage_domain'])
storage_domain_service = storage_domains_service.storage_domain_service(sd_id)
vms_service = storage_domain_service.vms_service()
# Find the unregistered VM we want to register:
vms = vms_service.list(unregistered=True)
vm = next(
(vm for vm in vms if (vm.id == module.params['id'] or vm.name == module.params['name'])),
None
)
changed = False
if vm is None:
vm = vms_module.search_entity()
if vm is None:
raise ValueError(
"VM '%s(%s)' wasn't found." % (module.params['name'], module.params['id'])
)
else:
# Register the vm into the system:
changed = True
vm_service = vms_service.vm_service(vm.id)
vm_service.register(
allow_partial_import=module.params['allow_partial_import'],
cluster=otypes.Cluster(
name=module.params['cluster']
) if module.params['cluster'] else None,
vnic_profile_mappings=_get_vnic_profile_mappings(module)
if module.params['vnic_profile_mappings'] else None,
reassign_bad_macs=module.params['reassign_bad_macs']
if module.params['reassign_bad_macs'] is not None else None,
registration_configuration=otypes.RegistrationConfiguration(
cluster_mappings=_get_cluster_mappings(module),
role_mappings=_get_role_mappings(module),
domain_mappings=_get_domain_mappings(module),
lun_mappings=_get_lun_mappings(module),
affinity_group_mappings=_get_affinity_group_mappings(module),
affinity_label_mappings=_get_affinity_label_mappings(module),
) if (module.params['cluster_mappings']
or module.params['role_mappings']
or module.params['domain_mappings']
or module.params['lun_mappings']
or module.params['affinity_group_mappings']
or module.params['affinity_label_mappings']) else None
)
if module.params['wait']:
vm = vms_module.wait_for_import()
else:
# Fetch vm to initialize return.
vm = vm_service.get()
ret = {
'changed': changed,
'id': vm.id,
'vm': get_dict_of_struct(vm)
}
elif state == 'exported':
if module.params['export_domain']:
export_service = vms_module._get_export_domain_service()
export_vm = search_by_attributes(export_service.vms_service(), id=vm.id)
ret = vms_module.action(
entity=vm,
action='export',
action_condition=lambda t: export_vm is None or module.params['exclusive'],
wait_condition=lambda t: t is not None,
post_action=vms_module.post_export_action,
storage_domain=otypes.StorageDomain(id=export_service.get().id),
exclusive=module.params['exclusive'],
)
elif module.params['export_ova']:
export_vm = module.params['export_ova']
ret = vms_module.action(
entity=vm,
action='export_to_path_on_host',
host=otypes.Host(name=export_vm.get('host')),
directory=export_vm.get('directory'),
filename=export_vm.get('filename'),
)
module.exit_json(**ret)
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
finally:
connection.close(logout=auth.get('token') is None)
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,789 |
VMware: Move helper functions from vmware_host_config_manager to vmware module
|
##### SUMMARY
Move some helper functions from `vmware_host_config_manager` to `PyVmomi`.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_host_config_manager
vmware
##### ADDITIONAL INFORMATION
`vmware_host_config_manager` has some helper functions (`is_integer`, `is_boolean` and `is_truthy`) to deal with vSphere Web Services API `OptionValue`s. Today, I started to work on issue #61421 and I think these helper functions could be useful there, too. Actually, there are several Managed Object Types with an `OptionValue[]` parameter. Instead of duplicating the code, I think it would be better to move these helper functions to `PyVmomi` in `module_utils/vmware.py`.
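For reference, a minimal sketch of what these shared helpers could look like after the move. The bodies below are illustrative assumptions — the actual implementations in `vmware_host_config_manager` may coerce values through pyVmomi's vmodl types rather than plain `int()`:

```python
def is_boolean(value):
    """Check whether the string form of value looks like a boolean OptionValue."""
    return str(value).lower() in ('true', 'on', 'yes', 'false', 'off', 'no')


def is_truthy(value):
    """Check whether the string form of value represents a truthy boolean."""
    return str(value).lower() in ('true', 'on', 'yes')


def is_integer(value):
    """Check whether value can be interpreted as an integer."""
    try:
        int(str(value))
        return True
    except (TypeError, ValueError):
        return False
```

With helpers like these living in `PyVmomi` (`module_utils/vmware.py`), any module dealing with `OptionValue[]` parameters could normalize user-supplied option values consistently instead of duplicating the logic.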
|
https://github.com/ansible/ansible/issues/62789
|
https://github.com/ansible/ansible/pull/62801
|
ad580a71c475b570cdbd4c79aaf0082ecccdf5fc
|
d01035ef2532b93793048b7a350c3223ce200643
| 2019-09-24T13:44:43Z |
python
| 2019-09-30T20:55:36Z |
lib/ansible/module_utils/vmware.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Joseph Callen <jcallen () csc.com>
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, James E. King III (@jeking3) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import atexit
import ansible.module_utils.common._collections_compat as collections_compat
import json
import os
import re
import ssl
import time
import traceback
from random import randint
from distutils.version import StrictVersion
REQUESTS_IMP_ERR = None
try:
# requests is required for exception handling of the ConnectionError
import requests
HAS_REQUESTS = True
except ImportError:
REQUESTS_IMP_ERR = traceback.format_exc()
HAS_REQUESTS = False
PYVMOMI_IMP_ERR = None
try:
from pyVim import connect
from pyVmomi import vim, vmodl, VmomiSupport
HAS_PYVMOMI = True
HAS_PYVMOMIJSON = hasattr(VmomiSupport, 'VmomiJSONEncoder')
except ImportError:
PYVMOMI_IMP_ERR = traceback.format_exc()
HAS_PYVMOMI = False
HAS_PYVMOMIJSON = False
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.six import integer_types, iteritems, string_types, raise_from
from ansible.module_utils.six.moves.urllib.parse import urlparse
from ansible.module_utils.basic import env_fallback, missing_required_lib
from ansible.module_utils.urls import generic_urlparse
class TaskError(Exception):
def __init__(self, *args, **kwargs):
super(TaskError, self).__init__(*args, **kwargs)
def wait_for_task(task, max_backoff=64, timeout=3600):
"""Wait for given task using exponential back-off algorithm.
Args:
task: VMware task object
max_backoff: Maximum amount of sleep time in seconds
timeout: Timeout for the given task in seconds
Returns: Tuple with True and result for successful task
Raises: TaskError on failure
"""
failure_counter = 0
start_time = time.time()
while True:
if time.time() - start_time >= timeout:
raise TaskError("Timeout")
if task.info.state == vim.TaskInfo.State.success:
return True, task.info.result
if task.info.state == vim.TaskInfo.State.error:
error_msg = task.info.error
host_thumbprint = None
try:
error_msg = error_msg.msg
if hasattr(task.info.error, 'thumbprint'):
host_thumbprint = task.info.error.thumbprint
except AttributeError:
pass
finally:
raise_from(TaskError(error_msg, host_thumbprint), task.info.error)
if task.info.state in [vim.TaskInfo.State.running, vim.TaskInfo.State.queued]:
sleep_time = min(2 ** failure_counter + randint(1, 1000) / 1000, max_backoff)
time.sleep(sleep_time)
failure_counter += 1
def wait_for_vm_ip(content, vm, timeout=300):
facts = dict()
interval = 15
while timeout > 0:
_facts = gather_vm_facts(content, vm)
if _facts['ipv4'] or _facts['ipv6']:
facts = _facts
break
time.sleep(interval)
timeout -= interval
return facts
def find_obj(content, vimtype, name, first=True, folder=None):
container = content.viewManager.CreateContainerView(folder or content.rootFolder, recursive=True, type=vimtype)
# Get all objects matching type (and name if given)
obj_list = [obj for obj in container.view if not name or to_text(obj.name) == to_text(name)]
container.Destroy()
# Return first match or None
if first:
if obj_list:
return obj_list[0]
return None
# Return all matching objects or empty list
return obj_list
def find_dvspg_by_name(dv_switch, portgroup_name):
portgroups = dv_switch.portgroup
for pg in portgroups:
if pg.name == portgroup_name:
return pg
return None
def find_object_by_name(content, name, obj_type, folder=None, recurse=True):
if not isinstance(obj_type, list):
obj_type = [obj_type]
objects = get_all_objs(content, obj_type, folder=folder, recurse=recurse)
for obj in objects:
if obj.name == name:
return obj
return None
def find_cluster_by_name(content, cluster_name, datacenter=None):
if datacenter:
folder = datacenter.hostFolder
else:
folder = content.rootFolder
return find_object_by_name(content, cluster_name, [vim.ClusterComputeResource], folder=folder)
def find_datacenter_by_name(content, datacenter_name):
return find_object_by_name(content, datacenter_name, [vim.Datacenter])
def get_parent_datacenter(obj):
""" Walk the parent tree to find the objects datacenter """
if isinstance(obj, vim.Datacenter):
return obj
datacenter = None
while True:
if not hasattr(obj, 'parent'):
break
obj = obj.parent
if isinstance(obj, vim.Datacenter):
datacenter = obj
break
return datacenter
def find_datastore_by_name(content, datastore_name):
return find_object_by_name(content, datastore_name, [vim.Datastore])
def find_dvs_by_name(content, switch_name, folder=None):
return find_object_by_name(content, switch_name, [vim.DistributedVirtualSwitch], folder=folder)
def find_hostsystem_by_name(content, hostname):
return find_object_by_name(content, hostname, [vim.HostSystem])
def find_resource_pool_by_name(content, resource_pool_name):
return find_object_by_name(content, resource_pool_name, [vim.ResourcePool])
def find_network_by_name(content, network_name):
return find_object_by_name(content, network_name, [vim.Network])
def find_vm_by_id(content, vm_id, vm_id_type="vm_name", datacenter=None,
cluster=None, folder=None, match_first=False):
""" UUID is unique to a VM, every other id returns the first match. """
si = content.searchIndex
vm = None
if vm_id_type == 'dns_name':
vm = si.FindByDnsName(datacenter=datacenter, dnsName=vm_id, vmSearch=True)
elif vm_id_type == 'uuid':
# Search By BIOS UUID rather than instance UUID
vm = si.FindByUuid(datacenter=datacenter, instanceUuid=False, uuid=vm_id, vmSearch=True)
elif vm_id_type == 'instance_uuid':
vm = si.FindByUuid(datacenter=datacenter, instanceUuid=True, uuid=vm_id, vmSearch=True)
elif vm_id_type == 'ip':
vm = si.FindByIp(datacenter=datacenter, ip=vm_id, vmSearch=True)
elif vm_id_type == 'vm_name':
folder = None
if cluster:
folder = cluster
elif datacenter:
folder = datacenter.hostFolder
vm = find_vm_by_name(content, vm_id, folder)
elif vm_id_type == 'inventory_path':
searchpath = folder
# get all objects for this path
f_obj = si.FindByInventoryPath(searchpath)
if f_obj:
if isinstance(f_obj, vim.Datacenter):
f_obj = f_obj.vmFolder
for c_obj in f_obj.childEntity:
if not isinstance(c_obj, vim.VirtualMachine):
continue
if c_obj.name == vm_id:
vm = c_obj
if match_first:
break
return vm
def find_vm_by_name(content, vm_name, folder=None, recurse=True):
return find_object_by_name(content, vm_name, [vim.VirtualMachine], folder=folder, recurse=recurse)
def find_host_portgroup_by_name(host, portgroup_name):
for portgroup in host.config.network.portgroup:
if portgroup.spec.name == portgroup_name:
return portgroup
return None
def compile_folder_path_for_object(vobj):
""" make a /vm/foo/bar/baz like folder path for an object """
paths = []
if isinstance(vobj, vim.Folder):
paths.append(vobj.name)
thisobj = vobj
while hasattr(thisobj, 'parent'):
thisobj = thisobj.parent
try:
moid = thisobj._moId
except AttributeError:
moid = None
if moid in ['group-d1', 'ha-folder-root']:
break
if isinstance(thisobj, vim.Folder):
paths.append(thisobj.name)
paths.reverse()
return '/' + '/'.join(paths)
def _get_vm_prop(vm, attributes):
"""Safely get a property or return None"""
result = vm
for attribute in attributes:
try:
result = getattr(result, attribute)
except (AttributeError, IndexError):
return None
return result
def gather_vm_facts(content, vm):
""" Gather facts from vim.VirtualMachine object. """
facts = {
'module_hw': True,
'hw_name': vm.config.name,
'hw_power_status': vm.summary.runtime.powerState,
'hw_guest_full_name': vm.summary.guest.guestFullName,
'hw_guest_id': vm.summary.guest.guestId,
'hw_product_uuid': vm.config.uuid,
'hw_processor_count': vm.config.hardware.numCPU,
'hw_cores_per_socket': vm.config.hardware.numCoresPerSocket,
'hw_memtotal_mb': vm.config.hardware.memoryMB,
'hw_interfaces': [],
'hw_datastores': [],
'hw_files': [],
'hw_esxi_host': None,
'hw_guest_ha_state': None,
'hw_is_template': vm.config.template,
'hw_folder': None,
'hw_version': vm.config.version,
'instance_uuid': vm.config.instanceUuid,
'guest_tools_status': _get_vm_prop(vm, ('guest', 'toolsRunningStatus')),
'guest_tools_version': _get_vm_prop(vm, ('guest', 'toolsVersion')),
'guest_question': vm.summary.runtime.question,
'guest_consolidation_needed': vm.summary.runtime.consolidationNeeded,
'ipv4': None,
'ipv6': None,
'annotation': vm.config.annotation,
'customvalues': {},
'snapshots': [],
'current_snapshot': None,
'vnc': {},
'moid': vm._moId,
'vimref': "vim.VirtualMachine:%s" % vm._moId,
}
# facts that may or may not exist
if vm.summary.runtime.host:
try:
host = vm.summary.runtime.host
facts['hw_esxi_host'] = host.summary.config.name
facts['hw_cluster'] = host.parent.name if host.parent and isinstance(host.parent, vim.ClusterComputeResource) else None
except vim.fault.NoPermission:
            # User does not have read permission for the host system;
            # proceed without this value. It neither contributes to nor
            # hampers provisioning or power management operations.
pass
if vm.summary.runtime.dasVmProtection:
facts['hw_guest_ha_state'] = vm.summary.runtime.dasVmProtection.dasProtected
datastores = vm.datastore
for ds in datastores:
facts['hw_datastores'].append(ds.info.name)
try:
files = vm.config.files
layout = vm.layout
if files:
facts['hw_files'] = [files.vmPathName]
for item in layout.snapshot:
for snap in item.snapshotFile:
if 'vmsn' in snap:
facts['hw_files'].append(snap)
for item in layout.configFile:
facts['hw_files'].append(os.path.join(os.path.dirname(files.vmPathName), item))
for item in vm.layout.logFile:
facts['hw_files'].append(os.path.join(files.logDirectory, item))
for item in vm.layout.disk:
for disk in item.diskFile:
facts['hw_files'].append(disk)
except Exception:
pass
facts['hw_folder'] = PyVmomi.get_vm_path(content, vm)
cfm = content.customFieldsManager
# Resolve custom values
for value_obj in vm.summary.customValue:
kn = value_obj.key
if cfm is not None and cfm.field:
for f in cfm.field:
if f.key == value_obj.key:
kn = f.name
# Exit the loop immediately, we found it
break
facts['customvalues'][kn] = value_obj.value
net_dict = {}
vmnet = _get_vm_prop(vm, ('guest', 'net'))
if vmnet:
for device in vmnet:
net_dict[device.macAddress] = list(device.ipAddress)
if vm.guest.ipAddress:
if ':' in vm.guest.ipAddress:
facts['ipv6'] = vm.guest.ipAddress
else:
facts['ipv4'] = vm.guest.ipAddress
ethernet_idx = 0
for entry in vm.config.hardware.device:
if not hasattr(entry, 'macAddress'):
continue
if entry.macAddress:
mac_addr = entry.macAddress
mac_addr_dash = mac_addr.replace(':', '-')
else:
mac_addr = mac_addr_dash = None
if (hasattr(entry, 'backing') and hasattr(entry.backing, 'port') and
hasattr(entry.backing.port, 'portKey') and hasattr(entry.backing.port, 'portgroupKey')):
port_group_key = entry.backing.port.portgroupKey
port_key = entry.backing.port.portKey
else:
port_group_key = None
port_key = None
factname = 'hw_eth' + str(ethernet_idx)
facts[factname] = {
'addresstype': entry.addressType,
'label': entry.deviceInfo.label,
'macaddress': mac_addr,
'ipaddresses': net_dict.get(entry.macAddress, None),
'macaddress_dash': mac_addr_dash,
'summary': entry.deviceInfo.summary,
'portgroup_portkey': port_key,
'portgroup_key': port_group_key,
}
facts['hw_interfaces'].append('eth' + str(ethernet_idx))
ethernet_idx += 1
snapshot_facts = list_snapshots(vm)
if 'snapshots' in snapshot_facts:
facts['snapshots'] = snapshot_facts['snapshots']
facts['current_snapshot'] = snapshot_facts['current_snapshot']
facts['vnc'] = get_vnc_extraconfig(vm)
return facts
def deserialize_snapshot_obj(obj):
return {'id': obj.id,
'name': obj.name,
'description': obj.description,
'creation_time': obj.createTime,
'state': obj.state}
def list_snapshots_recursively(snapshots):
snapshot_data = []
for snapshot in snapshots:
snapshot_data.append(deserialize_snapshot_obj(snapshot))
snapshot_data = snapshot_data + list_snapshots_recursively(snapshot.childSnapshotList)
return snapshot_data
def get_current_snap_obj(snapshots, snapob):
snap_obj = []
for snapshot in snapshots:
if snapshot.snapshot == snapob:
snap_obj.append(snapshot)
snap_obj = snap_obj + get_current_snap_obj(snapshot.childSnapshotList, snapob)
return snap_obj
def list_snapshots(vm):
result = {}
snapshot = _get_vm_prop(vm, ('snapshot',))
if not snapshot:
return result
if vm.snapshot is None:
return result
result['snapshots'] = list_snapshots_recursively(vm.snapshot.rootSnapshotList)
current_snapref = vm.snapshot.currentSnapshot
current_snap_obj = get_current_snap_obj(vm.snapshot.rootSnapshotList, current_snapref)
if current_snap_obj:
result['current_snapshot'] = deserialize_snapshot_obj(current_snap_obj[0])
else:
result['current_snapshot'] = dict()
return result
def get_vnc_extraconfig(vm):
result = {}
for opts in vm.config.extraConfig:
for optkeyname in ['enabled', 'ip', 'port', 'password']:
if opts.key.lower() == "remotedisplay.vnc." + optkeyname:
result[optkeyname] = opts.value
return result
def vmware_argument_spec():
return dict(
hostname=dict(type='str',
required=False,
fallback=(env_fallback, ['VMWARE_HOST']),
),
username=dict(type='str',
aliases=['user', 'admin'],
required=False,
fallback=(env_fallback, ['VMWARE_USER'])),
password=dict(type='str',
aliases=['pass', 'pwd'],
required=False,
no_log=True,
fallback=(env_fallback, ['VMWARE_PASSWORD'])),
port=dict(type='int',
default=443,
fallback=(env_fallback, ['VMWARE_PORT'])),
validate_certs=dict(type='bool',
required=False,
default=True,
fallback=(env_fallback, ['VMWARE_VALIDATE_CERTS'])
),
proxy_host=dict(type='str',
required=False,
default=None,
fallback=(env_fallback, ['VMWARE_PROXY_HOST'])),
proxy_port=dict(type='int',
required=False,
default=None,
fallback=(env_fallback, ['VMWARE_PROXY_PORT'])),
)
def connect_to_api(module, disconnect_atexit=True, return_si=False):
hostname = module.params['hostname']
username = module.params['username']
password = module.params['password']
port = module.params.get('port', 443)
validate_certs = module.params['validate_certs']
if not hostname:
module.fail_json(msg="Hostname parameter is missing."
" Please specify this parameter in task or"
" export environment variable like 'export VMWARE_HOST=ESXI_HOSTNAME'")
if not username:
module.fail_json(msg="Username parameter is missing."
" Please specify this parameter in task or"
" export environment variable like 'export VMWARE_USER=ESXI_USERNAME'")
if not password:
module.fail_json(msg="Password parameter is missing."
" Please specify this parameter in task or"
" export environment variable like 'export VMWARE_PASSWORD=ESXI_PASSWORD'")
if validate_certs and not hasattr(ssl, 'SSLContext'):
module.fail_json(msg='pyVim does not support changing verification mode with python < 2.7.9. Either update '
'python or use validate_certs=false.')
elif validate_certs:
ssl_context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
ssl_context.verify_mode = ssl.CERT_REQUIRED
ssl_context.check_hostname = True
ssl_context.load_default_certs()
elif hasattr(ssl, 'SSLContext'):
ssl_context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
ssl_context.verify_mode = ssl.CERT_NONE
ssl_context.check_hostname = False
else: # Python < 2.7.9 or RHEL/Centos < 7.4
ssl_context = None
service_instance = None
proxy_host = module.params.get('proxy_host')
proxy_port = module.params.get('proxy_port')
connect_args = dict(
host=hostname,
port=port,
)
if ssl_context:
connect_args.update(sslContext=ssl_context)
msg_suffix = ''
try:
if proxy_host:
msg_suffix = " [proxy: %s:%d]" % (proxy_host, proxy_port)
connect_args.update(httpProxyHost=proxy_host, httpProxyPort=proxy_port)
smart_stub = connect.SmartStubAdapter(**connect_args)
session_stub = connect.VimSessionOrientedStub(smart_stub, connect.VimSessionOrientedStub.makeUserLoginMethod(username, password))
service_instance = vim.ServiceInstance('ServiceInstance', session_stub)
else:
connect_args.update(user=username, pwd=password)
service_instance = connect.SmartConnect(**connect_args)
except vim.fault.InvalidLogin as invalid_login:
msg = "Unable to log on to vCenter or ESXi API at %s:%s " % (hostname, port)
module.fail_json(msg="%s as %s: %s" % (msg, username, invalid_login.msg) + msg_suffix)
except vim.fault.NoPermission as no_permission:
module.fail_json(msg="User %s does not have required permission"
" to log on to vCenter or ESXi API at %s:%s : %s" % (username, hostname, port, no_permission.msg))
except (requests.ConnectionError, ssl.SSLError) as generic_req_exc:
module.fail_json(msg="Unable to connect to vCenter or ESXi API at %s on TCP/%s: %s" % (hostname, port, generic_req_exc))
except vmodl.fault.InvalidRequest as invalid_request:
# Request is malformed
msg = "Failed to get a response from server %s:%s " % (hostname, port)
module.fail_json(msg="%s as request is malformed: %s" % (msg, invalid_request.msg) + msg_suffix)
except Exception as generic_exc:
msg = "Unknown error while connecting to vCenter or ESXi API at %s:%s" % (hostname, port) + msg_suffix
module.fail_json(msg="%s : %s" % (msg, generic_exc))
if service_instance is None:
msg = "Unknown error while connecting to vCenter or ESXi API at %s:%s" % (hostname, port)
module.fail_json(msg=msg + msg_suffix)
# Disabling atexit should be used in special cases only.
# Such as IP change of the ESXi host which removes the connection anyway.
# Also removal significantly speeds up the return of the module
if disconnect_atexit:
atexit.register(connect.Disconnect, service_instance)
if return_si:
return service_instance, service_instance.RetrieveContent()
return service_instance.RetrieveContent()
def get_all_objs(content, vimtype, folder=None, recurse=True):
if not folder:
folder = content.rootFolder
obj = {}
container = content.viewManager.CreateContainerView(folder, vimtype, recurse)
for managed_object_ref in container.view:
obj.update({managed_object_ref: managed_object_ref.name})
return obj
def run_command_in_guest(content, vm, username, password, program_path, program_args, program_cwd, program_env):
result = {'failed': False}
tools_status = vm.guest.toolsStatus
if (tools_status == 'toolsNotInstalled' or
tools_status == 'toolsNotRunning'):
result['failed'] = True
result['msg'] = "VMwareTools is not installed or is not running in the guest"
return result
# https://github.com/vmware/pyvmomi/blob/master/docs/vim/vm/guest/NamePasswordAuthentication.rst
creds = vim.vm.guest.NamePasswordAuthentication(
username=username, password=password
)
try:
# https://github.com/vmware/pyvmomi/blob/master/docs/vim/vm/guest/ProcessManager.rst
pm = content.guestOperationsManager.processManager
# https://www.vmware.com/support/developer/converter-sdk/conv51_apireference/vim.vm.guest.ProcessManager.ProgramSpec.html
ps = vim.vm.guest.ProcessManager.ProgramSpec(
# programPath=program,
# arguments=args
programPath=program_path,
arguments=program_args,
workingDirectory=program_cwd,
)
res = pm.StartProgramInGuest(vm, creds, ps)
result['pid'] = res
pdata = pm.ListProcessesInGuest(vm, creds, [res])
# wait for pid to finish
while not pdata[0].endTime:
time.sleep(1)
pdata = pm.ListProcessesInGuest(vm, creds, [res])
result['owner'] = pdata[0].owner
result['startTime'] = pdata[0].startTime.isoformat()
result['endTime'] = pdata[0].endTime.isoformat()
result['exitCode'] = pdata[0].exitCode
if result['exitCode'] != 0:
result['failed'] = True
result['msg'] = "program exited non-zero"
else:
result['msg'] = "program completed successfully"
except Exception as e:
result['msg'] = str(e)
result['failed'] = True
return result
def serialize_spec(clonespec):
"""Serialize a clonespec or a relocation spec"""
data = {}
attrs = dir(clonespec)
attrs = [x for x in attrs if not x.startswith('_')]
for x in attrs:
xo = getattr(clonespec, x)
if callable(xo):
continue
xt = type(xo)
if xo is None:
data[x] = None
elif isinstance(xo, vim.vm.ConfigSpec):
data[x] = serialize_spec(xo)
elif isinstance(xo, vim.vm.RelocateSpec):
data[x] = serialize_spec(xo)
elif isinstance(xo, vim.vm.device.VirtualDisk):
data[x] = serialize_spec(xo)
elif isinstance(xo, vim.vm.device.VirtualDeviceSpec.FileOperation):
data[x] = to_text(xo)
elif isinstance(xo, vim.Description):
data[x] = {
'dynamicProperty': serialize_spec(xo.dynamicProperty),
'dynamicType': serialize_spec(xo.dynamicType),
'label': serialize_spec(xo.label),
'summary': serialize_spec(xo.summary),
}
elif hasattr(xo, 'name'):
data[x] = to_text(xo) + ':' + to_text(xo.name)
elif isinstance(xo, vim.vm.ProfileSpec):
pass
elif issubclass(xt, list):
data[x] = []
for xe in xo:
data[x].append(serialize_spec(xe))
        elif issubclass(xt, bool):
            # Check bool before integer_types: bool is a subclass of int,
            # so the integer branch would otherwise shadow this one.
            data[x] = xo
        elif issubclass(xt, string_types + integer_types + (float,)):
            if issubclass(xt, integer_types):
                data[x] = int(xo)
            else:
                data[x] = to_text(xo)
elif issubclass(xt, dict):
data[to_text(x)] = {}
for k, v in xo.items():
k = to_text(k)
data[x][k] = serialize_spec(v)
else:
data[x] = str(xt)
return data
def find_host_by_cluster_datacenter(module, content, datacenter_name, cluster_name, host_name):
dc = find_datacenter_by_name(content, datacenter_name)
if dc is None:
module.fail_json(msg="Unable to find datacenter with name %s" % datacenter_name)
cluster = find_cluster_by_name(content, cluster_name, datacenter=dc)
if cluster is None:
module.fail_json(msg="Unable to find cluster with name %s" % cluster_name)
for host in cluster.host:
if host.name == host_name:
return host, cluster
return None, cluster
def set_vm_power_state(content, vm, state, force, timeout=0):
"""
    Set the power status for a VM based on the current and
    requested states. force applies the requested state even from
    intermediate power states.
"""
facts = gather_vm_facts(content, vm)
expected_state = state.replace('_', '').replace('-', '').lower()
current_state = facts['hw_power_status'].lower()
result = dict(
changed=False,
failed=False,
)
# Need Force
if not force and current_state not in ['poweredon', 'poweredoff']:
result['failed'] = True
result['msg'] = "Virtual Machine is in %s power state. Force is required!" % current_state
return result
# State is not already true
if current_state != expected_state:
task = None
try:
if expected_state == 'poweredoff':
task = vm.PowerOff()
elif expected_state == 'poweredon':
task = vm.PowerOn()
elif expected_state == 'restarted':
if current_state in ('poweredon', 'poweringon', 'resetting', 'poweredoff'):
task = vm.Reset()
else:
result['failed'] = True
result['msg'] = "Cannot restart virtual machine in the current state %s" % current_state
elif expected_state == 'suspended':
if current_state in ('poweredon', 'poweringon'):
task = vm.Suspend()
else:
result['failed'] = True
result['msg'] = 'Cannot suspend virtual machine in the current state %s' % current_state
elif expected_state in ['shutdownguest', 'rebootguest']:
if current_state == 'poweredon':
if vm.guest.toolsRunningStatus == 'guestToolsRunning':
if expected_state == 'shutdownguest':
task = vm.ShutdownGuest()
if timeout > 0:
result.update(wait_for_poweroff(vm, timeout))
else:
task = vm.RebootGuest()
# Set result['changed'] immediately because
# shutdown and reboot return None.
result['changed'] = True
else:
result['failed'] = True
result['msg'] = "VMware tools should be installed for guest shutdown/reboot"
else:
result['failed'] = True
result['msg'] = "Virtual machine %s must be in poweredon state for guest shutdown/reboot" % vm.name
else:
result['failed'] = True
result['msg'] = "Unsupported expected state provided: %s" % expected_state
except Exception as e:
result['failed'] = True
result['msg'] = to_text(e)
if task:
wait_for_task(task)
if task.info.state == 'error':
result['failed'] = True
result['msg'] = task.info.error.msg
else:
result['changed'] = True
# need to get new metadata if changed
result['instance'] = gather_vm_facts(content, vm)
return result
def wait_for_poweroff(vm, timeout=300):
result = dict()
interval = 15
while timeout > 0:
if vm.runtime.powerState.lower() == 'poweredoff':
break
time.sleep(interval)
timeout -= interval
else:
result['failed'] = True
result['msg'] = 'Timeout while waiting for VM power off.'
return result
class PyVmomi(object):
def __init__(self, module):
"""
Constructor
"""
if not HAS_REQUESTS:
module.fail_json(msg=missing_required_lib('requests'),
exception=REQUESTS_IMP_ERR)
if not HAS_PYVMOMI:
module.fail_json(msg=missing_required_lib('PyVmomi'),
exception=PYVMOMI_IMP_ERR)
self.module = module
self.params = module.params
self.current_vm_obj = None
self.si, self.content = connect_to_api(self.module, return_si=True)
self.custom_field_mgr = []
if self.content.customFieldsManager: # not an ESXi
self.custom_field_mgr = self.content.customFieldsManager.field
def is_vcenter(self):
"""
Check if given hostname is vCenter or ESXi host
Returns: True if given connection is with vCenter server
False if given connection is with ESXi server
"""
api_type = None
try:
api_type = self.content.about.apiType
except (vmodl.RuntimeFault, vim.fault.VimFault) as exc:
self.module.fail_json(msg="Failed to get status of vCenter server : %s" % exc.msg)
if api_type == 'VirtualCenter':
return True
elif api_type == 'HostAgent':
return False
def get_managed_objects_properties(self, vim_type, properties=None):
"""
Look up a Managed Object Reference in vCenter / ESXi Environment
:param vim_type: Type of vim object e.g, for datacenter - vim.Datacenter
:param properties: List of properties related to vim object e.g. Name
:return: local content object
"""
# Get Root Folder
root_folder = self.content.rootFolder
if properties is None:
properties = ['name']
# Create Container View with default root folder
mor = self.content.viewManager.CreateContainerView(root_folder, [vim_type], True)
# Create Traversal spec
traversal_spec = vmodl.query.PropertyCollector.TraversalSpec(
name="traversal_spec",
path='view',
skip=False,
type=vim.view.ContainerView
)
# Create Property Spec
property_spec = vmodl.query.PropertyCollector.PropertySpec(
type=vim_type, # Type of object to retrieved
all=False,
pathSet=properties
)
# Create Object Spec
object_spec = vmodl.query.PropertyCollector.ObjectSpec(
obj=mor,
skip=True,
selectSet=[traversal_spec]
)
# Create Filter Spec
filter_spec = vmodl.query.PropertyCollector.FilterSpec(
objectSet=[object_spec],
propSet=[property_spec],
reportMissingObjectsInResults=False
)
return self.content.propertyCollector.RetrieveContents([filter_spec])
# Virtual Machine related functions
def get_vm(self):
"""
Find unique virtual machine either by UUID, MoID or Name.
Returns: virtual machine object if found, else None.
"""
vm_obj = None
user_desired_path = None
use_instance_uuid = self.params.get('use_instance_uuid') or False
if 'uuid' in self.params and self.params['uuid']:
if not use_instance_uuid:
vm_obj = find_vm_by_id(self.content, vm_id=self.params['uuid'], vm_id_type="uuid")
elif use_instance_uuid:
vm_obj = find_vm_by_id(self.content,
vm_id=self.params['uuid'],
vm_id_type="instance_uuid")
elif 'name' in self.params and self.params['name']:
objects = self.get_managed_objects_properties(vim_type=vim.VirtualMachine, properties=['name'])
vms = []
for temp_vm_object in objects:
if len(temp_vm_object.propSet) != 1:
continue
for temp_vm_object_property in temp_vm_object.propSet:
if temp_vm_object_property.val == self.params['name']:
vms.append(temp_vm_object.obj)
break
            # get_managed_objects_properties may return multiple virtual machines,
            # so the following code tries to find the user's desired one depending upon the folder specified.
if len(vms) > 1:
# We have found multiple virtual machines, decide depending upon folder value
if self.params['folder'] is None:
self.module.fail_json(msg="Multiple virtual machines with same name [%s] found, "
"Folder value is a required parameter to find uniqueness "
"of the virtual machine" % self.params['name'],
details="Please see documentation of the vmware_guest module "
"for folder parameter.")
# Get folder path where virtual machine is located
# User provided folder where user thinks virtual machine is present
user_folder = self.params['folder']
# User defined datacenter
user_defined_dc = self.params['datacenter']
# User defined datacenter's object
datacenter_obj = find_datacenter_by_name(self.content, self.params['datacenter'])
# Get Path for Datacenter
dcpath = compile_folder_path_for_object(vobj=datacenter_obj)
# Nested folder does not return trailing /
if not dcpath.endswith('/'):
dcpath += '/'
if user_folder in [None, '', '/']:
# User provided blank value or
# User provided only root value, we fail
self.module.fail_json(msg="vmware_guest found multiple virtual machines with same "
"name [%s], please specify folder path other than blank "
"or '/'" % self.params['name'])
elif user_folder.startswith('/vm/'):
# User provided nested folder under VMware default vm folder i.e. folder = /vm/india/finance
user_desired_path = "%s%s%s" % (dcpath, user_defined_dc, user_folder)
else:
# User defined datacenter is not nested i.e. dcpath = '/' , or
# User defined datacenter is nested i.e. dcpath = '/F0/DC0' or
# User provided folder starts with / and datacenter i.e. folder = /ha-datacenter/ or
# User defined folder starts with datacenter without '/' i.e.
# folder = DC0/vm/india/finance or
# folder = DC0/vm
user_desired_path = user_folder
for vm in vms:
# Check if user has provided same path as virtual machine
actual_vm_folder_path = self.get_vm_path(content=self.content, vm_name=vm)
if not actual_vm_folder_path.startswith("%s%s" % (dcpath, user_defined_dc)):
continue
if user_desired_path in actual_vm_folder_path:
vm_obj = vm
break
elif vms:
# Unique virtual machine found.
vm_obj = vms[0]
elif 'moid' in self.params and self.params['moid']:
vm_obj = VmomiSupport.templateOf('VirtualMachine')(self.params['moid'], self.si._stub)
if vm_obj:
self.current_vm_obj = vm_obj
return vm_obj
def gather_facts(self, vm):
"""
Gather facts of virtual machine.
Args:
vm: Name of virtual machine.
Returns: Facts dictionary of the given virtual machine.
"""
return gather_vm_facts(self.content, vm)
@staticmethod
def get_vm_path(content, vm_name):
"""
Find the path of virtual machine.
Args:
content: VMware content object
vm_name: virtual machine managed object
Returns: Folder of virtual machine if exists, else None
"""
folder_name = None
folder = vm_name.parent
if folder:
folder_name = folder.name
fp = folder.parent
# climb back up the tree to find our path, stop before the root folder
while fp is not None and fp.name is not None and fp != content.rootFolder:
folder_name = fp.name + '/' + folder_name
try:
fp = fp.parent
except Exception:
break
folder_name = '/' + folder_name
return folder_name
def get_vm_or_template(self, template_name=None):
"""
Find the virtual machine or virtual machine template using name
used for cloning purpose.
Args:
template_name: Name of virtual machine or virtual machine template
Returns: virtual machine or virtual machine template object
"""
template_obj = None
if not template_name:
return template_obj
if "/" in template_name:
vm_obj_path = os.path.dirname(template_name)
vm_obj_name = os.path.basename(template_name)
template_obj = find_vm_by_id(self.content, vm_obj_name, vm_id_type="inventory_path", folder=vm_obj_path)
if template_obj:
return template_obj
else:
template_obj = find_vm_by_id(self.content, vm_id=template_name, vm_id_type="uuid")
if template_obj:
return template_obj
objects = self.get_managed_objects_properties(vim_type=vim.VirtualMachine, properties=['name'])
templates = []
for temp_vm_object in objects:
if len(temp_vm_object.propSet) != 1:
continue
for temp_vm_object_property in temp_vm_object.propSet:
if temp_vm_object_property.val == template_name:
templates.append(temp_vm_object.obj)
break
if len(templates) > 1:
# We have found multiple virtual machine templates
self.module.fail_json(msg="Multiple virtual machines or templates with same name [%s] found." % template_name)
elif templates:
template_obj = templates[0]
return template_obj
# Cluster related functions
def find_cluster_by_name(self, cluster_name, datacenter_name=None):
"""
Find Cluster by name in given datacenter
Args:
cluster_name: Name of cluster name to find
datacenter_name: (optional) Name of datacenter
Returns: Cluster managed object if found, else None
"""
return find_cluster_by_name(self.content, cluster_name, datacenter=datacenter_name)
def get_all_hosts_by_cluster(self, cluster_name):
"""
Get all hosts from cluster by cluster name
Args:
cluster_name: Name of cluster
Returns: List of hosts
"""
cluster_obj = self.find_cluster_by_name(cluster_name=cluster_name)
if cluster_obj:
return [host for host in cluster_obj.host]
else:
return []
# Hosts related functions
def find_hostsystem_by_name(self, host_name):
"""
Find Host by name
Args:
host_name: Name of ESXi host
Returns: Host system managed object if found, else None
"""
return find_hostsystem_by_name(self.content, hostname=host_name)
def get_all_host_objs(self, cluster_name=None, esxi_host_name=None):
"""
Get all host system managed object
Args:
cluster_name: Name of Cluster
esxi_host_name: Name of ESXi server
Returns: A list of all host system managed objects, else empty list
"""
host_obj_list = []
if not self.is_vcenter():
hosts = get_all_objs(self.content, [vim.HostSystem]).keys()
if hosts:
host_obj_list.append(list(hosts)[0])
else:
if cluster_name:
cluster_obj = self.find_cluster_by_name(cluster_name=cluster_name)
if cluster_obj:
host_obj_list = [host for host in cluster_obj.host]
else:
self.module.fail_json(changed=False, msg="Cluster '%s' not found" % cluster_name)
elif esxi_host_name:
if isinstance(esxi_host_name, str):
esxi_host_name = [esxi_host_name]
for host in esxi_host_name:
esxi_host_obj = self.find_hostsystem_by_name(host_name=host)
if esxi_host_obj:
host_obj_list.append(esxi_host_obj)
else:
self.module.fail_json(changed=False, msg="ESXi '%s' not found" % host)
return host_obj_list
def host_version_at_least(self, version=None, vm_obj=None, host_name=None):
"""
Check that the ESXi Host is at least a specific version number
Args:
vm_obj: virtual machine object; one of vm_obj or host_name is required
host_name (string): ESXi host name
version (tuple): a version tuple, for example (6, 7, 0)
Returns: bool
"""
if vm_obj:
host_system = vm_obj.summary.runtime.host
elif host_name:
host_system = self.find_hostsystem_by_name(host_name=host_name)
else:
self.module.fail_json(msg='One of vm_obj or host_name must be set.')
if host_system and version:
host_version = host_system.summary.config.product.version
return StrictVersion(host_version) >= StrictVersion('.'.join(map(str, version)))
else:
self.module.fail_json(msg='Unable to get the ESXi host from vm: %s, or hostname %s, '
'or the passed ESXi version: %s is None.' % (vm_obj, host_name, version))
# Network related functions
@staticmethod
def find_host_portgroup_by_name(host, portgroup_name):
"""
Find Portgroup on given host
Args:
host: Host config object
portgroup_name: Name of portgroup
Returns: Portgroup object if found, else False
"""
for portgroup in host.config.network.portgroup:
if portgroup.spec.name == portgroup_name:
return portgroup
return False
def get_all_port_groups_by_host(self, host_system):
"""
Get all Port Group by host
Args:
host_system: Name of Host System
Returns: List of Port Group Spec
"""
pgs_list = []
for pg in host_system.config.network.portgroup:
pgs_list.append(pg)
return pgs_list
def find_network_by_name(self, network_name=None):
"""
Get network specified by name
Args:
network_name: Name of network
Returns: List of network managed objects
"""
networks = []
if not network_name:
return networks
objects = self.get_managed_objects_properties(vim_type=vim.Network, properties=['name'])
for temp_vm_object in objects:
if len(temp_vm_object.propSet) != 1:
continue
for temp_vm_object_property in temp_vm_object.propSet:
if temp_vm_object_property.val == network_name:
networks.append(temp_vm_object.obj)
break
return networks
def network_exists_by_name(self, network_name=None):
"""
Check if network with a specified name exists or not
Args:
network_name: Name of network
Returns: True if network exists else False
"""
ret = False
if not network_name:
return ret
ret = True if self.find_network_by_name(network_name=network_name) else False
return ret
# Datacenter
def find_datacenter_by_name(self, datacenter_name):
"""
Get datacenter managed object by name
Args:
datacenter_name: Name of datacenter
Returns: datacenter managed object if found else None
"""
return find_datacenter_by_name(self.content, datacenter_name=datacenter_name)
def is_datastore_valid(self, datastore_obj=None):
"""
Check if datastore selected is valid or not
Args:
datastore_obj: datastore managed object
Returns: True if datastore is valid, False if not
"""
if not datastore_obj \
or datastore_obj.summary.maintenanceMode != 'normal' \
or not datastore_obj.summary.accessible:
return False
return True
def find_datastore_by_name(self, datastore_name):
"""
Get datastore managed object by name
Args:
datastore_name: Name of datastore
Returns: datastore managed object if found else None
"""
return find_datastore_by_name(self.content, datastore_name=datastore_name)
# Datastore cluster
def find_datastore_cluster_by_name(self, datastore_cluster_name):
"""
Get datastore cluster managed object by name
Args:
datastore_cluster_name: Name of datastore cluster
Returns: Datastore cluster managed object if found else None
"""
data_store_clusters = get_all_objs(self.content, [vim.StoragePod])
for dsc in data_store_clusters:
if dsc.name == datastore_cluster_name:
return dsc
return None
# Resource pool
def find_resource_pool_by_name(self, resource_pool_name, folder=None):
"""
Get resource pool managed object by name
Args:
resource_pool_name: Name of resource pool
Returns: Resource pool managed object if found else None
"""
if not folder:
folder = self.content.rootFolder
resource_pools = get_all_objs(self.content, [vim.ResourcePool], folder=folder)
for rp in resource_pools:
if rp.name == resource_pool_name:
return rp
return None
def find_resource_pool_by_cluster(self, resource_pool_name='Resources', cluster=None):
"""
Get resource pool managed object by cluster object
Args:
resource_pool_name: Name of resource pool
cluster: Managed object of cluster
Returns: Resource pool managed object if found else None
"""
desired_rp = None
if not cluster:
return desired_rp
if resource_pool_name != 'Resources':
# Resource pool name is different than default 'Resources'
resource_pools = cluster.resourcePool.resourcePool
if resource_pools:
for rp in resource_pools:
if rp.name == resource_pool_name:
desired_rp = rp
break
else:
desired_rp = cluster.resourcePool
return desired_rp
# VMDK stuff
def vmdk_disk_path_split(self, vmdk_path):
"""
Takes a string in the format
[datastore_name] path/to/vm_name.vmdk
Returns a tuple with multiple strings:
1. datastore_name: The name of the datastore (without brackets)
2. vmdk_fullpath: The "path/to/vm_name.vmdk" portion
3. vmdk_filename: The "vm_name.vmdk" portion of the string (os.path.basename equivalent)
4. vmdk_folder: The "path/to/" portion of the string (os.path.dirname equivalent)
"""
try:
datastore_name = re.match(r'^\[(.*?)\]', vmdk_path, re.DOTALL).groups()[0]
vmdk_fullpath = re.match(r'\[.*?\] (.*)$', vmdk_path).groups()[0]
vmdk_filename = os.path.basename(vmdk_fullpath)
vmdk_folder = os.path.dirname(vmdk_fullpath)
return datastore_name, vmdk_fullpath, vmdk_filename, vmdk_folder
except (IndexError, AttributeError) as e:
self.module.fail_json(msg="Bad path '%s' for filename disk vmdk image: %s" % (vmdk_path, to_native(e)))
def find_vmdk_file(self, datastore_obj, vmdk_fullpath, vmdk_filename, vmdk_folder):
"""
Return vSphere file object or fail_json
Args:
datastore_obj: Managed object of datastore
vmdk_fullpath: Path of VMDK file e.g., path/to/vm/vmdk_filename.vmdk
vmdk_filename: Name of vmdk e.g., VM0001_1.vmdk
vmdk_folder: Base dir of VMDK e.g., path/to/vm
"""
browser = datastore_obj.browser
datastore_name = datastore_obj.name
datastore_name_sq = "[" + datastore_name + "]"
if browser is None:
self.module.fail_json(msg="Unable to access browser for datastore %s" % datastore_name)
detail_query = vim.host.DatastoreBrowser.FileInfo.Details(
fileOwner=True,
fileSize=True,
fileType=True,
modification=True
)
search_spec = vim.host.DatastoreBrowser.SearchSpec(
details=detail_query,
matchPattern=[vmdk_filename],
searchCaseInsensitive=True,
)
search_res = browser.SearchSubFolders(
datastorePath=datastore_name_sq,
searchSpec=search_spec
)
changed = False
vmdk_path = datastore_name_sq + " " + vmdk_fullpath
try:
changed, result = wait_for_task(search_res)
except TaskError as task_e:
self.module.fail_json(msg=to_native(task_e))
if not changed:
self.module.fail_json(msg="No valid disk vmdk image found for path %s" % vmdk_path)
target_folder_paths = [
datastore_name_sq + " " + vmdk_folder + '/',
datastore_name_sq + " " + vmdk_folder,
]
for file_result in search_res.info.result:
for f in getattr(file_result, 'file'):
if f.path == vmdk_filename and file_result.folderPath in target_folder_paths:
return f
self.module.fail_json(msg="No vmdk file found for path specified [%s]" % vmdk_path)
#
# Conversion to JSON
#
def _deepmerge(self, d, u):
"""
Deep merges u into d.
Credit:
https://bit.ly/2EDOs1B (stackoverflow question 3232943)
License:
cc-by-sa 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
Changes:
using collections_compat for compatibility
Args:
- d (dict): dict to merge into
- u (dict): dict to merge into d
Returns:
dict, with u merged into d
"""
for k, v in iteritems(u):
if isinstance(v, collections_compat.Mapping):
d[k] = self._deepmerge(d.get(k, {}), v)
else:
d[k] = v
return d
def _extract(self, data, remainder):
"""
This is used to break down dotted properties for extraction.
Args:
- data (dict): result of _jsonify on a property
- remainder: the remainder of the dotted property to select
Return:
dict
"""
result = dict()
if '.' not in remainder:
result[remainder] = data[remainder]
return result
key, remainder = remainder.split('.', 1)
result[key] = self._extract(data[key], remainder)
return result
def _jsonify(self, obj):
"""
Convert an object from pyVmomi into JSON.
Args:
- obj (object): vim object
Return:
dict
"""
return json.loads(json.dumps(obj, cls=VmomiSupport.VmomiJSONEncoder,
sort_keys=True, strip_dynamic=True))
def to_json(self, obj, properties=None):
"""
Convert a vSphere (pyVmomi) Object into JSON. This is a deep
transformation. The list of properties is optional - if not
provided then all properties are deeply converted. The resulting
JSON is sorted to improve human readability.
Requires upstream support from pyVmomi > 6.7.1
(https://github.com/vmware/pyvmomi/pull/732)
Args:
- obj (object): vim object
- properties (list, optional): list of properties following
the property collector specification, for example:
["config.hardware.memoryMB", "name", "overallStatus"]
default is a complete object dump, which can be large
Return:
dict
"""
if not HAS_PYVMOMIJSON:
self.module.fail_json(msg='The installed version of pyvmomi lacks JSON output support; need pyvmomi>6.7.1')
result = dict()
if properties:
for prop in properties:
try:
if '.' in prop:
key, remainder = prop.split('.', 1)
tmp = dict()
tmp[key] = self._extract(self._jsonify(getattr(obj, key)), remainder)
self._deepmerge(result, tmp)
else:
result[prop] = self._jsonify(getattr(obj, prop))
# To match gather_vm_facts output
prop_name = prop
if prop.lower() == '_moid':
prop_name = 'moid'
elif prop.lower() == '_vimref':
prop_name = 'vimref'
result[prop_name] = result[prop]
except (AttributeError, KeyError):
self.module.fail_json(msg="Property '{0}' not found.".format(prop))
else:
result = self._jsonify(obj)
return result
def get_folder_path(self, cur):
full_path = '/' + cur.name
while hasattr(cur, 'parent') and cur.parent:
if cur.parent == self.content.rootFolder:
break
cur = cur.parent
full_path = '/' + cur.name + full_path
return full_path
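For illustration, here is a standalone, dependency-free sketch of the parsing that `vmdk_disk_path_split()` above performs. The single combined regex is an assumption that behaves like the module's two separate regexes for well-formed paths, and `ValueError` stands in for `module.fail_json()`:

```python
import os
import re

def split_vmdk_path(vmdk_path):
    """Split '[datastore] path/to/vm.vmdk' into its components.

    Sketch of vmdk_disk_path_split(); the combined regex is assumed
    equivalent to the original pair for well-formed input.
    """
    match = re.match(r'^\[(.*?)\] (.*)$', vmdk_path, re.DOTALL)
    if match is None:
        raise ValueError("Bad path '%s' for filename disk vmdk image" % vmdk_path)
    datastore_name, vmdk_fullpath = match.groups()
    return (datastore_name,
            vmdk_fullpath,
            os.path.basename(vmdk_fullpath),   # vmdk_filename
            os.path.dirname(vmdk_fullpath))    # vmdk_folder

# Example:
# split_vmdk_path('[ds1] vms/web01/web01.vmdk')
# -> ('ds1', 'vms/web01/web01.vmdk', 'web01.vmdk', 'vms/web01')
```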
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,789 |
VMware: Move helper functions from vmware_host_config_manager to vmware module
|
##### SUMMARY
Move some helper functions from `vmware_host_config_manager` to `PyVmomi`.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_host_config_manager
vmware
##### ADDITIONAL INFORMATION
`vmware_host_config_manager` has some helper functions (`is_integer`, `is_boolean` and `is_truthy`) to deal with vSphere Web Services API `OptionValue`s. Today, I started to work on issue #61421 and I think these helper functions could be useful there, too. Actually, there are several Managed Object Types with an `OptionValue[]` parameter. Instead of duplicating the code, I think it would be better to move these helper functions to `PyVmomi` in `module_utils/vmware.py`.
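For reference, a simplified, dependency-free sketch of the three helpers; the actual module code validates integers through pyVmomi's `VmomiSupport.vmodlTypes['int'/'long']` rather than plain `int()`:

```python
# Simplified sketch of the helpers proposed for module_utils/vmware.py.
# The real is_integer() goes through VmomiSupport.vmodlTypes, not int().

def is_integer(value):
    try:
        int(value)
        return True
    except (TypeError, ValueError):
        return False

def is_boolean(value):
    # Any value vSphere treats as a boolean literal, either polarity.
    return str(value).lower() in ['true', 'on', 'yes', 'false', 'off', 'no']

def is_truthy(value):
    # Only the affirmative boolean literals.
    return str(value).lower() in ['true', 'on', 'yes']
```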
|
https://github.com/ansible/ansible/issues/62789
|
https://github.com/ansible/ansible/pull/62801
|
ad580a71c475b570cdbd4c79aaf0082ecccdf5fc
|
d01035ef2532b93793048b7a350c3223ce200643
| 2019-09-24T13:44:43Z |
python
| 2019-09-30T20:55:36Z |
lib/ansible/modules/cloud/vmware/vmware_host_config_manager.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Abhijeet Kasurde <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: vmware_host_config_manager
short_description: Manage advanced system settings of an ESXi host
description:
- This module can be used to manage advanced system settings of an ESXi host when ESXi hostname or Cluster name is given.
version_added: '2.5'
author:
- Abhijeet Kasurde (@Akasurde)
notes:
- Tested on vSphere 6.5
requirements:
- python >= 2.6
- PyVmomi
options:
cluster_name:
description:
- Name of the cluster.
- Settings are applied to every ESXi host in given cluster.
- If C(esxi_hostname) is not given, this parameter is required.
type: str
esxi_hostname:
description:
- ESXi hostname.
- Settings are applied to this ESXi host.
- If C(cluster_name) is not given, this parameter is required.
type: str
options:
description:
- A dictionary of advanced system settings.
- Invalid options will cause the module to error.
- Note that the list of advanced options (with description and values) can be found by running C(vim-cmd hostsvc/advopt/options).
default: {}
type: dict
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- name: Manage Log level setting for all ESXi hosts in given Cluster
vmware_host_config_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
cluster_name: cluster_name
options:
'Config.HostAgent.log.level': 'info'
delegate_to: localhost
- name: Manage Log level setting for an ESXi host
vmware_host_config_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
esxi_hostname: '{{ esxi_hostname }}'
options:
'Config.HostAgent.log.level': 'verbose'
delegate_to: localhost
- name: Manage multiple settings for an ESXi host
vmware_host_config_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
esxi_hostname: '{{ esxi_hostname }}'
options:
'Config.HostAgent.log.level': 'verbose'
'Annotations.WelcomeMessage': 'Hello World'
'Config.HostAgent.plugins.solo.enableMob': false
delegate_to: localhost
'''
RETURN = r'''#
'''
try:
from pyVmomi import vim, vmodl, VmomiSupport
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import vmware_argument_spec, PyVmomi
from ansible.module_utils._text import to_native
from ansible.module_utils.six import integer_types, string_types
class VmwareConfigManager(PyVmomi):
def __init__(self, module):
super(VmwareConfigManager, self).__init__(module)
cluster_name = self.params.get('cluster_name', None)
esxi_host_name = self.params.get('esxi_hostname', None)
self.options = self.params.get('options', dict())
self.hosts = self.get_all_host_objs(cluster_name=cluster_name, esxi_host_name=esxi_host_name)
@staticmethod
def is_integer(value, type_of='int'):
try:
VmomiSupport.vmodlTypes[type_of](value)
return True
except (TypeError, ValueError):
return False
@staticmethod
def is_boolean(value):
if str(value).lower() in ['true', 'on', 'yes', 'false', 'off', 'no']:
return True
return False
@staticmethod
def is_truthy(value):
if str(value).lower() in ['true', 'on', 'yes']:
return True
return False
def set_host_configuration_facts(self):
changed_list = []
message = ''
for host in self.hosts:
option_manager = host.configManager.advancedOption
host_facts = {}
for s_option in option_manager.supportedOption:
host_facts[s_option.key] = dict(option_type=s_option.optionType, value=None)
for option in option_manager.QueryOptions():
if option.key in host_facts:
host_facts[option.key].update(
value=option.value,
)
change_option_list = []
for option_key, option_value in self.options.items():
if option_key in host_facts:
# We handle all supported types here so we can give meaningful errors.
option_type = host_facts[option_key]['option_type']
if self.is_boolean(option_value) and isinstance(option_type, vim.option.BoolOption):
option_value = self.is_truthy(option_value)
elif (isinstance(option_value, integer_types) or self.is_integer(option_value))\
and isinstance(option_type, vim.option.IntOption):
option_value = VmomiSupport.vmodlTypes['int'](option_value)
elif (isinstance(option_value, integer_types) or self.is_integer(option_value, 'long'))\
and isinstance(option_type, vim.option.LongOption):
option_value = VmomiSupport.vmodlTypes['long'](option_value)
elif isinstance(option_value, float) and isinstance(option_type, vim.option.FloatOption):
pass
elif isinstance(option_value, string_types) and isinstance(option_type, (vim.option.StringOption, vim.option.ChoiceOption)):
pass
else:
self.module.fail_json(msg="Provided value is of type %s."
" Option %s expects: %s" % (type(option_value), option_key, type(option_type)))
if option_value != host_facts[option_key]['value']:
change_option_list.append(vim.option.OptionValue(key=option_key, value=option_value))
changed_list.append(option_key)
else: # Don't silently drop unknown options. This prevents typos from falling through the cracks.
self.module.fail_json(msg="Unsupported option %s" % option_key)
if change_option_list:
if self.module.check_mode:
changed_suffix = ' would be changed.'
else:
changed_suffix = ' changed.'
if len(changed_list) > 2:
message = ', '.join(changed_list[:-1]) + ', and ' + str(changed_list[-1])
elif len(changed_list) == 2:
message = ' and '.join(changed_list)
elif len(changed_list) == 1:
message = changed_list[0]
message += changed_suffix
if self.module.check_mode is False:
try:
option_manager.UpdateOptions(changedValue=change_option_list)
except (vmodl.fault.SystemError, vmodl.fault.InvalidArgument) as e:
self.module.fail_json(msg="Failed to update option(s) as one or more OptionValue "
"contains an invalid value: %s" % to_native(e.msg))
except vim.fault.InvalidName as e:
self.module.fail_json(msg="Failed to update option(s) as one or more OptionValue "
"objects refers to a non-existent option : %s" % to_native(e.msg))
else:
message = 'All settings are already configured.'
self.module.exit_json(changed=bool(changed_list), msg=message)
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
cluster_name=dict(type='str', required=False),
esxi_hostname=dict(type='str', required=False),
options=dict(type='dict', default=dict(), required=False),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
required_one_of=[
['cluster_name', 'esxi_hostname'],
]
)
vmware_host_config = VmwareConfigManager(module)
vmware_host_config.set_host_configuration_facts()
if __name__ == "__main__":
main()
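The summary message assembled in `set_host_configuration_facts()` can be exercised in isolation. The following sketch extracts that logic; `summarize()` is a hypothetical helper name, not part of the module:

```python
# Sketch of the changed-settings summary built in
# set_host_configuration_facts(); summarize() is a hypothetical name.
def summarize(changed_list, check_mode=False):
    suffix = ' would be changed.' if check_mode else ' changed.'
    if len(changed_list) > 2:
        # Oxford-comma join for three or more changed keys.
        message = ', '.join(changed_list[:-1]) + ', and ' + str(changed_list[-1])
    elif len(changed_list) == 2:
        message = ' and '.join(changed_list)
    elif len(changed_list) == 1:
        message = changed_list[0]
    else:
        return 'All settings are already configured.'
    return message + suffix
```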
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,237 |
aireos modules unable to handle binary data from controller
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Cisco WLC release 8.8.120.0 returns non-ASCII values in the output of the `show run-config commands` command, causing tasks and modules that use this command to fail.
The binary data appears in the same spot in the output from the controller and looks like this at the prompt:
```
wlan flexconnect learn-ipaddr 97 enable
wlan flexconnect learn-ipaddr 101 enable
`Ȓ�R`Ȓ�R`Ȓ�R`Ȓ�R`Ȓ�R`Ȓ�R`Ȓ�R
wlan wgb broadcast-tagging disable 1
wlan wgb broadcast-tagging disable 2 `
```
There are a number of bytes that represent the binary data; one sample is: `\xc8\x92\xef\xbf\xbdR\x7f`
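As an illustration of the problem (this is not the patch that was merged): output like the sample above can be decoded leniently and stripped of non-printable characters before any prompt matching, for example:

```python
import re

# Raw controller output containing the garbage bytes shown above.
raw = b'wlan wgb broadcast-tagging disable 1\r\n`\xc8\x92\xef\xbf\xbdR\x7f\r\n'

# Decode without raising: undecodable bytes become U+FFFD.
text = raw.decode('utf-8', errors='replace')

# Drop everything outside printable ASCII plus CR/LF.
cleaned = re.sub(r'[^\x20-\x7e\r\n]', '', text)
```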
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
aireos_command
aireos_config
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0.dev0
config file = /Users/wsmith/temp/network_backup/ansible.cfg
configured module search path = [u'/Users/wsmith/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/wsmith/temp/ansible/lib/ansible
executable location = /Users/wsmith/temp/ansible/bin/ansible
python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_DEBUG(env: ANSIBLE_DEBUG) = True
DEFAULT_FORKS(/Users/wsmith/temp/network_backup/ansible.cfg) = 20
DEFAULT_GATHERING(/Users/wsmith/temp/network_backup/ansible.cfg) = explicit
DEFAULT_HOST_LIST(/Users/wsmith/temp/network_backup/ansible.cfg) = [u'/Users/wsmith/temp/network_backup/inventory.ini']
DEFAULT_INVENTORY_PLUGIN_PATH(/Users/wsmith/temp/network_backup/ansible.cfg) = [u'/Users/wsmith/temp/network_backup/inventory_plugins']
DEFAULT_KEEP_REMOTE_FILES(env: ANSIBLE_KEEP_REMOTE_FILES) = False
DEFAULT_LOG_PATH(/Users/wsmith/temp/network_backup/ansible.cfg) = /Users/wsmith/temp/network_backup/ansible.log
DEFAULT_ROLES_PATH(/Users/wsmith/temp/network_backup/ansible.cfg) = [u'/Users/wsmith/temp/network_backup/roles']
DEFAULT_VAULT_PASSWORD_FILE(/Users/wsmith/temp/network_backup/ansible.cfg) = /Users/wsmith/temp/network_backup/vault_password.txt
HOST_KEY_CHECKING(/Users/wsmith/temp/network_backup/ansible.cfg) = False
INTERPRETER_PYTHON(/Users/wsmith/temp/network_backup/ansible.cfg) = /Users/wsmith/temp/ansible/venv/bin/python
INVENTORY_ENABLED(/Users/wsmith/temp/network_backup/ansible.cfg) = [u'ini']
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Cisco WLC running AireOS with code version 8.8.120.0
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run `ansible-playbook` against playbook with task that will execute the `show run-config commands` command.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: WLCs
vars:
ansible_connection: local
ansible_network_os: aireos
ansible_user: "user"
ansible_password: "password"
tasks:
- name: Get current running configuration
aireos_command:
commands:
- show run-config commands
provider:
timeout: 90
register: output
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
A successful task execution and the running configuration retrieved from the WLC.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
An `AnsibleConnectionFailure` exception is raised.
<!--- Paste verbatim command output between quotes -->
```paste below
2019-08-07 13:59:25,451 p=wsmith u=16872 | ansible-playbook 2.9.0.dev0
config file = /Users/wsmith/temp/network_backup/ansible.cfg
configured module search path = [u'/Users/wsmith/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/wsmith/temp/ansible/lib/ansible
executable location = /Users/wsmith/temp/ansible/bin/ansible-playbook
python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
2019-08-07 13:59:25,451 p=wsmith u=16872 | Using /Users/wsmith/temp/network_backup/ansible.cfg as config file
2019-08-07 13:59:25,452 p=wsmith u=16872 | setting up inventory plugins
2019-08-07 13:59:25,463 p=wsmith u=16872 | Parsed /Users/wsmith/temp/network_backup/inventory.ini inventory source with ini plugin
2019-08-07 13:59:25,832 p=wsmith u=16872 | Loading callback plugin default of type stdout, v2.0 from /Users/wsmith/temp/ansible/lib/ansible/plugins/callback/default.pyc
2019-08-07 13:59:25,875 p=wsmith u=16872 | PLAYBOOK: pb.yml *****************************************************************************************************************************************************************************
2019-08-07 13:59:25,875 p=wsmith u=16872 | 1 plays in pb.yml
2019-08-07 13:59:25,879 p=wsmith u=16872 | PLAY [WLCs] *****************************************************************************************************************************************************************************
2019-08-07 13:59:25,888 p=wsmith u=16872 | META: ran handlers
2019-08-07 13:59:25,894 p=wsmith u=16872 | TASK [Get current running configuration] ****************************************************************************************************************************************************************
2019-08-07 13:59:25,992 p=wsmith u=16880 | Trying secret FileVaultSecret(filename='/Users/wsmith/temp/network_backup/vault_password.txt') for vault_id=default
2019-08-07 13:59:26,012 p=wsmith u=16880 | Trying secret FileVaultSecret(filename='/Users/wsmith/temp/network_backup/vault_password.txt') for vault_id=default
2019-08-07 13:59:26,033 p=wsmith u=16880 | <1.1.1.1> using connection plugin network_cli (was local)
2019-08-07 13:59:26,034 p=wsmith u=16880 | <1.1.1.1> starting connection from persistent connection plugin
2019-08-07 13:59:26,404 p=wsmith u=16886 | <1.1.1.1> ESTABLISH PARAMIKO SSH CONNECTION FOR USER: username on PORT 22 TO 1.1.1.1
2019-08-07 13:59:28,447 p=wsmith u=16880 | <1.1.1.1> local domain socket does not exist, starting it
2019-08-07 13:59:28,449 p=wsmith u=16880 | <1.1.1.1> control socket path is /Users/wsmith/.ansible/pc/0d509e7b95
2019-08-07 13:59:28,450 p=wsmith u=16880 | <1.1.1.1> <1.1.1.1> ESTABLISH PARAMIKO SSH CONNECTION FOR USER: username on PORT 22 TO 1.1.1.1
2019-08-07 13:59:28,451 p=wsmith u=16880 | <1.1.1.1> connection to remote device started successfully
2019-08-07 13:59:28,452 p=wsmith u=16880 | <1.1.1.1> local domain socket listeners started successfully
2019-08-07 13:59:28,452 p=wsmith u=16880 | <1.1.1.1> loaded cliconf plugin for network_os aireos
2019-08-07 13:59:28,453 p=wsmith u=16880 | network_os is set to aireos
2019-08-07 13:59:28,453 p=wsmith u=16880 | <1.1.1.1> ssh connection done, setting terminal
2019-08-07 13:59:28,454 p=wsmith u=16880 | <1.1.1.1> loaded terminal plugin for network_os aireos
2019-08-07 13:59:28,454 p=wsmith u=16880 | <1.1.1.1> Response received, triggered 'persistent_buffer_read_timeout' timer of 0.1 seconds
2019-08-07 13:59:28,455 p=wsmith u=16880 | <1.1.1.1> firing event: on_open_shell()
2019-08-07 13:59:28,455 p=wsmith u=16880 | <1.1.1.1> Response received, triggered 'persistent_buffer_read_timeout' timer of 0.1 seconds
2019-08-07 13:59:28,456 p=wsmith u=16880 | <1.1.1.1> ssh connection has completed successfully
2019-08-07 13:59:28,456 p=wsmith u=16880 | <1.1.1.1>
2019-08-07 13:59:28,457 p=wsmith u=16880 | <1.1.1.1> local domain socket path is /Users/wsmith/.ansible/pc/0d509e7b95
2019-08-07 13:59:28,457 p=wsmith u=16880 | <1.1.1.1> socket_path: /Users/wsmith/.ansible/pc/0d509e7b95
2019-08-07 13:59:28,460 p=wsmith u=16880 | <1.1.1.1> ESTABLISH LOCAL CONNECTION FOR USER: wsmith
2019-08-07 13:59:28,461 p=wsmith u=16880 | <1.1.1.1> EXEC /bin/sh -c 'echo ~wsmith && sleep 0'
2019-08-07 13:59:28,477 p=wsmith u=16880 | <1.1.1.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/wsmith/.ansible/tmp/ansible-tmp-1565211568.46-135266596630451 `" && echo ansible-tmp-1565211568.46-135266596630451="` echo /Users/wsmith/.ansible/tmp/ansible-tmp-1565211568.46-135266596630451 `" ) && sleep 0'
2019-08-07 13:59:28,670 p=wsmith u=16880 | Using module file /Users/wsmith/temp/ansible/lib/ansible/modules/network/aireos/aireos_command.py
2019-08-07 13:59:28,672 p=wsmith u=16880 | <1.1.1.1> PUT /Users/wsmith/.ansible/tmp/ansible-local-16872CPCpFb/tmpaJTRIR TO /Users/wsmith/.ansible/tmp/ansible-tmp-1565211568.46-135266596630451/AnsiballZ_aireos_command.py
2019-08-07 13:59:28,674 p=wsmith u=16880 | <1.1.1.1> EXEC /bin/sh -c 'chmod u+x /Users/wsmith/.ansible/tmp/ansible-tmp-1565211568.46-135266596630451/ /Users/wsmith/.ansible/tmp/ansible-tmp-1565211568.46-135266596630451/AnsiballZ_aireos_command.py && sleep 0'
2019-08-07 13:59:28,693 p=wsmith u=16880 | <1.1.1.1> EXEC /bin/sh -c '/Users/wsmith/temp/ansible/venv/bin/python /Users/wsmith/.ansible/tmp/ansible-tmp-1565211568.46-135266596630451/AnsiballZ_aireos_command.py && sleep 0'
2019-08-07 14:00:08,812 p=wsmith u=16886 | Traceback (most recent call last):
File "/Users/wsmith/temp/ansible/lib/ansible/utils/jsonrpc.py", line 45, in handle_request
result = rpc_method(*args, **kwargs)
File "/Users/wsmith/temp/ansible/lib/ansible/plugins/connection/network_cli.py", line 287, in exec_command
return self.send(command=cmd)
File "/Users/wsmith/temp/ansible/lib/ansible/plugins/connection/network_cli.py", line 473, in send
response = self.receive(command, prompt, answer, newline, prompt_retry_check, check_all)
File "/Users/wsmith/temp/ansible/lib/ansible/plugins/connection/network_cli.py", line 444, in receive
if self._find_prompt(window):
File "/Users/wsmith/temp/ansible/lib/ansible/plugins/connection/network_cli.py", line 590, in _find_prompt
raise AnsibleConnectionFailure(errored_response)
AnsibleConnectionFailure: {u'answer': None, u'prompt': None, u'command': u'show run-config commands'}
Incorrect usage. Use the '?' or <TAB> key to list commands.
(WLC01) >
2019-08-07 14:00:08,827 p=wsmith u=16880 | <1.1.1.1> EXEC /bin/sh -c 'rm -f -r /Users/wsmith/.ansible/tmp/ansible-tmp-1565211568.46-135266596630451/ > /dev/null 2>&1 && sleep 0'
2019-08-07 14:00:08,854 p=wsmith u=16872 | fatal: [WLC01]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"commands": [
"show run-config commands"
],
"host": null,
"interval": 1,
"match": "all",
"password": null,
"port": null,
"provider": {
"host": null,
"password": null,
"port": null,
"ssh_keyfile": null,
"timeout": 90,
"username": null
},
"retries": 10,
"ssh_keyfile": null,
"timeout": null,
"username": null,
"wait_for": null
}
},
"msg": "{u'answer': None, u'prompt': None, u'command': u'show run-config commands'}\r\n\r\nIncorrect usage. Use the '?' or <TAB> key to list commands.\r\n\r\n(WLC01) >",
"rc": -32603
}
2019-08-07 14:00:08,926 p=wsmith u=16872 | PLAY RECAP **********************************************************************************************************************************************************************************************
2019-08-07 14:00:08,927 p=wsmith u=16872 | WLC01 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
2019-08-07 14:00:09,025 p=wsmith u=16886 | shutdown complete
```
|
https://github.com/ansible/ansible/issues/60237
|
https://github.com/ansible/ansible/pull/60243
|
30cc54da8cc4efd7a554df964b0f7bddfde0c9c4
|
50d1cbd30a8e19bcce312bd429acb4b690255f51
| 2019-08-07T21:07:19Z |
python
| 2019-10-01T20:52:35Z |
lib/ansible/modules/network/aireos/aireos_command.py
|
#!/usr/bin/python
#
# Copyright: Ansible Team
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = """
---
module: aireos_command
version_added: "2.4"
author: "James Mighion (@jmighion)"
short_description: Run commands on remote devices running Cisco WLC
description:
- Sends arbitrary commands to an aireos node and returns the results
read from the device. This module includes an
argument that will cause the module to wait for a specific condition
before returning or timing out if the condition is not met.
- Commands run in configuration mode with this module are not
idempotent. Please use M(aireos_config) to configure WLC devices.
extends_documentation_fragment: aireos
options:
commands:
description:
- List of commands to send to the remote aireos device over the
configured provider. The resulting output from the command
is returned. If the I(wait_for) argument is provided, the
module is not returned until the condition is satisfied or
the number of retries has expired.
required: true
wait_for:
description:
- List of conditions to evaluate against the output of the
command. The task will wait for each condition to be true
before moving forward. If the conditional is not true
within the configured number of retries, the task fails.
See examples.
aliases: ['waitfor']
match:
description:
- The I(match) argument is used in conjunction with the
I(wait_for) argument to specify the match policy. Valid
values are C(all) or C(any). If the value is set to C(all)
then all conditionals in the wait_for must be satisfied. If
the value is set to C(any) then only one of the values must be
satisfied.
default: all
choices: ['any', 'all']
retries:
description:
- Specifies the number of retries a command should be tried
before it is considered failed. The command is run on the
target device every retry and evaluated against the
I(wait_for) conditions.
default: 10
interval:
description:
- Configures the interval in seconds to wait between retries
of the command. If the command does not pass the specified
conditions, the interval indicates how long to wait before
trying the command again.
default: 1
"""
EXAMPLES = """
tasks:
- name: run show sysinfo on remote devices
aireos_command:
commands: show sysinfo
- name: run show sysinfo and check to see if output contains Cisco Controller
aireos_command:
commands: show sysinfo
wait_for: result[0] contains 'Cisco Controller'
- name: run multiple commands on remote nodes
aireos_command:
commands:
- show sysinfo
- show interface summary
- name: run multiple commands and evaluate the output
aireos_command:
commands:
- show sysinfo
- show interface summary
wait_for:
- result[0] contains Cisco Controller
- result[1] contains Loopback0
"""
RETURN = """
stdout:
description: The set of responses from the commands
returned: always apart from low level errors (such as action plugin)
type: list
sample: ['...', '...']
stdout_lines:
description: The value of stdout split into a list
returned: always apart from low level errors (such as action plugin)
type: list
sample: [['...', '...'], ['...'], ['...']]
failed_conditions:
description: The list of conditionals that have failed
returned: failed
type: list
sample: ['...', '...']
"""
import time
from ansible.module_utils.network.aireos.aireos import run_commands
from ansible.module_utils.network.aireos.aireos import aireos_argument_spec, check_args
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.network.common.utils import ComplexList
from ansible.module_utils.network.common.parsing import Conditional
from ansible.module_utils.six import string_types


def to_lines(stdout):
for item in stdout:
if isinstance(item, string_types):
item = str(item).split('\n')
yield item


def parse_commands(module, warnings):
command = ComplexList(dict(
command=dict(key=True),
prompt=dict(),
answer=dict()
), module)
commands = command(module.params['commands'])
for index, item in enumerate(commands):
if module.check_mode and not item['command'].startswith('show'):
warnings.append(
'only show commands are supported when using check mode, not '
'executing `%s`' % item['command']
)
elif item['command'].startswith('conf'):
warnings.append(
'commands run in config mode with aireos_command are not '
'idempotent. Please use aireos_config instead'
)
return commands


def main():
"""main entry point for module execution
"""
argument_spec = dict(
commands=dict(type='list', required=True),
wait_for=dict(type='list', aliases=['waitfor']),
match=dict(default='all', choices=['all', 'any']),
retries=dict(default=10, type='int'),
interval=dict(default=1, type='int')
)
argument_spec.update(aireos_argument_spec)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True)
result = {'changed': False}
warnings = list()
check_args(module, warnings)
commands = parse_commands(module, warnings)
result['warnings'] = warnings
wait_for = module.params['wait_for'] or list()
conditionals = [Conditional(c) for c in wait_for]
retries = module.params['retries']
interval = module.params['interval']
match = module.params['match']
while retries > 0:
responses = run_commands(module, commands)
for item in list(conditionals):
if item(responses):
if match == 'any':
conditionals = list()
break
conditionals.remove(item)
if not conditionals:
break
time.sleep(interval)
retries -= 1
if conditionals:
failed_conditions = [item.raw for item in conditionals]
msg = 'One or more conditional statements have not been satisfied'
module.fail_json(msg=msg, failed_conditions=failed_conditions)
result.update({
'changed': False,
'stdout': responses,
'stdout_lines': list(to_lines(responses))
})
module.exit_json(**result)


if __name__ == '__main__':
main()
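The `while retries > 0` loop in `main()` above is a poll-until-satisfied pattern: each pass re-runs the commands and drops every `wait_for` conditional the responses satisfy. A minimal standalone sketch of that same logic follows; the `run_once` callable and plain-predicate conditionals are stand-ins for the module's `run_commands` and `Conditional` objects, not Ansible APIs.

```python
import time


def wait_for(run_once, conditionals, retries=10, interval=1, match='all'):
    """Poll run_once() until the conditionals pass or retries run out.

    Mirrors the loop in aireos_command.main(): every retry re-runs the
    command and removes each conditional that the responses satisfy.
    With match='any', a single passing conditional ends the wait.
    Returns (last_responses, failed_conditionals).
    """
    pending = list(conditionals)
    responses = None
    while retries > 0:
        responses = run_once()
        for cond in list(pending):
            if cond(responses):
                if match == 'any':
                    # One passing conditional is enough in 'any' mode.
                    pending = []
                    break
                pending.remove(cond)
        if not pending:
            break
        time.sleep(interval)
        retries -= 1
    return responses, pending
```

As in the module, an empty `failed_conditionals` list means success, and a non-empty one corresponds to the `'One or more conditional statements have not been satisfied'` failure path.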
|